Clearview is the newest company in the surveillance space we love to hate.
The app, an “after-the-fact research tool,” allows thousands of government and corporate clients to match photos of suspected criminals against a catalog of 3 billion images culled from the internet. A New York Times report found that more than 600 police agencies had started using Clearview in the past year, and BuzzFeed expanded that list to more than 2,000 clients, including firms such as Macy’s and Walmart, as well as organizations like Interpol.
“Clearview is not a surveillance system and is not built like one,” according to the company website, which claims the firm only scrapes images from public websites. Still, questions about Clearview’s cybersecurity have been raised, and validated.
Last week, it came to light that an unknown actor gained “unauthorized access” to the facial recognition firm’s entire client list. The hacker’s location and motivations are unknown, but if the individual is found, it’s likely they will be indicted under the Computer Fraud and Abuse Act (CFAA), Tor Ekeland, Clearview’s legal representative, said in a phone call.
The CFAA is a federal statute used to prosecute computer hackers. Enacted two years after the Apple Macintosh hit the shelves, and before the web, the law prohibits accessing a computer without permission, as well as the unauthorized deletion, alteration or blocking of privately stored data.
Ekeland rose to prominence in the past decade as an outspoken critic of the CFAA. He’s called the law vague and problematic, and said it could be used as a cudgel to stifle political speech.
A former corporate lawyer specializing in securities law, Ekeland has made a career defending controversial computer criminals. His first client was Andrew “Weev” Auernheimer, a self-described neo-Nazi troll, whom Ekeland took on pro bono. Auernheimer exploited a flaw in AT&T’s security to collect users’ personal information left exposed on a public website.
Wired dubbed Ekeland, a reformed alcoholic and former experimental theater producer, “The Troll’s Lawyer.”
The heart of Ekeland’s defense of Auernheimer was a constitutional principle: The CFAA’s language is so broad, and has been amended so frequently, that it fails to meet the reasonable standard of defining what’s prohibited.
Auernheimer was sentenced to more than three years in prison, but his conviction was overturned on appeal, though the CFAA went unamended. In the years since, Ekeland has become the go-to lawyer for hackers indicted under this ill-defined law.
He represented Matthew Keys, a former Reuters social media editor accused of helping Anonymous hackers access the Los Angeles Times website without permission. He has spoken publicly in Julian Assange’s defense, writing that “prosecuting Assange for a computer crime sidesteps the elephant in the room: This is the prosecution of a publisher of information of interest and importance to the public about our government.”
“Unfortunately, data breaches are part of life in the 21st century,” Ekeland told the Daily Beast following the Clearview hack. Nothing if not consistent, Ekeland remains sympathetic to hackers, even when he’s standing on the other side of the conversation.
Likewise, he defends Clearview’s controversial business practice of scraping images from social media and third-party vendor websites as protected under the First Amendment. It’s all publicly available information, he said.
“I mean, first of all, the common law has never recognized a right to privacy for your face,” Ekeland said. “It’s kind of a bizarre argument to make because [your face is the] most public thing out there.”
Ekeland’s philosophical consistency sidelines the facts. Clearview’s security protocols are untested, unregulated and now proven unreliable. The company houses 3 billion images to feed an AI-powered surveillance tool used by corporate and state actors; now its client list has been published, showing once more that it can’t be trusted to maintain user privacy. It even has Congress’s alarm bells ringing.
Still, Ekeland is willing to defend his client, as he has defended many controversial figures before. What follows is an edited and condensed transcript of our phone conversation.
What’s your beef with the CFAA?
Well, the central offense of the CFAA is that it doesn’t define its central prohibitions, right? It doesn’t define what constitutes unauthorized access to a computer or what exceeding authorized access to a protected computer is. Saying that exceeding authorized access to a protected computer is exceeding your permission, that’s a circular definition.
When you have squishy statutory terms that are left to the courts to determine, you get conflicting interpretations made by judges who know nothing about computer science but think they do.
There are clerks who think they understand networked computers because they’ve got a smartphone or they type on a computer. Those definitions often shock people who work professionally in information security. One of the biggest problems is that people reach for physical-world concepts to come up with definitions for digital networks, but the analogy breaks down when it comes to security. Our common law didn’t evolve based on a series of networked nodes whose primary purpose was the transmission of communications and the search and retrieval of information.
[These definitions] are highly contingent on people’s perceptions and paradigms. And it’s just not at all black and white. It’s obvious, if I adopt your emotional and moral and legal presuppositions, that there’s a conceptual, definitional incoherence in the central prohibitions of the CFAA.
The problem with that is it seems to criminalize de minimis behavior. It can be read to criminalize temporarily deleting a letter from a Word document. So it’s this really draconian statute with really draconian penalties that quite often are not proportionate to the harm inflicted.
Like Keys’ case, where he was alleged to have provided login information to access the Tribune Media Company websites. In my opinion, Tribune was totally negligent in their infrastructure and security. The federal government came up with, like, a five-year initial sentencing recommendation. He got sentenced to two years for what started out as an employment dispute. He would’ve been better off taking a lead pipe and beating the shit out of his boss. He would’ve faced less time.
It’s a law that was first written in 1984 and has been modified a little bit since, but it predates Facebook and Google, it predates smartphones, and it’s very antiquated.
You’ve made the argument in the past that it could be used as a political tool to control and silence speech.
It certainly can be used for that.
Am I right in my assessment that the Clearview hacker would be charged under the CFAA?
Oh yeah. In my opinion, he committed a straight-up felony under it. He had [unauthorized] access to a protected computer. But here’s the key difference, and I think this is where there’s some confusion for you.
The argument is that the public should have access to public data on the internet. Right? In the Weev case, he downloaded 114,000 email addresses from a publicly facing server without any security on it. That, in my opinion, is completely legal, because the public has a First Amendment right to access public information on the public internet that’s not marked private.
If the government came in and told you what books you could check out of the library or what art you could look at in the art museum, you’d say that’s censorship. But distinguish that from somebody hacking in and getting my private data. The argument that information should be free and that the public should have public access to public data is not an argument that says there should be no privacy.
You could argue the public has a right to know who is on the client list of Clearview. Right?
Why? Submit the argument, make the argument, what’s the argument?
Because they’ve scraped three billion images from millions of people. And we don’t exactly know how they are being used or stored.
Do you know exactly which images Clearview indexed? They just indexed the public internet. You have complete access to the same dataset that Clearview indexed.
Your argument is that because you don’t like a particular use of information, public information on the public internet should be restricted. Do you know what that propositional structure is? It’s censorship. Censorship is when the state comes in and dictates whether or not somebody can read or hear something or use information because the state deems it morally or legally harmful in some way.
This is Weev’s case. And I’ve been consistent across the board in every one of my fucking cases. Now people say that we can’t use, say, photos that are publicly posted on the internet.
I mean, first of all, the common law has never recognized a right to privacy in your face. To argue that your face is private is kind of a bizarre argument to make because [it’s] really the most public thing out there. A lot of the people now making arguments about privacy in terms of faces were silent on the issue of revenge porn or non-consensual sexual images of women. What they said was that the women had no property rights and they had no privacy rights and their recourse was the fucking copyright law, thanks to CDA [Section] 230, [which reduces platform liability for what’s posted online]. So all these people who are now all of a sudden hot to trot, ‘Oh my gosh, faces are private,’ could give a shit when women’s lives were destroyed by revenge porn.
A right to privacy in your face has never existed in the law. That’s a new thing that people are making up now. I get the right to privacy in our sexuality, because we all fucking wear clothes, right? But that goes back centuries. So the logic is really fucked up and skewed here.
You’ve said in the past that Google could be prosecuted under the CFAA. As the law exists and as it’s interpreted, Clearview probably could be, too?
Oh, that was a risk case. And that’s what I fought against. Have you read hiQ v. LinkedIn? Essentially what hiQ stands for is: you’ve got a First Amendment right to access public information on the public internet. It’s different if that information is marked private and you bypass privacy restrictions. But Clearview doesn’t do that. I think the CFAA issue is dead, honestly, for Clearview, unless the Ninth Circuit is wrong in its reasoning in hiQ v. LinkedIn. [Clearview claims to only scrape data from public web pages.] So you’re back to the fundamental paradigm of what gives the state the right, or anyone the right, to determine access to a public library or a public art museum based on the fact that they think the use of that information is harmful.
There is no case law that recognizes a biometric exception to First Amendment protections. What’s going to stop the state, once it starts with [putting limits on accessing] biometric information, from deciding that it wants to regulate speech in other areas outside of the recognized exceptions to the First Amendment, which are speech that constitutes criminal conduct, fraud, defamation, obscenity? It’s a lot more complicated than all these people wandering around making up privacy rights out of their ass that they haven’t theorized, that they haven’t reconciled with the First Amendment, and that are based on facts of computer functionality that they don’t understand.
Clearview is accessing public information, but it’s not clear what it’s doing with it. It’s building a tool that could be used for surveillance that could eventually infringe on people’s rights. That’s the concern.
First of all, there’s a really intense surveillance tool called Facebook. Facebook is a surveillance tool that all government intelligence and surveillance agencies would love to create. And now the private sector has created it for them.
It’s surveilling you 24/7, reading the barometric pressure from your phone and finding out what floor of the building you’re on. You know, if you’re talking on a smartphone, you’re already under surveillance.
So now you’re telling me that it’s an act of surveillance to index and search photos from the past, and then provide a URL link to that public entity. We’re not talking about surveillance here, because all Clearview is doing is taking the public title frame, the public image and the public URL. So now explain to me how that constitutes surveillance. If you’re walking down the street looking at people, is that surveillance?
Well, it’s what they’re building. It’s AI that has people worried.
Clarify that concept, because that’s an incoherent statement to me and that’s a conclusory statement. When you say it’s what they’re building, what do you think they’re building?
I couldn’t say for sure. That’s why Congress has asked Clearview to clarify its business.
That’s the problem. People have this feeling that they can’t articulate, that they can’t coherently present. And maybe that feeling is right. The problem is that when you act on these kinds of feelings and you start moving into the law, you get all sorts of unintended consequences.
You’ve said in the past one flaw of the CFAA is that its punishments are not proportional to the actual harm done. Could you state what the harm is of a hacker breaking into Clearview?
What the harm is? Again, I’m not going to make a statement on that at this point in time. It’s a particular case, but I do stand by the principle that punishment should be proportional to the harm. Absolutely. One good example of this is how the U.K. treats its hackers, as opposed to the U.S. Are you familiar with Mustafa [Al-Bassam]?
No, sorry.
Look it up some time. He was part of the LulzSec and Anonymous hacker groups in 2010, 2011. They hacked Rupert Murdoch’s News of the World and ran his obituary. They hacked all sorts of stuff. So Mustafa, [who was arrested back then,] is finishing up his computer science Ph.D., working on selling his second startup and is a productive member of society. If he’d been prosecuted in the United States for his crimes, he’d still be in jail.
I’ll end with a line that I say all the time: if the United States had been prosecuting computer crimes in the 1970s the way it does now, there would be no Microsoft and there would be no Apple, because all these tech bros started out hacking. Bill Gates put a virus out on a corporate computer network when he was a teenager. I’ve yet to meet a good coder who didn’t learn by taking systems apart.
There’s also an economic argument: these prosecutions are bad for the economy. Finally, I think most of these cases should just be civil, unless you’re messing with a hospital or taking out a power grid or something that actually causes harm. This puritanical desire to punish runs rampant in the U.S. judicial system. It’s unfortunate, and it’s why we’ve got more people incarcerated per capita than almost any nation in the world, including China, Russia and all those oppressive regimes.
I need to go into a meeting now. You can follow up with me later; I’ll talk about this until the cows come home.