Who is Clearview AI? The facial recognition startup Clearview AI was recently breached, putting the identities of millions of people at stake. The startup came to the forefront on January 18 thanks to a New York Times article, which revealed that Clearview AI's technology allowed police authorities to match unknown pictures of people against a database of over 3 billion photos scraped from Facebook, Google, Venmo, YouTube, and other sites, revealing the names, addresses, and more of the person in the photo. The company has repeatedly said its search engine can only be accessed by law enforcement agencies and select security professionals as an investigative tool. Until now, apparently.
According to The Daily Beast, Clearview AI was breached just this week. The Daily Beast reported that it had received a notice sent to Clearview's customers claiming that an attacker had "gained unauthorized access" to its customer list, the number of searches each customer had performed, and other information. In the notice itself, Clearview said there was no violation of the company's servers and "no compromise of Clearview's systems or network." The company's lawyer, however, contradicted the statements made to customers.
While publicly contradicting the company's statement to customers, Clearview AI's lawyer Tor Ekeland said in a statement to USA Today: "Security is Clearview's top priority. Unfortunately, data breaches are part of life in the 21st century. Our servers were never accessed. We patched the flaw, and continue to work to strengthen our security." So what's the truth? Were the servers accessed or not?
Clearview AI is meant to be used only by police and other authorities to help identify criminals they have a picture of but cannot name. This can be an incredibly useful and powerful tool for keeping communities safer, but what happens when that database and technology fall outside Clearview AI's regulations and hands? Is it still safe?
On Clearview AI's homepage, the company states: "Clearview searches the open web. Clearview does not and cannot search any private or protected info, including in your private social media accounts." It requires prospective users to request access and says the service is available now for local law enforcement. The problem is that data accessed in a breach can become public, and criminals getting their hands on technology that can match over 3 billion faces to identities cannot be a good thing. Clearview AI's technology is incredibly divisive and will only appear in the media more often now that the company has been breached.