In its finding, the ICO detailed how Clearview AI failed to inform people in the UK that it was collecting their images from the web and social media to create a global online database that could be used for facial recognition; failed to have a lawful reason for collecting people’s information; failed to have a process in place to stop the data being retained indefinitely; and failed to meet the data protection standards required for biometric data under the General Data Protection Regulation. The ICO also found the company asked for additional personal information, including photos, when members of the public asked whether they were on its database.

The privacy watchdog also concluded that, given the high number of UK internet and social media users, Clearview AI’s database is “likely to include a substantial amount of data” from UK residents. While the company no longer offers services to UK organisations, it continues to do so in other countries, which may include using the personal data of UK residents.

“Clearview AI Inc has collected multiple images of people all over the world, including in the UK, from a variety of websites and social media platforms, creating a database with more than 20 billion images,” UK Information Commissioner John Edwards said. “The company not only enables identification of those people, but effectively monitors their behaviour and offers it as a commercial service. That is unacceptable. That is why we have acted to protect people in the UK by both fining the company and issuing an enforcement notice.

“People expect that their personal information will be respected, regardless of where in the world their data is being used. That is why global companies need international enforcement.”

The enforcement action follows a joint investigation the ICO carried out with the Office of the Australian Information Commissioner (OAIC).
The investigation into Clearview AI by the two privacy watchdogs has been underway since 2020 and was conducted in accordance with the Australian Privacy Act and the UK Data Protection Act. The pair investigated the company’s use of people’s images, its data scraping from the internet, and its use of biometric data for facial recognition.

“This international cooperation is essential to protect people’s privacy rights in 2022. That means working with regulators in other countries, as we did in this case with our Australian colleagues,” Edwards said.

Earlier this month, in a landmark settlement, Clearview AI agreed to cease sales to private companies and individuals in the United States, and also agreed to stop making its database available to the Illinois state government and local police departments for five years. The New York-based company, however, continues to offer its services to other law enforcement and federal agencies, and to government contractors, outside of Illinois.
