France imposes maximum fine on Clearview AI for GDPR violations


Clearview AI, the controversial facial recognition company that scrapes selfies and other personal data off the internet without consent to power an AI-based identity-matching service it sells to law enforcement and others, has been hit with another fine in Europe.

This follows its failure to respond to an order last year from the CNIL, France’s privacy watchdog, to stop its unlawful processing of French citizens’ information and delete their data.

Clearview responded to that order by, well, ignoring the regulator – thereby adding a third GDPR breach (non-cooperation with the regulator) to its earlier tally.

Here is the CNIL’s summary of Clearview’s shortcomings:

  • Unlawful processing of personal data (breach of Article 6 GDPR)
  • Failure to respect individuals’ rights (Articles 12, 15 and 17 GDPR)
  • Lack of cooperation with the CNIL (Article 31 GDPR)

“Clearview AI had two months to comply with the injunctions set out in the formal notice and to justify its compliance to the CNIL. However, it did not provide any response to this formal notice,” the CNIL writes today in a press release announcing the sanction [emphasis the CNIL’s].

“The president of the CNIL therefore decided to refer the matter to the restricted committee, which is responsible for issuing sanctions. On the basis of the information brought to its attention, the restricted committee decided to impose a maximum financial penalty of €20 million, in accordance with Article 83 of the GDPR [General Data Protection Regulation].”

The EU’s GDPR allows penalties of up to 4% of a company’s annual worldwide turnover for the most serious breaches, or €20 million, whichever is greater. The CNIL’s press release makes clear it has imposed the maximum possible amount here.

Whether France will ever see a penny of this money from Clearview, however, remains an open question.

The US-based facial recognition firm has faced a series of sanctions from other data protection agencies across Europe in recent months, including €20 million fines in Italy and Greece, and a smaller penalty in the UK. But it’s unclear whether it has handed over any money to any of these authorities – and they have limited resources (and legal means) to pursue Clearview for payment outside their own borders.

So GDPR sanctions mostly look like a warning to stay away from Europe.

Clearview’s PR agency, LakPR Group, sent us this statement following the CNIL sanction — which it attributed to CEO Hoan Ton-That:

“There is no way to determine whether a person has French nationality from a public photo on the internet alone, and it is therefore impossible to delete data belonging to French residents. Clearview AI only collects publicly available information from the internet, just like any other search engine such as Google, Bing or DuckDuckGo.”

The statement goes on to reiterate Clearview’s previous assertions that it has no establishment in France or the EU, nor does it undertake any activity that “would otherwise mean that it is subject to the GDPR”, as it puts it — adding: “Clearview AI’s database of publicly available images is lawfully collected, just like any other search engine like Google.”

(NB: on paper the GDPR has extraterritorial reach, so its arguments don’t hold water, while its claim that it’s not doing anything that would make it subject to the GDPR looks absurd given it has amassed a database of over 20 billion images worldwide and Europe is, er, part of planet Earth…)

Ton-That’s statement also repeats a claim that features widely in Clearview’s public responses to the stream of regulatory penalties its business is attracting: that it built its facial recognition technology “for the purpose of helping to make communities safer and assisting law enforcement in solving heinous crimes against children, seniors and other victims of unscrupulous acts” — not to profit from the unlawful exploitation of people’s privacy. Not that having a “pure” motive would make any difference to its obligation, under European law, to have a valid legal basis for processing people’s data in the first place.

“We only collect public data from the internet and comply with all standards of privacy and law. I am sorry for the misinterpretation by some in France, where we do not do business, of Clearview AI’s technology. My intentions and those of my company have always been to help communities and their people live better, safer lives,” the statement concludes.

Each time it has been sanctioned by an international regulator, it has done the same thing: deny committing any infringement and deny that the foreign body has any jurisdiction over its business – so its strategy for dealing with its own lawlessness around data processing appears to be simple non-cooperation with regulators outside the United States.

Obviously, this only works if you anticipate that your executives/senior staff will never set foot in the territories where your business has been sanctioned, and you give up selling the sanctioned service to foreign customers. (Last year, Sweden’s data protection watchdog also fined a local police authority for unlawful use of Clearview – so European regulators can act to clamp down on any local demand too, if necessary.)

At home, meanwhile, Clearview has recently run into some legal red lines.

Earlier this year, it agreed to settle a lawsuit that accused it of violating an Illinois law prohibiting the use of individuals’ biometric data without their consent. The settlement saw Clearview agree to certain limits on its ability to sell its software to most US companies – but it still trumpeted the result as a “huge win”, saying it would be able to work around the restrictions by selling its algorithm (rather than access to its database) to private companies in the United States.

Giving regulators the power to order the deletion (or market withdrawal) of algorithms trained on unlawfully processed data looks like an important upgrade to their toolkits if we are to avoid an AI-fueled dystopia.

And it turns out the EU’s incoming AI Act may contain such a power, according to legal analysis of the proposed framework.

The bloc has also more recently proposed an AI liability directive, which it hopes will encourage compliance with the broader AI Act – by linking compliance to a reduced risk that AI model makers, deployers, users and so on can be successfully sued if their products cause a range of harms, including to people’s privacy.
