At the beginning of last year, concerns about facial recognition technology increased with the appearance of the start-up Clearview AI. The company was problematic for two main reasons. First, for the size of its database, which the company claimed held over three billion facial images. Second, for its business model, which was based mainly on providing police forces with access to that database. A video on the company’s home page states: “our mission is to support law enforcement to make where we live, work, and play a safer place”. The reality of that focus is now becoming clear. According to an investigation by BuzzFeed News:
A controversial facial recognition tool designed for policing has been quietly deployed across the country with little to no public oversight. According to reporting and data reviewed by BuzzFeed News, more than 7,000 individuals from nearly 2,000 public agencies nationwide have used Clearview AI to search through millions of Americans’ faces, looking for people, including Black Lives Matter protesters, Capitol insurrectionists, petty criminals, and their own friends and family members.
An article in MIT Technology Review revealed that many police officers in New York have been trying out the system, something the NYPD failed to disclose when asked.
Outside the US, Swedish police have been using the system, but without prior authorization. This has led the Swedish Authority for Privacy Protection to rule that the Swedish Police Authority processed personal data in breach of the Swedish Criminal Data Act. Finland’s National Bureau of Investigation also tested Clearview AI last year, using it in an attempt to identify possible victims of sexual abuse. However, a police news release noted that: “The alleged incident took place during the processing of personal data in a service for which information security or compliance with data protection legislation may not have been ensured in advance in a sufficient manner.”
Elsewhere in the EU, police forces are building an interconnected network of facial recognition databases. Although there is no indication that Clearview AI’s technology will be used, something similar will be required in order to search through the large number of images. In Latin America, politicians in Buenos Aires, Brasilia and Uruguay want governments to legalize the use of facial recognition for surveillance purposes. Meanwhile, in Russia, access to Moscow’s large-scale facial recognition database is not restricted to the police: for around $200, anyone can forward a photo of an individual and receive information about that person’s movements in Moscow over the previous month, a service that generally operates illegally.
No wonder, then, that there are growing calls for bans or at least stricter controls on the use of facial recognition for surveillance purposes. There has already been one small victory in the UK, where the civil rights organization Liberty won a court case against the police use of facial recognition:
In a judgment handed down today, the Court of Appeal agreed with Liberty’s submissions, on behalf of Cardiff resident Ed Bridges, 37, and found South Wales Police’s use of facial recognition technology breaches privacy rights, data protection laws and equality laws.
The judgment means the police force leading the use of facial recognition on UK streets must halt its long-running trial.
The specific grounds for the court’s decision were that the police had failed to verify that the facial recognition software was free of unacceptable biases based on race or sex – a well-known problem with such systems. Other successes around the EU are listed by the new ReclaimYourFace campaign challenging biometric mass surveillance. These include:
stopping the use of facial recognition technology in French schools; calling for the Data Protection Authority to investigate the use of facial recognition by the Hellenic police; celebrating the City of Prague for refusing to introduce facial recognition technologies in public spaces; stopping an unlawful deployment of biometric surveillance in the Italian city of Como; and crowdsourcing a comprehensive mapping of all live facial recognition cameras in the city of Belgrade, Serbia.
Many digital rights groups believe that a broader ban is needed on harmful applications of facial recognition. In January of this year, 61 civil society organizations sent an open letter to the European Commission demanding “red lines for the applications of AI that threaten fundamental rights”, including facial recognition. Last month, 116 Members of the European Parliament wrote to the European Commission in support of the letter.
A worldwide ban on the use of facial recognition systems has been called for by Amnesty International. Its Ban the Scan campaign has started in New York City, and will expand to focus on the use of facial recognition in other parts of the world. Although Amnesty International hopes that its campaign will be global, the main impetus to regulate facial recognition is likely to come from the EU. The existence of the GDPR makes restrictions inevitable, since facial data is highly personal, and therefore covered by the region’s strict privacy legislation. However, as Politico points out, that raises an interesting issue. The EU is keen to cooperate with President Biden on a range of tech issues, and to coordinate policy moves where possible. The EU’s emphasis on privacy may clash with the more relaxed approach of the US to the use of facial recognition systems, particularly by the authorities. It’s yet another area where major cultural differences between the US and Europe are likely to make otherwise straightforward collaboration more difficult.
Featured image by Hans Hillewaert.