How repressive regimes use facial recognition technology
Forms of facial recognition have been around since the middle of the last century, starting in the 1960s when American mathematician Woody Bledsoe got a computer to analyse the distances between facial features in photos and then try to match them to other images.
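The principle behind Bledsoe's approach, reducing a face to a handful of measurements between features and then finding the closest stored record, can be sketched roughly as follows. The names and numbers here are invented purely for illustration, not drawn from any real system or dataset:

```python
import math

# Hypothetical gallery: each identity is stored as a tuple of measured
# distances between facial landmarks (e.g. eye-to-eye span, nose-to-mouth
# gap, jaw width), in arbitrary units. All values are illustrative.
KNOWN_FACES = {
    "person_a": (62.0, 33.5, 118.0),
    "person_b": (58.5, 36.0, 110.5),
    "person_c": (65.0, 31.0, 125.0),
}

def euclidean(u, v):
    """Straight-line distance between two measurement vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def best_match(probe, gallery):
    """Return the stored identity whose measurements lie closest to the probe."""
    return min(gallery, key=lambda name: euclidean(probe, gallery[name]))

# Measurements taken from a new photo, closest to person_b's record.
probe = (58.0, 36.2, 111.0)
print(best_match(probe, KNOWN_FACES))  # person_b
```

Modern systems replace hand-measured distances with feature vectors produced by neural networks, but the matching step, finding the nearest stored record to a probe, is conceptually the same.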
But only in recent years has the technology become sophisticated enough to rapidly identify people going about their lives with high accuracy. Coupled with the rise of surveillance via CCTV and the growing number of cameras trained on the streets, it has become theoretically possible to identify almost anyone caught on video.
The rollout of facial recognition has proved controversial in many democracies. In the US, the ACLU has campaigned “against the growing dangers of this unregulated surveillance technology”. In 2023 it was revealed that US police had run nearly 1m searches against a database compiled by Clearview AI, which had scraped billions of images from social media and other sources without user permission.
In the UK, privacy campaigners have raised concerns about government plans to extend the use of live facial recognition technology, as well as to allow police to run searches against the images of 50m driving licence holders.
And it is not just governments that can use facial recognition. Increased computing power and falling costs have made facial recognition tools available to the public, though these lack access to networks of surveillance cameras.
But the use of facial recognition by authoritarian regimes is particularly worrying.
Perhaps the most extreme example is China, which not only operates the most extensive domestic facial recognition system but is also the biggest exporter of similar technology. It uses its huge network of cameras, and the systems connected to them, for everything from shaming citizens who wear sleepwear in the street to tracking members of the oppressed Uyghur minority.
Russia, too, has embraced facial recognition. The technology, in some cases trained with the help of gig workers around the world, as our latest investigation reveals, has been used extensively to target those opposing Vladimir Putin’s harsh rule. Many activists and protesters have been summoned by police after being identified on camera.
Leaks from the Kremlin Leaks project show that Putin’s own office is working on a secret project to extend and link up surveillance across Russia, using facial recognition to ensure the state can watch and identify anyone who challenges it.
As the technology becomes more powerful, with greater computing power, more widespread surveillance systems and AI algorithms trained on ever more data, the ability to identify people via any camera system is likely to spread further still. That is, unless democracies and the people in them decide where the limits on its use should lie, and enforce them.
Written by: Jasper Jackson
Deputy editor: Katie Mark
Editor: Franz Wild
Production editor: Frankie Goodway
Fact checker: Somesh Jha
Our reporting on Big Tech is funded by Open Society Foundations. None of our funders have any influence over our editorial decisions or output.