What are deepfakes – and how are they being used by scammers?
If you’ve spent any time scrolling on social media, you’ve probably come across a deepfake. You may have seen Unreal Keanu Reeves on TikTok or Miquela on Instagram. Perhaps you watched Kim Kardashian rowing with her neighbour Idris Elba on ITV’s Deep Fake Neighbour Wars.
A deepfake is a piece of digital content – usually video or audio – that has been generated using artificial intelligence and mimics a person’s likeness or voice. Deepfakes are becoming increasingly hard to tell apart from the real thing: today’s machine learning algorithms can manipulate images, audio and video to produce convincingly realistic fakes.
The technology holds promise for various uses, such as creating digital doubles in films or providing customised support to students. But it has sparked major concerns around misinformation, identity theft and the erosion of trust in the digital information landscape.
Face-swapping technology, which has been around for more than a decade, enables users to replace one person’s face with another’s. It can create the illusion of someone saying or doing something they never did.
That kind of deepfake is what you may have seen on TV and social media, but it has also been used to carry out fraud: in one scam in China, face-swapping technology was used to convince a victim to hand over $600,000 during what they thought was a video call with a friend.
Another form of AI technology – and one that poses a particular threat to upcoming elections – is the audio deepfake, or voice clone. These systems take audio clips of someone speaking and generate a voice that sounds just like the person’s own – one that can “say” whatever it is programmed to.
Two days before Slovakia’s parliamentary elections in 2023, a faked audio clip of Michal Šimečka, the leader of the liberal Progressive Slovakia party, circulated on social media. In an apparent attempt to sway the vote, the clip falsely suggested that the politician had been buying votes from the country’s Roma minority.
Audio deepfakes have also been used in scams in which fraudsters obtain recordings of people’s family members, replicate their voices and make hoax calls asking for money. Cloned children’s voices have even been used in fake kidnapping scams.
A third type of AI technology, synthetic media generation, can create entirely artificial content – pictures, videos, music or audio – that never existed in reality, opening the door to fabricated scenarios and events. Deepfake images of nonexistent people, for example, have been used to create fake accounts that were then deployed in information operations or to spread disinformation.
Even more sophisticated scams have used AI to fake entire video meetings. In Hong Kong, a finance worker at a multinational company was tricked into paying out $25m after fraudsters staged a fake video call in which the company’s chief financial officer and other colleagues were impersonated by deepfakes.
Older examples of AI-generated content were fairly easy to spot due to asymmetries, mismatched lip movements and blurry features. But the technology is growing more sophisticated and is now capable of creating believable imitations of people in real time.
When you come across content that seems real but dubious, a good rule of thumb is to fact-check the reported information, seek out the original source and look for corroboration from authoritative sources. If something appears suspicious or misleading, you can always report it to our tipline.
Reporter: Francesca Visser
Big Tech editor: Jasper Jackson
Deputy editors: Chrissie Giles and Katie Mark
Editor: Franz Wild
Production: Emily Goddard
Fact checker: Ed Siddons