If you’ve seen Tom Cruise performing magic tricks on TikTok, Donald Trump playing Saul Goodman in an odd rendition of Better Call Saul, or Barack Obama giving a public address in which he insults high-profile celebrities, you have witnessed a deepfake.
These clever uses of technology can alter voices and faces, and more generally manipulate digital media, to make it seem as though someone said or did something they never did.
But how does this manipulated media work, should you be concerned about it, and is it likely to remain an issue well into the future? We spoke to the technologist Sam Gregory to find out.
What is a deepfake and where does the name come from?
Deepfake is a term used to describe the manipulation of video, sound, images or other digital content. In other words, it’s “a way in which you make someone look like they said or did something they never did,” says Gregory.
However, the word ‘deepfake’ is too closely tied to the idea of face swapping to work well as a catch-all phrase. Because of this, there is a lot of support for using different terms to describe the technology.
“A lot of folks prefer to use terms like ‘synthetic media’ because it allows us to include the face swap, the lip-sync dubbing, the ability to make someone’s face or body move based on another source. It even allows us to include the ability to create events and faces that never existed,” says Gregory.
The origin of the word is another reason many would like an alternative name for these forms of digital manipulation. Back in 2017, a Reddit user started using these tools to place the faces of actresses and celebrities into pornographic videos.
“We’re sort of trapped in the first word that was used by the creator who was making these non-consensual sexual images. The Reddit user is called ‘deepfakes’,” says Gregory.
Are deepfakes a new issue or something that has been around for a while?
It is only in recent years that attention on deepfakes has widened. But how long has the technology actually been available, and is it a completely new issue of the modern digital age?
“Media manipulation, like the ability to edit video and photos or even manipulate them, we’ve had that for a long time. The advances that allowed us to have the deep learning side of this are really in the last eight or nine years,” says Gregory.
“The ability to do this, to use these algorithms that learn from the data you feed them in order to then build, for example, a fake face. That’s really a technical advance of the last 10 years.”
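Gregory doesn’t spell out the architecture, but the earliest face-swap deepfakes were built on a simple autoencoder trick: one encoder shared between two people, with a separate decoder for each identity. The sketch below (in PyTorch, with toy 64×64 face crops and illustrative layer sizes, all assumptions rather than anything from the article) shows the idea: after training on many crops of person A and person B, encoding a frame of A and decoding it with B’s decoder produces B’s face wearing A’s expression and pose.

```python
# Minimal sketch of the shared-encoder / two-decoder face-swap idea.
# Sizes and training details are illustrative only, not a production recipe.
import torch
import torch.nn as nn

IMG = 64  # assumed 64x64 RGB face crops

def encoder():
    return nn.Sequential(
        nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
        nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
        nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
        nn.Flatten(),
        nn.Linear(128 * 8 * 8, 256),                             # latent code
    )

def decoder():
    return nn.Sequential(
        nn.Linear(256, 128 * 8 * 8), nn.ReLU(),
        nn.Unflatten(1, (128, 8, 8)),
        nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 8 -> 16
        nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
        nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
    )

enc = encoder()
dec_a, dec_b = decoder(), decoder()  # one decoder per identity, one shared encoder
opt = torch.optim.Adam(
    list(enc.parameters()) + list(dec_a.parameters()) + list(dec_b.parameters()),
    lr=1e-4,
)
loss_fn = nn.L1Loss()

def train_step(faces_a, faces_b):
    """One step: each decoder learns to reconstruct its own person's faces."""
    opt.zero_grad()
    loss = loss_fn(dec_a(enc(faces_a)), faces_a) + loss_fn(dec_b(enc(faces_b)), faces_b)
    loss.backward()
    opt.step()
    return loss.item()

# The "swap": encode a frame of person A, decode it with person B's decoder.
with torch.no_grad():
    frame_of_a = torch.rand(1, 3, IMG, IMG)   # stand-in for a real video frame
    swapped = dec_b(enc(frame_of_a))          # B's face with A's pose and expression
```

Because the encoder only ever learns pose, expression and lighting that both people share, while each decoder memorises one person’s appearance, swapping decoders transplants a face; this is also why, as Gregory notes below, convincing results need a lot of training data and computing power.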
Gregory believes that, while this technology has been available for a while, it has drawn more public interest because of its origins in pornography, where it received a large amount of publicity.
However, he also believes that this technology and its potential dangers are not completely understood by the general public.
“There’s a lot of hype. There were headlines in recent years about how deepfakes will disrupt elections globally. That hype has taken away from what the real threats are. So they’ve been in the public eye for five years or so, but often in this rather distorted way that doesn’t capture the real threats that exist.”
Can deepfakes be used positively?
For the most part, the coverage of deepfakes or synthetic media has focused on the negative side of the technology, but that doesn’t mean it can’t be used positively.
Sam Gregory believes there are five key ways that the technology can be used positively:
- Deepfakes can be used to create powerful satire and parody. Realistic renditions of politicians and celebrities, if clearly labelled as deepfake content, could make satire more immersive.
- While the most realistic deepfakes require powerful software and skill, there are also plenty of apps for the average person to use. These can overlay a celebrity’s face onto yours, turn you into a realistic version of your friend, or make it look like someone is singing a song.
- Another positive use of deepfakes is in film and TV. With deepfake technology, films wouldn’t need to be dubbed over. Instead, actors could be edited so they appear to speak each language a film is released in. However, this would likely be expensive and is currently not a realistic option.
- Gregory also believes deepfakes could lead to a new way of searching for content. Instead of Google or Wikipedia telling you the information you were after, a realistic avatar could talk to you. In a video-forward world, this could be a promising alternative.
- The final positive use Gregory lists is to protect people. “There was a movie called Welcome To Chechnya that featured very vulnerable LGBTQ activists in Chechnya, and they recruited volunteers outside the country, creating deepfake faces using volunteers, and they swapped them with the vulnerable activists in the film in Chechnya,” says Gregory.
Do deepfakes take a lot of energy and time to create?
Recent technologies, most notably those in the cryptocurrency space, have faced backlash for the amount of energy they require to run. But what about deepfakes: do they consume exorbitant amounts of energy?
“They’re not taking up the energy of a small country like Bitcoin does or something like that. But they are computationally intensive, and that’s been one of the big races from companies in video to develop the computational power that you can use to do this,” says Gregory.
“It’s not cheap to do a good deepfake. As we look ahead to what the threats are, it’s worth remembering that at least for the moment, to do the really good face-swap deepfake is still computationally intensive. They take some investment in expensive computers to do well.”
About our expert, Sam Gregory
Sam Gregory is a technologist and the program director of Witness, a group which helps people use video to protect human rights. He is an expert in new forms of digital misinformation, in particular deepfakes.
Read more:
- What is Bitcoin and can it be a viable currency?
- Can NFTs solve their massive carbon footprint problem?
- Deepfakes: the fight against this dangerous use of AI