Let me start by saying this: I truly believe digital technology plays an instrumental role in the way we communicate, learn, and work today. It offers countless helpful tools, apps, and resources, and gives us instant access to any information we need (or want). So much information, in fact, that we can’t always keep up with it.
But there’s also a dark side to the digital revolution: the one in which people use technology for malicious purposes. Deepfakes, to be more specific.
Picture this: You wake up in the morning, grab your smartphone to check your Instagram account, and see a video of your favorite actor. How do you know it’s real?
What are deepfakes?
Also known as synthetic media, deepfakes are pieces of content (audio or video) created, altered, or synthesized with the help of deep learning – a form of artificial intelligence – to make people believe they’re real when they are actually fake.
Oftentimes, these synthetic versions of people’s faces, images, or voices look so realistic that it becomes challenging for the human eye to distinguish what’s real from what’s fake. Simply put, deepfakes can alter our perception of reality and are designed to manipulate us.
If you do a quick search on the Internet for “how to create deepfakes”, you’ll run into various video tutorials with easy-to-follow steps for making your own.
That means that anyone with an Internet connection, some basic technical skills, and a dedicated application can create a deepfake. If you own an iPhone, you can use the Avatarify app to control the face of another person and make videos of people doing things that never happened. According to The Washington Post, this app “has been downloaded more than 6 million times since February alone.”
With the rapid advancements in AI, we’ve reached a point where this technology “is getting powerful enough to make people say things they never said and do things they never did. Anyone can be targeted, and everyone can deny everything,” notes Nina Schick, an expert in synthetic media, cybersecurity, and the geopolitics of technology, in her book “Deepfakes: The Coming Infocalypse”.
With that in mind, here are some key questions that are worth thinking about:
- How do deepfakes impact our knowledge of current reality?
- To what extent do deepfakes play into our confirmation bias?
- Why is synthetic media blurring our reality and challenging our perspective?
- How do we filter the images and videos we see on social media and distinguish the real from the fake?
- Why do they pose a security risk?
Significant advances in artificial intelligence are allowing machines to produce synthetic media that will have broad implications for how we generate content, communicate, and see the world.
Findings from the Europol report show that cybercriminals will leverage AI both as an attack vector and an attack surface. It also warns that “new screening technology will be needed in the future to mitigate the risk of disinformation campaigns and extortion, as well as threats that target AI data sets.”
The rise of deepfakes can wreak havoc on both individuals and society and it’s essential to understand what’s at stake in the current digital landscape.
3 deepfake examples to understand how they work
To get a better grasp of how deepfakes work and why they pose a serious threat to our security, let’s take a closer look at these 3 deepfake examples.
1. Tom Cruise deepfakes that went viral on TikTok
A series of fake videos showing the actor were posted on the social network by the account @deeptomcruise. Chris Ume, the Belgian visual effects artist who created these videos, said they were the product of weeks of work. He also stated that he used “open-source deepfake software, existing editing tools, and his own visual effects expertise” to impersonate the movie star.
To make things look more realistic, the Belgian artist teamed up with the actor Miles Fisher, a Cruise lookalike.
The impersonator Fisher (left) and the deepfake Cruise (right)
Source: The Verge
The laugh, the gestures, the facial expressions – all seem to portray a genuine Tom Cruise when in reality, it’s a fake. The artist told CNET these videos were made strictly for fun and to raise awareness about the worryingly fast development of deepfakes.
Just saw the #deepfakes of Tom Cruise and feel like a monkey looking at a spacecraft pic.twitter.com/CRO5NpEMFa
— Angad Singh Chowdhry (@angadc) February 26, 2021
2. Deepfake of Facebook founder Mark Zuckerberg
This technology is rapidly evolving and creating counterfeit media that raises new concerns about whether what we see on the Internet is real.
With the help of artificial intelligence, two artists, Bill Posters and Daniel Howe, in partnership with an advertising company, created an altered video of Mark Zuckerberg and uploaded it to Instagram.
The deepfake shows the Facebook founder delivering a speech about the power Facebook wields. To enhance credibility, the video features the CBN trademark, making it look like part of a news broadcast.
3. Deepfake of Queen Elizabeth’s Christmas speech
Another example of a deepfake that poses a worrying misinformation threat is a video released by the UK’s Channel 4 television. It presents Her Majesty, “the Queen”, delivering an alternative version of the traditional Christmas message.
This fake video was created by Framestore, a visual effects company, using the voice of a British actress Debra Stephenson to impersonate the real Queen.
Deepfakes don’t impersonate only celebrities, politicians (the Obama fake video), or other influencers on social media. They also target women who become victims of fake pornography campaigns. Altered intimate images or videos are posted on social media without consent, taking a toll on victims and invading their privacy.
According to the research company Sensity AI, between 90% and 95% of online deepfake videos are nonconsensual porn. In the context of the pandemic and increased Internet usage, these stats are even more worrying.
How to detect deepfakes and avoid falling victim to them
There’s no doubt that deepfakes pose a serious problem for both individuals and society.
To better counteract misinformation and fabricated media created with the help of AI, we need to focus more on awareness training programs that teach us how to tackle fake news and spot a deepfake video or image.
Foster your critical thinking skills and develop “an eagle eye” for deepfakes by paying greater attention to details such as:
- Analyze the face, because most deepfakes are linked to facial transformations.
- Look closely to see if you notice unnatural hair or skin colors because deepfakes focus on altering these aspects.
- Check for unnatural body positioning or awkward head placement that could indicate the image or video you’re seeing is fake.
- Listen carefully and see if there’s a robotic-sounding voice because deepfakes focus on modifying the voice of real people.
- Pay attention to the background and look for blurry visuals or anything unnatural or suspicious.
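The manual checklist above can be sketched as a toy scoring routine. To be clear, this is purely illustrative and not a real detector: the check names, weights, and threshold are all hypothetical, chosen only to show how several weak signals might be combined into one verdict.

```python
# Toy "deepfake suspicion" checklist scorer. Every check name and weight
# here is hypothetical -- real detection requires actual media analysis.
CHECKS = {
    "unnatural_face_edges": 0.30,   # face region: most deepfakes alter faces
    "odd_hair_or_skin_tone": 0.20,  # unnatural hair or skin colors
    "awkward_head_position": 0.20,  # odd body/head positioning
    "robotic_voice": 0.15,          # synthetic-sounding audio
    "blurry_background": 0.15,      # blurry or inconsistent background
}

def suspicion_score(observations):
    """Sum the weights of the checks a viewer flagged as suspicious."""
    return sum(w for name, w in CHECKS.items() if observations.get(name))

def verdict(observations, threshold=0.5):
    """Return a label and the combined score for the flagged checks."""
    score = suspicion_score(observations)
    label = "likely deepfake" if score >= threshold else "no strong signal"
    return label, score

label, _ = verdict({"unnatural_face_edges": True,
                    "robotic_voice": True,
                    "blurry_background": True})
print(label)  # likely deepfake
```

The point of the sketch is that no single artifact is conclusive; it is the accumulation of small inconsistencies that should raise suspicion.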
In an interview with Andrew Yang about the future of deepfakes, Nina Schick emphasizes the importance of developing a conceptual framework to help us get a better understanding of what’s going on with this technology. To do that, we need to do a lot of reading and research to know why the information ecosystem is disrupted and how it impacts our lives.
Use specialized detection tools designed to scan and detect fake images, videos, or any type of altered content used in fake news or media manipulation.
To combat disinformation and educate users, Microsoft launched a dedicated tool called the Video Authenticator, which “can analyze a still photo or video to provide a percentage chance, or confidence score, that the media is artificially manipulated”.
The University of Buffalo recently announced a new tool that can automatically detect deepfake photos by analyzing light reflections in the eyes. According to experiments, this tool is 94% effective with portrait-like photos.
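As a rough illustration of the idea behind that tool (not its actual algorithm), one could compare the light-reflection patterns in the two corneas: in a genuine portrait they tend to match, while GAN-generated faces often get them wrong. The binary highlight masks, the IoU similarity measure, and the threshold below are all invented for this sketch.

```python
# Toy sketch: flag a portrait as suspicious when the corneal highlight
# shapes in the two eyes disagree. Masks and threshold are made up.

def iou(mask_a, mask_b):
    """Intersection-over-union of two same-sized binary masks (nested lists)."""
    inter = sum(a and b for row_a, row_b in zip(mask_a, mask_b)
                for a, b in zip(row_a, row_b))
    union = sum(a or b for row_a, row_b in zip(mask_a, mask_b)
                for a, b in zip(row_a, row_b))
    return inter / union if union else 1.0

def eyes_consistent(left_highlight, right_highlight, threshold=0.5):
    """True when the two highlight shapes are similar enough (real-photo-like)."""
    return iou(left_highlight, right_highlight) >= threshold

# Identical highlight shapes in both eyes -> consistent, as in a real photo.
left = [[0, 1, 1], [0, 1, 0]]
right = [[0, 1, 1], [0, 1, 0]]
print(eyes_consistent(left, right))  # True
```

A real system would first locate the corneas in the photo and extract the highlight regions; the comparison step shown here is only the final, simplest part of that pipeline.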
How cybercriminals use synthetic media for cyberattacks
With the shift to remote working and people relying more on video- and audio-based communication, malicious actors are getting creative, using deepfake apps to make fake images or videos and launch social engineering campaigns.
Recently, the FBI issued a warning in which it stated that cybercriminals “almost certainly will leverage synthetic content for cyber and foreign influence operations in the next 12-18 months.”
The Wall Street Journal reported a case in which malicious actors used an AI-generated voice deepfake in a Business Email Compromise (BEC)-style attack, impersonating a British CEO’s voice and demanding a transfer of $240,000. Attackers use AI technology to clone real people’s voices and carry out such attacks.
Deepfakes have become a powerful weapon for cybercriminals, mostly because they spread through online media, reaching millions of people all over the globe at unprecedented speed.
What infosec pros say about the future of deepfakes
When we interviewed Andrei Cotaie and Tiberiu Boros, two infosec professionals from Adobe, we learned how they see deepfakes evolving.
“These fakes are already affecting security systems that rely on face/voice biometric information for user recognition/authentication.”
They also added that “while image generation systems like DeepFake have still a road ahead, it is a matter of time until we are faced with having to come up with countermeasures.”
Take it from an expert like Nina Schick, who believes that:
“This technology is still nascent, but in a few years’ time anyone with a smartphone will be able to produce Hollywood-level special effects at next to no cost, with minimum skill or effort.”
Hany Farid, a computer scientist at the University of California, Berkeley, emphasizes that deepfakes require no special skill or effort to create:
“The threat is the democratization of Hollywood-style technology that can create really compelling fake content,”
Irakli Beridze, Head of the Centre for AI and Robotics at UNICRI, also noted that:
“as AI applications start to make a major real-world impact, it’s becoming clear that this will be a fundamental technology for our future”.
3 key takeaways to ponder
- Foster your critical thinking skills and develop “an eagle eye” for deepfakes by paying greater attention to details
- Deepfakes might be in their early stages, but they are a growing threat, and we need to be more aware of this kind of media manipulation and fake news.
- It’s important to have a conceptual framework to help us get a better understanding of what’s going on with this emerging technology.