How to spot deepfake videos — 15 signs to watch for
Deepfake videos could have serious implications during the 2020 election season. Learn why.
[Image: A sample of dataset images used in the Deepfake Detection Challenge.]
Deepfake technology can be used to create convincing but false video content. Deepfake videos are often designed to spread misinformation online.
For instance, you might view a deepfake video that appears to show a world leader saying things they never actually said. That could lead to “fake news” that stirs emotion or sways public opinion.
Fake videos have featured subjects like Donald Trump, Barack Obama, and Mark Zuckerberg.
There is heightened concern that deepfake videos could have serious implications during the 2020 election season. People rely on the internet for information, and manipulated videos could potentially influence what they think and how they vote.
That’s why it’s a good idea to know how to spot deepfake videos. It’s not always easy, but here’s help.
In this article you’ll learn how deepfakes work and how the technology is being used for spreading misinformation and other malicious purposes. You’ll also find out how to spot 15 telltale characteristics of deepfakes.
The payoff? It could help protect you against being duped by this evolving technology.
What is a deepfake?
The word “deepfake” combines the concept of deep learning with something that is fake. Deepfakes are media created with artificial intelligence: doctored images and sounds put together with machine-learning algorithms.
Deepfake technology manipulates media by creating people that don’t exist, or by making it appear that real people are saying and doing things they didn’t say or do.
The term first became popular in 2017, after a Reddit user who called himself “deepfakes” shared doctored pornographic videos. How? He swapped celebrities’ faces onto other people’s bodies using Google’s open-source deep-learning software.
Audio deepfakes are another form of deception. Here’s how they work: deepfake machine-learning and voice-synthesis technology creates what are known as “voice skins” or “voice clones” that let someone pose as a prominent figure. An audio deepfake scam is designed to make you believe the voice on the other end of the line is someone you know, like your boss or a client, so you’ll take an action, such as sending money.
How do deepfakes work?
To spot deepfakes, it helps to know how they work.
One method uses a generative adversarial network (GAN) to generate faces. A GAN pits two algorithms against each other: a generator produces fake images, while a discriminator learns to tell them apart from real ones. Each round of training pushes the generator to produce fakes that look more and more like the real thing.
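To make that concrete, here is a heavily simplified sketch of a GAN training loop in PyTorch. It uses a toy two-dimensional “real” distribution instead of face photos, and the layer sizes, learning rates, and names are illustrative assumptions rather than a real face-generation pipeline.

```python
# Minimal GAN training loop (illustrative sketch, not a real face generator).
# Assumes PyTorch is installed; "real" data here is just a toy 2-D Gaussian.
import torch
import torch.nn as nn

latent_dim, data_dim = 8, 2

# Generator: turns random noise into fake samples.
generator = nn.Sequential(
    nn.Linear(latent_dim, 32), nn.ReLU(),
    nn.Linear(32, data_dim),
)

# Discriminator: scores how "real" a sample looks (1 = real, 0 = fake).
discriminator = nn.Sequential(
    nn.Linear(data_dim, 32), nn.ReLU(),
    nn.Linear(32, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

for step in range(2000):
    real = torch.randn(64, data_dim) * 0.5 + 2.0   # stand-in for real face data
    noise = torch.randn(64, latent_dim)
    fake = generator(noise)

    # 1) Train the discriminator to separate real from fake.
    d_opt.zero_grad()
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    d_opt.step()

    # 2) Train the generator to fool the discriminator.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    g_opt.step()
```

The key point for spotting deepfakes: the generator only gets as good as the discriminator forces it to be, which is why subtle flaws often survive.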
Face-swapping and face-replacement technology relies on a second approach: artificial intelligence (AI) algorithms known as encoders. Thousands of face shots of two people are run through a shared encoder, which learns the features the two faces have in common. A decoder, a second algorithm trained separately for each person, then reconstructs the faces; feeding one person’s encoded face into the other person’s decoder superimposes that person’s likeness onto the original footage.
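Here is a similarly simplified sketch of that shared-encoder, two-decoder idea, again assuming PyTorch. The random tensors stand in for aligned face crops of two people, and every size and name below is an illustrative assumption.

```python
# Sketch of the face-swap autoencoder idea: one shared encoder, one decoder per person.
# Image size and layer widths are illustrative; real systems train on aligned face crops.
import torch
import torch.nn as nn

IMG = 64 * 64 * 3  # flattened 64x64 RGB face crop (assumption)

encoder = nn.Sequential(nn.Linear(IMG, 512), nn.ReLU(), nn.Linear(512, 128))
decoder_a = nn.Sequential(nn.Linear(128, 512), nn.ReLU(), nn.Linear(512, IMG), nn.Sigmoid())
decoder_b = nn.Sequential(nn.Linear(128, 512), nn.ReLU(), nn.Linear(512, IMG), nn.Sigmoid())

loss_fn = nn.MSELoss()
opt = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder_a.parameters()) + list(decoder_b.parameters()),
    lr=1e-3,
)

faces_a = torch.rand(32, IMG)  # stand-ins for person A's face crops
faces_b = torch.rand(32, IMG)  # stand-ins for person B's face crops

for step in range(1000):
    opt.zero_grad()
    # Each decoder learns to rebuild its own person from the shared encoding.
    loss = loss_fn(decoder_a(encoder(faces_a)), faces_a) + \
           loss_fn(decoder_b(encoder(faces_b)), faces_b)
    loss.backward()
    opt.step()

# The swap: encode person A's face, then decode it with person B's decoder,
# producing B's likeness driven by A's expression and pose.
swapped = decoder_b(encoder(faces_a[:1]))
```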
What is the purpose of a deepfake?
The purpose of a deepfake is to use fake content to convince viewers and listeners that something happened when it didn’t.
Deepfakes are often used to spread misinformation and for other malicious purposes. Here’s a partial list:
- Phishing scams
- Data breaches
- Hoaxes
- Celebrity pornography
- Reputation smearing
- Election manipulation
- Social engineering
- Automated disinformation attacks
- Identity theft
- Financial fraud
- Blackmail
In that last case, blackmailers claim they’ll release a fake but damaging video of you if you don’t give them money or something else of value.
Can you spot deepfake videos?
If you’re watching a deepfake video or listening to deepfake audio, will you know if the media is real?
As detection technology advances, so does the quality of deepfake technology. But there are ways to spot deepfakes on your own and with some AI help.
15 ways to spot deepfake videos
AI-generated video can be difficult to recognize with your eyes alone, so emerging detection technology helps zero in on characteristics that are harder to see.
Researchers are looking at soft biometrics, such as how a person speaks, along with other qualities in videos to help foster deepfake detection. This focus on soft biometrics is important, because you can watch for these telltale characteristics on your own as well.
Here are 15 things to look for when determining if a video is real or fake.
- Unnatural eye movement. Eye movements that don’t look natural, or a lack of eye movement such as an absence of blinking, are red flags. It’s challenging to replicate the act of blinking in a way that looks natural. It’s also challenging to replicate a real person’s eye movements. That’s because someone’s eyes usually follow the person they’re talking to.
- Unnatural facial expressions. When something doesn’t look right about a face, it could signal facial morphing, which occurs when one image has simply been stitched over another.
- Awkward facial-feature positioning. If someone’s face is pointing one way and their nose is pointing another, you should be skeptical about the video’s authenticity.
- A lack of emotion. You also can spot facial morphing or image stitches if someone’s face doesn’t seem to exhibit the emotion that should go along with what they’re supposedly saying.
- Awkward-looking body or posture. Another sign is if a person’s body shape doesn’t look natural or there is awkward or inconsistent positioning of head and body. This may be one of the easier inconsistencies to spot, because deepfake technology usually focuses on facial features rather than the whole body.
- Unnatural body movement. If someone looks distorted or off when they turn to the side or move their head, or their movements are jerky and disjointed from one frame to the next, you should suspect the video is fake.
- Unnatural coloring. Abnormal skin tone, discoloration, weird lighting, and misplaced shadows are all signs that what you’re seeing is likely fake.
- Hair that doesn’t look real. You won’t see frizzy or flyaway hair, because fake images won’t be able to generate these individual characteristics.
- Teeth that don’t look real. Algorithms may not be able to generate individual teeth, so an absence of outlines of individual teeth could be a clue.
- Blurring or misalignment. If the edges of images are blurry or visuals are misaligned — for example, where someone’s face and neck meet their body — you’ll know something is amiss.
- Inconsistent noise or audio. Deepfake creators usually spend more time on the video images than on the audio. The result can be poor lip-syncing, robotic-sounding voices, strange word pronunciation, digital background noise, or even the absence of audio.
- Images that look unnatural when slowed down. If you watch a video on a screen that’s larger than your smartphone, or use video-editing software that can slow down playback, you can zoom in and examine images more closely. Zooming in on lips, for example, will help you see if the person is really talking or if it’s bad lip-syncing. A short script can also pull out and enlarge individual frames for you; see the sketch after this list.
- Hash discrepancies. Video creators can use a cryptographic algorithm to insert hashes at certain places throughout a video as proof that it’s authentic. If those hashes don’t match when the video is checked, you should suspect it has been manipulated.
- Digital fingerprints. Blockchain technology can also create a digital fingerprint for a video. Here’s how it works: when a video is created, a fingerprint of its content is registered to a ledger that can’t be changed. While not foolproof, this blockchain-based verification can help establish a video’s authenticity. A simple example of the fingerprinting step is sketched after this list.
- Reverse image searches. A search for the original image, or a reverse image search with the help of a computer, can unearth similar videos online and help determine whether an image, audio clip, or video has been altered. While reverse video search isn’t widely available yet, investing in a tool like this could be helpful.
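If you’re comfortable with a little scripting, a few lines of code can do the slowed-down, zoomed-in inspection mentioned above. Here is a minimal sketch using the OpenCV library; the file name, frame-sampling rate, and crop region are placeholder assumptions, not a recommended forensic workflow.

```python
# Step through a clip frame by frame and save zoomed-in crops for closer inspection.
# Assumes OpenCV is installed (pip install opencv-python); file name is a placeholder.
import cv2

cap = cv2.VideoCapture("suspect_clip.mp4")   # hypothetical file name
frame_index = 0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    if frame_index % 10 == 0:                # sample every 10th frame
        h, w = frame.shape[:2]
        # Crop the center of the frame (roughly where a face often sits) and enlarge it.
        crop = frame[h // 4 : 3 * h // 4, w // 4 : 3 * w // 4]
        zoomed = cv2.resize(crop, None, fx=2.0, fy=2.0, interpolation=cv2.INTER_CUBIC)
        cv2.imwrite(f"frame_{frame_index:05d}.png", zoomed)
    frame_index += 1

cap.release()
```

Flipping through the saved frames makes blurring, misaligned edges, and bad lip-syncing much easier to spot than they are at full playback speed.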
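Likewise, the “digital fingerprint” idea boils down to hashing a video’s bytes so that any edit changes the fingerprint. The sketch below shows only that hashing step, using Python’s standard hashlib module; a real provenance system would also register the digest on a tamper-evident ledger, and the file name here is a placeholder.

```python
# Compute a SHA-256 fingerprint of a video file; any edit to the file changes the digest.
# A real provenance system would then register this digest on a tamper-evident ledger.
import hashlib

def video_fingerprint(path: str) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # read in 1 MB chunks
            digest.update(chunk)
    return digest.hexdigest()

print(video_fingerprint("suspect_clip.mp4"))  # hypothetical file name
```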
Using technology to spot deepfakes
Several groups are coming up with ways to foster greater AI transparency and protect people from deepfakes. Here are some of them.
- Twitter and Facebook. Social media platforms like Twitter and Facebook have banned the use of malicious deepfakes.
- Google. Google is working on text-to-speech conversion tools to verify speakers.
- Adobe. Adobe has a system that enables you to attach a sort of signature to your content that specifies the details of its creation. Adobe also is developing a tool to determine if a facial image has been manipulated.
- Researchers at the University of Southern California and University of California, Berkeley. A notable push is being led by university researchers to discover new detection technologies. Using machine-learning technology that examines soft biometrics like facial quirks and how a person speaks, they’ve been able to detect deepfakes with 92 to 96 percent accuracy.
- Deepfake Detection Challenge. The Deepfake Detection Challenge (DFDC) incentivizes deepfake-detection solutions by fostering innovation through collaboration. It shares a dataset of 124,000 videos created with eight facial-modification algorithms.
- Deeptrace. This Amsterdam-based startup firm is developing automated deepfake detection tools to perform background scans of audiovisual media — similar to a deepfake antivirus.
- U.S. Defense Advanced Research Projects Agency. DARPA is funding research to develop automated screening for deepfakes through a program called MediFor, or Media Forensics.
Deepfake moves, countermoves, and you
Here’s the problem. Creators of detection algorithms respond to the latest deepfakes with their own new technology. In turn, the people behind those deepfakes respond to the new detection technology.
While the battle goes back and forth, it’s a good idea to know how to spot deepfakes — at least some of the time — and take steps not to be fooled. It helps if you’re skeptical about video you see on the internet and careful about sending funds until you’re sure the voice on the other end of the line is legit.