What are deepfakes and how can you identify them?
Scrolling through your feed, you see a video of your favourite celebrity saying something shocking. But something feels off. The voice sounds right and the face looks familiar, so what’s wrong with it?
By Virgin Media Edit
- Published
- 29 Aug 2025
What are deepfakes?
Deepfakes are fake videos, images, or audio clips created by artificial intelligence (AI). Basically, computer software learns how to swap one person's face onto another person's body in a video. Think advanced face-swap filters, but way more convincing.
While deepfakes can be used for entertainment, such as in films or video games, they also carry serious risks. Scammers can use deepfakes to impersonate individuals for fraud, spread false information, and damage personal or professional reputations.
By the numbers:
2019: The AI firm Deeptrace found 15,000 deepfake videos online
2023: At least 500,000 video and audio deepfakes were shared on social media
Around 90% of deepfakes target women
Basic deepfakes can now be made in minutes with apps
How are deepfakes made?
Deepfakes aren’t just edited or photoshopped content. They use AI and machine learning to create highly realistic videos and audio that manipulate real people’s faces and voices.
Creating a face-swap video involves a few AI steps. First, you run thousands of face shots of the two people through an AI algorithm called an encoder. This encoder finds and learns similarities between the two faces.
Then, one AI “decoder” is trained for each person. To swap faces, the algorithm feeds images through the other person’s decoder, reconstructing one face with the expressions of the other. This process is repeated for every frame to create a realistic video.
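The shared-encoder, per-person-decoder pipeline described above can be sketched in a few lines of code. This is a toy illustration only: random matrices stand in for the deep neural networks that real tools train on thousands of images, and names like `decoder_a` are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for trained networks: one encoder shared by both people,
# and one decoder per person. Real deepfake software trains deep
# convolutional networks; these random matrices just show the data flow.
FACE_DIM, LATENT_DIM = 64, 8

encoder = rng.normal(size=(LATENT_DIM, FACE_DIM))    # shared encoder
decoder_a = rng.normal(size=(FACE_DIM, LATENT_DIM))  # "trained" on person A
decoder_b = rng.normal(size=(FACE_DIM, LATENT_DIM))  # "trained" on person B

def encode(face):
    """Compress a face into a small latent code (expression, pose, lighting)."""
    return encoder @ face

def swap_face(face_of_a):
    """Encode person A's face, then reconstruct it with person B's decoder.

    Because the encoder is shared, the latent code captures the expression;
    person B's decoder paints B's identity onto that expression. A real
    pipeline repeats this for every frame of the video.
    """
    return decoder_b @ encode(face_of_a)

frame = rng.normal(size=FACE_DIM)   # one frame of person A
swapped = swap_face(frame)
print(swapped.shape)                # the output frame keeps the input's shape
```

The key design point is the *shared* encoder: because both faces are compressed into the same latent space, feeding one person's code into the other person's decoder transfers the expression while swapping the identity.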
Types of deepfakes you need to know about
Deepfakes can come in many forms, each using AI in different ways to alter or create audio and video. Here are the main types you're most likely to come across:
Face-swap deepfakes: These are the most common type of deepfake, sticking someone's face on another person's body. The results can range from funny, obviously fake clips to highly convincing videos that make you question if it’s real. Face-swaps are often used in memes, parody videos, or, unfortunately, in harmful contexts like fake celebrity content.
Voice cloning deepfakes: AI can now copy someone's voice so precisely you'd swear it's really them speaking. Scammers have used this in phone calls to trick people into transferring money or revealing sensitive information. Fake audio clips can also be spread online to make it seem like someone said something they never actually did, damaging reputations and trust.
Lip-sync deepfakes: These manipulate someone's lip movements to match new audio, making it look as if they said specific words. This technique is often used to create fake speeches, interviews, or misleading political clips. Even if the person’s real voice isn’t used, the visuals can be enough to fool viewers into believing it’s genuine.
Fully synthetic deepfakes: Instead of altering an existing photo or video, these generate completely new people from scratch using AI. The result can be a realistic-looking face or even a full video of a person who doesn’t actually exist. While sometimes used harmlessly (like in gaming or advertising), synthetic deepfakes can also be used to create fake social media accounts or spread misinformation under the guise of a “real” person.
Why can deepfakes be dangerous?
While deepfakes can be used creatively in art and the entertainment industry, they can also be used in harmful ways:
Misinformation and political manipulation: Fake videos of world leaders or public figures are used to sway opinion, spread propaganda, or create confusion.
Fraud and identity theft: Criminals impersonate executives, employees, or private individuals to access sensitive data like bank accounts or company records.
Reputation damage and blackmail: The most common harmful use is non-consensual deepfake sexual content. Deepfakes can also place victims in compromising or illegal scenarios to extort, harass, or ruin reputations.
False evidence and stock manipulation: Fabricated videos or audio clips can be used to mislead legal proceedings or influence a company’s market value.
7 quick ways to spot a deepfake
Deepfakes can be incredibly realistic, but most still leave behind small “glitches” that reveal they aren’t genuine. Here are the main signs to look out for:
Look at the eyes: Watch for eyes that blink too frequently, not enough, or in a robotic way. Pupils may also fail to react naturally to changes in light.
Check the skin: The face might look overly smooth compared to the neck or hands, show unnatural colour differences, or have mismatched ageing.
Watch the mouth: Words and lip movements should sync naturally. Look for lips that lag behind the audio or mouth shapes that don’t match the sounds being spoken.
Listen closely: Deepfake voices often sound too polished, with no quirks or hesitation. Listen for robotic tones, odd breathing, or accents that change mid-sentence. Clear audio on a grainy video (or the opposite) is suspicious.
Verify the source: Be wary of anonymous uploads, “new” accounts sharing only one viral clip, or content originating from untrustworthy sites. Cross-reference with BBC, Sky News or other reliable sources.
Lighting errors: Shadows may be missing or cast in the wrong direction, and sometimes the face looks lit differently from the rest of the body.
Hair and clothing: Hair may appear stiff or unnaturally animated, clothing patterns can distort, and jewellery may glitch or vanish.
Two more checks: screenshot a frame and reverse-search it on Google Images (if the image shows up elsewhere in a different context, that's a red flag), and see whether the same footage has been published anywhere else online.
Are deepfakes legal in the UK?
While some deepfakes are harmless or even creative, it’s important to understand the different ways these AI-generated videos and images may break the law.
Key examples of illegal use of deepfakes include:
Intimate image abuse: It is illegal to share, threaten to share, or create intimate photos or videos of someone without their consent, including deepfake images or videos.
Child sexual abuse material: Creating, sharing, or possessing indecent images or videos of anyone under 18 is illegal. This applies to both real and digitally altered deepfakes.
Hate crimes: Using deepfakes to threaten, intimidate, or incite hatred against individuals or groups is a criminal offence.
Fraud & false communications: Deepfakes can be used to trick people into giving money, personal information, or access to accounts. Sending false videos impersonating someone with the intent to cause serious emotional or physical harm is illegal.
Terrorism & extremism: Creating or sharing deepfakes that promote or glorify terrorism is illegal and should be reported to the authorities.
Stalking, harassment & blackmail: Using deepfakes to make repeated threats, or to extort money, sexual favours, or other actions from someone, is a crime.
What to do if you spot a deepfake
Knowing how to respond to a deepfake can help prevent the spread of misinformation and protect yourself and others.
Don’t share it: Your first instinct might be to share it with a caption like “Is this real?” Don't. Even sharing to debunk gives fake content more reach.
Look at their features: Examine the person’s hands and face closely. Do their eyes match other photos? Are the fingers correct in number and appearance?
Verify with fact-checkers: Sites like Full Fact or BBC Reality Check can confirm whether content is genuine or misleading.
Report to the platform: Use the platform’s reporting tools for fake content. Include any relevant details, such as URLs, usernames, or screenshots, to help moderators review the content.
Report illegal content: In the UK, you can report illegal deepfakes online via Police UK. Details such as the URL, usernames, and the platform or social media site you were using can help when reporting. You can report fraud to Action Fraud, the UK’s national reporting centre for fraud and cybercrime.
How Virgin Media is helping you stay safe online
Deepfakes are becoming easier to create and harder to spot, but you’re not powerless. By watching for unnatural eye movements, mismatched skin tones, inconsistent audio, and verifying sources, you can protect yourself and others from misleading content.
Virgin Media takes your online security seriously and offers built-in tools to help you stay protected from scams, misinformation, and harmful content.
With our superfast, reliable broadband, you can quickly fact-check suspicious videos and access trusted sources to confirm whether content is genuine.
If you’re a Virgin Media broadband customer, Essential Security helps protect you from viruses, phishing sites, and unsafe websites. It alerts you if you try to access potentially dangerous content, keeping you safer while browsing.
Not with us yet? Explore our latest broadband deals, all with Essential Security included at no extra cost, and enjoy a safer, smarter online experience.
Browse the Virgin Media range
Virgin Media services are only available in eligible Virgin Media network areas. All of the products on this page are subject to survey, network capacity and a credit check.