Deepfakes and synthetic media: How to spot them, verify claims, and protect yourself

by Mr Moonlight

Why This Matters Now

In early 2024, a finance worker at a multinational firm transferred $25 million to fraudsters after joining a video call with what appeared to be the company's chief financial officer and several colleagues. Every other person on the call was a deepfake. In the UK, synthetic audio clips of politicians have circulated during election periods, and fraudsters increasingly use voice-cloning technology to impersonate family members in distress, convincing victims to send money within minutes.

Deepfakes (synthetic media created or manipulated using artificial intelligence) are no longer a future threat. They are here, they are convincing, and they are being weaponised for fraud, disinformation, harassment, and political manipulation. The technology to create them is now accessible to anyone with a laptop and an internet connection.

What you can do in five minutes: Learn three quick checks that work on most fakes, bookmark two free verification tools, and understand the one question that stops most scams cold.

What Are Deepfakes?

Deepfakes are synthetic media (images, audio, or video) created or significantly altered using machine learning techniques, particularly deep neural networks. The term combines "deep learning" and "fake."

Three main categories:

  1. Face swaps and reenactment: Replacing one person's face with another's in video, or puppeteering a person's facial expressions and head movements using someone else as the driver.
  2. Voice cloning: Synthesising someone's voice from audio samples, sometimes requiring only seconds of source material.
  3. Fully synthetic media: Generating entirely artificial images, audio, or video of people who may or may not exist.

Not every manipulated clip is a deepfake: "cheapfakes" (media altered with simple editing tools rather than AI) remain common, and modern generative AI can also be used to enhance low-quality fakes of either kind to make them more convincing. The line between deepfakes and other forms of manipulation is increasingly blurred.

Why Deepfakes Are Hard to Detect

The technology is improving faster than detection methods. Research from the University of California, Berkeley shows that human ability to detect deepfakes is poor and declining as quality improves. Even experts struggle with high-quality fakes.

Key challenges:

  • Compression and platform processing: When videos are uploaded to social media, they are compressed and re-encoded, which destroys many of the forensic artefacts that detection tools rely on. A fake that is detectable in the original file may become undetectable after being shared on WhatsApp or Twitter.
  • Adversarial adaptation: Deepfake creators actively test their outputs against detection tools and refine them until they pass.
  • Context collapse: A real video can be presented with false context (wrong date, location, or claim), which is technically not a deepfake but has the same effect.
  • Confirmation bias: People are more likely to believe content that aligns with their existing beliefs, making them less critical of fakes that "confirm" what they already think.

According to the Reuters Institute, professional fact-checkers now spend more time verifying the context and claims around media than analysing the media files themselves.

The Five-Minute Verification Checklist

When you encounter a suspicious image, audio clip, or video, especially if it is viral, emotionally charged, or asks you to act urgently, use this checklist:

1. Pause and Check Your Reaction

If it makes you angry, afraid, or excited, stop. Emotional content is designed to bypass critical thinking. Take a breath before sharing or acting.

2. Do a Reverse Image Search

  • For images: Right-click (or long-press on mobile) and select "Search image with Google" or use TinEye.
  • For video: Take a screenshot of a distinctive frame and search that image (a scripted version of this step is sketched below).
  • What to look for: Earlier versions, different contexts, or evidence the media is old or from a different event.
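
If you check videos often, this frame grab can be scripted. Here is a minimal sketch in Python using OpenCV (the opencv-python package); the file name suspicious_clip.mp4 is a placeholder, and you would run the saved stills through Google Images or TinEye yourself:

    # Extract roughly one frame per second from a video so the stills
    # can be run through a reverse image search.
    import cv2

    cap = cv2.VideoCapture("suspicious_clip.mp4")  # placeholder file name
    fps = cap.get(cv2.CAP_PROP_FPS) or 30  # fall back if FPS is unreadable

    frame_index = 0
    saved = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break  # end of video
        if frame_index % int(fps) == 0:  # roughly one frame per second
            cv2.imwrite(f"frame_{saved:03d}.jpg", frame)
            saved += 1
        frame_index += 1

    cap.release()
    print(f"Saved {saved} stills for reverse image search")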

3. Check the Source

  • Who posted it first? Trace the content back to its earliest appearance. Be suspicious of accounts created recently, accounts with few followers, or accounts that only post divisive content.
  • Is there a credible original source? News organisations, official accounts, or verified eyewitnesses are more reliable than anonymous accounts.

4. Look for Verification by Trusted Fact-Checkers

Search for the claim or key details on trusted fact-checking sites such as Full Fact or BBC Verify.

5. Use Free Detection Tools (With Caution)

Important: No tool is foolproof. A "pass" from a detection tool does not guarantee authenticity. Use tools as one input among many.

Red Flags: What to Look For

Visual (Images and Video)

  • Unnatural facial movements: Odd blinking patterns, lip-sync errors, or facial expressions that do not match the emotion or context.
  • Inconsistent lighting or shadows: Faces lit differently from the environment, or shadows that do not match the light source.
  • Blurring or distortion around edges: Particularly around the hairline, jawline, or where the face meets the background.
  • Unnatural eye gaze or reflections: Eyes that do not track properly, or reflections in glasses that do not match the scene.
  • Temporal inconsistencies: Jewellery, clothing, or background objects that appear or disappear between frames.
  • Unusual skin texture: Overly smooth or waxy skin, or skin that looks airbrushed.

Audio

  • Robotic or flat intonation: Lack of natural emotional variation or emphasis.
  • Breathing and mouth sounds: Real speech includes breaths, lip smacks, and pauses. Synthetic audio often lacks these or adds them unnaturally.
  • Background noise inconsistencies: Abrupt changes in ambient sound, or a voice that sounds "too clean" for the environment.
  • Repetitive patterns: Identical pronunciation or pacing of repeated words or phrases.

Context and Metadata

  • Missing or stripped metadata: Original files usually carry information about when and where they were created. Treat missing metadata as a weak signal rather than proof: most social platforms strip it on upload.
  • Inconsistent details: Clothing, weather, or landmarks that do not match the claimed time or location.
  • Too perfect: Media that is unusually high quality, perfectly framed, or captures an unlikely moment may be staged or synthetic.

Common Scam Scripts Using Deepfakes

1. The Family Emergency (Voice Cloning)

The scam: You receive a call or voice message from someone who sounds like a family member (child, grandchild, sibling) saying they are in trouble (arrested, in an accident, stranded abroad) and need money urgently. The voice is cloned from social media videos or voicemail greetings.

Protection:

  • Hang up and call the person directly on a number you already have.
  • Agree on a family code word in advance that only real family members know.
  • Be suspicious of any request for urgent money transfer, especially via cryptocurrency, gift cards, or wire transfer.

2. The CEO Fraud (Video or Audio)

The scam: An employee receives a video call or message from someone who appears to be a senior executive, requesting an urgent wire transfer, password reset, or confidential information.

Protection:

  • Verify requests through a separate communication channel (call the person directly, or walk to their office).
  • Implement multi-person approval for financial transactions above a threshold.
  • Train staff to recognise that urgency and secrecy are red flags.

3. The Romantic Scam (Synthetic Profiles)

The scam: A person on a dating app or social media uses AI-generated profile photos and voice messages to build a relationship, then requests money for an emergency or investment opportunity.

Protection:

  • Reverse image search profile photos.
  • Request a live video call early in the relationship. Be suspicious if they always have excuses.
  • Never send money to someone you have not met in person.

4. The Celebrity Endorsement (Fake Video)

The scam: A deepfake video shows a celebrity or trusted public figure endorsing a product, investment scheme, or cryptocurrency. The video is shared on social media or in targeted ads.

Protection:

  • Check the celebrity's official social media or website for confirmation.
  • Be sceptical of investment opportunities that seem too good to be true.
  • Look for the red flags listed above.

5. The Blackmail or Sextortion (Fake Intimate Images)

The scam: A victim receives a message claiming the sender has intimate images or videos of them (often created by face-swapping the victim's face onto explicit content) and demanding payment to prevent distribution.

Protection:

  • Do not pay. Paying does not guarantee deletion and confirms you as a target.
  • Report to the police (this is a crime under UK law).
  • Document everything and seek support from organisations like the Revenge Porn Helpline.

Verification Workflows: What Journalists Do

Professional fact-checkers and investigative journalists use structured workflows to verify media. You can adapt these steps:

Step 1: Establish Provenance

  • Who created this? Find the original source.
  • When was it created? Check metadata, or use tools like Forensically to analyse JPEG metadata and error level analysis (a minimal metadata-reading sketch follows this list).
  • Where was it created? Use geolocation techniques (landmarks, language on signs, weather, sun angle).
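
To make the metadata check concrete, here is a minimal Python sketch using Pillow to dump whatever EXIF data a file still carries; suspect_photo.jpg is a placeholder, and an empty result is common (not damning) after social media re-encoding:

    # Print any EXIF metadata that survives in an image file.
    # Absence of metadata is a weak signal: platforms strip it on upload.
    from PIL import Image, ExifTags

    img = Image.open("suspect_photo.jpg")  # placeholder file name
    exif = img.getexif()

    if not exif:
        print("No EXIF metadata found (common after re-encoding)")
    else:
        for tag_id, value in exif.items():
            tag_name = ExifTags.TAGS.get(tag_id, tag_id)  # readable names
            print(f"{tag_name}: {value}")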

Step 2: Assess the Source

  • Is the source credible? Check their history, other posts, and whether they have a track record of accuracy.
  • What is their motive? Are they trying to sell something, promote a political view, or generate engagement?

Step 3: Corroborate with Other Evidence

  • Are there other angles or sources? Real events are usually captured by multiple people.
  • Do the details match? Cross-reference claims with news reports, official statements, or databases.

Step 4: Consult Experts

  • For technical analysis: Forensic analysts can examine compression artefacts, lighting, and other technical details.
  • For context: Subject matter experts can assess whether the content is plausible.

Step 5: Publish Findings Transparently

  • Show your work: Explain what you checked and why you reached your conclusion.
  • Acknowledge uncertainty: If you cannot definitively verify or debunk something, say so.

The First Draft Essential Guide and Bellingcat's online investigation toolkit provide detailed workflows and tools.

Advice for Public Figures and Small Businesses

For Public Figures (Politicians, Executives, Activists)

You are a high-value target. Deepfakes of you can be used for reputational damage, fraud, or disinformation.

Proactive steps:

  • Watermark or sign official content: Use digital signatures or content-provenance standards such as C2PA to authenticate your real media (a minimal signing sketch follows this list).
  • Monitor for impersonation: Use tools like Google Alerts or social media monitoring services to detect fake content early.
  • Prepare a response plan: Have a crisis communication plan ready for when (not if) a deepfake of you appears.
  • Educate your audience: Regularly remind followers to verify content and provide an official channel for confirmation.
  • Limit available training data: Be mindful of how much high-quality audio and video of you is publicly available. Attackers need samples to create convincing fakes.
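
To illustrate the signing idea, here is a minimal sketch using the Python cryptography package and an Ed25519 key pair. It shows the principle only; a real deployment would adopt an established provenance standard such as C2PA (discussed later in this article) rather than a hand-rolled scheme, and official_statement.mp4 is a placeholder:

    # Sign a media file so anyone holding your published public key can
    # check that a clip really came from your official channel.
    # Illustrative only: production systems should use a standard such
    # as C2PA rather than this hand-rolled approach.
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey,
    )

    private_key = Ed25519PrivateKey.generate()  # keep this secret
    public_key = private_key.public_key()       # publish this widely

    media_bytes = open("official_statement.mp4", "rb").read()  # placeholder
    signature = private_key.sign(media_bytes)

    # Verification raises InvalidSignature if the file was altered:
    public_key.verify(signature, media_bytes)
    print("Signature verified: file matches the signed original")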

For Small Businesses

You are vulnerable to CEO fraud and reputational attacks.

Proactive steps:

  • Implement verification protocols: Require multi-channel confirmation for financial transactions or sensitive requests (e.g., a video call request must be confirmed by phone).
  • Train staff: Conduct regular training on deepfake awareness and social engineering tactics.
  • Use multi-factor authentication: Protect accounts with MFA to prevent account takeover that could be used to distribute fake content.
  • Monitor your brand: Set up alerts for your company name and key executives to detect fake content early.
  • Have a response plan: Know who will handle a deepfake incident, how you will communicate with customers and staff, and where you will report it.

Current UK Law

Deepfakes can violate several existing laws, depending on how they are used:

  • Fraud Act 2006: Using deepfakes to deceive for financial gain is fraud.
  • Malicious Communications Act 1988 / Communications Act 2003: Sending deepfakes to cause distress or anxiety is an offence.
  • Protection from Harassment Act 1997: Repeated use of deepfakes to harass someone is illegal.
  • Sexual Offences Act 2003: Creating or distributing intimate image deepfakes without consent may constitute a sexual offence.
  • Online Safety Act 2023: Platforms have duties to remove illegal content, including some deepfakes, and to protect users from harm.

As of 2024, the UK government has consulted on specific deepfake legislation, but the law is evolving. The Law Commission and Ofcom are key bodies shaping this area.

Where to Report

  • Police: Report fraud, harassment, or intimate image abuse to Action Fraud or your local police.
  • Platform: Report the content to the social media platform or website hosting it. Most platforms have policies against synthetic media used for harm.
  • Fact-checkers: Alert organisations like Full Fact or BBC Verify if the deepfake is spreading disinformation.
  • Revenge Porn Helpline: For intimate image deepfakes, contact the Revenge Porn Helpline (0345 6000 459).
  • National Cyber Security Centre (NCSC): For incidents affecting businesses or critical infrastructure, report to the NCSC.

Evidence Preservation

If you are a victim or witness:

  • Do not delete the content. Take screenshots and save copies.
  • Document everything: URLs, timestamps, usernames, and any communication with the perpetrator.
  • Seek legal advice: A solicitor can advise on civil remedies (e.g., injunctions) and criminal options.

Protecting Yourself: Practical Steps

Personal Security

  1. Limit your digital footprint: Be mindful of what photos, videos, and audio recordings of you are publicly available. Attackers need training data.
  2. Use privacy settings: Lock down social media profiles and limit who can see your content.
  3. Educate family and friends: Make sure your close contacts know about voice cloning scams and agree on a verification method (e.g., a code word).
  4. Monitor your online presence: Set up Google Alerts for your name and periodically search for your image.

Organisational Security

  1. Implement verification protocols: Require out-of-band confirmation for sensitive requests.
  2. Train staff regularly: Include deepfake awareness in security training.
  3. Use technical controls: Multi-factor authentication, email authentication (SPF, DKIM, DMARC), and endpoint security (a quick DNS check for SPF and DMARC is sketched after this list).
  4. Have an incident response plan: Know how you will respond to a deepfake incident.
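
As a quick illustration of the email authentication point, the sketch below uses Python with the dnspython package to check whether a domain publishes SPF and DMARC records; example.com is a placeholder for your own domain:

    # Check that a domain publishes SPF and DMARC records, which make it
    # harder for fraudsters to spoof email from your addresses.
    import dns.resolver  # pip install dnspython

    def txt_records(name):
        try:
            answers = dns.resolver.resolve(name, "TXT")
        except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
            return []
        return [b"".join(r.strings).decode() for r in answers]

    domain = "example.com"  # placeholder: use your own domain
    spf = [r for r in txt_records(domain) if r.startswith("v=spf1")]
    dmarc = [r for r in txt_records("_dmarc." + domain)
             if r.startswith("v=DMARC1")]

    print("SPF:", spf or "MISSING")
    print("DMARC:", dmarc or "MISSING")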

Societal Resilience

  1. Support quality journalism: Subscribe to and share content from credible news organisations that verify their reporting.
  2. Promote media literacy: Encourage critical thinking and verification skills in your community, workplace, and family.
  3. Advocate for regulation: Support policies that require transparency in AI-generated content and hold platforms accountable.

The Future: What to Expect

Detection will remain difficult. As generative AI improves, the gap between creation and detection will likely widen. We cannot rely on technology alone to solve this problem.

Provenance and authentication will become critical. Initiatives like the Coalition for Content Provenance and Authenticity (C2PA) are developing standards to embed cryptographic proof of origin and editing history in media files. Major camera manufacturers and platforms are beginning to adopt these standards.

Regulation is coming. The EU's AI Act includes provisions on synthetic media transparency. The UK is likely to follow with similar requirements for labelling AI-generated content.

The threat will evolve. Expect deepfakes to be used in new ways: real-time video manipulation in live calls, synthetic "evidence" in legal proceedings, and personalised scams at scale.

Human verification will matter more. In a world where media can be faked, trusted relationships, out-of-band verification, and critical thinking become our strongest defences.

Summary: Your Action Plan

Today:

  • Agree a code word with close family for verifying urgent calls or messages.
  • Bookmark a reverse image search tool (TinEye or Google Images) and a fact-checker (Full Fact or BBC Verify).

This week:

  • Train yourself to pause before sharing emotional content.
  • Set up Google Alerts for your name and business.
  • If you run a business, brief your team on CEO fraud and verification protocols.

This month:

  • Conduct a deepfake risk assessment for yourself or your organisation.
  • Review and update your incident response plan.
  • Educate your family, friends, or colleagues about deepfake scams.

Ongoing:

  • Stay informed about new deepfake techniques and detection methods.
  • Verify before you trust, especially for high-stakes decisions.
  • Advocate for transparency, accountability, and media literacy.

Deepfakes are a serious threat, but they are not insurmountable. With awareness, critical thinking, and the right tools, you can protect yourself, your family, and your organisation.

