AI and elections: influence operations, deepfakes, and resilience

As generative AI lowers the cost of political manipulation, the most effective defence is not perfect detection but everyday verification habits that slow misinformation before it spreads

by Mr Moonlight

The single most useful response to AI-enabled election interference is behavioural rather than technical. When political content provokes a strong emotional reaction (anger, fear, triumph, or disgust), pause before you share it or act on it. That pause creates space for verification, which remains the most reliable defence even as synthetic media improves.

Start with independence. Ask whether the same claim is being reported by at least two reputable outlets that do original journalism rather than rewrites. Look for a primary source: a full speech, an unedited interview, an official document, a court filing, a regulator notice, or a campaign statement published in its original context. If the content is a clip, find the longer version and check what comes immediately before and after. If those steps are difficult or impossible, assume the content is designed to outrun scrutiny.

These habits matter because AI has shifted the economics of influence. It has not invented political deception, but it has made it cheaper, faster and more scalable, which changes how elections are contested online.

Why AI changes the manipulation landscape

Election interference has always relied on persuasion, repetition and timing. What generative AI adds is speed and volume. A small group can now produce thousands of plausible messages, images or audio clips tailored to different audiences, accents, regions and news cycles. Automated accounts can amplify them until they appear unavoidable. Text generators can flood platforms with near-duplicate posts that look like organic discussion, creating the illusion of consensus.
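To make that "illusion of consensus" concrete, here is a minimal sketch, not a production tool, of how copy-and-paste amplification can be spotted: posts whose wording overlaps almost entirely are unlikely to be independent voices. The example posts, the three-word shingle size and the 0.5 threshold are illustrative assumptions, not values from any real investigation.

```python
# Minimal sketch: flag near-duplicate posts using word-shingle Jaccard similarity.
# All posts and thresholds below are hypothetical illustrations.

from itertools import combinations

def shingles(text: str, n: int = 3) -> set:
    """Return the set of n-word shingles in a post, lowercased."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity of two shingle sets (0 = disjoint, 1 = identical)."""
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

posts = [
    "Shocking footage shows ballots being dumped in the river, share before it is deleted",
    "Shocking footage shows ballots being dumped in the river, share before it is deleted #vote",
    "Lovely turnout at the local polling station this morning",
]

for (i, p1), (j, p2) in combinations(enumerate(posts), 2):
    score = jaccard(shingles(p1), shingles(p2))
    if score > 0.5:  # arbitrary threshold for this sketch
        print(f"Posts {i} and {j} look copy-pasted (similarity {score:.2f})")
```

Real analyses run this kind of comparison across thousands of accounts, but the underlying signal is the same: uniform phrasing is rarely organic.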

The result is rarely a single dramatic deepfake that settles an election. It is a constant background hum of distortion. Claims circulate faster than they can be checked. Context is stripped away. Screenshots and clips replace primary sources. By the time a correction arrives, the narrative has already moved on.

This is why the risk is not limited to outright lies. Misleading edits, selective quotation, fabricated screenshots and synthetic “documents” can all be effective without ever producing a perfect fake video.

Influence operations in practice

Modern influence operations tend to follow a recognisable pattern, even when the tools change.

They begin with seeding. A claim is introduced in a low-accountability environment, often through anonymous accounts, fringe sites or closed messaging groups. The initial version does not need to be persuasive, only provocative.

Next comes laundering. The claim is repackaged into formats that look more credible: a screenshot of a supposed headline, a short video clip, a chart with no source, a quote attributed vaguely to “officials”. AI helps by making this packaging fast and visually convincing.

Amplification follows. Coordinated networks, some automated, some human, push the content into wider feeds. The goal is not universal persuasion but saturation within particular communities, so the topic feels unavoidable.

Then comes capture. Journalists, campaigners and ordinary users respond, sometimes to debunk, sometimes to condemn. The operation succeeds if the claim becomes the frame of discussion, even when contested.

Finally, there is mutation. When challenged, the content shifts form. A video becomes an audio clip. A quote becomes a paraphrase. A screenshot becomes hearsay. Each mutation sheds verifiable detail and keeps the story alive.

Understanding this cycle matters because it explains why simply labelling a single piece of content as false rarely ends the story.

Deepfakes: visible risk, incomplete picture

Deepfakes attract attention because they are vivid and unsettling. Convincingly manipulated video or audio of a public figure can do real harm, particularly if it appears close to polling day, when rebuttal time is short. Cheap voice cloning makes impersonation easier, including robocalls and fake endorsements.

Yet in practice, simpler techniques often travel further. Selective editing, misleading captions, fabricated screenshots and lightly altered images are cheaper, easier to deploy and harder to debunk conclusively. The most effective content often uses AI sparingly to add plausibility rather than relying on a flawless synthetic performance.

This is why the right question for voters is not whether a piece of content is a deepfake in the technical sense. It is whether the underlying claim can be verified independently and placed in its original context.

Microtargeting and fragmented publics

AI also intensifies a trend that predates it: microtargeting. Campaigns and influence operations do not need to persuade everyone. They can tailor messages to narrow groups defined by location, language, interests or grievances. Generative tools make this cheap and responsive, allowing rapid adjustment to local issues and breaking news.

The democratic cost is fragmentation. You may see political content that your neighbours never encounter. A claim can be incendiary within one group and invisible elsewhere. That weakens shared debate and makes accountability harder, because misleading messages can be denied or reframed when challenged publicly.

Private channels amplify this effect. Forwarded messages, direct adverts and closed groups reduce visibility and scrutiny. As a rule, political content that arrives privately, without clear provenance, deserves extra scepticism.

Platform moderation: necessary but insufficient

Social platforms do remove some manipulated media and coordinated inauthentic behaviour. They label certain outlets, restrict some political advertising and invest in detection. These efforts matter, but they are constrained by scale, ambiguity and incentives.

Moderation systems must balance false positives and false negatives. They are vulnerable to gaming. Watermarks and provenance standards may help over time, but they are not universal and can be degraded by editing, translation or re-recording. Crucially, voters often see contested material long before any authoritative judgement is reached.

The practical implication is uncomfortable but clear. Individual verification habits remain essential. Platforms can reduce harm at the margins, but they cannot replace citizen judgment.

How to verify without becoming a forensic expert

Verification does not require specialist tools. It requires discipline.

First, identify the claim precisely. Write it as a sentence. Many manipulative posts rely on vagueness, allowing the impression to linger even when details collapse.

Second, trace it back. Search for the earliest version you can find. Look for the original upload, the first mention, the primary source. If the trail consists only of reposts of reposts, treat that as a warning.

Third, restore context. For video and audio, find the longer version. For images, look for the uncropped frame. For screenshots, look for the live page. If a post does not link to an original artefact, ask why.

Fourth, check time and place. Many viral clips are real footage from a different year or country. Look for dates, weather cues, signage and references that anchor the content. If you can obtain an original file rather than a screenshot, its metadata can also help, as in the sketch after these steps.

Fifth, seek independent confirmation. Two credible sources with different incentives are worth more than dozens repeating the same wording. Uniform phrasing is often a sign of copy-and-paste amplification.

Finally, be wary of urgency. Calls to share “before it is deleted” are a classic manipulation tactic designed to bypass verification.
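For the fourth step, one lightweight technical aid is worth knowing. An original image file, as opposed to a screenshot or platform re-upload, may carry embedded EXIF metadata recording when, and sometimes where, it was captured. The sketch below is a minimal illustration using the Pillow library; the filename is hypothetical, and because most platforms strip this metadata on upload, its absence proves nothing.

```python
# Minimal sketch: list date- and GPS-related EXIF tags from a locally saved image.
# Requires the Pillow package. Absence of metadata is common and not suspicious.

from PIL import Image, ExifTags

def print_date_and_location_tags(path: str) -> None:
    """Print any EXIF tags related to dates or GPS found in an image file."""
    exif = Image.open(path).getexif()
    if not exif:
        print("No EXIF metadata found (typical for screenshots and re-uploads).")
        return
    for tag_id, value in exif.items():
        name = ExifTags.TAGS.get(tag_id, str(tag_id))
        if "Date" in name or "GPS" in name:
            # GPSInfo prints as a pointer here; extracting coordinates
            # requires reading the GPS sub-IFD on recent Pillow versions.
            print(f"{name}: {value}")

# Hypothetical usage with a locally saved original file:
# print_date_and_location_tags("clip_frame_original.jpg")
```

A capture date that contradicts the claimed event is a strong warning sign; a plausible date, on its own, proves little.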

Reporting and escalation

When you encounter suspected manipulated content, use platform reporting tools. That is where rapid action, if any, usually begins. If the content involves impersonation, fraud or threats, escalate beyond the platform. In the UK, that may include reporting fraud and impersonation to Action Fraud and contacting the police via 999 or 101 where appropriate.

For journalists, researchers and campaigners, evidence preservation matters. Save URLs, capture screenshots with timestamps and account details, and note where the content appeared. Manipulative material often disappears once it has achieved reach, then re-emerges in altered form.
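As a minimal sketch of that habit, the snippet below assumes nothing beyond the Python standard library and a hypothetical evidence_log.jsonl file. It records the URL, a UTC timestamp, a note on where the content was seen and a hash of the fetched page, so later edits or deletions can be demonstrated.

```python
# Minimal sketch: append a timestamped, hashed record of a web page to a local log.
# The log filename and example URL are hypothetical.

import hashlib
import json
import urllib.request
from datetime import datetime, timezone

def log_evidence(url: str, seen_on: str, log_path: str = "evidence_log.jsonl") -> dict:
    """Fetch a URL, hash its contents, and append a timestamped record to a log file."""
    with urllib.request.urlopen(url, timeout=30) as response:
        body = response.read()
    record = {
        "url": url,
        "seen_on": seen_on,  # e.g. platform and account name
        "retrieved_at": datetime.now(timezone.utc).isoformat(),
        "sha256": hashlib.sha256(body).hexdigest(),
        "bytes": len(body),
    }
    with open(log_path, "a", encoding="utf-8") as log:
        log.write(json.dumps(record) + "\n")
    return record

# Hypothetical usage:
# log_evidence("https://example.com/suspect-post", seen_on="example platform, @some_account")
```

Screenshots remain essential alongside this, because they preserve how the content actually appeared in a feed.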

Election authorities also publish guidance on reporting concerns about the conduct of elections. Knowing those routes in advance reduces hesitation when time is short.

Uncertainty and the danger of overconfidence

Not every suspicious clip is synthetic. Real footage can be framed to mislead. Genuine mistakes by public figures can be falsely dismissed as deepfakes. Overconfidence cuts both ways.

A useful mental separation is between “I saw this” and “this is true”. Seeing a clip establishes that the clip exists, not that the claim it implies is accurate. The same applies to screenshots, charts and quotation graphics. Treat them as prompts to verify, not as proof.

Media literacy in an AI age

Media literacy is often framed as a school subject, but elections demand it from everyone. In an environment where persuasive content is easy to generate and hard to authenticate, literacy means understanding incentives as well as techniques. Ask who benefits if you believe or share a claim. Ask why it is framed the way it is. Ask what evidence would change your mind.

It also means valuing institutions that correct themselves. No outlet is infallible, but those that publish corrections and show their sourcing provide anchors in a noisy environment. Specialist communities and primary sources such as academic papers, court documents and official statistics often offer a stronger signal than generic explainers.

Resilience is cumulative

No single action will immunise an election against AI-enabled manipulation. Resilience emerges from many small choices made by voters, journalists, platforms and institutions. Slowing down before sharing. Preferring primary sources. Correcting errors publicly. Avoiding the temptation to spread claims that flatter one’s own side.

The most actionable step remains the simplest. Do not pass on political content you cannot verify, especially when it is designed to provoke. That restraint interrupts the very mechanism influence operations rely on: rapid, emotional amplification.

The bottom line

AI has lowered the cost of political manipulation and increased its reach, but it has not abolished the need for persuasion grounded in reality. Elections are not decided by technology alone. They are shaped by how citizens interpret, verify and share information. In an AI-saturated media environment, democratic resilience depends less on perfect detection and more on everyday judgment exercised at scale.
