
AI and misinformation: how falsehoods spread, and what actually reduces harm

Artificial intelligence has made it easier to produce convincing falsehoods and faster to spread them at scale. But research suggests the most effective responses are often simpler than new technological fixes.

by Mr Moonlight

Artificial intelligence has changed the texture of misinformation, but not its underlying dynamics. Synthetic text, images and video have lowered the cost of producing misleading content, while recommendation systems reward material that provokes strong reactions. For readers, the most useful response is not technical expertise in spotting fakes, but habits that slow down exposure and reduce the chance of passing on claims that are wrong or misleading. The evidence is increasingly clear that small, consistent interventions can cut harm more reliably than sweeping technological fixes.

The immediate actions are mundane rather than dramatic. Pause before sharing. Check whether a claim appears in multiple independent outlets rather than a single viral post. Look for original sources rather than screenshots. These behaviours matter because most misinformation spreads not through hardened belief, but through inattention and speed. Artificial intelligence has accelerated the process, but it has not fundamentally altered human behaviour.

How AI changes the creation of falsehoods

Artificial intelligence has transformed the creation of misleading content by making it faster, cheaper and easier to tailor messages to specific audiences. Large language models can produce plausible news-style text in seconds. Image generators can create photographs that appear authentic at a glance. Video synthesis tools can produce clips that suggest people said or did things that never happened.

This does not mean AI has invented deception. Propaganda, hoaxes and manipulated media long predate machine learning. What has changed is scale and accessibility. Techniques that once required specialist skills are now available through consumer tools. As researchers at institutes such as the Oxford Internet Institute have noted, this widens participation in misinformation production without necessarily increasing its sophistication.

The result is a flood of low-cost content rather than a small number of masterfully crafted fakes. Much of it is sloppy, repetitive or easily debunked. Its impact comes from volume and persistence rather than quality.

Amplification matters more than creation

Evidence suggests that amplification, not creation, is where most harm occurs. Recommendation systems on social platforms are designed to maximise engagement, and emotionally charged or surprising claims tend to perform well under those incentives. AI-generated content fits easily into this environment, but it does not drive it.

Studies by research groups including the Pew Research Center show that false or misleading stories often reach large audiences because users share them quickly, sometimes without reading beyond the headline. Automated systems then pick up those signals and push the content further. Artificial intelligence accelerates this feedback loop by enabling rapid testing of variations until something sticks.
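To see why this loop is hard to interrupt, consider a deliberately crude simulation. The sketch below is an illustration, not a model of any real platform: the share probabilities and the ranking rule are invented, but they capture how engagement-based ranking compounds small differences in emotional pull.

```python
import random

# Toy model of an engagement-driven feedback loop (illustrative only;
# the share probabilities and ranking rule are invented for this sketch,
# not taken from any real platform).
random.seed(1)

posts = [
    {"id": "measured-report", "arousal": 0.2, "shares": 0},
    {"id": "outrage-claim",   "arousal": 0.9, "shares": 0},
]

def rank(posts):
    # Engagement-maximising ranking: prior shares push a post up the feed.
    return sorted(posts, key=lambda p: p["shares"], reverse=True)

for step in range(1000):
    feed = rank(posts)
    for position, post in enumerate(feed):
        # Users mostly see top-ranked posts and are more likely to share
        # emotionally charged ones -- the two effects compound over time.
        visibility = 1.0 / (position + 1)
        if random.random() < visibility * post["arousal"] * 0.1:
            post["shares"] += 1

for post in rank(posts):
    print(post["id"], post["shares"])
```

Run repeatedly, the higher-arousal post ends up with several times the shares of the measured one, even though both started from zero.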

Coordination also plays a role. Groups can use AI tools to generate large volumes of similar messages, giving the appearance of widespread support or concern. This can influence journalists, moderators and ordinary users who take popularity as a proxy for credibility.

Belief is shaped by identity and context

Belief is the final and most complex stage. Research consistently shows that people are more likely to accept information that aligns with their existing views or group identities. Artificial intelligence does not change this psychological reality. A convincing deepfake may attract attention, but belief depends on trust in the source and resonance with prior assumptions.

This is why sensational demonstrations of fake videos often overstate their impact. In real-world settings, most people encounter misleading content in passing rather than as a carefully staged deception. The harm comes from repetition and normalisation rather than a single moment of persuasion.

What actually reduces harm

The strongest evidence points to modest interventions that reduce speed and reach rather than attempting perfect detection. Labelling content as disputed or synthetic can help when it is timely and clear, particularly for users who are undecided. Research from bodies such as the Alan Turing Institute suggests labels work best as prompts to think, not as definitive judgments.

Friction is another effective tool. Small delays, prompts that ask users to read an article before sharing, or reminders to consider accuracy have been shown in multiple peer-reviewed studies to reduce the spread of false information. These measures do not rely on users becoming experts. They work by interrupting automatic behaviour.
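What that friction looks like in practice can be sketched in a few lines. The function names and thresholds below are hypothetical, not any platform's actual interface; the point is simply that the share completes only after a prompt has interrupted the automatic gesture.

```python
from dataclasses import dataclass
import time

@dataclass
class ShareRequest:
    url: str
    opened_article: bool      # did the user open the link at all?
    seconds_on_page: float    # rough dwell time before hitting share

def friction_checks(req: ShareRequest) -> list[str]:
    """Return the prompts to show before the share completes.

    Thresholds here are assumptions for illustration; real systems tune
    them against measured effects on the sharing of false content.
    """
    prompts = []
    if not req.opened_article:
        prompts.append("You haven't opened this article yet. Read it before sharing?")
    elif req.seconds_on_page < 15:
        prompts.append("Quick check: is this headline accurate, as far as you know?")
    return prompts

# The share only goes through once the user has answered every prompt,
# and the small delay itself is part of the intervention.
req = ShareRequest(url="https://example.org/story", opened_article=False, seconds_on_page=0.0)
for prompt in friction_checks(req):
    print(prompt)
    time.sleep(1)  # stand-in for waiting on the user's response
```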

Downranking content that repeatedly proves misleading also reduces exposure without banning speech outright. Transparency reports from major platforms indicate that limiting distribution often has a larger effect than removing individual posts, which can trigger backlash or claims of censorship.
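A minimal sketch of downranking, with invented field names and an assumed penalty rule, makes the contrast with removal clear: the post stays up, but each confirmed flag sharply reduces how widely it is distributed.

```python
from dataclasses import dataclass

@dataclass
class Post:
    id: str
    engagement_score: float   # whatever the ranking model already produces
    misleading_flags: int     # independent fact-check or quality signals

def distribution_score(post: Post) -> float:
    """Downrank rather than remove: each confirmed flag halves reach.

    The halving rule is an assumption for illustration; the point is that
    the post remains visible but competes for far less distribution.
    """
    penalty = 0.5 ** post.misleading_flags
    return post.engagement_score * penalty

feed = [
    Post("verified-report", engagement_score=80.0, misleading_flags=0),
    Post("recycled-hoax",   engagement_score=95.0, misleading_flags=3),
]
for post in sorted(feed, key=distribution_score, reverse=True):
    print(post.id, round(distribution_score(post), 1))
```

Nothing is deleted in this scheme, which is why it tends to attract less backlash than takedowns while still cutting exposure.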

Media literacy remains important, but evidence suggests it works best when focused on specific skills rather than abstract warnings. Teaching people to look for original sources, check dates and recognise emotional manipulation has measurable benefits. Broad campaigns urging scepticism without guidance tend to have weaker effects.

What does not work reliably

Some popular responses have limited evidence behind them. Automated detection systems struggle to keep pace with new forms of content and can generate false positives that undermine trust. Overly aggressive takedowns can drive communities to alternative platforms where moderation is weaker.

Fact-checking after the fact has value, particularly for journalists and researchers, but it rarely reaches everyone who saw the original claim. Corrections also compete with the emotional impact of the initial story. This does not mean fact-checking is futile, but it should not be treated as a silver bullet.

Watermarking and provenance technologies can help establish authenticity at the point of creation, but they depend on widespread adoption and do little to address screenshots, re-uploads or content generated outside compliant systems. They are tools, not solutions.
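The structural limitation is easy to illustrate. The toy scheme below is a simplified stand-in, not the C2PA standard or any real library: content signed at creation verifies cleanly, but a screenshot or re-upload arrives without its manifest, so the most an honest check can report is "unknown", not "fake".

```python
import hmac, hashlib

# Toy provenance scheme: the capture device signs the content bytes.
# Real systems use certificates and richer manifests; this sketch only
# illustrates the structural gap discussed above.
CREATOR_KEY = b"device-secret"  # stands in for a signing credential

def attach_manifest(content: bytes) -> dict:
    signature = hmac.new(CREATOR_KEY, content, hashlib.sha256).hexdigest()
    return {"content": content, "manifest": signature}

def check_provenance(item: dict) -> str:
    manifest = item.get("manifest")
    if manifest is None:
        # Screenshots, re-uploads and non-compliant tools land here:
        # absence of a manifest proves nothing either way.
        return "unknown"
    expected = hmac.new(CREATOR_KEY, item["content"], hashlib.sha256).hexdigest()
    return "verified" if hmac.compare_digest(manifest, expected) else "tampered"

original = attach_manifest(b"photo bytes from a compliant camera")
screenshot = {"content": b"same pixels, re-encoded by a screenshot"}  # manifest lost

print(check_provenance(original))    # verified
print(check_provenance(screenshot))  # unknown
```

Because "unknown" covers both innocent re-uploads and deliberate fakes, provenance signals complement the habits described below rather than replacing them.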

Implications for publishers and platforms

For publishers, AI-driven misinformation raises familiar questions about verification at greater speed. Newsrooms face pressure to respond quickly to viral claims, but the evidence suggests accuracy remains the strongest defence of trust. Clear sourcing, visible corrections and restraint in amplifying unverified material matter more than adopting the latest detection tools.

Platforms face a different challenge. Their systems shape what spreads, and small design choices can have large effects. Research increasingly points towards taking responsibility for distribution rather than investing endlessly in content classification. Slowing virality and reducing incentives for sensationalism address multiple problems at once.

Both groups also face an audience that is tired of warnings and alarmism. Overstating the threat of AI-generated misinformation can be counterproductive, feeding cynicism and disengagement. The data shows that most people do not want to be fooled, but they operate under time pressure and information overload.

Verifying claims in an AI-saturated environment

For readers, verifying information does not require certainty or specialist tools. It involves a sequence of habits. Check whether a claim appears across multiple reputable outlets rather than a single viral source. Look for direct links to original documents, data or recordings. Be cautious of content that provokes strong emotional reactions or frames itself as being suppressed or forbidden. Treat screenshots and clips without context as prompts for further checking rather than evidence.

Artificial intelligence has changed how misinformation looks and how fast it travels, but it has not changed what reduces harm. Slowing down, diversifying sources and designing systems that reward accuracy over outrage remain the most reliable tools. In a landscape crowded with technical fixes and bold promises, the research points back to these unglamorous but effective measures.
