
DeepMind scientist argues no AI system will ever become conscious, calling the assumption a 'fundamental fallacy'

The paper has been downloaded more than 27,000 times and directly challenges the AGI narratives underpinning billions of dollars in AI investments

by Defused News Writer
Photo by Growtika / Unsplash

A senior staff scientist at Google DeepMind has published a paper arguing that no artificial intelligence system, no matter how capable, will ever achieve consciousness, describing the widespread assumption that it could as a specific logical error he calls the "Abstraction Fallacy."

Alexander Lerchner's paper, titled "The Abstraction Fallacy: Why AI Can Simulate But Not Instantiate Consciousness," was first archived on PhilArchive on 8 March and has since been downloaded more than 27,000 times, making it one of the most-read philosophy of mind papers of the year.

The argument targets computational functionalism, the dominant view in AI research that subjective experience emerges from the right computational structure regardless of the underlying physical material.

Lerchner contends this is a category error.

Computation, he argues, is not an intrinsic physical process but a description that humans impose on physical events, mapping continuous voltages and transistor states onto discrete symbols.

That mapping requires what he calls a "mapmaker," an already-conscious agent who decides which physical states count as which symbols.

Without that agent, a computer is just physics.

"The development of highly capable artificial general intelligence does not inherently lead to the creation of a novel moral patient, but rather to the refinement of a highly sophisticated, non-sentient tool," Lerchner wrote.

The paper draws a sharp distinction between simulation, where a system mimics the behavioural outputs of consciousness, and instantiation, where a system actually possesses subjective experience.

Run a perfect simulation of a hurricane, Lerchner argues, and nothing gets wet.

The same logic applies to consciousness.

Crucially, Lerchner is not making a biological exclusivity argument.

He states explicitly that if an artificial system were ever conscious, it would be because of its specific physical constitution, not its software architecture, leaving open the theoretical possibility of non-biological consciousness while ruling out any path through symbolic computation.

The paper has attracted attention in part because of its source.

For a working scientist inside one of the world's most influential AI laboratories to publicly label the consciousness trajectory a fallacy introduces a rare note of internal dissent into a sector whose valuations are built partly on the promise of artificial general intelligence.

The version initially hosted on PhilArchive carried Google DeepMind letterhead but was later replaced with a version removing Google branding after a journalist's inquiry.

Google did not respond to a request for comment on the change.

The paper carries a disclaimer stating that its conclusions are the author's alone and do not reflect the official stance of his employer.

Several philosophers noted that Lerchner's argument echoes decades of similar work in philosophy of mind but said it was notable precisely because it came from inside a major AI laboratory rather than from an external critic.

Johannes Jäger, an evolutionary systems biologist and philosopher, offered a blunter summary of the practical implications: "An LLM doesn't do that. It's just a bunch of patterns on a hard drive."

The recap

  • DeepMind scientist argues AI can simulate but not instantiate consciousness
  • Demis Hassabis predicted AGI impact as 10 times Industrial Revolution
  • Paper carries author disclaimer; Google did not respond to queries