Demis Hassabis: why AGI is closer than it looks, and further away than the hype suggests
A year ago, Silicon Valley was whispering about diminishing returns. Today, the chief executive of Google DeepMind says the question is no longer whether AI will work, but what kind of intelligence we are actually building
The slowdown that never really happened
When anxiety crept through the AI industry last year, the fear was simple: had large language models peaked?
According to Demis Hassabis, that worry has quietly evaporated. Progress did not stall. Instead, researchers learned how to squeeze far more value out of existing architectures, data and training techniques. Synthetic data filled gaps where real-world data looked scarce. Scaling laws kept holding longer than many expected.
The result is that large models continue to get better across reasoning, multimodality and tool use. The debate has shifted. It is no longer about whether these systems can improve, but about whether improvement alone gets you to Artificial General Intelligence.
What today’s models still cannot do
For all their fluency, today’s systems are missing capabilities that humans take for granted.
They do not truly learn over time. They lack durable memory. Long-horizon planning remains brittle. Models cannot update themselves in the wild, absorbing new experiences and reshaping their understanding of the world.
Hassabis is blunt about the limits of the current paradigm. Scaling alone is unlikely to be enough. One or two genuine breakthroughs are still required, particularly around continual learning, persistent memory and long-term reasoning.
Why foundation models still matter
That does not mean large foundation models are a dead end. Hassabis argues the opposite. They are likely to remain the core substrate of any future AGI system.
The open question is whether they are sufficient on their own or whether they need to be combined with other techniques. DeepMind’s own history points to the latter. Systems such as AlphaGo and AlphaFold blended deep learning with search, simulation and symbolic reasoning to achieve breakthroughs that brute force alone could not.
This hybrid approach, often described as neurosymbolic, may prove essential again.
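To make the pattern concrete, here is a minimal, hypothetical sketch of the "learning plus search" idea behind systems like AlphaGo: a learned evaluator guides an explicit lookahead over moves. None of this is DeepMind's actual code; value_net, legal_moves and apply_move are toy stand-ins for a trained network and a real game environment.

```python
# Toy illustration of combining a learned evaluation function with explicit search.
# All functions here are hypothetical placeholders, not DeepMind code.

def value_net(state):
    """Stand-in for a trained value network: returns an estimated
    win probability for `state` (deterministic pseudo-random stub)."""
    return (hash(state) % 1000) / 1000.0

def legal_moves(state):
    """Stand-in environment: pretend every state offers three moves."""
    return ["a", "b", "c"]

def apply_move(state, move):
    """Stand-in transition function."""
    return state + move

def search(state, depth):
    """Depth-limited lookahead: the network evaluates leaf positions,
    while the search layer reasons explicitly over move sequences."""
    if depth == 0:
        return value_net(state)
    return max(search(apply_move(state, m), depth - 1) for m in legal_moves(state))

def choose_move(state, depth=3):
    """Pick the move whose subtree the learned evaluator rates highest."""
    return max(legal_moves(state), key=lambda m: search(apply_move(state, m), depth))

if __name__ == "__main__":
    print(choose_move("start"))
```

The point of the sketch is the division of labour: the neural network supplies intuition about positions, and the search supplies the step-by-step reasoning that pure pattern-matching lacks.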
Learning as the defining feature of intelligence
If there is one line Hassabis keeps returning to, it is this: intelligence is the ability to learn.
True general intelligence is not about passing benchmarks. It is about acquiring new skills in unfamiliar domains and adapting over time. Current models, impressive as they are, remain largely static after training.
DeepMind is experimenting with ways to combine the self-learning dynamics of systems like AlphaZero with the general knowledge embedded in foundation models. The goal is an AI that can personalise, adapt and genuinely improve with experience.
That goal has not been reached yet.
AGI, superintelligence and the timeline question
Part of the confusion comes from definitions. AGI remains poorly specified. Hassabis offers a demanding one: a system capable of all the cognitive abilities humans possess, including the highest levels of creativity.
By that standard, today’s models are nowhere close. They do not invent new branches of physics. They do not create radically new artistic movements. They lack physical intelligence and embodied understanding.
Even so, Hassabis estimates that AGI is plausibly five to ten years away. Superintelligence, which implies capabilities beyond any human, is a separate and more speculative question.

World models and the physical turn
One of the most intriguing threads in DeepMind’s work is video generation.
Models that can generate realistic scenes are not just media tools. They are steps towards world models, internal representations of how reality works. Such models allow planning, simulation and imagination over long time horizons.
This matters for robotics as much as for reasoning. An AI that can imagine multiple futures can choose better actions, whether it is manipulating objects or navigating complex environments.
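A rough sketch of what planning with a world model can look like follows: the agent "imagines" candidate action sequences with a learned transition model and picks the one with the best predicted outcome. The dynamics and reward functions below are toy assumptions for illustration, not anything from DeepMind's systems.

```python
# Schematic sketch of planning inside a learned world model.
# world_model and predicted_reward are hypothetical stand-ins for learned components.

import random

def world_model(state, action):
    """Stand-in for a learned model that predicts the next state."""
    return state + action * 0.9  # pretend dynamics

def predicted_reward(state):
    """Stand-in for a learned reward / value estimate: prefer states near 10."""
    return -abs(10.0 - state)

def imagine_rollout(state, actions):
    """Roll the world model forward over an imagined action sequence."""
    total = 0.0
    for a in actions:
        state = world_model(state, a)
        total += predicted_reward(state)
    return total

def plan(state, horizon=5, candidates=64):
    """Sample candidate plans, score them entirely in imagination, and return
    the first action of the best one (a crude form of model-predictive control)."""
    best_score, best_plan = float("-inf"), None
    for _ in range(candidates):
        actions = [random.uniform(-1, 1) for _ in range(horizon)]
        score = imagine_rollout(state, actions)
        if score > best_score:
            best_score, best_plan = score, actions
    return best_plan[0]

if __name__ == "__main__":
    print(plan(state=0.0))
```

The same loop works whether the "state" is a board position, a robot's sensor readings or a generated video frame: a model good enough to imagine plausible futures is a model an agent can plan with.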
The assistant that lives with you
All of this feeds into what Hassabis describes as the killer app: a universal digital assistant.
To work, it must be multimodal, context-aware and always present. Not just on phones or laptops, but potentially through wearable devices such as smart glasses. Google, through Google DeepMind and the wider Alphabet ecosystem, is pushing in this direction again, after pioneering premature versions of the idea a decade ago.
Advances in batteries, displays and on-device AI make the idea far more plausible today.
Trust, ads and the business model problem
One line DeepMind is cautious about crossing is advertising.
Hassabis acknowledges the tension. An assistant that subtly optimises for ad revenue risks losing user trust. For now, there are no plans to introduce ads into Gemini-powered assistants; alternatives such as hardware sales are being explored instead.
Trust, privacy and alignment are treated as product features, not footnotes.
The bubble question, revisited
Hassabis does not dismiss the risk of excess. Massive infrastructure bets could still disappoint. Some applications may fail. Certain companies will not survive.
But unlike past tech bubbles, AI has already demonstrated profound value in science, biology and medicine. AlphaFold alone, now used by millions of researchers, has reshaped drug discovery.
That makes a total collapse unlikely, even if expectations reset.
A technology that reshapes humans, not replaces them
The interview ends on a philosophical note. Humans have always adapted to machines that outperform them in narrow domains. Chess did not die when computers became unbeatable. Go flourished.
Hassabis believes the same will be true of AI. The challenge is not competition, but meaning. As machines take on more work, humans will have to rediscover what they value doing.
If AGI arrives, it will not just test our engineering. It will test our ideas about purpose, creativity and what intelligence really means.