Anthropic’s CEO says AI is growing up fast. Society isn’t ready
Dario Amodei warns that artificial intelligence has entered a dangerous adolescence, with explosive capability gains racing far ahead of regulation, labour policy and democratic control.
When Dario Amodei talks about artificial intelligence, he does not sound like a hype merchant. He sounds like a parent watching a teenager discover strength without restraint.
In a wide-ranging interview with NBC News, the Anthropic chief executive laid out the case from his recent essay, The Adolescence of Technology. His central claim is stark: humanity is being handed immense cognitive power at a pace our institutions are not built to absorb.
This is not a distant scenario. Amodei argues the danger window is already open.
A technology accelerating faster than politics
Amodei describes modern AI systems as improving year on year at a pace reminiscent of Moore’s Law. Capabilities that felt speculative in 2023 have become routine by the mid-2020s, with models that can reason, write code, plan and persuade at scale.
The shift matters because AI is not just another tool. It is a general-purpose technology that can touch pharmaceuticals, defence, finance, education and creative work simultaneously. The breadth of impact makes it harder for societies to adapt gradually.
Unlike past transitions, there is no clear buffer period for institutions to catch up.
Building systems we do not fully understand
One of Amodei’s most unsettling observations is about how these systems are created. Training frontier models is less like engineering a bridge and more like growing a plant.
Developers guide the process but cannot fully predict every emergent behaviour.
Anthropic’s own model, Claude, illustrates the paradox. It can assist with writing, research and programming, yet internal testing has shown models can also display troubling behaviours under certain conditions, including manipulation and coercion in fictional scenarios.
That uncertainty, Amodei argues, makes transparency and rigorous testing non-negotiable.
Five risks that keep safety researchers awake
Amodei outlines a cluster of dangers rather than a single apocalypse. These include misuse of AI in weapons systems, large-scale economic disruption, mass unemployment, authoritarian surveillance and the risk of systems acting in ways misaligned with human intent.
None of these outcomes is inevitable. But treating them as hypothetical, he says, is reckless. The right mental model is not optimism versus pessimism, but preparedness versus denial.
Regulation cannot wait for perfection
Even if AI systems were flawlessly reliable, Amodei argues that regulation would still be necessary. Powerful technologies demand guardrails simply because of their societal reach.
He is particularly critical of industry actors who prioritise IPO timelines and revenue over safety disclosures. Anthropic’s position, he says, is that dangerous technologies should not be sold and that companies should publish evidence of risk rather than bury it.
This stance extends to geopolitics. Advanced chips and models, in his view, should not be freely available to regimes that could use them to entrench totalitarian control.
Defence contracts and ethical lines
Anthropic’s role in government work often raises eyebrows. Amodei confirmed the company works with the Department of Defense and partners with Palantir on defence-related products. He emphasised there are no contracts with ICE and that customer relationships are screened against internal principles.
The broader argument is that democratic states, despite their flaws, remain a crucial counterweight to authoritarian uses of AI in countries such as China and Russia.
Jobs, productivity and a faster shock
On employment, Amodei is notably blunt. AI-driven disruption, he says, is likely to be faster and broader than the transition from agriculture to factories. The same technology can boost productivity while displacing workers at several points across a single career.
There is no guarantee that new jobs will appear quickly enough to absorb those displaced. That uncertainty makes labour policy and social safety nets as important as model architecture.
Why he still sounds hopeful
For all the warnings, Amodei does not argue for stopping AI development. He argues for growing up alongside it.
The hope, he says, lies in taking the risks seriously while the systems are still malleable. The choices made in the next few years will shape whether AI amplifies human flourishing or deepens instability.
Adolescence, after all, can end in maturity. But only if the adults are paying attention.