The AI reckoning: Anthropic's Dario Amodei on the $50bn gamble, job losses, and why democracy must win the race
The man building one of the world's most powerful AI companies warns that half of entry-level jobs could vanish, and that's not even his biggest worry
Dario Amodei has a confession: he thinks some of his competitors are "yoloing" their way to potential bankruptcy. The CEO of Anthropic, one of the three companies leading the global race to build artificial general intelligence, sat down at a New York Times conference two weeks ago and delivered the most candid assessment yet of an industry hurtling toward either transformation or catastrophe, possibly both.
His most arresting claim? That we're witnessing financial plans so aggressive they border on delusional. One unnamed competitor, he notes, is projecting a path from a $74 billion loss to profitability within two years. "The math," he says drily, "may not be realistic."
But Amodei's concerns run deeper than balance sheets. The former OpenAI research lead who helped develop GPT-2 and GPT-3 before founding Anthropic now helms a company growing at 10x annually, projected to hit $8-10 billion in revenue this year. He's watched AI evolve from academic curiosity to the centre of national security strategy. And he's convinced that the technology's trajectory, whilst more predictable than the public imagines, carries risks that most of Silicon Valley is systematically underestimating.
The $50 Billion Question Nobody Can Answer
The AI industry faces a timing problem that would make a Victorian railway speculator blush. Building a data centre takes 18-24 months. Predicting how much computing power you'll need to serve AI models two years hence? That's somewhere between educated guesswork and reading tea leaves.
"You have to decide now how much compute you need to buy," Amodei explains. "You face the risk of either not having enough compute to serve your customers, or buying too much and not being able to pay for it."
The sums involved are staggering. A single gigawatt of computing infrastructure (the scale at which frontier AI labs now operate) costs roughly $50 billion. Companies are entering "circular deals" where chip manufacturers like Nvidia invest billions in AI firms, which then use those funds to buy Nvidia chips. Anthropic has done some of these deals "on a smaller scale." Others are going much bigger.
"In principle, these deals can be reasonable," Amodei concedes. "Nvidia might invest $10 billion to cover costs for the first year, and you pay as you go." The problem emerges when the underlying assumptions require generating $200 billion annually by 2027 or 2028. "That," he says, "can lead to overextension."
The depreciation question adds another layer of uncertainty. How long will today's chips remain competitive? Conservative estimates assume aggressive obsolescence: new chips arrive within a year that are faster and cheaper, rendering previous generations less valuable. Some companies, Amodei suggests, may be making "overly optimistic assumptions," though he's careful not to name names. "It's possible they may be deluding themselves."
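To make the stakes of that assumption concrete, here is a minimal back-of-envelope sketch. Everything in it is illustrative: the $50 billion figure comes from the article itself, but the straight-line schedule and the candidate chip lifetimes are placeholder assumptions, not any company's actual accounting.

```python
# Illustrative sketch: how the assumed chip lifetime changes the annual
# cost of a $50bn data-centre build. All figures are hypothetical.

CAPEX = 50e9  # rough cost of one gigawatt of compute, per the article

def straight_line_annual_cost(capex: float, useful_life_years: float) -> float:
    """Straight-line depreciation: capex spread evenly over the useful life."""
    return capex / useful_life_years

for life in (2, 3, 5):  # aggressive vs. optimistic obsolescence assumptions
    cost = straight_line_annual_cost(CAPEX, life)
    print(f"{life}-year useful life -> ${cost / 1e9:.1f}bn/year to recoup")

# A 2-year schedule implies ~$25bn/year the hardware must earn back;
# a 5-year schedule implies only ~$10bn/year. That gap is where the
# "overly optimistic assumptions" Amodei alludes to would hide.
```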
His own approach? "We aim to buy enough compute to be confident in our financial position, even in the 10th percentile scenario." Anthropic focuses on enterprise customers (coding, finance, biomedicine, retail, energy, manufacturing) where margins are healthier than consumer plays. "We're efficient in training and inference and have good margins."
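What planning around "the 10th percentile scenario" might look like in practice: a toy Monte Carlo sketch. The demand distribution, its parameters, and the two-year horizon are all invented here for illustration; nothing below reflects Anthropic's actual models.

```python
# Hypothetical sketch of percentile-based compute planning.
# Demand two years out is uncertain; size the purchase so the bill is
# still covered even if growth lands at the 10th percentile of outcomes.
import random

random.seed(0)
current_demand = 1.0  # today's compute demand, in arbitrary units

# Simulate two years of uncertain growth (illustrative lognormal draws).
scenarios = sorted(
    current_demand * random.lognormvariate(mu=1.0, sigma=0.6)
    for _ in range(100_000)
)

p10 = scenarios[len(scenarios) // 10]  # pessimistic but plausible demand
p50 = scenarios[len(scenarios) // 2]   # median demand

print(f"10th percentile demand: {p10:.2f}x today")
print(f"Median demand:          {p50:.2f}x today")
# Buying against the p10 scenario trades upside (you may run short if
# demand booms) for survival if it doesn't -- the trade Amodei describes.
```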
The Scaling Laws That Explain Everything (and Nothing)
Ask Amodei what surprised him most since he started as a research scientist at Baidu in 2014, and the answer isn't the technology itself. "I would not have been surprised by the economic impacts, the value it's creating, its centrality to the economy, national security, and scientific research." What did surprise him? "My own role as a leader in the space."
The technological trajectory, he insists, has been remarkably predictable. "The scaling laws (increasing compute and data to improve model performance) have been observed and documented over the last 12 years." The results speak for themselves: significant improvements in coding, science, biomedicine, law, finance, materials, and manufacturing.
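For readers who want the stylised form, these scaling laws are typically written as a power law: loss falls smoothly as training compute grows. The constants below are placeholders to be fitted empirically, not published values.

```latex
% Stylised scaling law: model loss L as a function of training compute C.
% L_0 is an irreducible loss floor; a and b > 0 are empirically fitted.
L(C) = L_0 + a \, C^{-b}
```

Because the relationship is a power law, each doubling of compute buys roughly the same multiplicative improvement in the reducible loss term, which is what makes the trend feel so predictable.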
This is where Amodei parts company with both the doomers and the dismissive. He's "one of the most bullish people around" on the technology itself. "The math of the technological side makes sense." Each new model release improves at tasks like coding and science. AI systems are winning high school and college mathematics olympiads. They're creating new mathematics. "Some individuals are relying on AI to write code and only editing it afterwards."
The path to AGI? Amodei thinks scaling current transformer models with increased compute will likely be sufficient, "with occasional small modifications." No dramatic breakthroughs required. "There's no point at which the models will start doing something different, just a continuation of what we've seen, only more so."
This steady exponential improvement (AI's equivalent of Moore's Law) is both reassuring and terrifying. Reassuring because it's predictable. Terrifying because of where it leads.
The "Country of Geniuses in a Data Centre" Problem
Amodei's most striking metaphor concerns national security. Imagine, he suggests, a "country of geniuses in a data centre": an AI system so intellectually capable it could outsmart other nations in intelligence, defence, economics, and strategy. Now imagine that capability in the hands of an authoritarian regime.
"If such a capability were placed in an authoritarian country, it could lead to oppression of its people and a perfect surveillance state." This is why he views selling advanced chips to China as fundamentally a national security issue, not merely an economic one. "Democracies need to be the first to develop such capabilities."
But he's equally concerned about surveillance creeping into democracies themselves. His proposed principle: "Use AI models aggressively in every possible way, except in ways that would make a country more like its authoritarian adversaries." The constraint matters. "We should observe this to avoid becoming like those adversaries."
This thinking puts him at odds with parts of Silicon Valley that view AI regulation as unnecessary meddling. The divergence, Amodei argues, comes down to proximity. "The actual researchers working on AI, as opposed to investors or general tech commentators, have a different perspective."
Those closest to the technology (the people actually building it) tend to be more worried, not less. They're excited about AI's potential to extend human lifespan and drive economic growth. But they're also concerned about national security risks, model alignment, and economic impacts. "These concerns need to be addressed through a federal framework."
The idea of a 10-year moratorium on all state regulation without federal guidelines? "Unpopular," Amodei says flatly. "AI is a new and powerful technology that requires careful consideration."
Half of Entry-Level Jobs Could Vanish
Amodei doesn't hedge on employment. "Some estimates suggest that half of all entry-level jobs could be lost." This isn't fear-mongering; it's arithmetic. As AI systems become capable of performing tasks that currently require human workers, those jobs will disappear.
His solution has three parts, each more ambitious than the last.
First, companies should focus on creating new value, not just cutting costs. "Use AI to leverage human workers to achieve more than they could before." Increase efficiency, yes, but channel those gains into expansion and new capabilities rather than pure headcount reduction.
Second, retraining programmes will be necessary, though "they may not be a complete solution." The government will need to step in fiscally, "possibly through tax policy, to support workers who are not benefiting from the increased productivity."
Here's where the numbers get interesting. If AI drives productivity growth of 5-10% annually (Amodei's estimate), that creates enormous wealth. "A large amount that could be redistributed to those who are not benefiting from it, with the government playing a role."
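The compounding arithmetic is worth spelling out. A quick sketch follows: the 5-10% growth rates are Amodei's estimate from the article, while the roughly $27 trillion US GDP baseline is an assumption added here purely for illustration.

```python
# Back-of-envelope: compounding 5-10% annual productivity growth.
# The ~$27tn US GDP baseline is an illustrative assumption.
GDP = 27e12

for rate in (0.05, 0.10):
    added = GDP * ((1 + rate) ** 10 - 1)  # extra annual output after 10 years
    print(f"{rate:.0%} growth -> ~${added / 1e12:.0f}tn of added annual output")

# At 10% a year, output grows roughly 2.6x in a decade -- the surplus
# Amodei suggests could be partly redistributed through tax policy.
```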
Third, and most radically, society itself will need to restructure. "In the long run, the structure of society will need to change to accommodate the effects of powerful AI." This could mean a world where work isn't the central focus of people's lives, where people work 15-20 hours per week: the vision John Maynard Keynes sketched nearly a century ago.
"Work would be more about personal fulfilment than economic survival," Amodei suggests. "Society will need to restructure itself to operate in a post-AI world."
Why Anthropic Thinks It's Different
In a field dominated by the consumer arms race between OpenAI and Google (both companies locked in what Google has internally termed "code red"), Anthropic has chosen a different path. "We're focusing on the enterprise, optimising our models for the needs of businesses."
The distinction matters more than it might seem. Consumer models optimise for engagement. Enterprise models optimise for coding, scientific ability, and other high-level intellectual work. "The personality and capabilities of AI models vary significantly," Amodei notes.
This specialisation, he argues, will persist even as models become more capable. "Even if AGI is achieved, different models will not necessarily converge to the same place." Switching costs are real: "Businesses build relationships with specific models and have downstream customers who are accustomed to using those models."
Anthropic's flagship model, Opus 4.5, is "considered the best model for coding." That technical edge translates into customer loyalty in ways that pure capability comparisons miss.
The Bubble Question
So is it a bubble? Amodei's answer is carefully calibrated. "I want to separate the technological side from the economic side. The math of the technological side makes sense. The economic side is more uncertain."
The technology will deliver. The scaling laws hold. Models will continue improving. Revenue is growing: Anthropic's 10x annual growth proves the demand exists.
But the timing? The lag between investment and return? The assumptions baked into those $50 billion data centre builds? "Some companies may experience negative consequences if they make a timing error or miscalculate the growth of AI technology, even if the technology is powerful and fulfils its promises."
The amount of capital flowing into AI is extraordinary, "potentially representing almost all of the growth in the United States GDP." That creates what Amodei calls a "cone of uncertainty." Estimates for future AI spending range from $20 billion to $50 billion annually. "That makes planning difficult."
Some players are managing the risk responsibly. Others are turning the risk dial too far. The buffer between success and bankruptcy comes down to margins, and companies with consumer-focused models and lower margins are more vulnerable.
"The pressure to compete with other companies and authoritarian adversaries can lead to companies taking unwise risks," Amodei warns. "Some players may not be managing the risk well."
The Warning from Inside
What makes Amodei's assessment so striking is the source. This isn't a sceptical journalist or a short-seller talking their book. This is the CEO of one of the three companies most likely to build AGI, speaking two weeks ago about an industry he helped create.
He's bullish on the technology. He's building aggressively. Anthropic is hiring, expanding, and competing at the frontier. But he's also warning that the financial engineering underpinning the entire edifice may be unsound, that job losses will be severe, that national security risks are profound, and that society will need to fundamentally restructure itself.
"Warning about the potential downsides is the first step towards solving them," he says. It's a principle that applies to chip depreciation schedules, authoritarian AI, and the future of work alike.
The question is whether anyone's listening, or whether the race is already too far along to slow down.