Anthropic and Amazon bet $100 billion that compute is the moat that matters

The deal is Amazon's answer to its own OpenAI partnership. For Anthropic, it's an infrastructure arms race

by Defused News Writer
Photo by Samuele Macauda / Unsplash

Anthropic has struck one of the largest compute deals in AI history, committing more than $100 billion to Amazon Web Services over the next decade in exchange for up to 5 gigawatts of capacity to train and run its Claude models.

To put that number in context: a large nuclear power plant produces roughly 1 gigawatt. Anthropic is securing the computing equivalent of five of them.

Amazon is investing $5 billion in Anthropic immediately, with up to $20 billion more tied to commercial milestones. That comes on top of the $8 billion Amazon had already put into the company since 2023, bringing its total potential commitment to $33 billion.

The timing is pointed. OpenAI sent a letter to investors last week pitching its compute capacity as its key competitive advantage over Anthropic, claiming the Claude maker had made a strategic misstep by not securing enough infrastructure. This deal is Anthropic's answer.

The infrastructure at the centre of the arrangement is Amazon's Trainium chip family, custom silicon designed for AI training and inference workloads. The commitment spans Trainium2 through Trainium4, with the option to purchase future generations as they become available. Nearly 1 gigawatt of Trainium2 and Trainium3 capacity is expected to be online by the end of 2026, with meaningful compute arriving within three months.

The commercial rationale is straightforward. Anthropic's annualised revenue has surpassed $30 billion, up from approximately $9 billion at the end of 2025. That growth has come with a cost: the company acknowledged that a sharp rise in consumer usage across its free, Pro, and Max tiers has hit reliability and performance, particularly during peak hours. The Amazon deal is explicitly framed as a response to that strain.

More than 100,000 organisations already run Claude on Amazon Bedrock, Amazon's managed AI inference service. Under the expanded arrangement, the full Claude Platform will be available directly within AWS, letting customers access Anthropic's tools through their existing accounts, billing, and security controls without additional credentials or contracts.

The deal deepens a relationship that has been building since Anthropic named AWS its primary cloud provider in 2023 and its primary training partner in 2024. Amazon has now run the same playbook with both of the world's leading AI labs: two months ago, it invested $50 billion in OpenAI and struck a comparable $100 billion cloud commitment.

For Amazon, the arrangement locks in demand for its custom silicon at a moment when Trainium is competing hard against Nvidia's dominance in AI chip supply.

There is a longer-term question embedded in the structure. Anthropic is betting its infrastructure future on a partner that is also a competitor in the cloud market. How long that tension stays manageable may determine whether this deal looks like a foundation or a constraint.

The recap

  • Amazon and Anthropic announce a compute supply deal for Claude.
  • The agreement will provide up to 5 gigawatts of compute capacity on Amazon's Trainium chips.
