
Amazon says its Trainium AI chip business has already hit multibillion-dollar scale

by Mr Moonlight
Photo by Igor Omilaev / Unsplash

Amazon is betting hard on its in-house AI silicon, and chief executive Andy Jassy says that bet is already paying off.

Speaking during AWS re:Invent, Jassy said the company’s Trainium2 chip has become a multibillion-dollar revenue run-rate business, with more than 1 million chips in production and over 100,000 companies using it through Amazon’s Bedrock AI platform.

Amazon also unveiled Trainium3, the next generation of its in-house answer to Nvidia's GPUs. The company says the new chip is four times faster than Trainium2 while using less power, a pitch aimed squarely at cloud customers frustrated by high GPU costs and constrained supply.

Jassy argued that Trainium offers "price-performance advantages over other GPU options that are compelling."

AWS chief executive Matt Garman told CRN that one customer looms especially large in Trainium’s rapid growth. Anthropic, Amazon’s high-profile AI partner, is now using more than 500,000 Trainium2 chips to build new Claude models under Project Rainier, an enormous distributed cluster that went online in October.

Anthropic has designated AWS as its primary training provider in exchange for Amazon’s investment, even as its models also run on Microsoft’s cloud using Nvidia hardware.

OpenAI has begun using AWS as well, but Amazon says those workloads run on Nvidia systems, not Trainium, limiting their impact on Trainium revenue.

For all the momentum, any attempt to seriously challenge Nvidia’s dominance remains a moonshot. Only a handful of US tech giants have the in-house silicon design teams, interconnect tech and data-center scale required to build competitive AI systems at all. Nvidia also has a lock on the software ecosystem through CUDA, making it costly for developers to port models elsewhere.

Amazon’s next move may be a hedge against that reality. The forthcoming Trainium4 is being designed to interoperate directly with Nvidia GPUs in the same system, potentially letting customers mix and match hardware without abandoning the CUDA universe.

Whether that helps Amazon steal market share or simply reinforces Nvidia’s role at the centre of AI computing, the company seems satisfied with the trajectory. If Trainium2 is already generating billions and Trainium3 is faster and more efficient, Amazon may not need to topple Nvidia to claim a meaningful slice of the AI-chip economy.
