
Google unveils two new AI chips at Cloud Next as hyperscaler hardware race intensifies

The TPU 8t and TPU 8i are designed for training and inference respectively, succeeding the Ironwood generation

by Defused News Writer
Google used its Cloud Next conference in Las Vegas to launch two new artificial intelligence processors, the TPU 8t and TPU 8i, splitting its eighth-generation tensor processing unit (TPU) line into dedicated training and inference chips for the first time.

The move extends Google's decade-long effort to develop proprietary AI silicon and sharpens its challenge to Nvidia and AMD, the dominant suppliers of AI accelerators.

The TPU 8t is optimised for training and designed to compress frontier model development cycles from months to weeks, according to Amin Vahdat, Google's senior vice president and chief technologist for AI and infrastructure.

A single TPU 8t superpod scales to 9,600 chips, delivering 121 exaflops of FP4 compute performance, nearly triple the per-pod output of the previous-generation Ironwood, with two petabytes of high-bandwidth memory and double the interchip bandwidth.

Google said the chip offers 2.8 times better price-to-performance than its predecessor.

The TPU 8i is built for inference and designed to handle the demands of agentic AI workloads, where multiple specialised models operate collaboratively.

It scales to 1,152 chips per pod, delivers 11.6 exaflops of FP8 compute and offers 80% better performance per dollar compared to Ironwood.
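Dividing the quoted pod-level figures by the chip counts gives a rough sense of per-chip throughput. This is a back-of-envelope sketch using only the numbers reported above; the derived per-chip values are not official Google specifications.

```python
# Back-of-envelope: implied per-chip compute from the pod-level figures
# quoted in the article. The per-chip values are simple division, not
# official Google specs.

def per_chip_petaflops(pod_exaflops: float, chips_per_pod: int) -> float:
    """Convert a pod-level exaflops figure into petaflops per chip."""
    return pod_exaflops * 1000 / chips_per_pod  # 1 exaflop = 1,000 petaflops

tpu_8t = per_chip_petaflops(121, 9600)    # FP4 figure, training superpod
tpu_8i = per_chip_petaflops(11.6, 1152)   # FP8 figure, inference pod

print(f"TPU 8t: ~{tpu_8t:.1f} petaflops/chip (FP4)")
print(f"TPU 8i: ~{tpu_8i:.1f} petaflops/chip (FP8)")
```

Note the figures use different number formats (FP4 for the 8t pod, FP8 for the 8i pod), so they are not directly comparable chip to chip.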

Both chips run on Google's Axion Arm-based host processors, support liquid cooling and feature integrated power management that adjusts draw based on real-time demand, delivering up to twice the performance per watt of the previous generation.

Google said that by owning the full stack from host processor to accelerator, it can achieve system-level energy efficiencies that are not possible when the components are designed independently.

Both chips are expected to be generally available later this year.

The launch comes alongside expanded commercial arrangements with major AI laboratories.

Google recently announced a deal to supply Anthropic with multiple gigawatts of next-generation TPU capacity, giving the Claude developer access to up to one million chips.

The company is also working to supply capacity to OpenAI and Meta under separate agreements.

Rivals are pursuing similar strategies.

Amazon agreed an expanded chip deal with Anthropic that includes more than $100 billion in AWS spending over the next decade, while Meta and Microsoft continue to develop their own custom silicon, intensifying competitive pressure on third-party chip suppliers.

The recap

  • Google debuts TPU 8t and TPU 8i AI accelerators
  • TPU 8t offers 2.8x better price-to-performance than predecessor
  • Both chips are expected to be generally available later this year, the company said