
Google's Gemini AI can now edit complex spreadsheets almost as well as a human expert

The AI assistant scored 70% on a standard industry test for real-world spreadsheet tasks, outperforming competing models

by Defused News Writer
Photo by Christine Sandu / Unsplash

Google has announced that Gemini, its AI assistant built into Google Sheets, has reached near-human performance on a standard industry benchmark for editing complex, real-world spreadsheets.

The benchmark in question, SpreadsheetBench, is a publicly available test used by researchers and companies to measure how well an AI can handle the kind of messy, complicated spreadsheets that people actually use at work, rather than simple or artificially clean examples designed to flatter a model's results.

Gemini in Sheets scored 70.48% on the full dataset, meaning it successfully completed roughly seven in ten tasks, outperforming competing AI models and approaching the level of a human expert working on the same problems.

That closing gap matters because spreadsheets remain one of the most widely used tools in business, and the tasks they involve, from restructuring data and writing formulas to merging tables and automating repetitive edits, consume significant time across finance, operations, research and many other fields.

An AI assistant capable of handling these tasks reliably could allow someone without specialist spreadsheet skills to describe what they need in plain language and have the work done automatically, rather than searching for the right formula or manually reorganising rows and columns.

The announcement is part of a broader set of updates Google is rolling out for Gemini across its Workspace suite, which includes Google Drive, Docs and Slides alongside Sheets.

Google said it would share full details of those additional updates separately, but framed the SpreadsheetBench result as a marker of how quickly AI assistance is moving from handling simple requests to tackling genuinely difficult, real-world tasks autonomously.

The recap

  • Gemini in Sheets reached state-of-the-art performance on the SpreadsheetBench dataset.
  • It achieved a 70.48% success rate on the benchmark.
  • The announcement links to related posts on Google's Keyword and Workspace blogs.