Google has launched two autonomous AI research agents, Deep Research and Deep Research Max, built on its Gemini 3.1 Pro model and designed to automate the kind of exhaustive, multi-source analysis that typically consumes hours or days of human analyst time.
The release, announced on 21 April, targets professional workflows in finance, life sciences and market research, and represents the most significant upgrade to Google's research agent capabilities since the product debuted in December 2024.
Deep Research is optimised for speed and lower cost, suited to interactive surfaces where users need rapid feedback.
Deep Research Max uses extended test-time compute to iteratively reason, search and refine its output, delivering more comprehensive reports for complex, asynchronous research tasks.
On the DeepSearchQA benchmark, which measures retrieval and reasoning quality, Deep Research Max scored 93.3%, up from 66.1% in December, while on Humanity's Last Exam it rose from 46.4% to 54.6%.
Both agents are powered by Gemini 3.1 Pro, which Google released in February and which more than doubled the reasoning performance of its predecessor on the ARC-AGI-2 benchmark.
For the first time, the agents support the Model Context Protocol (MCP), an open standard that allows them to query both the open web and proprietary enterprise data sources through a single API call.
Google said it is working with data vendors including FactSet, S&P and PitchBook on MCP server designs to integrate financial datasets directly into research workflows.
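Conceptually, MCP gives the agent one client-side interface that fans out to many data sources, whether that is the open web or a vendor's proprietary database. The sketch below is a simplified illustration of that dispatch pattern; the server names, handlers and routing logic are invented for the example (the actual protocol is JSON-RPC based, and real MCP servers run as separate processes):

```python
# Illustrative sketch of MCP-style fan-out: one research call queries
# every configured source through a single interface. All names here
# are stand-ins, not real MCP server implementations.

def web_search(query):
    # Stand-in for an open-web search tool exposed over MCP.
    return [f"web result for: {query}"]

def factset_lookup(query):
    # Stand-in for a proprietary financial-data MCP server.
    return [f"FactSet record matching: {query}"]

MCP_SERVERS = {"web": web_search, "factset": factset_lookup}

def research(query, sources=("web", "factset")):
    """Single entry point that dispatches the query to each source."""
    return {name: MCP_SERVERS[name](query) for name in sources}

findings = research("Q1 revenue, Acme Corp")
# findings holds one result list per configured source
```

The point of the pattern is that adding a new data vendor means registering one more server, not rewriting the agent's research loop.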
The agents accept multimodal inputs including PDFs, spreadsheets, images, audio and video, and can generate native charts and infographics inline using HTML or Nano Banana, Google's image generation model.
New features include collaborative planning, which lets users review and refine the agent's research plan before execution, and real-time streaming of intermediate reasoning steps.
The full suite of Gemini API tooling is available alongside Deep Research, including Google Search, remote MCP servers, URL Context, code execution and file search, and developers can disable web access entirely to restrict the agent to their own data.
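A request that restricts the agent to private data would simply omit the web-search tool from its configuration. The snippet below is a hypothetical sketch of that idea; the field names and tool identifiers are illustrative assumptions, not the actual Gemini API schema:

```python
# Hypothetical agent configuration; keys and tool names are invented
# for illustration and do not reflect the real Gemini API schema.
agent_config = {
    "model": "deep-research",
    "tools": [
        "mcp_servers",     # remote enterprise data sources
        "url_context",
        "code_execution",
        "file_search",
        # "google_search" omitted: no open-web access for this agent
    ],
}

def allows_web_access(config):
    """True only if the web-search tool is enabled."""
    return "google_search" in config["tools"]
```

With this shape, locking the agent to internal data is a configuration choice rather than a separate product.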
Google said the agents run on the same autonomous research infrastructure that powers capabilities within the Gemini app, NotebookLM, Google Search and Google Finance.
"Use Deep Research when you want speed and efficiency, and use Max when you want the highest quality context gathering and synthesis using extended test-time compute," chief executive Sundar Pichai wrote on X.
Both agents are available now in public preview via paid tiers of the Gemini API, with Google Cloud access for startups and enterprises to follow.
Rough pricing based on Gemini 3.1 Pro consumption puts a Deep Research Max session at approximately $4.80 per report and a standard Deep Research run at around $1.22.
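At those per-report estimates, Max costs roughly four times a standard run, so teams mixing the two tiers can budget with simple arithmetic. A back-of-envelope sketch, using the article's figures and illustrative volumes:

```python
# Back-of-envelope cost comparison using the article's per-report
# estimates. The report volumes below are illustrative assumptions.
MAX_COST = 4.80       # Deep Research Max, USD per report
STANDARD_COST = 1.22  # Deep Research, USD per run

def monthly_spend(max_reports, standard_runs):
    """Estimated monthly cost for a given mix of the two tiers."""
    return max_reports * MAX_COST + standard_runs * STANDARD_COST

# e.g. 50 Max reports plus 400 standard runs:
total = monthly_spend(50, 400)  # 50*4.80 + 400*1.22 = 728.0
```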
The recap
- Google unveils Deep Research and Deep Research Max agents.
- Deep Research Max uses Gemini 3.1 Pro and extended compute.
- Agents available in public preview via paid Gemini API tiers.