Google's new AI agent will run 160 searches while you sleep and hand you the report by morning
New Gemini 3.1 Pro-powered agents hit public preview with MCP support, native charts and multimodal inputs, as Google positions Deep Research as an enterprise workflow engine for finance and life sciences.
Google DeepMind has launched two new autonomous research agents, Deep Research and Deep Research Max, in public preview via the Gemini API. Built on Gemini 3.1 Pro, the agents search the open web, user file uploads and connected data sources through Model Context Protocol (MCP) servers, generate charts and infographics natively, and consult over 100 sources in a single task.
The launch puts Google in direct competition with OpenAI and Anthropic on agentic research tooling, and carries immediate implications for education and EdTech teams already using AI for research, curriculum work and analysis.
Google is positioning the release as a move beyond summarization into an enterprise workflow layer for finance, life sciences and market research, with named financial data partners already building MCP integrations.
Two speeds, built for different workflows
The standard Deep Research agent replaces the December preview release and is optimized for speed and lower cost, aimed at interactive user-facing products. Deep Research Max is the heavier option, designed for asynchronous background workflows such as overnight due diligence reports, and uses extended test-time compute to iteratively search and refine its output.
Max runs around 160 search queries per task, according to Philipp Schmid, who works on AI Developer Experience at Google DeepMind and announced the update on LinkedIn. The two new model ids are "deep-research-preview-04-2026" and "deep-research-max-preview-04-2026". Schmid reported benchmark scores of 93.3 percent on DeepSearchQA for web research and 85.9 percent on BrowseComp for hard-fact retrieval.
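For developers, the practical choice is which of the two model ids to send for a given workflow: the standard agent for interactive requests, Max for background jobs. The sketch below only assembles a request payload rather than calling the API; the model ids come from the announcement, but every field name and the `build_research_request` helper are illustrative assumptions, since the article does not document the Interactions API schema.

```python
# Hypothetical request payload for a Deep Research task.
# Model ids are from the announcement; the payload layout and
# field names below are illustrative assumptions, not a documented schema.

STANDARD = "deep-research-preview-04-2026"      # fast, cheap, interactive
MAX = "deep-research-max-preview-04-2026"       # async, ~160 searches per task

def build_research_request(prompt: str, overnight: bool = False) -> dict:
    """Pick the model by workflow and wrap the prompt in a payload."""
    return {
        "model": MAX if overnight else STANDARD,
        "input": prompt,
        # Max is aimed at background workflows such as overnight due diligence.
        "background": overnight,
    }

req = build_research_request("Summarize Q1 semiconductor capex trends", overnight=True)
print(req["model"])  # deep-research-max-preview-04-2026
```

The point of the helper is only the routing decision the article describes: latency-sensitive, user-facing calls go to the standard agent, exhaustive reports to Max.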
FactSet, S&P Global and PitchBook sign up to MCP integrations
Google is working with FactSet, S&P Global and PitchBook on MCP server designs so shared customers can plug financial data feeds directly into Deep Research workflows. The agents can run with Google Search, remote MCP servers, URL Context, Code Execution and File Search simultaneously, or with web access turned off entirely to operate only over user-supplied data. They accept PDFs, CSVs, images, audio and video as input, and support collaborative planning, where users review and edit the research plan before the agent executes. Real-time streaming surfaces the agent's reasoning steps as they happen.
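The tool mix described above can be sketched as a small configuration helper: built-in tools plus remote MCP servers, with an optional web-off mode that restricts the agent to user-supplied data. Only the tool names come from the announcement; the dict layout, the `mcp_servers` field and the example URL are assumptions for illustration.

```python
# Illustrative tool configuration for a Deep Research task.
# Tool names mirror the announcement (Google Search, URL Context,
# Code Execution, File Search, remote MCP servers); the structure
# is an assumption, not a documented API.

WEB_TOOLS = ["google_search", "url_context", "code_execution", "file_search"]

def build_tool_config(mcp_servers: list[str], web_access: bool = True) -> dict:
    """Combine built-in tools with remote MCP servers.

    With web_access=False the agent operates only over user-supplied
    files and the connected MCP data sources, per the announcement.
    """
    tools = list(WEB_TOOLS) if web_access else ["file_search"]
    return {
        "tools": tools,
        # e.g. a financial data partner's MCP endpoint (hypothetical URL)
        "mcp_servers": mcp_servers,
        "web_access": web_access,
    }

cfg = build_tool_config(["https://mcp.example-financial-data.com"], web_access=False)
```

A setup like this is what would let a shared FactSet or S&P Global customer keep the agent inside licensed data feeds while cutting off the open web entirely.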
Google's Chief Strategist pushes back on AI hype
Neil Hoyne, Chief Strategist at Google, set out the framing on LinkedIn. "Two speeds with two purposes. The regular Deep Research is the quick approach: It's fast, cheap and still gets you a solid answer. Max is the patient AI who works through the night so you wake up to something exhaustive," he posted. Hoyne said the agents pull in user files as well as Google data, generate charts and infographics in-line, and show users the plan before execution so they can "tweak, you redirect, you say 'focus here!', etc."
Hoyne added a note that will read as refreshingly honest against the current backdrop of AI hype: "No. Unfortunately, this won't replace good judgment, and it shouldn't. But for anyone tired of drowning in tabs, half-read PDFs, maybe this kind of AI gives you your evenings back."
The Deep Research infrastructure already powers research features in the Gemini app, NotebookLM, Google Search and Google Finance, and is now exposed to developers through the Interactions API and Google AI Studio. The open question is which education data providers build their own MCP servers, and how quickly classroom-grade research agents appear on top of the platform.