Google has turned Deep Research from a clever web-scraping helper into something much closer to an enterprise research engine. The company’s new Deep Research and Deep Research Max agents can pull from the public web and private company data in one pass, generate native charts inside reports, and plug into third-party systems through the Model Context Protocol, all through the Gemini API.
That matters because Google’s Deep Research is now aiming at the same analysts, consultants, bankers, and scientists who spend hours stitching together messy sources into something decision-makers can use. Google’s bet is that it already has the strongest search index, and now it wants the workflow plumbing too.
Deep Research and Deep Research Max split speed from depth
Google is not shipping one research agent and calling it a day. Deep Research is the faster option, built for interactive use cases where latency matters. Deep Research Max uses extended test-time compute, meaning it spends more time reasoning, searching, and refining before it answers.
That split matters because users need different tools for different jobs. A bank dashboard and an overnight diligence task do not need the same agent. Google says Max reaches 93.3% on DeepSearchQA and 54.6% on HLE (Humanity's Last Exam), while the standard tier is tuned for lower-cost, quicker responses.
- Deep Research: optimized for speed and efficiency
- Deep Research Max: optimized for exhaustive context gathering and synthesis
- Both are available in public preview through paid tiers of the Gemini API
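In client code, that tier split boils down to a routing decision. Here is a minimal sketch; the model identifiers (`deep-research`, `deep-research-max`) and request shape are illustrative assumptions, not confirmed Gemini API strings:

```python
# Route a research request to the fast or deep tier based on the job.
# NOTE: the model names and request fields below are illustrative
# assumptions, not documented Gemini API identifiers.

FAST_TIER = "deep-research"       # lower latency, interactive use
MAX_TIER = "deep-research-max"    # extended test-time compute

def pick_tier(interactive: bool, latency_budget_s: float) -> str:
    """Interactive dashboards get the fast tier; overnight
    diligence-style jobs get Max."""
    if interactive or latency_budget_s < 60:
        return FAST_TIER
    return MAX_TIER

def build_request(query: str, interactive: bool, latency_budget_s: float) -> dict:
    return {
        "model": pick_tier(interactive, latency_budget_s),
        "input": query,
    }

# A bank dashboard query stays on the fast tier...
fast = build_request("Summarize today's rate moves", interactive=True, latency_budget_s=5)
# ...while an overnight diligence task goes to Max.
deep = build_request("Full diligence report on Acme Corp", interactive=False, latency_budget_s=3600)
```

The point is not the helper itself but the shape of the decision: latency budget picks the tier, and everything else about the request stays the same.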
MCP support brings private enterprise data into the loop
The biggest practical change is Model Context Protocol support. MCP lets Deep Research query private databases, internal document stores, and third-party services without moving sensitive data out of the source environment. For enterprises, that is the difference between a flashy demo and something procurement teams might actually tolerate.
Google is already lining up data partners that matter in the real world, including FactSet, S&P, and PitchBook. That is a very deliberate move: competitors can imitate “agentic research,” but they cannot easily replicate the combination of Google Search scale and trusted enterprise data plumbing. OpenAI and Perplexity are pushing hard in this space too, which is exactly why Google is leaning on infrastructure rather than just model theatrics.
Developers can combine Google Search, remote MCP servers, URL Context, Code Execution, and File Search, or switch off web access entirely and keep the agent inside private data. The agent also accepts PDFs, CSVs, images, audio, and video as grounding inputs, which is the sort of boring-but-important feature that decides whether a product survives outside a keynote.
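In request terms, that mix of grounding sources might look something like the sketch below. The field names, tool types, and the MCP endpoint are all assumptions for illustration; the actual API schema may differ:

```python
# Sketch of a grounding configuration mixing public and private sources.
# NOTE: field names, tool type strings, and the MCP URL are illustrative
# assumptions, not the real Gemini API schema.

def research_config(private_only: bool = False) -> dict:
    tools = [
        {"type": "mcp_server", "url": "https://mcp.example.internal"},  # hypothetical endpoint
        {"type": "file_search"},
        {"type": "code_execution"},
        {"type": "url_context"},
    ]
    if not private_only:
        # Web access can be switched off entirely to keep the agent
        # inside private data; otherwise Google Search joins the mix.
        tools.insert(0, {"type": "google_search"})
    return {
        "tools": tools,
        # PDFs, CSVs, images, audio, and video are accepted as grounding
        # inputs; file names here are placeholders.
        "grounding_files": ["q3_diligence.pdf", "comps.csv"],
    }

public_run = research_config()
private_run = research_config(private_only=True)
```

The `private_only` flag is the enterprise-relevant part: the same agent runs with or without the public web, so sensitive workflows never have to leave the source environment.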
Native charts are the quiet upgrade
Deep Research now generates charts and infographics inline, instead of forcing users to export findings and rebuild visuals elsewhere. That sounds minor until you remember that most corporate research is not judged by how elegant the model’s prose is; it is judged by whether a VP can drop the output into a deck without three more rounds of cleanup.
Google says the visuals render directly in HTML or its Nano Banana format, and the reports can be reviewed before execution with a collaborative planning step. Add streaming intermediate reasoning, and the system starts to look less like a chatbot and more like a controlled research pipeline with a UI.
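A controlled pipeline like that suggests a consumer loop over typed stream events: a plan to approve before execution, intermediate reasoning to surface in a UI, charts and report text to collect. The event shapes below are illustrative assumptions, not a documented schema:

```python
# Sketch of consuming a streamed research run. NOTE: the event types and
# fields ("plan", "reasoning", "chart", "report") are illustrative
# assumptions about the stream shape, not a documented schema.

def review_stream(events, approve_plan):
    """Collect streamed events; abort before execution if the
    collaborative planning step is rejected."""
    report_parts, charts = [], []
    for event in events:
        if event["type"] == "plan":
            if not approve_plan(event["steps"]):
                return {"status": "rejected", "report": "", "charts": []}
        elif event["type"] == "chart":
            charts.append(event["html"])  # visuals arrive as renderable HTML
        elif event["type"] == "report":
            report_parts.append(event["text"])
        # "reasoning" events could feed a progress UI; dropped here
    return {"status": "done", "report": "".join(report_parts), "charts": charts}

# Simulated stream, purely for illustration:
stream = [
    {"type": "plan", "steps": ["search filings", "compare peers"]},
    {"type": "reasoning", "text": "Checking 10-K footnotes..."},
    {"type": "chart", "html": "<svg>...</svg>"},
    {"type": "report", "text": "Revenue grew 12% YoY."},
]
result = review_stream(stream, approve_plan=lambda steps: True)
```

The rejection path is what makes this a pipeline rather than a chatbot: the caller gets a veto at the planning step, before the agent spends any compute on execution.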
Google is turning Deep Research into platform infrastructure
This launch also tells a bigger story about where Google thinks its AI value sits. Deep Research is no longer just a feature inside Gemini; Google says it powers parts of Gemini App, NotebookLM, Google Search, and Google Finance, and the API version uses that same autonomous research infrastructure. That is a strong signal to developers: this is not a toy layer, it is a platform bet.
The rapid evolution explains the urgency. Deep Research started in the Gemini app, moved through multiple model upgrades, then landed in the developer stack with the Interactions API. Now it is getting the sort of enterprise-facing controls that make sense for finance, life sciences, consulting, and anyone else who pays people to read too many documents.
The catch is the usual one. Benchmarks are neat; real research is a swamp of ambiguity, contradictory sources, and judgment calls. Google’s new agents may save a lot of time, but the first serious test is whether firms trust them enough to hand over the boring first draft of analysis, or whether these become yet another impressive API that never quite escapes the sandbox.

