Google launches Deep Research Max: supports MCP and can access enterprise private data

ChainNewsAbmedia

According to an announcement on Google DeepMind’s official blog, Google launched its next-generation autonomous research agents, Deep Research and Deep Research Max, on April 21, 2026. Both are built on Gemini 3.1 Pro, and the formal release follows a preview version offered through the Interactions API in December 2025. The two agents are available in public preview under the paid Gemini API plans, and Google Cloud startup and enterprise customers will gain access on a rolling basis.

The two variants target different use cases: interactive vs. asynchronous deep work

Google segments the two agents by usage scenario: Deep Research emphasizes speed and low latency, making it suitable for interactive user interfaces. Deep Research Max, by contrast, spends more test-time compute in exchange for comprehensiveness, making it suitable for asynchronous workflows in which the agent runs long-duration tasks independently.

On retrieval and reasoning benchmarks, Deep Research Max shows “significant” improvement compared with the December 2025 version, referencing more information sources and identifying details that were previously overlooked.

Supports MCP: Google’s first integration of the open standard championed by the Claude camp

Both agents support the Model Context Protocol (MCP), allowing users to connect their own data sources to the agents via MCP. MCP is an open standard introduced by Anthropic at the end of 2024 that expanded rapidly through the first half of 2026; as of March 2026, cumulative installations had already exceeded 97 million. Google’s official adoption of MCP in its Gemini agents signals that the AI agent industry is beginning to converge on a shared tool-connection protocol.
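To make the integration concrete, the sketch below shows the client side of an MCP connection using the official `mcp` Python SDK and the open-source filesystem reference server. How Deep Research wires MCP servers in internally has not been published, so the server choice and the `/data/reports` path here are illustrative only.

```python
# Minimal sketch of an MCP client, using the official `mcp` Python SDK.
# The filesystem reference server and the path are illustrative assumptions;
# Deep Research's internal MCP wiring is not public.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def main() -> None:
    # Launch a reference MCP server as a subprocess speaking stdio.
    server = StdioServerParameters(
        command="npx",
        args=["-y", "@modelcontextprotocol/server-filesystem", "/data/reports"],
    )
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # An agent would enumerate these tools and call them during research.
            tools = await session.list_tools()
            for tool in tools.tools:
                print(tool.name, "-", tool.description)


asyncio.run(main())
```

The appeal of the standard is exactly this symmetry: any agent that speaks MCP can enumerate and call the same tools, regardless of which vendor ships the agent.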

Feature list: multimodal research, native charts, internal data mode

Key capabilities in the Deep Research series include:

- Multimodal research: PDFs, CSV files, images, audio, and video can all serve as research material.
- Natively generated charts and infographics, output in HTML and Nano Banana formats.
- Collaborative planning with users, with human review checkpoints before execution.
- Intermediate reasoning streamed in real time as the agent works.
- An optional setting to disable network access, so the agent researches using only enterprise internal data.

This “disable network” option has clear significance for enterprise security and compliance: industries such as legal, healthcare, and finance can prevent the agent from retrieving public web content alongside sensitive internal data, reducing data-leakage and compliance risks.
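As a rough illustration of how such a task might be expressed, the sketch below is a plain Python dictionary in the shape of a hypothetical request. Google has not published the actual Deep Research API surface, so every field name, value, and path here is an assumption, not documented syntax.

```python
# Entirely hypothetical request shape, for illustration only: the real Deep
# Research API surface is not documented in this article. All field names,
# values, and the storage path below are assumptions.
import json

research_task = {
    "agent": "deep-research-max",
    "prompt": "Summarize Q1 contract-risk exposure across our vendor agreements.",
    "inputs": ["gs://corp-legal/contracts/*.pdf"],      # multimodal source material
    "tools": {"mcp_servers": ["corp-document-index"]},  # private data via MCP
    "network_access": "disabled",                       # internal-data-only mode
    "review_checkpoint": "before_execution",            # human sign-off on the plan
    "output": {"charts": "html"},                       # native chart generation
}

print(json.dumps(research_task, indent=2))
```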

Competitive landscape: three major players’ research agents go head-to-head in the same week

Google’s Deep Research Max landed in the same week as OpenAI’s Codex “for (almost) everything” update (computer use, memory, 90+ plugins) and the Live Artifacts that Anthropic introduced in Cowork, setting up a direct showdown. All three leading firms shipped clearly positioned products in the same week under the banner of “enterprise-grade autonomous research / production agents,” reflecting that AI agents have moved from experimental technology into a commercialization-focused positioning battle.

In the official announcement, Deep Research Max product manager Lukas Haas and program manager Srinivas Tadepalli said the launch of the two agents marks an industry shift for AI research agents: “from pure web summarization to integrating enterprise internal data, native visualization, and iterative refinement.”

This article, Google launches Deep Research Max: supports MCP and can access enterprise private data, first appeared on ChainNews (ABMedia).


Related Articles

DeepSeek V4 Pro with Ollama Cloud: One-click integration with Claude Code

According to an Ollama tweet, DeepSeek V4 Pro was released on April 24 and has been added to the Ollama catalog in cloud mode; a single command lets it be called from tools such as Claude Code, Hermes, OpenClaw, OpenCode, and Codex. V4 Pro is a Mixture-of-Experts model with 1.6T parameters and a 1M-token context window; cloud inference does not download weights locally. To run it locally, you need to obtain the weights yourself and serve them with INT4/GGUF quantization across multiple GPUs. Early speed tests were affected by cloud load: typical throughput is about 30 tok/s, though some runs dropped to as low as 1.1 tok/s. The recommendation is to prototype against the cloud first and, for production, either run inference yourself or use a commercial API.
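For the cloud path, a minimal sketch using the real `ollama` Python client (`pip install ollama`) is shown below; the model tag `deepseek-v4-pro` is inferred from the article and may not match the actual catalog entry.

```python
# Sketch of calling a cloud-hosted model through the `ollama` Python client.
# The model tag "deepseek-v4-pro" is an assumption based on the article; check
# the Ollama catalog for the real tag. In cloud mode the weights stay
# server-side and the local client just streams tokens back.
import ollama

stream = ollama.chat(
    model="deepseek-v4-pro",
    messages=[{"role": "user", "content": "Explain Mixture-of-Experts in two sentences."}],
    stream=True,
)
for chunk in stream:
    print(chunk["message"]["content"], end="", flush=True)
```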

ChainNewsAbmedia · 19m ago

UB (Unibase) up 14.96% in 24 hours

Gate News update: On April 27, according to Gate market data, as of the time of writing, UB (Unibase) is trading at $0.0491. It is up 14.96% over the past 24 hours, with a high of $0.0534 and a low of $0.0423. The 24-hour trading volume is $3.9667 million. The current market cap is approximately $123 million. Unibase is a high-performance decentralized AI memory layer that provides long-term memory and cross-platform interoperability for AI agents, enabling them to remember, collaborate, and self-evolve. Unibase aims to build an open agent internet, supporting seamless cooperation among intelligent agents across ecosystems, empowering developers to build the next generation of AI applications. This message does not constitute investment advice; please be mindful of market volatility risks when investing.

GateNews · 25m ago

Ming-Chi Kuo: OpenAI wants to build an AI Agent phone; MediaTek, Qualcomm, and Luxshare Precision are key in the supply chain

Ming-Chi Kuo reports that OpenAI is working with MediaTek, Qualcomm, and Luxshare Precision to develop an AI Agent phone, with mass production expected in 2028. The new phone will center on task completion: an AI agent understands and executes requests, combining cloud and on-device compute, with an emphasis on sensing and contextual understanding. Specifications and the supply-chain list are expected to be finalized in 2026–2027; if the device materializes, it could bring a new upgrade cycle to the high-end market, and Luxshare may be a major beneficiary.

ChainNewsAbmedia · 38m ago

Xiaomi’s AI model lead: As AI competition shifts to the Agent era, self-evolution is a key event on the path to AGI

Luo Fuli, head of Xiaomi’s large-model team, gave an in-depth interview on Bilibili on April 24 (video ID: BV1iVoVBgERD). The interview ran 3.5 hours and was the first time she, as technical head, publicly and systematically laid out her technical views. Luo Fuli said the large-model race has shifted from the Chat era to the Agent era, and pointed out that “self-evolution” will be a key event on the path to AGI in the coming year.

MarketWhisper · 1h ago

Tencent Cloud QClaw integrates with the Hermes framework, supporting switching between multiple models such as DeepSeek-V4 Pro

According to an official Tencent Cloud announcement on April 27, Tencent Cloud’s AI agent desktop tool QClaw has officially released version v0.2.14, which Tencent Cloud calls QClaw’s largest upgrade to date. Core updates include integrating the Hermes agent framework, moving the underlying model to a freely switchable mode, and fully upgrading the “Inspiration Plaza” into the “Expert Plaza.”

MarketWhisper · 1h ago

xAI Grok Voice takes over Starlink customer service hotline, with 70% of calls automatically closed

According to an official announcement from xAI released on April 23, xAI has launched the Grok Voice Think Fast 1.0 voice AI agent, and it has already been deployed to the Starlink customer service hotline +1 (888) GO STARLINK. According to the test data disclosed in the announcement, 70% of calls are automatically closed by AI, with no human intervention required.

MarketWhisper · 1h ago