Vertex AI offers new ways to build and manage multi-agent systems

Every enterprise will soon rely on multi-agent systems – multiple AI agents working together – even when built on different frameworks or providers. Agents are intelligent systems that can act on your behalf using reasoning, planning, and memory capabilities. Under your supervision, they can think multiple steps ahead and accomplish tasks across various systems.

Multi-agent systems rely on models with enhanced reasoning capabilities, like those available in Gemini 2.5. They also depend on integration with your workflows and connection to your enterprise data. Vertex AI – our comprehensive platform to orchestrate the three pillars of production AI: models, data, and agents – seamlessly brings these elements together. It uniquely combines an open approach with comprehensive platform capabilities to ensure agents perform reliably: a combination that would otherwise require fragmented and fragile solutions.

Today, we’re announcing multiple enhancements to Vertex AI so you can: 

  • Build agents with an open approach and deploy them with enterprise-grade controls

  • Connect agents across your enterprise ecosystem

    • Agent2Agent protocol gives your agents a common, open language to collaborate – no matter which framework or vendor they are built on. We are driving this open initiative, partnering with 50+ industry leaders (and growing) to advance our shared vision of multi-agent systems.

    • Equip agents with your data using open standards like Model Context Protocol (MCP) or connect directly with APIs and connectors managed in Google Cloud. You can ground your AI responses in Google Search, your preferred data sources, or with Google Maps data.

Introducing Agent Development Kit and Agent Garden: Building agents with an open approach

Agent Development Kit (ADK) is our new open-source framework that simplifies the process of building agents and sophisticated multi-agent systems while maintaining precise control over agent behavior. With ADK, you can build an AI agent in under 100 lines of intuitive code. Check out the examples here.
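To give a sense of what that looks like in practice, here is a minimal sketch following the ADK quickstart pattern: a single agent with one Python-function tool. The exact module paths and constructor arguments may vary by ADK release, and get_weather is a placeholder tool, not a real integration.

```python
# Minimal ADK agent sketch: one LLM agent with one Python-function tool.
# Module path and constructor arguments follow the ADK quickstart pattern;
# verify against the version of google-adk you install.
from google.adk.agents import Agent


def get_weather(city: str) -> dict:
    """Placeholder tool: look up the current weather for a city."""
    # A real agent would call a weather API here.
    return {"city": city, "forecast": "sunny", "temperature_c": 21}


root_agent = Agent(
    name="weather_agent",
    model="gemini-2.0-flash",  # any model available to your project
    description="Answers questions about the weather in a given city.",
    instruction="Use the get_weather tool whenever the user asks about weather.",
    tools=[get_weather],
)
```

From there, the ADK command-line tooling (for example, adk web) can serve a local development UI for testing the agent interactively.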

ADK is currently available in Python (with more languages coming later this year). With it, you can:

  • Shape how your agents think, reason, and collaborate through deterministic guardrails and orchestration controls, giving you precise control over agent behavior and decision-making processes (see the orchestration sketch after this list).
  • Interact with your agents in human-like conversations with ADK’s unique bidirectional audio and video streaming capabilities. With just a few lines of code, you can create natural interactions that change how you work with agents – moving beyond text into rich, interactive dialogue. Check out the demo of an interactive agent built on ADK from the opening keynote at NEXT 2025 here.
  • Jumpstart your development with Agent Garden, a collection of ready-to-use samples and tools directly accessible within ADK. Leverage pre-built agent patterns and components to accelerate your development process and learn from working examples.
  • Choose the model that works best for your needs. ADK works with your model of choice – whether it is Gemini or any model accessible via Model Garden. Beyond Google’s models, you can choose from 200+ models from providers like Anthropic, Meta, Mistral AI, AI21 Labs, CAMB.AI, Qodo, and more.
  • Select your deployment target, be it local debugging or any containerized production deployment, such as Cloud Run, Kubernetes, or Vertex AI. ADK also supports Model Context Protocol (MCP), enabling secure connections between your data and agents.
  • Deploy to production using the direct integration to Vertex AI. This clear, reliable path from development to enterprise-grade deployment removes the typical overhead associated with moving agents to production.
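To illustrate the orchestration controls mentioned above, here is a rough sketch of ADK’s workflow-agent pattern: two specialized LLM agents run in a fixed order under a SequentialAgent, so the hand-off between them is deterministic rather than left to model judgment. Class and parameter names (SequentialAgent, output_key) follow the ADK documentation but may differ between releases.

```python
# Deterministic orchestration sketch: a workflow agent runs two specialized
# LLM agents in a fixed order, instead of letting the model decide hand-offs.
from google.adk.agents import Agent, SequentialAgent

researcher = Agent(
    name="researcher",
    model="gemini-2.0-flash",
    instruction="Collect the key facts needed to answer the user's question.",
    output_key="research_notes",  # saved to session state for the next step
)

writer = Agent(
    name="writer",
    model="gemini-2.0-flash",
    instruction="Write a concise answer using these notes: {research_notes}",
)

# The SequentialAgent always runs researcher first, then writer: a simple,
# deterministic guardrail on how the two agents collaborate.
pipeline = SequentialAgent(
    name="research_then_write",
    sub_agents=[researcher, writer],
)
```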

While ADK works with your preferred tools, it’s optimized for Gemini and Vertex AI. For example, AI agents built with ADK using Gemini 2.5 Pro Experimental can break down complex problems through Gemini’s enhanced reasoning capabilities and work with your preferred systems through its tool-use capabilities. You can also deploy this agent to a fully managed runtime and operate it at enterprise scale, using ADK’s native integration with Vertex AI.

ADK framework showing how you can build multi-agent systems

Hear how our customers are already using ADK:

“Using Agent Development Kit, Revionics is building a multi-agent system to help retailers set prices based on their business logic — such as staying competitive while maintaining margins — and accurately forecasting the impact of price changes. ADK streamlines multi-agent transfer and planning, such as knowing when to transfer between specialized agents (data retrieval) and tools (constraint application), thereby combining Revionics’ pricing AI with agentic AI to automate entire pricing workflows. Data is central to Revionics’ process, and the development kit enables agents to efficiently reason over big data through storage artifacts rather than relying solely on the LLM context.” – Aakriti Bhargava, VP of Product Engineering and AI at Revionics.

“We used the ADK to develop an agent that ensures we’re installing EV chargers where drivers need them most. The agent assists our data analysts to leverage geographical, zoning, and traffic data to inform and prioritize critical EV infrastructure investments that maximize driver convenience with less strain on our teams.” – Laurent Giraud, Chief Data (&AI) Officer, Renault Group. 

“We’ve implemented the Agent Engine as the backbone of our video analysis AI agent, powered by Gemini. This setup allows us to leverage the Python Vertex AI SDK without worrying about infrastructure, saving us an estimated month of development time. Plus, the Agent Engine’s API seamlessly connects with other Google Cloud products like Workflows, giving us excellent maintainability and room to grow.” – Rina Tsuji, Senior Manager, Corporate Strategy, Nippon Television Holdings, Inc.

Introducing Agent Engine: Deploying AI agents with enterprise-grade controls

Agent Engine is our fully managed runtime that makes it easy to deploy AI agents to production. No more rebuilding your agent system when moving from prototype to production. Agent Engine handles agent context, infrastructure management, scaling complexities, security, evaluation, and monitoring. Agent Engine also integrates with ADK (or your preferred framework) for a frictionless develop-to-deploy experience. Together, you can:

  • Deploy agents built using any framework – whether you’re using ADK, LangGraph, Crew.ai, or others, and regardless of your chosen model (Gemini, Anthropic’s Claude, Mistral AI, or others). This flexibility is paired with enterprise-grade controls for governance and compliance (see the deployment sketch after this list).
  • Keep the context in your sessions: Rather than starting from a blank slate each time, the Agent Engine supports short-term memory and long-term memory. This way, you can manage your sessions and your agents can recall your past conversations and preferences.
  • Measure and improve agent quality with comprehensive evaluation tools from Vertex AI. Improve agent performance by using the Example Store or by fine-tuning models to refine your agents based on real-world usage.
  • Drive broader adoption by connecting to Agentspace: You can register your agents hosted on Agent Engine to Google Agentspace. This enterprise platform puts Gemini, Google-quality search, and powerful agents in the hands of employees while maintaining centralized governance and security.
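As a rough sketch of that develop-to-deploy path, the snippet below wraps an ADK agent and deploys it to Agent Engine using the Vertex AI SDK. The module paths (reasoning_engines.AdkApp, agent_engines.create) reflect the SDK’s Agent Engine integration at the time of writing and may change between versions; the project, region, and staging bucket are placeholders.

```python
# Sketch: deploying an ADK agent to Agent Engine with the Vertex AI SDK.
import vertexai
from google.adk.agents import Agent
from vertexai import agent_engines
from vertexai.preview import reasoning_engines

# Placeholders: substitute your own project, region, and staging bucket.
vertexai.init(
    project="my-project",
    location="us-central1",
    staging_bucket="gs://my-staging-bucket",
)

root_agent = Agent(
    name="helpdesk_agent",
    model="gemini-2.0-flash",
    instruction="Answer internal IT helpdesk questions.",
)

# Wrap the ADK agent for Agent Engine, then deploy it to the managed runtime.
app = reasoning_engines.AdkApp(agent=root_agent, enable_tracing=True)

remote_app = agent_engines.create(
    agent_engine=app,
    requirements=["google-cloud-aiplatform[adk,agent_engines]"],
)
```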

Here’s how it all comes together:

Agent Engine connects across your enterprise for multi-agent systems

In the coming months, we will further expand Agent Engine capabilities with advanced tooling and testing. Your agents will have computer-use capabilities and will be able to execute code. Additionally, a dedicated simulation environment will let you rigorously test agents with diverse user personas and realistic tools to ensure reliability in production.

Introducing Agent2Agent protocol: Connecting agents across your enterprise ecosystem

One of the biggest challenges in enterprise AI adoption is getting agents built on different frameworks and vendors to work together. That’s why we partnered with many industry leaders who share our vision of multi-agent systems to create an open Agent2Agent (A2A) protocol.

Agent2Agent protocol enables agents across different ecosystems to communicate with each other, irrespective of the framework (ADK, LangGraph, Crew.ai, or others) or vendor they are built on. Using A2A, agents can publish their capabilities and negotiate how they will interact with users (via text, forms, or bidirectional audio/video) – all while working securely together.
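As an illustration, here is the kind of metadata an agent might publish in its A2A “Agent Card”, the document (conventionally served at /.well-known/agent.json) that lets other agents discover what it can do and how to interact with it. Field names follow the public A2A specification as initially released and may evolve; the endpoint and skill shown are hypothetical.

```python
# Illustrative sketch of an A2A Agent Card, expressed as a Python dict.
# Field names are based on the A2A spec at launch and may differ in later
# revisions; the URL and skill are hypothetical examples.
agent_card = {
    "name": "pricing-analyst",
    "description": "Forecasts the revenue impact of proposed price changes.",
    "url": "https://agents.example.com/pricing-analyst",  # hypothetical endpoint
    "version": "1.0.0",
    "capabilities": {"streaming": True, "pushNotifications": False},
    "defaultInputModes": ["text"],
    "defaultOutputModes": ["text"],
    "skills": [
        {
            "id": "forecast_price_change",
            "name": "Forecast price change",
            "description": "Estimate demand and margin impact of a new price.",
            "tags": ["pricing", "forecasting"],
        }
    ],
}
```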

As of today, 50+ partners such as Box, Deloitte, Elastic, PayPal, Salesforce, ServiceNow, UiPath, UKG, Weights & Biases, and many more are committed to working with us on the protocol. For details on the partners using the protocol, please refer to the blog here.

Defining interoperability together with our partners

Beyond working with other agents, your agents also need access to your enterprise truth – the ecosystem of information you have built across data sources, APIs, and business capabilities. You can equip agents with this existing enterprise truth without building from scratch, using any approach you prefer:

  • ADK supports Model Context Protocol (MCP), so your agents can connect to the vast and diverse data sources and capabilities you already rely on by leveraging the growing ecosystem of MCP-compatible tools (see the sketch after this list).

  • From ADK, you can also connect your agents directly to your enterprise systems and capabilities. This includes 100+ pre-built connectors, workflows built with Application Integration, or data stored within your systems like AlloyDB, BigQuery, NetApp and much more. For example, you can build AI agents directly on your existing NetApp data, no data duplication required.

  • Using ADK, you can also seamlessly connect to your existing agents built in other frameworks like LangGraph or call tools from diverse sources including MCP, LangChain, CrewAI, Application Integration, and any OpenAPI endpoints.

  • Through Apigee API management, we manage over 800K APIs that power your business, within and beyond Google Cloud. Using ADK, your agents can also tap into these existing API investments – no matter where they reside – with proper permissions.
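As a sketch of the MCP route mentioned above, the snippet below attaches tools from an MCP server to an ADK agent. The MCP tooling API has evolved across ADK releases, so treat the class names (MCPToolset, StdioServerParameters) and this direct-in-tools pattern as illustrative; the filesystem MCP server and local path are placeholders.

```python
# Sketch: giving an ADK agent tools served over MCP. Exact class names and
# usage vary between ADK releases; check the current ADK docs.
from google.adk.agents import Agent
from google.adk.tools.mcp_tool.mcp_toolset import MCPToolset, StdioServerParameters

# Launch a stdio-based MCP server (here, a filesystem server) and expose its
# tools to the agent. The server package and path are placeholders.
file_tools = MCPToolset(
    connection_params=StdioServerParameters(
        command="npx",
        args=["-y", "@modelcontextprotocol/server-filesystem", "/data/reports"],
    )
)

analyst = Agent(
    name="report_analyst",
    model="gemini-2.0-flash",
    instruction="Answer questions using the files exposed by the MCP server.",
    tools=[file_tools],
)
```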

Once connected, you can ground your AI responses with information like Google Search or specialized data from providers like Cotality, Dun & Bradstreet, HGInsights, S&P Global, and Zoominfo. For agents that rely on geospatial context, today we’re making it possible to ground your agents with Google Maps¹. We make 100 million updates to Maps data every day, ensuring it is fresh and factual. And now, using Grounding with Google Maps, your agents can provide responses with geospatial information tied to millions of places in the U.S.
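For a sense of how grounding looks in code, here is a sketch that grounds a Gemini response with Google Search through the google-genai SDK on Vertex AI; Grounding with Google Maps follows the same tool-based pattern once enabled for your project. The project and location are placeholders, and exact tool names may differ by SDK version.

```python
# Sketch: grounding a Gemini response with Google Search on Vertex AI via
# the google-genai SDK. Project and location are placeholders.
from google import genai
from google.genai import types

client = genai.Client(vertexai=True, project="my-project", location="us-central1")

response = client.models.generate_content(
    model="gemini-2.0-flash",
    contents="What construction permits were issued near our downtown store this week?",
    config=types.GenerateContentConfig(
        # Attach the Google Search grounding tool; Maps grounding uses the
        # same tools mechanism once available to your project.
        tools=[types.Tool(google_search=types.GoogleSearch())],
    ),
)
print(response.text)
```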

Enterprise-grade security for your AI agents: Building agents your enterprise can trust

Beyond functionality, enterprise AI agents operating in production face security concerns like prompt injection attacks, unauthorized data access, and generating inappropriate content. Building with Gemini and Vertex AI in Google Cloud addresses these challenges in multiple layers. You can: 

  • Control agent output using Gemini’s built-in safety features including configurable content filters and system instructions that define boundaries around prohibited topics and align with your brand voice.

  • Manage agent permissions through identity controls that let you determine whether agents operate with dedicated service accounts or on behalf of individual users, preventing privilege escalation and unauthorized access.

  • Protect sensitive data by confining agent activity within secure perimeters using Google Cloud’s VPC service controls, preventing data exfiltration and limiting potential impact radius.

  • Establish guardrails around your agents to control interactions at every step – from screening inputs before they reach models to validating parameters before tool execution. You can configure defensive boundaries that enforce policies like restricting database queries to specific tables or adding safety validators using lightweight models (see the callback sketch after this list).

  • Auto-monitor agent behavior using comprehensive tracing capabilities that give you visibility into every action an agent takes, including its reasoning process, tool selection, and execution paths.
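As one concrete example of the guardrails described above, here is a sketch of a pre-execution check using ADK’s callback hooks: a before-tool callback that rejects database queries against tables outside an allow-list. The callback name and signature follow ADK’s documented pattern but may differ slightly between versions, and query_database is a hypothetical tool.

```python
# Guardrail sketch: an ADK before-tool callback that blocks database queries
# against tables outside an allow-list. query_database is a hypothetical tool.
from google.adk.agents import Agent

ALLOWED_TABLES = {"orders", "inventory"}


def block_unapproved_tables(tool, args, tool_context):
    """Return an error payload to skip the tool call; return None to proceed."""
    if tool.name == "query_database" and args.get("table") not in ALLOWED_TABLES:
        return {"error": f"Queries against '{args.get('table')}' are not permitted."}
    return None  # None means: run the real tool as requested


def query_database(table: str, filter: str) -> dict:
    """Hypothetical tool that queries an internal database."""
    return {"table": table, "rows": []}


db_agent = Agent(
    name="db_agent",
    model="gemini-2.0-flash",
    instruction="Answer questions by querying the approved database tables.",
    tools=[query_database],
    before_tool_callback=block_unapproved_tables,
)
```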

Get started building multi-agent systems 

The real value of Vertex AI isn’t just the individual capabilities outlined above, but how they work together as an integrated whole. What previously required piecing together fragmented solutions from multiple vendors now flows seamlessly through a single platform. This unified approach eliminates painful tradeoffs among model choice, integration with enterprise apps and data, and production readiness. The result isn’t just faster development – it’s more reliable agents ready for enterprise workflows. To get started today:

  1. Build with Agent Development Kit
  2. Visit the Vertex AI console
  3. Explore our documentation

1. Grounding with Google Maps is currently available as an experimental release in the United States, providing access to places data in the United States only.