Google is preparing for the next phase of enterprise AI: agents that leave the building.

A blog post hosted by Google Cloud CTO Will Grannis, published March 17 with input from multiple Google executives, lays out the transition from single-purpose AI tools to multi-agent systems that span multiple enterprises. This isn’t theoretical — it’s the architecture Google is building toward.

From Internal Tool to Cross-Company Actor

The shift is fundamental. Instead of AI agents operating within a single company’s walls, Google envisions agents that coordinate across organizational boundaries — negotiating with partner agents, accessing shared data, and executing workflows that touch multiple companies simultaneously.

For advertising, this means moving from humans logging into platforms to adjust bids and approve creative, to agents handling the entire campaign lifecycle autonomously across publishers, ad networks, and brand partners.

The same pattern applies across industries: supply chain coordination, financial transactions, healthcare data sharing, legal contract negotiation.

Six Principles for Cross-Enterprise Agents

Google Cloud senior technical director John Abel outlined six considerations:

  1. Treat agents as contracted services — with defined scope, SLAs, and accountability
  2. Define risk levels explicitly for each agent’s operating environment
  3. Agree on data schemas and standards across all participants
  4. Keep humans in the loop — when one agent’s decisions affect other agents, a human should review any change that requires verification
  5. Clarify costs and commercial terms of each agent partnership upfront
  6. Standardize the connection protocols used by all participants
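Several of these principles are machine-checkable. A minimal sketch of what an "agent as contracted service" might look like as a data structure — the field names and values here are illustrative assumptions, not anything Google has published:

```python
from dataclasses import dataclass

# Hypothetical sketch: the principles above expressed as a machine-readable
# "agent contract" that partner organizations could exchange and enforce.
@dataclass
class AgentContract:
    agent_id: str
    scope: list[str]              # actions the agent is contracted to perform
    sla_latency_ms: int           # response-time commitment (the SLA)
    risk_level: str               # explicitly declared, e.g. "low" / "medium" / "high"
    data_schema: str              # agreed schema version for exchanged data
    human_review_required: bool   # human in the loop for cross-agent changes
    cost_per_call_usd: float      # commercial terms agreed upfront
    protocol: str = "a2a-v1"      # standardized connection protocol

def action_allowed(contract: AgentContract, action: str) -> bool:
    """An action is permitted only if it falls within the contracted scope."""
    return action in contract.scope
```

The point of the sketch: scope, risk, and cost stop being slideware once they live in a structure both parties can validate before any agent acts.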

Zero Trust for Agents: Digital Passports

Google distinguished engineer Ashwin Ram sees zero trust as the foundation. Nothing is trusted by default — even an agent inside a company’s network must prove its identity and permission through a “digital passport” for each action or data request.

The model extends across entire multi-agent networks, each of which must be tested for quality, latency, cost, and business impact. The key question: which actions can an AI agent take on behalf of another company, and under which agreements?
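A per-action "digital passport" can be approximated with a signed, short-lived claim that the receiving side verifies before executing anything. This is a hypothetical sketch using a shared HMAC key — the function names, fields, and key-exchange assumption are mine, not Google's design:

```python
import hashlib
import hmac
import json
import time

# Illustrative only: a key the two partner organizations are assumed
# to have exchanged out of band.
SHARED_KEY = b"example-key-agreed-between-partners"

def issue_passport(agent_id: str, action: str, partner: str) -> dict:
    """Sign a claim describing one specific action the agent wants to take."""
    claim = {"agent": agent_id, "action": action, "partner": partner,
             "issued_at": time.time()}
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["signature"] = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return claim

def verify_passport(claim: dict, max_age_s: float = 300) -> bool:
    """Zero trust: reject unless the signature checks out and the claim is fresh."""
    claim = dict(claim)                      # don't mutate the caller's copy
    sig = claim.pop("signature", "")
    payload = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    fresh = time.time() - claim.get("issued_at", 0) < max_age_s
    return hmac.compare_digest(sig, expected) and fresh
```

Because the passport names a single action, an intercepted or replayed claim can't be repurposed: changing the action invalidates the signature, and the freshness window limits replay.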

Paranoid Mode and Gradual Autonomy

Ben McCormack, another Google Cloud contributor, outlined hard constraints:

  • Agents need clear limitations and hard-coded guardrails — for example, an agent can edit files but never delete them
  • APIs should act as a deterministic rulebook that enforces rules the agent cannot override
  • For sensitive data, a “paranoid mode” requires user confirmation before high-risk actions
  • Autonomy should be gradual — agents should not be given full freedom at once
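These constraints translate naturally into an API layer that the agent calls but cannot override. A minimal sketch, assuming a file-editing agent — the action names and "paranoid mode" confirmation hook are illustrative:

```python
# Hypothetical sketch of hard-coded guardrails: the API, not the model,
# is the deterministic rulebook.
ALLOWED_ACTIONS = {"read", "edit"}   # "delete" is simply never exposed
HIGH_RISK_ACTIONS = {"edit"}         # paranoid mode requires confirming these

def execute(action: str, path: str, paranoid: bool = False,
            confirm=lambda action, path: False):
    """Run an agent-requested action, enforcing rules the agent can't bypass."""
    if action not in ALLOWED_ACTIONS:
        # Guardrail: out-of-rulebook actions fail deterministically.
        raise PermissionError(f"{action} is not in the agent's rulebook")
    if paranoid and action in HIGH_RISK_ACTIONS and not confirm(action, path):
        # Paranoid mode: high-risk actions wait for explicit user confirmation.
        return "blocked: user confirmation required"
    return f"{action} performed on {path}"
```

Gradual autonomy then becomes a policy choice: start every agent with `paranoid=True` and a small `ALLOWED_ACTIONS` set, and widen both only as trust is earned.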

Data Governance That Travels

Software engineer Yingchao Huang emphasized that when an agent creates something new from a partner’s source material, the original access controls, retention policies, and audit trails should carry over. Data governance must travel with the data.
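One way to make governance travel is to bind the metadata to the data object itself, so that deriving a new artifact copies the controls forward instead of resetting them. A sketch under that assumption — the structure and field names are illustrative, not a described Google implementation:

```python
from dataclasses import dataclass, field

# Hypothetical sketch: governance metadata rides along with the content.
@dataclass
class GovernedData:
    content: str
    access: set                   # who may read this
    retention_days: int           # agreed retention policy
    audit: list = field(default_factory=list)

def derive(source: GovernedData, new_content: str, actor: str) -> GovernedData:
    """A derived artifact inherits the source's controls and extends its audit trail."""
    return GovernedData(
        content=new_content,
        access=set(source.access),             # access controls carry over
        retention_days=source.retention_days,  # so does retention
        audit=source.audit + [f"derived by {actor}"],
    )
```

The audit trail grows with every transformation, so a partner can always trace a derived artifact back to its source material and the agent that produced it.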

This is harder than it sounds. Current systems weren’t designed for autonomous actors that transform and redistribute information across organizational boundaries.

Continuous Learning or Rapid Decay

Antonio Gulli, Google senior director of the CTO Office for AI, warned that agents operating across organizations rapidly become outdated without continuous learning. Regulations change, markets shift, partner systems evolve.

Without feedback loops, cross-enterprise agents degrade from useful to dangerous — making decisions based on stale models in a dynamic environment.

What This Means for OpenClaw Users

Google’s framework applies directly to the OpenClaw ecosystem. As agents gain the ability to interact with external services — via MCP servers, A2A protocols, and direct API access — the same cross-boundary trust questions arise.

The difference: OpenClaw agents are self-hosted and user-controlled. The trust model starts from a fundamentally different place than cloud-managed enterprise agents. But the governance principles — digital passports, paranoid mode, gradual autonomy, data governance that travels — apply regardless of where the agent runs.


Google’s framing confirms what we’ve been tracking: the agent ecosystem is shifting from “how do I build an agent?” to “how do agents work together safely across organizations?” The companies that solve cross-boundary trust will define the next phase of enterprise AI.