The Agent2Agent (A2A) Protocol is an open standard for communication and interoperability between independent AI agents. Google announced it on April 9, 2025, at Google Cloud Next with support from more than 50 technology partners. On June 23, 2025, Google donated the full specification, SDKs, and developer tooling to the Linux Foundation, establishing neutral governance over what had become the emerging standard for multi-agent coordination. By its first anniversary in April 2026, the project counted more than 150 supporting organizations, had released version 1.0 of the specification, and had reached production deployments across supply chain, financial services, and enterprise IT operations.
The protocol sits at the agent coordination layer of the AI stack, enabling agents built on different frameworks and by different vendors to discover each other, delegate tasks, and exchange results without exposing internal logic, memory, or tooling. It is explicitly complementary to Anthropic's Model Context Protocol, which handles the connection between an agent and its tools; A2A handles the connection between agents themselves.
The rise of autonomous AI agents exposed a structural problem in enterprise software. Organizations began deploying multiple specialized agents: one handling customer service, another managing inventory, a third analyzing financial risk, and so on. Each agent might be built on a different framework (LangGraph, CrewAI, Semantic Kernel, AutoGen), hosted by a different vendor, and designed to operate independently. When business processes required these agents to hand off tasks, share context, or coordinate on multi-step workflows, there was no standard way to do it.
The resulting approach was ad hoc. Engineering teams wrote custom integration code to bridge agents from different vendors. These bridges were brittle, expensive to maintain, and did not scale. Each new agent added to a system potentially required new integration work with every other existing agent. The communication patterns were also opaque: there was no common vocabulary for one agent to tell another what it was capable of, what format it expected inputs in, or how to authenticate requests.
This problem grew more acute as agentic AI moved from research into production. Gartner estimated that 40% of enterprise applications would feature task-specific AI agents by 2026. Without a common protocol, enterprises faced a fragmented landscape where agents from different vendors simply could not talk to each other, limiting the practical value of multi-agent systems and reinforcing vendor lock-in.
The designers of A2A drew a direct analogy to how HTTP unified web communication. Before HTTP, different computer networks used incompatible protocols and could not easily share information. HTTP gave the web a common language. A2A aimed to do something similar for the emerging layer of AI agent coordination: provide a shared vocabulary and transport convention so that any compliant agent could communicate with any other compliant agent, regardless of the underlying framework or vendor.
Google introduced the Agent2Agent Protocol on April 9, 2025, during the Google Cloud Next conference. The announcement came alongside Google's own expanding ecosystem of agent tooling, including the Agent Development Kit (ADK) and the Vertex AI agent platform. From the outset, Google framed A2A as an open, community-owned standard rather than a proprietary API, inviting other vendors to participate in its development.
The initial launch included endorsements and early participation from more than 50 organizations. Technology companies in the first cohort included Atlassian, Box, Cohere, Intuit, LangChain, MongoDB, PayPal, Salesforce, SAP, ServiceNow, and Workday. Major professional services firms also joined as launch partners, including Accenture, BCG, Capgemini, Cognizant, Deloitte, HCLTech, Infosys, KPMG, McKinsey, PwC, TCS, and Wipro. The breadth of the initial coalition signaled that the agent interoperability problem was widely recognized across the industry.
Google published the specification on GitHub under the Apache 2.0 license and released a reference Python SDK alongside the announcement. The specification described the protocol's core data model (tasks, messages, artifacts, agent cards), transport bindings (HTTP with JSON-RPC 2.0 and server-sent events), and authentication approach.
On June 23, 2025, Google transferred the Agent2Agent Protocol specification, its accompanying SDKs, and all developer tooling to the Linux Foundation. The announcement was made at Open Source Summit North America in Denver, Colorado. At the time of the transfer, seven founding partner organizations co-established the Agent2Agent Protocol Project under the Linux Foundation: Amazon Web Services, Cisco, Google, Microsoft, Salesforce, SAP, and ServiceNow.
The decision to donate the project to the Linux Foundation reflected a deliberate strategy to ensure long-term vendor neutrality. Placing the standard under independent governance removed concerns that Google could unilaterally change the specification in ways that favored its own products. Jim Zemlin, executive director of the Linux Foundation, stated at the time that the donation would ensure long-term neutrality, open collaboration, and governance for the protocol.
The Linux Foundation set four primary objectives for the project: establish A2A as the premier interoperability standard for AI agents, cultivate a diverse global developer community, provide neutral governance, and encourage secure and collaborative AI innovation.
The project released A2A Protocol version 1.0 in March 2026, the first version designated as production-ready. Version 1.0 introduced signed Agent Cards for cryptographic identity verification, refined specifications to support enterprise-scale load-balancing patterns, and standardized multi-protocol support with gRPC alongside the original JSON-RPC 2.0 and HTTP+REST bindings.
At the project's first anniversary in April 2026, the Linux Foundation reported that more than 150 organizations had joined the project. Microsoft had integrated A2A into Azure AI Foundry and Copilot Studio. AWS added native support through Amazon Bedrock AgentCore Runtime. The core GitHub repository had accumulated more than 22,000 stars. Five production-ready SDKs covered Python, JavaScript, Java, Go, and .NET, with a Rust SDK also available.
The project also expanded into related protocol work. The Agent Payment Protocol (AP2) extended A2A for financial transaction coordination between agents. The A2UI (Agent to User Interface) specification addressed how agents could render content directly to end-user interfaces. The Universal Commerce Protocol (UCP) targeted agent-mediated commercial workflows.
A2A is built on existing web standards rather than inventing new transport mechanisms. The protocol uses HTTP as its transport layer, JSON-RPC 2.0 as the request-response format, and server-sent events (SSE) for streaming. This choice of widely understood standards was deliberate: it lowered the implementation barrier and allowed A2A to integrate with existing infrastructure, including API gateways, load balancers, proxies, and monitoring tools that already understand HTTP traffic.
The specification defines three protocol bindings that all map to the same semantic operations and data structures:
| Binding | Transport | Serialization | Use case |
|---|---|---|---|
| JSON-RPC 2.0 | HTTP POST | JSON | Default; widely compatible |
| gRPC | HTTP/2 | Protocol Buffers | High-throughput, typed services |
| HTTP+REST | HTTP verbs | JSON | RESTful integrations |
All client requests in the JSON-RPC binding are encapsulated in a standard JSON-RPC 2.0 Request object. The method field carries the operation name (for example, message/send or tasks/get). Responses follow JSON-RPC 2.0 conventions for both success and error cases.
Clients indicate the protocol version they expect using the A2A-Version header. Servers must process requests per the indicated version semantics or return a VersionNotSupportedError. Supported protocol bindings are declared in the Agent Card's preferredTransport and additionalInterfaces fields.
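A minimal request under the JSON-RPC binding can be sketched as follows. The envelope fields (jsonrpc, id, method, params) come from JSON-RPC 2.0 and the A2A-Version header from the spec; the params payload is a simplified illustration of the Message structure, not the complete schema.

```python
import json
import uuid

def build_send_message_request(text: str, protocol_version: str = "1.0") -> tuple[dict, dict]:
    """Build HTTP headers and a JSON-RPC 2.0 envelope for A2A's message/send operation."""
    headers = {
        "Content-Type": "application/json",
        # Servers reject unsupported versions with VersionNotSupportedError.
        "A2A-Version": protocol_version,
    }
    body = {
        "jsonrpc": "2.0",
        "id": str(uuid.uuid4()),   # correlates the response with this request
        "method": "message/send",
        "params": {
            "message": {
                "role": "user",
                "parts": [{"kind": "text", "text": text}],
            }
        },
    }
    return headers, body

headers, body = build_send_message_request("Summarize Q3 inventory levels")
wire_bytes = json.dumps(body).encode()  # POSTed to the agent's service endpoint
```

The response reuses the same envelope: a result object on success, or a JSON-RPC error object with an A2A-defined error code on failure.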
Many agentic tasks take longer than a simple HTTP request-response cycle can accommodate. A2A supports real-time incremental result delivery through server-sent events over a persistent HTTP connection. The SendStreamingMessage operation opens such a connection. The server then emits TaskStatusUpdateEvent objects as the task state changes and TaskArtifactUpdateEvent objects as partial outputs become available.
For tasks that exceed connection timeframes or where clients are disconnected, A2A also supports push notifications: the agent POSTs status updates to a webhook URL previously registered by the client. This three-tier update model (polling via GetTask, streaming via SSE, and asynchronous push via webhooks) gives implementers flexibility to match the delivery mechanism to the latency and reliability requirements of each use case.
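The streaming tier delivers events over standard SSE framing, so a client only needs a small parser to turn the byte stream into typed events. The sketch below handles the common `event:`/`data:` fields; a production parser would also handle `id:`, `retry:`, comments, and multi-line data per the SSE spec. The event names match those in the text; the JSON payload shapes are illustrative.

```python
def parse_sse_stream(raw: str) -> list[tuple[str, str]]:
    """Parse a server-sent-events stream into (event_type, data) pairs."""
    events = []
    for block in raw.strip().split("\n\n"):   # SSE events are blank-line separated
        event_type, data_lines = "message", []
        for line in block.splitlines():
            if line.startswith("event:"):
                event_type = line[len("event:"):].strip()
            elif line.startswith("data:"):
                data_lines.append(line[len("data:"):].strip())
        events.append((event_type, "\n".join(data_lines)))
    return events

sample = (
    "event: TaskStatusUpdateEvent\n"
    'data: {"state": "working"}\n'
    "\n"
    "event: TaskArtifactUpdateEvent\n"
    'data: {"artifactId": "report-1", "chunk": "partial output"}\n'
)
events = parse_sse_stream(sample)
```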
The A2A specification defines the following operations:
| Operation | Purpose |
|---|---|
| SendMessage | Initiate a task or continue a multi-turn interaction; returns a Task or direct Message |
| SendStreamingMessage | Same as SendMessage but opens an SSE stream for incremental updates |
| GetTask | Retrieve current task state, optionally including history |
| ListTasks | Query tasks with filtering and pagination |
| CancelTask | Request cancellation of an in-progress task |
| SubscribeToTask | Open an SSE stream for updates on an existing task |
| SetTaskPushNotification | Register a webhook URL for async updates |
| GetTaskPushNotification | Retrieve registered webhook configuration |
| GetExtendedAgentCard | Retrieve an authenticated, more detailed version of the Agent Card |
The Agent Card is central to how A2A enables decentralized discovery. It is a JSON document that an A2A server publishes, typically at the well-known path /.well-known/agent-card.json. Any client agent can fetch this document to learn what a remote agent can do before initiating any task.
The Agent Card describes the agent's identity and provider, its service endpoint URL, the protocol version it speaks, its supported capabilities (such as streaming and push notifications), the security schemes it accepts, and an array of skills the agent offers.
The skills array is particularly important for multi-agent orchestration. When a client agent needs to delegate a subtask, it can query the skill descriptions to determine whether a remote agent is capable of handling the work. Skills include human-readable descriptions that allow language models to reason about whether a given remote agent is appropriate for a given task.
Version 1.0 of the specification introduced signed Agent Cards, which allow clients to cryptographically verify that a card has not been tampered with. This addresses a class of attacks where an adversary replaces a legitimate agent card with one that advertises false capabilities or redirects requests to a malicious endpoint.
For extended capabilities, agents can offer a GetExtendedAgentCard endpoint that requires authentication before returning a more detailed version of the card. This allows agents to expose sensitive capability information only to verified clients.
The Task is the fundamental unit of work in A2A. A task represents a discrete piece of work that a client agent delegates to a remote agent. Each task has a unique server-generated identifier. Tasks are stateful: they progress through a defined lifecycle of states.
Task lifecycle states:
| State | Type | Meaning |
|---|---|---|
| SUBMITTED | Active | Task acknowledged by server |
| WORKING | Active | Server is actively processing |
| INPUT_REQUIRED | Interrupted | Agent is waiting for additional client input |
| AUTH_REQUIRED | Interrupted | Authentication or authorization needed before continuing |
| COMPLETED | Terminal | Task finished successfully |
| FAILED | Terminal | Task ended in error |
| CANCELED | Terminal | Task was canceled by the client |
| REJECTED | Terminal | Agent declined to accept the task |
The interrupted states support multi-turn interactions and human-in-the-loop workflows. An agent can pause a task and request additional information from the client before continuing. This models real-world situations where an agent discovers mid-execution that it lacks information or authorization to proceed.
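The lifecycle above can be modeled as a small state machine. This is a sketch of how a client SDK might classify states (the enum string values are illustrative, not the spec's exact wire representation):

```python
from enum import Enum

class TaskState(Enum):
    SUBMITTED = "submitted"
    WORKING = "working"
    INPUT_REQUIRED = "input-required"
    AUTH_REQUIRED = "auth-required"
    COMPLETED = "completed"
    FAILED = "failed"
    CANCELED = "canceled"
    REJECTED = "rejected"

TERMINAL = {TaskState.COMPLETED, TaskState.FAILED,
            TaskState.CANCELED, TaskState.REJECTED}
INTERRUPTED = {TaskState.INPUT_REQUIRED, TaskState.AUTH_REQUIRED}

def is_terminal(state: TaskState) -> bool:
    """Terminal states admit no further transitions; polling can stop."""
    return state in TERMINAL

def can_resume(state: TaskState) -> bool:
    """Interrupted tasks resume once the client supplies input or credentials."""
    return state in INTERRUPTED
```

A client loop would typically poll or stream until `is_terminal` returns true, pausing to gather user input whenever `can_resume` fires.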
Messages are the communication units exchanged within a task. Each message has a role designation ("user" or "agent") and contains one or more Parts. Messages can reference a task and a context to maintain conversational continuity across multi-turn interactions.
A Part is the smallest content unit within a message or artifact. A Part can contain:

- Plain text (a text part)
- File content, carried inline as base64-encoded bytes or referenced by URI (a file part)
- Structured JSON data (a data part)
This flexible Part model allows A2A to carry diverse content types within a single protocol, including text, images, audio, video, structured data, and embedded UI components.
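Assembling a multi-part message is then a matter of composing these units. The field names below follow the spec's general shape but are simplified for illustration; the task and context identifiers shown are hypothetical.

```python
def make_message(task_id: str, context_id: str, *parts: dict) -> dict:
    """Assemble a client-role A2A message referencing an existing task and context."""
    return {
        "role": "user",
        "taskId": task_id,        # ties the message to an in-flight task
        "contextId": context_id,  # ties it to the broader conversation
        "parts": list(parts),
    }

text_part = {"kind": "text", "text": "Here is the purchase order."}
data_part = {"kind": "data", "data": {"po_number": "PO-4521", "total": 1299.00}}

msg = make_message("task-42", "ctx-7", text_part, data_part)
```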
Artifacts are the tangible outputs of a task. Where messages carry communication between agents, artifacts carry deliverables: a generated document, a processed dataset, an audio file, a structured report. Each artifact has a unique identifier, a human-readable name, and is composed of one or more Parts.
Artifacts support streaming delivery. When a task produces a large output incrementally (for example, a long document being generated token by token), the server can emit TaskArtifactUpdateEvent messages that append chunks to the artifact rather than waiting until the full output is ready.
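Client-side reassembly of a streamed artifact can be sketched as follows. The assumption here is that each TaskArtifactUpdateEvent carries an artifact identifier, a text chunk, and an append flag indicating whether the chunk extends or replaces prior content; the exact field names are simplified from the spec.

```python
def apply_artifact_events(events: list[dict]) -> dict[str, str]:
    """Fold a stream of artifact-update events into complete artifacts."""
    artifacts: dict[str, str] = {}
    for ev in events:
        art_id, chunk = ev["artifactId"], ev["chunk"]
        if ev.get("append") and art_id in artifacts:
            artifacts[art_id] += chunk   # extend the in-progress artifact
        else:
            artifacts[art_id] = chunk    # first chunk, or full replacement
    return artifacts

stream = [
    {"artifactId": "report-1", "chunk": "Q3 revenue rose ", "append": False},
    {"artifactId": "report-1", "chunk": "4% year over year.", "append": True},
]
result = apply_artifact_events(stream)
```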
The contextId field groups logically related tasks and messages across conversation sessions. A context persists across multiple task invocations, allowing an orchestrating agent to maintain continuity in a multi-step workflow. Clients can provide their own context identifiers (within constraints) or receive server-generated ones.
The protocol includes a formal extension mechanism. Agents declare supported extensions via URI identifiers in their Agent Card. Extensions can be marked as optional (the agent can proceed without them) or required (the agent will reject requests that do not indicate support). Messages and artifacts include extension URIs indicating which extensions influenced their content. This allows the core protocol to remain lean while supporting domain-specific additions without breaking backward compatibility.
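The negotiation implied by the optional/required split can be sketched as a pre-flight check on the client side. The extension-entry shape (`uri` plus `required`) mirrors the description above; the URIs are hypothetical.

```python
def check_extensions(card_extensions: list[dict], client_supported: set[str]) -> list[str]:
    """Return the extensions both sides support, or fail fast if the agent
    requires an extension this client cannot handle."""
    for ext in card_extensions:
        if ext["required"] and ext["uri"] not in client_supported:
            raise ValueError(f"unsupported required extension: {ext['uri']}")
    return [e["uri"] for e in card_extensions if e["uri"] in client_supported]

card_exts = [
    {"uri": "https://example.com/ext/geo", "required": False},      # safe to ignore
    {"uri": "https://example.com/ext/signing", "required": True},   # must support
]
active = check_extensions(card_exts, {"https://example.com/ext/signing"})
```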
A2A and the Model Context Protocol (MCP) are often discussed together because both address agent communication, but they operate at different layers of the stack and solve different problems.
MCP, developed by Anthropic and released in November 2024, standardizes the connection between an AI model or agent and the external tools and data sources it uses (databases, APIs, file systems, code interpreters). MCP uses a client-server model where the agent is the client and a tool or resource is the server. Each MCP interaction is typically discrete and stateless: the agent calls a tool, gets a result, and moves on.
A2A, by contrast, addresses the connection between two agents. Rather than an agent calling a tool, A2A describes how Agent A delegates a complex task to Agent B. The interaction is stateful and potentially long-running. Agent B may itself use MCP internally to access its own tools while processing the task.
| Dimension | MCP | A2A |
|---|---|---|
| Layer | Agent-to-tool | Agent-to-agent |
| Relationship | Client calls server (tool) | Peer-to-peer delegation |
| State | Stateless per call | Stateful, task-scoped |
| Duration | Sub-second to seconds | Seconds to hours |
| Visibility | Agent knows tool schema | Agents' internals opaque; capabilities advertised via Agent Cards |
| Primary use | Access databases, APIs, files | Delegate subtasks, coordinate workflows |
| Originator | Anthropic | Google (donated to Linux Foundation) |
The official A2A documentation frames the relationship with an analogy: a mechanic (agent) uses MCP to communicate with diagnostic equipment (tools), while using A2A to communicate with other mechanics, the shop manager, and parts suppliers (other agents). Most production multi-agent systems use both protocols: MCP for each agent's internal tool access and A2A for coordination across agents.
When Google launched A2A in April 2025, the initial coalition included more than 50 companies. The technology partners spanned AI infrastructure (Cohere, MongoDB), business applications (Atlassian, Box, Intuit, Workday), fintech (PayPal), and enterprise software (Salesforce, SAP, ServiceNow). The professional services participants (Accenture, Deloitte, McKinsey, PwC, and others) indicated that consulting firms were prepared to build A2A-based solutions for enterprise clients from the protocol's earliest days.
When the project transferred to the Linux Foundation in June 2025, seven organizations became founding members of the formal project:
| Organization | Role in A2A ecosystem |
|---|---|
| Amazon Web Services | Cloud platform; later added native A2A support in Bedrock AgentCore Runtime |
| Cisco (Outshift) | Enterprise networking and security |
| Google | Protocol creator and primary contributor |
| Microsoft | Cloud platform; integrated A2A into Azure AI Foundry and Copilot Studio |
| Salesforce | CRM and business automation |
| SAP | Enterprise resource planning |
| ServiceNow | IT service management and workflow automation |
The project grew from 50 initial partners at launch to more than 100 by June 2025 and more than 150 by April 2026. The expansion included major enterprise software vendors (IBM, Adobe, S&P Global Market Intelligence, Twilio) and platform providers across the cloud computing, financial services, healthcare, and manufacturing sectors.
Production deployments emerged within the first year. Tyson Foods and Gordon Food Service built collaborative A2A systems to share product data and sales leads in real time, reducing friction in food supply chain coordination. ServiceNow integrated A2A into its AI Agent Fabric multi-agent layer. Twilio implemented latency-aware agent selection using the Agent Card skill descriptions to route tasks to the most appropriate available agent.
The A2A project maintains official SDKs under the a2aproject GitHub organization. All SDKs are released under the Apache 2.0 license and are designed to implement the same specification, enabling cross-language agent interoperability.
| Language | Repository | Notes |
|---|---|---|
| Python | a2a-python | Reference implementation; most extensively documented |
| JavaScript | a2a-js | Supports Node.js and browser environments |
| Java | a2a-java | Enterprise Java and Spring Boot integration |
| Go | a2a-go | High-performance server implementations |
| C#/.NET | a2a-dotnet | Microsoft stack integration |
| Rust | a2a-rs | Systems programming and performance-critical deployments |
All SDKs handle the low-level mechanics of the protocol: constructing JSON-RPC 2.0 requests, managing SSE connections, parsing Agent Cards, and implementing the task lifecycle state machine. Developers build on top of the SDK to add their agent's specific logic.
The project also maintains a samples repository with worked examples across all supported languages. Additional tooling includes the A2A Inspector, a debugging utility for inspecting messages exchanged between agents, and the Technology Compatibility Kit (TCK), a test suite that implementers can run to verify their SDK or server implementation conforms to the specification.
Framework-level integrations are available for LangGraph, CrewAI, Semantic Kernel, AutoGen, and Google's Agent Development Kit (ADK), among others. These integrations allow developers to expose existing agents built in these frameworks as A2A-compliant servers without rewriting core agent logic.
Large organizations with multiple specialized AI agents across business functions use A2A to connect those agents into end-to-end workflows. A procurement agent can delegate spending-category analysis to a finance agent, which returns structured results that the procurement agent incorporates into a purchase recommendation. Each agent retains its specialized tooling and knowledge while participating in the broader workflow through a common protocol.
Supply chain processes involve multiple parties (manufacturers, distributors, retailers, logistics providers) each operating their own systems and agents. A2A enables agents at different organizations to coordinate on order processing, inventory queries, and fulfillment status without requiring direct system integrations. Agents at different companies can discover each other via Agent Cards and communicate through standardized task delegation, reducing the integration burden of multi-party supply chain automation.
Financial institutions use A2A to coordinate specialized agents across risk assessment, compliance checking, fraud detection, and customer service. A loan origination workflow, for example, might involve a customer-facing agent delegating identity verification to a compliance agent, credit analysis to a risk agent, and document processing to a back-office agent, with results flowing back through A2A task responses. The protocol's support for long-running tasks and human-in-the-loop pauses accommodates regulatory requirements that mandate human review at certain decision points.
In IT operations, A2A enables agents specialized in monitoring, diagnostics, remediation, and change management to coordinate on incident response. An alert from a monitoring agent can be routed via A2A to a diagnostic agent that determines root cause, which then delegates remediation to an execution agent that implements the fix, all while an audit agent logs each step. ServiceNow's integration with A2A targets precisely this class of multi-agent IT automation.
Google's Agentspace and similar platforms allow businesses to make third-party agents available to their employees. A2A provides the communication layer that enables those third-party agents (built by different vendors on different frameworks) to participate in collaborative workflows alongside first-party agents. Google Cloud also opened an AI Agent Marketplace where independent software vendors can sell A2A-compliant agents directly to enterprise customers.
A2A is designed with enterprise security requirements in mind, though the specification delegates many security decisions to implementers rather than mandating specific mechanisms at the protocol level.
All HTTP-based A2A communication must use HTTPS in production environments. The specification recommends TLS 1.3 or later with strong cipher suites. The gRPC binding uses TLS by default.
Authentication in A2A operates at the HTTP transport layer rather than within the protocol payload. Agents declare their supported authentication schemes in the Agent Card using OpenAPI security scheme objects. Supported schemes include API keys, HTTP authentication (Basic and Bearer), OAuth 2.0, OpenID Connect, and mutual TLS.
Credential provisioning happens out-of-band. A client agent obtains credentials separately (for example, through an OAuth token exchange) and passes them in the Authorization HTTP header when making requests. No credentials are embedded in Agent Cards.
Mutual TLS is considered the strongest option for agent-to-agent authentication because it provides cryptographic identity binding that does not rely on shared secrets. Each agent's identity is bound to a certificate, which can be revoked immediately if the agent is compromised. Organizations with existing device certificate infrastructure (such as 802.1X) can extend that infrastructure to manage agent identity at scale.
The A2A specification does not define a standard authorization framework. This is a recognized gap. Once a client agent is authenticated, it is up to the remote agent to determine whether the client is permitted to request the specific skill it is asking for. The recommended approach is to scope OAuth tokens to specific skills, enforcing least-privilege at the credential level.
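One way to realize skill-scoped least privilege is to check the token's granted scopes against the skill being invoked before dispatching the task. The `skill:<id>` scope naming convention below is an assumption for illustration; the spec does not mandate a scope format.

```python
def authorize_skill(token_scopes: set[str], skill_id: str) -> bool:
    """Least-privilege check: the presented OAuth token must carry a scope
    naming the specific skill requested (convention assumed, not spec-defined)."""
    return f"skill:{skill_id}" in token_scopes

# Scopes as they might be extracted from a validated access token.
scopes = {"skill:stock-lookup"}

assert authorize_skill(scopes, "stock-lookup")        # permitted
assert not authorize_skill(scopes, "reorder-forecast") # out of scope
```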
Pre-1.0 versions of the specification had no mechanism for clients to verify that an Agent Card had not been tampered with. Version 1.0 introduced signed Agent Cards, where the card includes a cryptographic signature that clients can verify against a known public key. This protects against attacks where an adversary replaces a legitimate agent card at the DNS or CDN level to redirect traffic to a malicious endpoint.
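The verification flow can be illustrated with a detached signature over the card's canonical bytes. Note the simplification: the 1.0 spec uses public-key signatures verified against a known public key, whereas this self-contained stdlib sketch substitutes a symmetric HMAC; the canonicalization and constant-time comparison steps carry over to the real scheme.

```python
import hashlib
import hmac
import json

def canonical_bytes(card: dict) -> bytes:
    """Serialize with sorted keys so signer and verifier hash identical bytes."""
    return json.dumps(card, sort_keys=True, separators=(",", ":")).encode()

def sign_card(card: dict, key: bytes) -> str:
    return hmac.new(key, canonical_bytes(card), hashlib.sha256).hexdigest()

def verify_card(card: dict, signature: str, key: bytes) -> bool:
    expected = sign_card(card, key)
    return hmac.compare_digest(expected, signature)  # constant-time comparison

key = b"demo-key"  # stand-in for real key material
card = {"name": "inventory-agent", "url": "https://agents.example.com/a2a"}
sig = sign_card(card, key)
assert verify_card(card, sig, key)

# A card altered at the CDN or DNS level fails verification.
tampered = dict(card, url="https://evil.example.com/a2a")
assert not verify_card(tampered, sig, key)
```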
The A2A project maintains a public roadmap on the official documentation site. As of early 2026, the stated roadmap includes registry-based agent discovery that extends beyond per-domain .well-known endpoints.

The project also has active discussions around agent reputation systems (mechanisms to assess whether a given agent is trustworthy based on its history of task execution) and cross-organization policy frameworks (how enterprises can set access control policies that govern which external agents are permitted to interact with their internal agents).
Industry reception to A2A was broadly positive at launch, though accompanied by practical questions about adoption pace and the relationship to existing integration patterns.
The breadth of the initial partner list was widely cited as evidence of genuine cross-industry demand for agent interoperability. The decision to donate the protocol to the Linux Foundation within two months of launch was interpreted as a signal that Google had learned from earlier dynamics in the AI standards space, where standards maintained by a single company tended to attract less participation from competitors.
The A2A versus MCP framing was covered extensively in technical media. The consensus that emerged was that the two protocols addressed different problems and were complementary rather than competing, though some commentators initially suggested the protocols overlapped. Google and Anthropic both published documentation affirming the complementary framing.
By September 2025, some analysts began asking whether A2A adoption was proceeding as quickly as the initial announcement implied. A blog post from fka.dev titled "What happened to Google's A2A?" raised questions about whether real-world production deployments were materializing as fast as the partner count suggested. The April 2026 one-year report from the Linux Foundation directly addressed this by citing specific named production deployments and cloud platform integrations.
Academic and security research took a more critical look at the protocol. A 2025 paper on the arXiv ("Improving Google A2A Protocol: Protecting Sensitive Data and Mitigating Unintended Harms in Multi-Agent Systems") identified weaknesses in how the protocol handled sensitive data in multi-agent systems, including token scope issues and the absence of explicit user consent mechanisms before sensitive data sharing between agents. Cloud Security Alliance published threat modeling analysis of A2A using the MAESTRO framework in April 2025, cataloging attack surfaces including agent card tampering, agent impersonation, and replay attacks.
Several limitations of A2A have been noted by researchers and practitioners:
No standardized authorization framework: A2A handles authentication but leaves authorization to implementers. In complex multi-agent systems, this can lead to "authorization creep," where agents accumulate broad permissions that are not properly scoped to specific tasks. The OAuth 2.0 delegation model works for single-hop authorization but lacks a standard mechanism for multi-hop delegation chains, where Agent A delegates to Agent B which further delegates to Agent C.
Agent Card trust depends on infrastructure security: Agent Cards lack mandatory cryptographic verification in versions prior to 1.0. Even with signed cards in version 1.0, the security of the signing mechanism depends on the broader PKI infrastructure of the deploying organization.
No native machine-readable skill schema: Agent Card skills include human-readable descriptions that language models can use to reason about agent capabilities. However, there is no fully standardized machine-readable schema for skill inputs and outputs (analogous to OpenAPI schemas for REST APIs). This means agents cannot automatically generate type-safe client code for skill invocation without additional conventions.
Sensitive data handling: The protocol does not provide built-in mechanisms to prevent agents from unnecessarily passing sensitive data through intermediary agents in a delegation chain. Research on multi-agent systems has demonstrated prompt injection attacks with high data leakage rates against systems lacking additional controls.
Long-lived token vulnerabilities: The lack of enforcement of strict token expiration durations creates windows of vulnerability if credentials are compromised. The protocol does not prescribe token lifetime limits.
Discovery at scale: The .well-known/agent-card.json convention works for discovering a known agent at a known domain but does not address the broader problem of finding agents that can handle a specific task across the entire ecosystem. Registry consolidation is on the roadmap but not yet standardized.