April 18, 2025

Agent to agent, not tool to tool: an engineer’s guide to Google’s A2A protocol

Think of MCP as “plug this model into my data” and A2A as “now let several specialized models talk to each other.”

Why do we need another protocol?

Most of the first‑wave “AI agent” stacks were really solo players: a single LLM given a toolbox by the developer.

Anthropic’s Model Context Protocol (MCP) solved the wiring problem for that model‑to‑tool layer by acting as a “USB‑C port for AI,” standardizing how an LLM is given files, database rows, or API handles.

But as soon as you want multiple autonomous agents—each possibly built in a different framework or run by a different vendor—you hit a new wall: how do those opaque black boxes discover one another, negotiate capabilities, and push tasks back and forth securely?

Google hopes you’ll choose the Agent2Agent (A2A) protocol to answer that question.

Used together, MCP and A2A let you compose rich, multi‑agent systems without inventing yet another bespoke JSON dialect.

Core building blocks developers should know

Agent Card (/.well-known/agent.json)

A machine‑readable manifest containing skills, endpoint URL and auth requirements—comparable to an OpenAPI spec for agents. Enables zero‑config discovery.
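A sketch of what such a manifest might look like, built as a Python dict. The field names below follow published A2A examples but are illustrative; check the spec for the authoritative schema, and the agent name, URL, and skill are invented for this example.

```python
import json

# Illustrative Agent Card -- field names follow published A2A examples,
# but the spec is the authoritative source for the schema.
agent_card = {
    "name": "invoice-agent",                            # hypothetical agent
    "description": "Extracts line items from uploaded invoices",
    "url": "https://agents.example.com/invoice",        # A2A endpoint
    "version": "1.0.0",
    "capabilities": {"streaming": True, "pushNotifications": False},
    "authentication": {"schemes": ["bearer"]},          # mirrors OpenAPI auth
    "skills": [
        {
            "id": "extract-line-items",
            "name": "Extract line items",
            "description": "Returns structured line items as a DataPart",
        }
    ],
}

# Served verbatim at /.well-known/agent.json for zero-config discovery.
print(json.dumps(agent_card, indent=2))
```

Because the card is static JSON, it can be served by any web server or CDN with no agent runtime involved.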

Task

The unit of work exchanged between agents. Moves through states: submitted → working → input‑required → completed, with canceled and failed as alternative terminal states.

Message & part

A message is a turn in the conversation; it is composed of one or more Parts (TextPart, FilePart, DataPart). This explicit typing lets agents negotiate UX (e.g., inline JSON vs. PDF vs. audio).
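A minimal sketch of one conversational turn carrying all three part types. The JSON shape here is illustrative (the exact field layout should be checked against the spec), and the file URI and parameters are invented:

```python
# One turn of conversation: a message composed of typed Parts.
# Shapes are illustrative; consult the A2A spec for the exact schema.
message = {
    "role": "user",
    "parts": [
        # TextPart: plain instructions
        {"type": "text", "text": "Summarize the attached contract."},
        # FilePart: binary content by reference (hypothetical URI)
        {"type": "file", "file": {
            "name": "contract.pdf",
            "mimeType": "application/pdf",
            "uri": "https://example.com/contract.pdf",
        }},
        # DataPart: structured parameters as inline JSON
        {"type": "data", "data": {"maxWords": 200}},
    ],
}

# A receiving agent can branch on part type to choose an appropriate UX.
kinds = [p["type"] for p in message["parts"]]
```

The explicit `type` field is what lets the receiving agent decide, per part, whether to render text inline, offer a file download, or feed structured data straight into a tool.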

Artifact

Durable output of a task—build logs, generated code, signed documents—also expressed as Parts so downstream agents can pick them up.

Streaming & push

tasks/sendSubscribe streams Server‑Sent Events; optional webhooks let a server push TaskStatusUpdateEvents into your infrastructure without polling. Built‑in patterns for conversational and “batch” agents alike.
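Under the hood this is ordinary SSE framing: `event:` and `data:` lines separated by blank lines. A minimal parser, fed a hypothetical stream of status updates, might look like this (the event payloads are invented for illustration):

```python
def parse_sse(stream_lines):
    """Parse Server-Sent Event lines (as streamed by tasks/sendSubscribe)
    into (event, data) tuples. Minimal sketch: ignores id/retry fields."""
    event, data = None, []
    for line in stream_lines:
        if line.startswith("event:"):
            event = line[len("event:"):].strip()
        elif line.startswith("data:"):
            data.append(line[len("data:"):].strip())
        elif line == "":  # a blank line terminates one event
            if data:
                yield (event or "message", "\n".join(data))
            event, data = None, []

# Hypothetical stream of task status updates:
raw = [
    "event: TaskStatusUpdateEvent",
    'data: {"state": "working"}',
    "",
    "event: TaskStatusUpdateEvent",
    'data: {"state": "completed"}',
    "",
]
events = list(parse_sse(raw))
```

In production you would hand this job to an SSE client library rather than parse by hand; the point is that the wire format is plain text over HTTP, so any stack can consume it.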

Design principles that feel familiar to backend engineers

Build on proven web standards

A2A is plain HTTPS plus JSON‑RPC 2.0 for requests and SSE for real‑time updates—no gRPC or exotic transports to tunnel through firewalls.
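Concretely, a request is just a JSON‑RPC 2.0 envelope POSTed over HTTPS. The sketch below builds one for tasks/send; the params schema (task id, message shape) is illustrative and should be checked against the spec:

```python
import json
import uuid

# A JSON-RPC 2.0 envelope for the tasks/send method. The params layout
# here is illustrative; the spec defines the exact schema.
request = {
    "jsonrpc": "2.0",
    "id": str(uuid.uuid4()),          # request correlation id
    "method": "tasks/send",
    "params": {
        "id": "task-123",             # task id chosen by the client
        "message": {
            "role": "user",
            "parts": [{"type": "text", "text": "Price this order."}],
        },
    },
}

# This string is the entire HTTP POST body -- no custom transport needed.
body = json.dumps(request)
```

Any HTTP client, proxy, or API gateway you already run can carry this traffic unmodified.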

Secure by default

Auth schemes mirror OpenAPI, so you can reuse OAuth 2.0 bearer tokens, mTLS, or signed JWTs that your infra team already trusts.

Long-running task support

Native lifecycle events keep both sides in sync for jobs that run minutes to days, something missing in many chat‑style APIs.

Modality-agnostic

Parts can be text, audio, or video, opening the door to multimodal agent collaboration.

Typical request flow (at a glance)

The flow is intentionally symmetrical: any compliant agent can act as a client or a server, enabling peer‑to‑peer meshes as well as classic hub‑and‑spoke designs.
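The basic shape is: fetch the Agent Card, submit a task, then poll (or stream) until a terminal state and collect the artifacts. The sketch below simulates that loop in‑process with stub functions; the function names are illustrative, not part of the spec, and the real versions would make the HTTP calls noted in the comments.

```python
# A minimal request flow, simulated in-process (no network).
# Function names are illustrative stand-ins for real HTTP calls.

def fetch_agent_card(base_url):
    # In reality: GET {origin}/.well-known/agent.json
    return {"name": "demo-agent", "url": base_url}

def send_task(task_id, text):
    # In reality: JSON-RPC tasks/send over HTTPS
    return {"id": task_id, "status": {"state": "working"}}

def get_task(task_id):
    # In reality: JSON-RPC tasks/get; this stub finishes immediately
    return {
        "id": task_id,
        "status": {"state": "completed"},
        "artifacts": [{"parts": [{"type": "text", "text": "42 USD"}]}],
    }

card = fetch_agent_card("https://agents.example.com/pricing")
task = send_task("task-1", "Price SKU-9")
while task["status"]["state"] not in ("completed", "failed", "canceled"):
    task = get_task(task["id"])

answer = task["artifacts"][0]["parts"][0]["text"]
```

Swap the polling loop for tasks/sendSubscribe and the same flow becomes streaming instead of request/response.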

Where A2A and MCP meet

MCP is fundamentally agent‑to‑tool; A2A is agent‑to‑agent.

They are complementary layers, not competitors. Google’s spec even includes a guidance note on using MCP inside an A2A agent to expose tool skills.

Use MCP when:

  • A single LLM needs access to tools like Postgres or a vector index. The MCP server exposes query/similarity tools
  • You need to keep tool access air-gapped or inside a VPC. MCP servers can run inside the same VPC
  • Agents need internal access to tools. Each agent can use MCP locally

Use A2A when:

  • Multiple agents need to communicate (e.g. pricing ↔ legal ↔ fulfillment). A2A coordinates task flow between agents
  • Agents live in separate networks or VPCs. They can then discover each other and connect via Agent Cards

Getting hands-on quickly

Read the spec

https://google.github.io/A2A/

Spin up the sample server

git clone https://github.com/google/A2A && cd A2A && docker compose up

This gives you Python and JS reference implementations plus a CLI client.

Publish your first Agent Card

Drop an agent.json alongside your service describing skills and auth headers; test discovery with the CLI.
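Discovery hinges on the well‑known path: given only an agent's base URL, a client derives the card location from the origin. A tiny helper (the function name is my own) shows the resolution rule:

```python
from urllib.parse import urljoin

def agent_card_url(base_url):
    """Resolve the Agent Card location from any URL on the agent's origin.
    The card lives at a fixed well-known path, not under the service path."""
    return urljoin(base_url, "/.well-known/agent.json")

# The leading slash anchors the path at the origin, so even a deep
# service URL resolves to the same card location.
url = agent_card_url("https://agents.example.com/invoice")
```

This is why a single origin advertises one card regardless of how many routes the service exposes.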

Bridge to MCP

Inside your agent process, mount local tools as MCP servers (e.g., Cloudflare Worker, Claude Desktop) and advertise those skills in your Agent Card.

Now external agents can first negotiate via A2A, then tunnel calls via MCP without hard‑coding integration details.
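One way to picture the bridge is a dispatcher that maps skill ids advertised in the Agent Card to local tool callables; in a real deployment those callables would invoke MCP tools rather than the lambda stand‑in used here. The skill id and return shape are invented for illustration:

```python
# Illustrative A2A-to-MCP bridge: skills advertised in the Agent Card
# dispatch to local tools. The lambda is a stand-in for a real MCP
# tool invocation; "query-db" is a hypothetical skill id.
local_tools = {
    "query-db": lambda params: {"rows": []},
}

def handle_skill(skill_id, params):
    """Route an incoming A2A task to the local tool backing that skill."""
    tool = local_tools.get(skill_id)
    if tool is None:
        raise ValueError(f"unknown skill: {skill_id}")
    return tool(params)
```

The external agent only ever sees the A2A surface; which tools sit behind each skill stays a private implementation detail.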

Because A2A is just HTTP, you can progressively adopt it: start by wrapping an existing REST service as an “agent,” then graduate to full autonomous workflows.

Why A2A matters for developers

Framework freedom

CrewAI Python agents can delegate a subtask to a Genkit JavaScript agent with zero custom glue code.

Enterprise-grade controls

IAM, audit logging, DLP and proxy inspection all slot in because traffic is sent over standard web technologies.

Less brittle orchestration

Tasks and artifacts are first‑class objects, not string‑encoded JSON blobs inside a chat message.

Designed with the future in mind

The spec already sketches negotiation for video, forms and dynamic UX, so your investment survives the jump to multimodal agents.

Final thoughts

MCP let us plug LLMs into the enterprise; A2A lets us network those newly empowered agents. Together they offer a composable, vendor‑neutral stack built on technology that feels familiar: HTTPS, JSON‑RPC, and Server‑Sent Events.

If your 2025 roadmap includes more than one autonomous agent, learning A2A now will save you a lot of bespoke webhooks later.

Happy hacking—and may your agents always find the right collaborators.
