A2A Inspector in my application and MCP for it

EMM A2A Inspector: React Flow map, Agent Card skills, and tasks playground

The first time you need to debug an A2A interaction, search usually surfaces the official inspector from a2aproject: install it, enter a URL, read the agent card, chat, and peek at raw JSON-RPC on the side. It is a solid tool. It knows nothing about your repository, and that is a feature: point it at any host on the internet and click away.

Why you want an inspector

A2A is not “just another REST”. There is an Agent Card (who I am, which skills, where to call) and JSON-RPC-style methods such as tasks/submit and tasks/status. A mistake in the card (a duplicate skill id, a broken URL, the wrong agent on a skill) and an external client or IDE no longer understands what is going on. A mistake in the request body and you get a 400, an empty response, a task stuck “somewhere”, and you are left guessing whether it is network, auth, or routing to the wrong graph.
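A minimal sketch of the shapes involved, in TypeScript. The field names follow this article's usage (tasks/submit, taskId, skillId), so treat them as illustrative rather than a normative reading of the spec:

```typescript
// Illustrative shapes only; not the exact A2A schema.
interface AgentSkill {
  id: string; // must be unique within one card
  name: string;
  description?: string;
}

interface AgentCard {
  name: string;
  url: string; // where JSON-RPC calls go
  skills: AgentSkill[];
}

// Build a JSON-RPC 2.0 envelope for a task submission.
function buildSubmit(taskId: string, skillId: string, text: string) {
  return {
    jsonrpc: "2.0" as const,
    id: taskId,
    method: "tasks/submit",
    params: { taskId, skillId, message: { parts: [{ text }] } },
  };
}
```

Break any of these invariants (a skill id repeated, a wrong method name, a malformed params object) and you get exactly the failure modes above.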

An inspector shortens that loop: see the card exactly as the server returns it, not how you remember it from yesterday’s commit; send a test call and immediately see raw JSON, without five layers of abstraction in the app code. That is handy when you add a skill, change the backend, compare staging and localhost, or explain to a colleague “this is the contract that is live right now”. So the inspector is not a “pretty dashboard for its own sake”; it is protocol feedback: quickly see whether the break is in capability advertisement or in task execution.

In Expert Memory Machine I took a different path, not because “mine is cooler”, but because I already live inside one backend with many LangGraph agents and custom MCP servers. A separate app from another repo would constantly drift from what actually runs in the application. So my inspector is a slice of the UI plus a stdio server for Cursor. I keep MCP not to “chat with the agent over A2A like in a chat UI”, but to exercise A2A from the IDE: card, submit, status, validation, without hand-written curl, and with the option for the assistant to drive those steps. In the application UI the point is that you do not jump between “product” and “protocol tool”: on one screen you see who exists in langgraph.json, who is exposed on A2A, and who runs through threads.

How it works in the browser for humans

In the center, a map on React Flow. On the left a notional browser, in the middle a node for “A2A JSON-RPC”, on the right your graphs. If an agent is really exposed on A2A, an arrow runs from the center to it and animates. If not, a dashed “threads” line from the client straight through, visible at a glance without reading configs. Clicking a node selects that agent in the list. When you hit tasks/submit and it completes, the edge to the matching graph highlights for a couple of seconds, if skillId could be matched to a known id. That is not part of the A2A spec; it is a visual hint: “the request went here”.
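The solid-versus-dashed edge decision can be sketched roughly like this. The node ids ("a2a-jsonrpc", "client") and the EdgeSpec shape are hypothetical, not the app's actual React Flow wiring:

```typescript
// Hypothetical sketch: which graphs get an animated A2A edge
// and which fall back to a dashed "threads" line.
interface EdgeSpec {
  source: string;
  target: string;
  animated: boolean;
  style: "solid" | "dashed";
}

function buildEdges(graphIds: string[], a2aExposed: Set<string>): EdgeSpec[] {
  return graphIds.map((id) =>
    a2aExposed.has(id)
      ? { source: "a2a-jsonrpc", target: id, animated: true, style: "solid" }
      : { source: "client", target: id, animated: false, style: "dashed" }
  );
}
```

The point of keeping this derivation trivial is that the map never lies about the configs: it is computed from the same agent list the server exposes, not drawn by hand.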

Below are skill cards from the Agent Card and a “Spec check” block. It is deliberately minimal: required fields, unique ids among skills. There is no full JSON Schema per protocol version, and it should not be sold as a guarantee. If the spec grows, the rules need to be updated by hand.
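A check of that scope fits in a dozen lines. This is a hypothetical reimplementation of what “required fields, unique ids” could look like, not the project's actual code:

```typescript
// Minimal card validation: required fields plus skill-id uniqueness.
// Deliberately not a full JSON Schema check.
function validateCard(card: {
  name?: string;
  url?: string;
  skills?: { id?: string }[];
}): string[] {
  const errors: string[] = [];
  if (!card.name) errors.push("missing name");
  if (!card.url) errors.push("missing url");
  const seen = new Set<string>();
  (card.skills ?? []).forEach((skill, i) => {
    if (!skill.id) {
      errors.push(`skill #${i} missing id`);
    } else if (seen.has(skill.id)) {
      errors.push(`duplicate skill id: ${skill.id}`);
    } else {
      seen.add(skill.id);
    }
  });
  return errors;
}
```

An empty result means “passed the light check”, nothing stronger; as the article says, new spec rules would have to be added here by hand.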

There is also a small playground: taskId, skillId, message text (for me often a JSON string for the task manager). Two buttons, submit and status, call POST /api/a2a/tasks. Handy to check “does the backend respond at all” without opening Postman. A debug console collects raw lines (outbound, inbound, errors); Journal is the same, with responses sometimes truncated so the screen does not turn into a wall of JSON. For heavy debugging you still reach for server logs or the official inspector against an external URL. I would not pretend otherwise.
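A sketch of what the two buttons might send. The route POST /api/a2a/tasks comes from the article; the body shape and the action field are my guesses:

```typescript
// Hypothetical request bodies for the playground's two buttons.
type Action = "submit" | "status";

function buildBody(
  action: Action,
  taskId: string,
  skillId?: string,
  message?: string
): { action: Action; taskId: string; skillId?: string; message?: string } {
  // status only needs the task id; submit carries skill and message
  return action === "submit"
    ? { action, taskId, skillId, message }
    : { action, taskId };
}

// Fire the request and hand the raw JSON to the debug console.
async function callTasks(body: ReturnType<typeof buildBody>) {
  const res = await fetch("/api/a2a/tasks", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(body),
  });
  return res.json();
}
```

Going through the in-app route rather than hitting the agent directly is what keeps the playground on the same auth and routing as the product.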

Screenshots

[Six screenshots of the A2A Inspector UI, captured 2026-04-11.]

MCP when you are already in the IDE, or testing a flow through an agent that does not “know” A2A

The main scenario is simple: through MCP you run A2A calls as quick tests and checks, locally, on staging, after changing the card or router. It does not replace product chat with an agent; it is the layer of “hit the endpoint and read the response”.

Five tools, in essence: fetch the card from /.well-known/agent.json, send tasks/submit, query tasks/status, validate raw JSON of the card, build a short Markdown report (card + validation). For regressions and integration debugging that is usually enough; an agent in Cursor can walk the chain “card → submit → status” as if you were dictating steps to a teammate. The catch is that it is a normal HTTP client. Long SSE streams and the socket behavior of the real runtime are not shown by these tools; for that you still use backend logs, the in-app playground, or the official inspector.
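As an example of the last tool in that list, a report from a card plus validation findings could look like this; the Markdown layout is invented for illustration:

```typescript
// Hypothetical report builder: card summary plus validation findings.
function buildReport(
  card: { name: string; url: string; skills: { id: string; name: string }[] },
  errors: string[]
): string {
  return [
    `# ${card.name}`,
    `URL: ${card.url}`,
    "",
    "## Skills",
    ...card.skills.map((s) => `- ${s.id}: ${s.name}`),
    "",
    "## Validation",
    errors.length
      ? errors.map((e) => `- ${e}`).join("\n")
      : "- no issues found",
  ].join("\n");
}
```

The value of returning Markdown rather than raw JSON is that the IDE assistant can paste it into a chat or a commit message unchanged.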

Comparison with the official inspector

The official inspector is tuned for a human: enter a URL of someone else’s server, talk, watch the console. Mine is for a human already inside EMM and for an LLM via MCP. So “better” depends on the job.

With the in-app inspector you do not have to keep a mental map of skill → graph. The same X-API-Key, the same routers as in the application, fewer surprises like “works in the inspector, fails in the app”. The official UI’s client → A2A → agents diagram simply is not about that; your langgraph.json is not in view.

If you need to connect quickly to someone else’s A2A in the cloud, the a2a-inspector repo with docker run and port 8080 is often simpler. You do not have to build the whole frontend. Chat and the side console in that tool can be nicer for a “pure” protocol experiment when you do not care about graphs.

The official inspector is a universal browser client to any URL. Mine is part of the application and MCP for my monorepo. They do not have to compete; sometimes you keep both.

What not to expect

Card validation stays light. The playground sends one message shape; you still catch protocol corner cases with tests or an external client.