
EMM A2A Phase 1: Process Manager as A2A Server

EMM A2A Phase 1 Architecture


Expert Memory Machine already has several LangGraph agents: Process Manager, Task Manager, Calendar, Finance, Content Classifier. They communicate via MCP and task_store. But if an external agent (e.g. Claude Code or another orchestrator) wants to delegate work to them, it needs a standardized interface. That's why I started integrating A2A.

What Is A2A

A2A (Agent-to-Agent) — an open protocol from Google (April 2025) for AI agent interaction. Transport: JSON-RPC 2.0 over HTTP(S). The idea: one agent can delegate work to another via a standardized interface, without point-to-point integrations. Instead of writing a separate REST API for each agent, the client discovers the agent, sees its skills, and sends a task in a unified format.

A2A vs MCP — not competitors. MCP (Model Context Protocol) — for model-to-tools: function calls, resources. A2A — for agent collaboration: delegation, multi-turn, stateful. MCP: "model calls tool". A2A: "agent delegates task to another agent". In EMM I use both: MCP for Task Manager, Confluence, etc.; A2A — so an external agent can call Process Manager as a peer.

Core A2A elements:

  1. Agent Card — GET /.well-known/agent.json. JSON describing the agent: name, skills, capabilities, URL. Discovery: another agent finds this one and sees what it can do. Like OpenAPI, but for agents.
  2. Task lifecycle — submitted → working → completed | failed. The client sends a task, gets a taskId, and can poll the status. For long-running tasks — polling or streaming.
  3. Message format — multi-modal: text, images. I only have text parts for now; images are planned for later.
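The task lifecycle above can be sketched as a tiny state machine (a minimal illustration, not EMM code; the state names follow the lifecycle listed above):

```python
from enum import Enum

class TaskState(str, Enum):
    SUBMITTED = "submitted"
    WORKING = "working"
    COMPLETED = "completed"
    FAILED = "failed"

# Allowed transitions: submitted -> working -> completed | failed
TRANSITIONS = {
    TaskState.SUBMITTED: {TaskState.WORKING},
    TaskState.WORKING: {TaskState.COMPLETED, TaskState.FAILED},
    TaskState.COMPLETED: set(),
    TaskState.FAILED: set(),
}

def advance(current: TaskState, new: TaskState) -> TaskState:
    """Move a task to a new state, rejecting illegal jumps."""
    if new not in TRANSITIONS[current]:
        raise ValueError(f"illegal transition: {current.value} -> {new.value}")
    return new
```

The terminal states (completed, failed) deliberately have no outgoing transitions, so a finished task can't be resurrected.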

EMM + A2A Architecture

flowchart TB
    subgraph Client["A2A Client (external agent)"]
        C[Client Agent]
    end

    subgraph Backend["viz/backend (FastAPI)"]
        AC["GET /.well-known/agent.json"]
        TASK["POST /api/a2a/tasks"]
        ADAPTER[A2A Adapter]
        AC --> ADAPTER
        TASK --> ADAPTER
    end

    subgraph Agents["LangGraph Agents"]
        PM[Process Manager]
        TM[Task Manager]
    end

    C -->|"1. Discovery"| AC
    C -->|"2. tasks/submit (JSON-RPC)"| TASK
    ADAPTER -->|"graph.ainvoke()"| PM
    PM -->|"task_store, MCP"| TM

Interaction Sequence (Phase 1)

sequenceDiagram
    participant C as A2A Client
    participant API as FastAPI
    participant Adapter as A2A Adapter
    participant PM as Process Manager
    participant TM as Task Manager

    C->>API: GET /.well-known/agent.json
    API->>C: Agent Card (skills: weekly_report)

    C->>API: POST /api/a2a/tasks (tasks/submit)
    Note over C,API: JSON-RPC: method, params.taskId, params.message

    API->>Adapter: parse JSON-RPC, route by skillId
    Adapter->>Adapter: ProcessManagerMapper: message → LangGraph input

    Adapter->>PM: graph.ainvoke({"operation":"weekly_report","board_id":"..."})
    PM->>TM: get_board_metrics, task_store (existing)
    TM-->>PM: board data
    PM-->>Adapter: result (report_md)

    Adapter->>Adapter: map result → A2A Task + Artifact
    Adapter-->>API: Task {status: completed, artifact}
    API-->>C: JSON-RPC response {result: {task}}
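The adapter hop in the middle of the diagram (message → graph input → result → Task) can be sketched end to end with a stubbed graph. Everything here is a simplified illustration: the real adapter lives in viz/backend, and FakeProcessManagerGraph only mimics the ainvoke() interface.

```python
import asyncio
import json

class FakeProcessManagerGraph:
    """Stand-in for the LangGraph Process Manager (illustration only)."""
    async def ainvoke(self, state: dict) -> dict:
        return {"report_md": f"# Weekly report for board {state['board_id']}"}

async def handle_submit(params: dict, graph) -> dict:
    # A2A message -> LangGraph input (the first text part carries JSON params)
    text = params["message"]["parts"][0]["text"]
    graph_input = json.loads(text)
    # LangGraph input -> result
    result = await graph.ainvoke(graph_input)
    # result -> A2A Task with a single text artifact
    return {
        "id": params["taskId"],
        "status": "completed",
        "artifact": {"parts": [{"type": "text", "text": result["report_md"]}]},
    }

params = {
    "taskId": "test-1",
    "message": {"role": "user",
                "parts": [{"type": "text",
                           "text": '{"operation":"weekly_report","board_id":"b-42"}'}]},
}
task = asyncio.run(handle_submit(params, FakeProcessManagerGraph()))
```

Swapping the stub for the real graph is the only change needed: the adapter never looks inside the graph, only at its input and output dicts.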

What I Implemented in Phase 1

  1. Agent Card — GET /.well-known/agent.json. Returns JSON with skills (so far only weekly_report), capabilities, and url. A client can first do a GET, see the available skills, then send a submit with the right operation in the message.

  2. tasks/submit — POST /api/a2a/tasks with JSON-RPC. Body: method: "tasks/submit", params: { taskId, message }. Under the hood — Process Manager graph.ainvoke(). A single endpoint for all A2A requests; routing by skillId (Phase 2) or, by default, Process Manager.

  3. ProcessManagerMapper — converts A2A message to LangGraph input and back. I didn't touch Process Manager — it accepts {"operation":"weekly_report","board_id":"..."}. Adapter maps: extracts JSON from message.parts[0].text, passes to graph, wraps result in A2A Task with artifact. Separation of concerns: agent doesn't know about A2A.

Code: Agent Card — static JSON with skills. Mapper — extracts parts[].text, parses JSON, merges with defaults. Result — result_to_a2a_task wraps into A2A Task.

# viz/backend/viz_backend/a2a/agent_card.py
def build_agent_card(base_url: str) -> dict:
    """Static Agent Card served at /.well-known/agent.json."""
    return {
        "name": "Expert Memory Machine",
        "url": base_url,
        "skills": [{"id": "weekly_report", "name": "Weekly Report"}],  # ... other skill fields
        "capabilities": {"streaming": True},  # ... other capabilities
    }

# viz/backend/viz_backend/a2a/process_manager_mapper.py
import json

def a2a_message_to_input(message: dict) -> dict:
    """Convert an A2A message into Process Manager graph input."""
    parts = message.get("parts") or []
    text_parts = [p.get("text") for p in parts if p.get("type") == "text" and p.get("text")]
    merged = {"operation": "weekly_report", "board_id": None}  # ... other defaults
    for t in text_parts:
        # A JSON part carries explicit params; a bare string is treated as board_id
        data = json.loads(t) if t.startswith("{") else {"board_id": t}
        for k, v in data.items():
            if v is not None and k in merged:
                merged[k] = v
    return merged
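result_to_a2a_task, mentioned above, can look roughly like this. A sketch under my assumptions: the article only confirms that the task carries a status ("completed" or "failed") and an artifact with the report or error text; the exact field names inside the artifact are hypothetical.

```python
def result_to_a2a_task(task_id: str, result: dict) -> dict:
    """Wrap a Process Manager result into an A2A Task with one text artifact."""
    error = result.get("error")
    report = result.get("report_md")
    failed = error is not None or report is None
    return {
        "id": task_id,
        # On error: status "failed" and the error text goes into the artifact
        "status": "failed" if failed else "completed",
        "artifact": {"parts": [{"type": "text", "text": error or report or ""}]},
    }
```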

Example Request

curl -X POST http://localhost:8000/api/a2a/tasks \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0",
    "method": "tasks/submit",
    "params": {
      "taskId": "test-1",
      "message": {
        "role": "user",
        "parts": [{"type": "text", "text": "{\"operation\":\"weekly_report\",\"board_id\":\"<BOARD_ID>\"}"}]
      }
    },
    "id": 1
  }'

Response: JSON-RPC with result.task.status = "completed" or "failed"; result.task.artifact holds the result (the report markdown). On error — status: "failed", with the error text in the artifact.
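A successful response has roughly this shape (illustrative; only result.task.status and result.task.artifact are confirmed above, the remaining field names are my assumption):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "task": {
      "id": "test-1",
      "status": "completed",
      "artifact": {
        "parts": [{"type": "text", "text": "# Weekly Report\n..."}]
      }
    }
  }
}
```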

What you need for the request: board_id — the UUID of a board from Task Manager. You can get it from the frontend (the URL when a board is open) or via the MCP Task Manager. Process Manager collects metrics from that board (velocity, blocked cards, cycle time) and builds the weekly report.

Limitations

  • I wrote a custom JSON-RPC parser instead of using a2a-langchain-adapters. The spec is still a draft, so libraries may change. The risk: I'll have to update the parser manually. But while the spec is unstable, a custom parser gives more control.
  • One skill — weekly_report. Calendar, Finance, and Content Classifier are not in A2A yet. I'll add them later, once the adapter layer stabilizes.
  • No auth, rate limiting, or observability (Phase 4). Fine for local dev; production hardening is a separate phase.

Why I Chose Process Manager

Clear input/output, and it already uses task_store (delegation to Task Manager). Task Manager is more complex — many operations (list_board, create_card, move_card, list_workspaces). Process Manager has a single operation, weekly_report, with two params. Phase 1 is a pilot for the adapter layer: verify that the A2A ↔ LangGraph mapping works before scaling to other agents.

What's Next

Phase 2 — Task Manager (list_board): a second skill, with routing by skillId.
Phase 2+ — TaskStore and tasks/status: the result stays available after submit, so the client can poll the status later.
Phase 3 — streaming: SSE instead of polling. More in the next articles.