MCP Async Tasks: Building long-running workflows for AI Agents
What MCP Tasks are, why they matter, and the full technical guide to implementing them.
The 2025-11-25 revision of the Model Context Protocol (MCP) introduces Tasks: an experimental primitive that upgrades MCP from “synchronous tool calls” to a call-now, fetch-later protocol. In practical terms, Tasks let an MCP request return immediately with a durable handle, while the real work continues in the background and can be polled or subscribed to later.
Even though Tasks are still labeled experimental, they’re already one of the most important changes MCP has shipped. They close a production gap every serious agent developer has run into: timeouts, blocked sessions, and ad-hoc async hacks for anything that takes longer than a typical RPC round-trip. Tasks standardize long-running operations across the ecosystem so clients, servers, and SDKs can interoperate without bespoke side channels.
This article goes deep on how Tasks work, why the design matters, and how to implement them safely.
Why MCP Tasks are a big deal

Before Tasks, MCP requests were effectively synchronous: a client calls tools/call, waits, and receives a result. That model breaks under real workloads:
- Long operations exceed transport or host timeouts (think 30-minute ETL jobs, large file conversions, or multi-step provisioning).
- Agents can’t parallelize well because they’re stuck waiting for a single tool call to return before planning the next.
- Progress reporting is inconsistent because every server invents its own way to represent “still working”.
Tasks fix this by introducing a cross-request async state machine. Any request type that opts in can be augmented into a Task, and clients can rely on uniform semantics for status, progress, results, and cancellation.
The feature is experimental mainly because the maintainers want room to tune ergonomics and edge-cases (especially around SDK helpers and UX patterns). But the core protocol pieces are already solid enough that SDKs are actively implementing SEP-1686 across languages.
Mental model: Tasks as durable request executions
In MCP Tasks, every async operation has two roles:
- Requestor: sends a task-augmented request (client or server).
- Receiver: accepts it, executes the work, and owns task lifecycle (client or server).
The protocol is requestor-driven: the requestor decides when to create a task and how to orchestrate polling or concurrent tasks, while the receiver decides which requests are task-augmentable and how long tasks live.
Think of a task as a small, durable state machine plus a results pointer.
Capability negotiation: Opting in, per request type
Tasks aren’t assumed in MCP. Both sides have to advertise support during initialization, and they do it granularly: a peer doesn’t just say “I support tasks,” it says which kinds of requests may be task-augmented. That keeps async behavior predictable and prevents surprise background jobs in places that still need to be synchronous.
Server capabilities example
The following example tells us three concrete things about the server:
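An illustrative shape (Tasks are experimental, so verify exact field names against SEP-1686 / the 2025-11-25 spec revision you target):

```json
{
  "capabilities": {
    "tasks": {
      "requests": {
        "tools/call": {}
      },
      "list": {},
      "cancel": {}
    }
  }
}
```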
- It can accept task-augmented requests for the tools/call method.
- It implements task lifecycle APIs: by advertising list and cancel, the server is promising support for tasks/list and tasks/cancel in addition to tasks/get and tasks/result. So clients can enumerate in-flight jobs and stop them, rather than only polling.
- Anything not listed is still synchronous: if a request type isn’t present under requests, clients must not try to task-augment it with this server.
Client capabilities example
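An illustrative client capability declaration (field shapes assumed from SEP-1686; verify against the spec):

```json
{
  "capabilities": {
    "tasks": {
      "requests": {
        "sampling/createMessage": {},
        "elicitation/create": {}
      }
    }
  }
}
```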
This flips the perspective. Here the client is effectively saying:
- “If you (the server) ever send me a task-augmented sampling/createMessage or elicitation/create, I can handle that.”
That matters because in MCP either side can be a requestor. For example, a server running a long task might need to elicit user input mid-workflow. With this capability, the client is declaring it supports those incoming async requests and can participate in that multi-step task lifecycle.
Rules to remember:
- If capabilities.tasks is missing, don’t create tasks.
- The capabilities.tasks.requests set is exhaustive: if a type isn’t listed, it can’t be task-augmented.
- tasks.list and tasks.cancel are separately negotiated; a peer may support task creation but not listing, etc.
Tool-level negotiation: execution.taskSupport
Tool calls get an extra layer: in tools/list, each tool can declare:
- "forbidden" (default if missing): must stay synchronous
- "optional": requestor may choose sync or task
- "required": requestor must use tasks or the call is invalid
This is enforced only if the server also declared tools/call under capabilities.tasks.requests. Otherwise tasks are forbidden regardless of tool metadata.
This two-tier negotiation (global + per-tool) is subtle but powerful: it lets a server say “Tasks exist,” while a specific tool can say “I’m always fast, don’t bother,” or “I’m batchy and slow, please always task-augment.”
How to create an async task
To request async execution, the requestor adds a task field inside the normal request params. The only currently defined task parameter is ttl (time-to-live in milliseconds). If the receiver supports Tasks for that request type, it will treat this as a long-running job and return a task handle immediately instead of blocking for the final result.
Example: task-augmented tools/call:
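A sketch of such a request; the tool name and arguments are hypothetical:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "generate_report",
    "arguments": { "source": "s3://bucket/raw-data" },
    "task": { "ttl": 1800000 }
  }
}
```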
The receiver returns either the operation result directly (if you didn’t include task, or if Tasks aren’t supported here) or the metadata of a newly created task.
Example response:
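One plausible shape, with a hypothetical taskId and timestamps (the exact nesting of the task object may differ across experimental revisions):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "task": {
      "taskId": "3f8a9c1e-7b2d-4e5f-9a10-6c4d8e2b0a71",
      "status": "working",
      "statusMessage": "Generating report",
      "createdAt": "2025-11-25T10:00:00Z",
      "lastUpdatedAt": "2025-11-25T10:00:00Z",
      "ttl": 1800000,
      "pollInterval": 5000
    }
  }
}
```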
Here’s what each field in that task object is telling you:
- taskId: a globally unique, unguessable identifier for this execution. You’ll pass it to tasks/get, tasks/result, and tasks/cancel. Task IDs are receiver-generated unique strings; in practice they should be UUID-grade and unguessable because they gate access to task state/results.
- status: the current lifecycle state. Tasks have five states:
  - working: the receiver has accepted the request and is actively executing it
  - input_required: the receiver needs additional input
  - completed: success, result available
  - failed: execution failed
  - cancelled: the requestor cancelled before completion
- statusMessage (optional): human-readable context for UX/debugging (not a control field).
- createdAt: when the task record was created on the receiver; anchors TTL and helps with recovery after reconnect.
- lastUpdatedAt: the last time anything about the task changed (status/progress); useful for detecting stalls.
- ttl: how long the receiver promises to retain the task/result before it may be deleted. The requestor can suggest a TTL, but the receiver is authoritative.
- pollInterval (optional, hint): how frequently the receiver wants you to poll tasks/get to avoid overloading it. The value is in milliseconds.
At this point the receiver has accepted the work, and the requestor has a durable handle to track it.
Handling input_required (tasks + elicitation)
input_required is the bridge between async execution and interactive workflows. When a receiver needs more info to continue, it:
- Moves the task to input_required.
- Sends an elicitation (or other input request) tagged with the same related-task id.
- The requestor should preemptively call tasks/result to wait for the next stage, while still polling status if desired.
This is how Tasks support multi-step back-and-forth without inventing a new control plane.
Polling for status: tasks/get
Requestors poll by calling tasks/get, and should respect the server’s suggested pollInterval. Polling continues until the task finishes or input_required is encountered.
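A poll looks like this (hypothetical taskId):

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tasks/get",
  "params": { "taskId": "3f8a9c1e-7b2d-4e5f-9a10-6c4d8e2b0a71" }
}
```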
Response returns the full Task object.
Retrieving results: tasks/result
Results are fetched separately, after the task finishes. tasks/result is blocking: it holds the response until the task is terminal.
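The request shape mirrors tasks/get (hypothetical taskId):

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tasks/result",
  "params": { "taskId": "3f8a9c1e-7b2d-4e5f-9a10-6c4d8e2b0a71" }
}
```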
Rules:
- If the task is terminal, return exactly what the underlying request would have returned (success or JSON-RPC error).
- If the task is non-terminal, block until it is terminal.
- tasks/result responses must include related-task metadata in _meta, because the result payload doesn’t otherwise carry the taskId.
This symmetry is key: tasks don’t invent a new result format; they delay the existing one.
Associating messages to tasks
A task in MCP isn’t just “one request that finishes later.” It can turn into a mini workflow that spans multiple MCP interactions. While a task is running, the receiver may send additional MCP requests or notifications that are part of the same execution. For example, a long-running tool call might:
- emit a notifications/tasks/status update as it progresses,
- pause to ask the user for clarification via elicitation/create,
- or request a model-generated intermediate step via sampling/createMessage.
All of those are task-related messages: they’re not new, unrelated RPCs, they’re steps in the same async job.
To keep every step correctly attached to the right background execution, MCP requires task-related messages to carry a metadata tag:
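For example, an elicitation request sent mid-task carries the related-task key in _meta (key name per SEP-1686; the message text and taskId are hypothetical):

```json
{
  "jsonrpc": "2.0",
  "id": 12,
  "method": "elicitation/create",
  "params": {
    "message": "Which region should the report cover?",
    "_meta": {
      "io.modelcontextprotocol/related-task": {
        "taskId": "3f8a9c1e-7b2d-4e5f-9a10-6c4d8e2b0a71"
      }
    }
  }
}
```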
What this accomplishes:
- Correlation across methods. The client can reliably tell that an elicitation or sampling request belongs to a specific in-flight task.
- Recovery and auditability. If a client reconnects or is juggling many tasks at once, it can still stitch the workflow together using the shared taskId.
- Safe orchestration. Agents can route replies, UI events, and results back to the correct task without guesswork.
A couple of important nuances:
- You don’t add this tag to the task-management calls themselves (tasks/get, tasks/list, tasks/cancel) because they already include taskId as a parameter.
- tasks/result must include it in the response, because the returned payload uses the original request’s result format and doesn’t otherwise carry a taskId.
Progress and status notifications
Notifications are push messages a receiver can send to proactively update the requestor about a task. They differ from polling in one key way: polling is requestor-driven and authoritative, while notifications are receiver-driven and best-effort. In practice that means you poll tasks/get for the source of truth, and use notifications only to learn changes sooner and improve UX. If a notification is missed, polling still gets you back in sync.
There are two optional notification paths:
- notifications/tasks/status (state changes): When a task moves between lifecycle states (like working → input_required → completed), the receiver may emit a status notification. The payload is the full Task object, identical to what tasks/get returns, so clients can update UI immediately. But clients must still poll until they observe a terminal state themselves.
- Progress notifications (how far along): Tasks don’t introduce a new progress channel; they reuse MCP’s general Progress utility. If the original task-augmented request included a progressToken, that same token remains valid for the entire task lifetime, and the receiver may emit standard progress notifications against it until the task reaches a terminal status.
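An illustrative status notification; the params mirror the Task object that tasks/get returns, though the exact envelope may vary across experimental revisions:

```json
{
  "jsonrpc": "2.0",
  "method": "notifications/tasks/status",
  "params": {
    "taskId": "3f8a9c1e-7b2d-4e5f-9a10-6c4d8e2b0a71",
    "status": "completed",
    "statusMessage": "Report generated",
    "createdAt": "2025-11-25T10:00:00Z",
    "lastUpdatedAt": "2025-11-25T10:04:12Z",
    "ttl": 1800000
  }
}
```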
Cancelling tasks: tasks/cancel
Requestors can explicitly cancel a task:
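A cancellation request (hypothetical taskId):

```json
{
  "jsonrpc": "2.0",
  "id": 4,
  "method": "tasks/cancel",
  "params": { "taskId": "3f8a9c1e-7b2d-4e5f-9a10-6c4d8e2b0a71" }
}
```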
The following rules apply:
- You cannot cancel a task that is already in a terminal state (completed, failed, or cancelled).
- On valid cancellation, the receiver should stop work and transition the task to cancelled before responding.
- Once cancelled, the status must remain cancelled even if work later finishes.
- Cancelled tasks may be deleted at any time; don’t depend on retention.
Listing tasks: tasks/list
If supported, receivers return tasks in pages with an opaque cursor (nextCursor). Any task retrievable via tasks/get must appear in tasks/list for that requestor.
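An illustrative page request and response (cursor values are opaque and made up; exact result field names should be checked against the spec):

```json
{
  "jsonrpc": "2.0",
  "id": 5,
  "method": "tasks/list",
  "params": { "cursor": "eyJwYWdlIjoyfQ==" }
}
```

```json
{
  "jsonrpc": "2.0",
  "id": 5,
  "result": {
    "tasks": [
      {
        "taskId": "3f8a9c1e-7b2d-4e5f-9a10-6c4d8e2b0a71",
        "status": "working",
        "createdAt": "2025-11-25T10:00:00Z",
        "lastUpdatedAt": "2025-11-25T10:01:00Z",
        "ttl": 1800000
      }
    ],
    "nextCursor": "eyJwYWdlIjozfQ=="
  }
}
```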
This is especially useful for UIs that want to show “background jobs” or reconnect to tasks after a restart.
Error model
Tasks use two layers of error reporting:
- Protocol errors (standard JSON-RPC) for a bad taskId, an invalid cursor, etc.
- Execution errors expressed as status: failed, with diagnostics in statusMessage.
- For tools/call, a tool result with isError: true maps to task failure.
Critically, tasks/result for a failed task returns the same error the original request would have returned, preserving compatibility.
Security considerations for MCP Async Tasks
Because tasks are fetched later by taskId, servers must treat task IDs as sensitive capability handles. Every task should be bound to the same authorization context (user / tenant / API client) that created it, and all follow-up calls must enforce that binding.
Concretely: tasks/get, tasks/result, and tasks/cancel should only succeed if the caller’s auth context matches the task’s; otherwise they should fail as if the task doesn’t exist.
Likewise, tasks/list must be filtered so a caller only sees tasks from their own context. If your deployment doesn’t have auth contexts, then task IDs must be cryptographically unguessable, TTLs should be short, and task endpoints should be rate-limited to prevent enumeration or data leakage.
Implementation notes for MCP Async Tasks
A few practical patterns emerge from SEP-1686 and early SDK work:
Server side
- Durable task store: A task must outlive the HTTP/SSE request that created it. That means you can’t keep task state only in memory tied to a connection. Persist it in something durable (a DB row, job queue record, or workflow engine) so the requestor can safely poll later, reconnect, or fetch results after a restart.
- State transitions are append-only: Tasks are a small state machine with terminal states (completed, failed, cancelled). Once a task reaches a terminal state, it must never move again. This guarantees that clients don’t see “completed → working” regressions during retries, network races, or eventual-consistency delays.
- Respect (but may override) TTL: The requestor can ask for a TTL when creating a task, but the receiver is authoritative. You should store the requested TTL for visibility, then return (and enforce) the TTL your system can realistically support. Clients should rely on the TTL in the returned Task object, not the one they requested.
- Optional progress: If your server already supports MCP progress, keep doing so; Tasks don’t introduce a new progress channel. The same progressToken from the original request remains valid for the full task lifetime, so clients can render progress bars without custom plumbing.
- Idempotency: The protocol doesn’t require idempotent task creation, but real networks do. If a requestor retries a task-augmented request after a timeout, you don’t want to spawn duplicate background jobs. Practical fix: accept an idempotency key (or hash stable request inputs) and dedupe to the same taskId when you detect a retry.
Client / agent side
- Always poll as the source of truth: Status notifications are helpful, but they can be dropped or delayed. Clients should treat tasks/get as authoritative and continue polling until they see a terminal status. Think “poll for truth, listen for speed.”
- Parallelism becomes trivial: Once you can task-augment calls, you don’t need to serialize work behind slow tools. Fire off multiple tasks, keep their taskIds, and poll each independently. This is the clean MCP-native way to do concurrency without inventing side channels.
- UX: Use the receiver’s hints. Show statusMessage to explain what’s happening, respect pollInterval so you don’t overload servers, and surface progress events tied to the progressToken if they’re available. These three together make long-running jobs feel responsive instead of opaque.
- On reconnect: Tasks are built for flaky connections. If a client restarts or an agent crashes, call tasks/list (when supported) to rediscover in-flight tasks and resume polling/results, rather than leaving orphaned work running unseen.
Secure your Tasks with WorkOS MCP Auth
Async Tasks make MCP dramatically more powerful, but they also raise the stakes for security. A long-running task can span minutes or hours, emit follow-on requests, and expose results later via taskId. That means every task needs to be tied to a real user or tenant, with least-privilege access to the exact tools and resources it’s allowed to touch.
WorkOS makes that easy. AuthKit acts as an OAuth 2.1–compatible authorization server for MCP, aligned with the latest spec, so you can add standards-compliant auth to your MCP server without re-implementing OAuth edge cases yourself. It supports the core pieces you need for production-grade Tasks: PKCE flows, scoped tool permissions, secure token issuance/validation, and multi-tenant isolation, all with minimal MCP-specific glue.
If you’re building an MCP server that will run long-lived background work, secure it from day one. WorkOS MCP Auth lets you ship fast while staying compliant with the protocol and safe for real user data.
Final thoughts
Tasks may be experimental, but they’re foundational. They turn MCP into a protocol that can model real work: background jobs, human-in-the-loop steps, and multi-minute workflows, all while preserving interoperability and the simple JSON-RPC mental model MCP started with.
If you’re building an MCP server today, Tasks are the new default for anything slow. If you’re building a client or agent framework, Tasks are the key to safe concurrency and good user experience. And if you’re building enterprise MCP integrations, this is the primitive that makes “agentic automation” feel like infrastructure instead of a demo.
We’ll be watching how the experimental edges settle, but the direction is clear: async is now a first-class citizen in MCP.