MCP Authorization in 5 easy OAuth specs
Behind every secure MCP integration is a stack of OAuth standards working in harmony. Learn how they combine to deliver seamless authorization for LLMs.
Unless you’ve been living under a metaphorical rock, you’ve probably heard about the Model Context Protocol (MCP), a new protocol from Anthropic that aims to provide a standard way of supplying tools and context to Large Language Models (LLMs).
In this post, we are mostly going to ignore the specific features of MCP and focus on a key community question: How do we securely authorize access to an MCP server? Until recently, there was no obvious solution, but with some of the latest changes to the authorization section of the specification, we now have a new foundation. And in case it isn’t obvious from the title, we’re going to be talking about OAuth.
It’s not just one OAuth specification either, but a handful, each layering on top of the last and solving a different problem.
Spec #0: No Authorization
In the very beginning, version 2024-11-05 of the MCP specification didn’t cover authorization as a concern, so in hindsight it was no surprise that nearly all MCP servers from back then were expected to run on localhost. You’d configure your preferred client, like Claude, to start the server, and all communication happened over the STDIO transport.
This was okay while we were all collectively still figuring things out, but it has some important problems.
First, you typically had to be a developer, or at least have development tools installed, in order to run these MCP servers. This made MCP difficult to adopt for non-technical users—an area where model providers had otherwise excelled.
Often, these local MCP servers were intended to interact with remote services and therefore needed to authenticate themselves. The typical solution was to get an API credential for said remote service and make it available to the MCP server via a file or environment variable, generally in plaintext. I’m sure security folks love this.
Spec #1: Good ol’ OAuth 2.0
It turns out that authorizing access to a remote service on behalf of a user is a problem we’ve had on the web before, which leads us into our first spec: The classic OAuth 2.0 Authorization Framework from RFC 6749.
!!Technically, MCP expects you to implement OAuth 2.1, but we aren’t going to get into the differences here.!!
If we start by assuming most MCP servers in the future will be running remotely (i.e. not on your laptop like in the early days), then the pieces start to fit. Taking the standard “roles” that OAuth outlines:
- Resource owner: This is you. Someone who uses a service, like say GitHub, and wants to let an LLM access your GitHub resources over MCP.
- Resource server: This is GitHub’s MCP server, which requires credentials to be accessed (by your LLM).
- Client: This is your LLM, like Claude or Cursor, which will be given credentials to access the MCP server.
- Authorization server: This is generally the service again, such as GitHub, and we’ll find out soon how we can be even more flexible here with, you guessed it, more specs.
With all of the “actors” playing their proper “roles”, things in theory become simpler. Connecting your LLM to an MCP server should be as simple as logging in with your account, like signing into Medium with your Google account.
But how do you tell your LLM about an MCP server? You could start by giving it the URL of its Server-Sent Events (SSE) endpoint, but OAuth involves quite a few more details, like the locations of the authorize and token endpoints. In addition, OAuth clients traditionally need to register themselves ahead of time, and N new LLM providers times M new MCP servers means a lot of registrations that need to happen.
Spec #2: Protected Resource Metadata
So you’ve got the MCP server’s URL, but you still need to know how to interact with it securely:
- What token formats does it accept?
- Which scopes are supported?
- Which authorization servers does it trust?
Rather than hardcoding all of this and relying on assumptions, the server can publish a machine-readable metadata document at a well-known location (/.well-known/oauth-protected-resource). This is defined in RFC 9728, and it’s basically a discovery document for protected resources. When a client first attempts to connect without any credentials, the server should return a 401 Unauthorized response with a WWW-Authenticate header pointing at this well-known URL.
This metadata tells the client everything it needs to know: what the resource is called, which authorization servers can issue tokens for it, and how to verify those tokens. The configuration that used to require digging through docs becomes self-serve and standardized.
Spec #3: Authorization Server Metadata
Once the client knows which authorization server to use — based on the aforementioned metadata — the next question is how to interact with it. Enter RFC 8414, which defines how authorization servers can publish their capabilities via their own well-known endpoint (/.well-known/oauth-authorization-server).
This metadata answers questions like: Where should the client redirect the user to log in? How does it exchange a code for a token? What scopes, grant types, and client auth methods are supported?
With this in place, you don’t need to build custom config for each auth provider. Your LLM can simply follow the pointers and adapt to each new environment automatically — which, for something like MCP where users might be integrating with dozens of services, is a very big deal.
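As a sketch (with a hypothetical issuer URL), here’s how a client might derive the metadata URL per RFC 8414, and the handful of fields it would care about in the response:

```python
from urllib.parse import urlparse

def metadata_url(issuer: str) -> str:
    """RFC 8414 places the metadata document under a well-known path
    inserted between the host and any issuer path component."""
    parts = urlparse(issuer)
    return (f"{parts.scheme}://{parts.netloc}"
            f"/.well-known/oauth-authorization-server{parts.path}")

# A real client would GET this URL and parse the JSON response.
print(metadata_url("https://auth.example.com"))

# A sample of the fields a client reads from that response:
as_metadata = {
    "issuer": "https://auth.example.com",
    "authorization_endpoint": "https://auth.example.com/oauth2/authorize",
    "token_endpoint": "https://auth.example.com/oauth2/token",
    "grant_types_supported": ["authorization_code"],
    "code_challenge_methods_supported": ["S256"],
}
```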
!!Your authorization server and MCP server don’t need to be the same server! The beauty of this separation of concerns means you can combine your MCP server with an off-the-shelf compatible authorization server like AuthKit and get started even faster.!!
Spec #4: Dynamic Client Registration
There’s one last piece: registration. Traditionally, OAuth clients need to be registered ahead of time — often manually — with the authorization server. But that model breaks down quickly in an ecosystem where any LLM could talk to any MCP server at any time, with new clients and servers being published every day.
That’s why RFC 7591 allows clients to register themselves dynamically. Instead of emailing an admin or filing a ticket, the LLM just makes a POST request to the registration_endpoint and says: “Hey, here’s who I am, here’s how to contact me, and here are the flows I want to use.”
The server can respond with a client ID, maybe a secret, and optionally some rules or scopes. The point is: it’s now possible to build fully self-serve, zero-touch client registration. And that’s essential if MCP is going to scale to arbitrary clients and services without admin bottlenecks.
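That registration request is small enough to sketch in full. The field names below come from RFC 7591, while the client name and redirect URI are made up for illustration:

```python
import json

def build_registration_request(client_name: str, redirect_uri: str) -> dict:
    """Minimal RFC 7591 registration payload for a public client."""
    return {
        "client_name": client_name,
        "redirect_uris": [redirect_uri],
        "grant_types": ["authorization_code", "refresh_token"],
        "response_types": ["code"],
        "token_endpoint_auth_method": "none",  # public client, no secret
    }

# A real client would POST this JSON to the registration_endpoint
# discovered in the authorization server metadata.
body = build_registration_request("Example LLM Client",
                                  "http://127.0.0.1:33418/callback")
print(json.dumps(body, indent=2))
```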
Spec #5: Proof Key for Code Exchange (PKCE)
Okay, so we’ve got dynamic registration, metadata discovery, and OAuth 2.0 all working together. Your LLM is almost ready to initiate the flow. But there’s one last problem: it’s a public client.
In OAuth land, that means your LLM is not trusted as a safe place to store secrets — like a client_secret. Secrets are fine if you’re building a backend service, but your LLM runs locally or in a user-facing app. If you ship a secret with it, it’s not a secret anymore.
That’s where PKCE (pronounced “pixie”) comes in.
PKCE, short for Proof Key for Code Exchange (RFC 7636), was originally designed for mobile apps but is now standard practice for any public client. Instead of relying on a client secret, the LLM generates a one-time-use random string (the code verifier) and transforms it into a code challenge using a hashing algorithm. This challenge is sent during the authorization request. Later, when redeeming the code for a token, the LLM proves it’s the same client by presenting the original code verifier.
Why does this matter? It prevents attackers from hijacking the authorization code in transit — even if they somehow intercept it, they won’t have the code verifier needed to finish the flow.
In an MCP context, PKCE lets you run entirely in the open — with no secrets needing to be exchanged as part of the dynamic registration process, and no secret to possibly leak later on.
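The verifier/challenge dance described above is small enough to sketch in full. This Python example generates an RFC 7636 code verifier and its S256 challenge using only the standard library:

```python
import base64
import hashlib
import secrets

def make_pkce_pair() -> tuple[str, str]:
    """Generate an RFC 7636 code verifier and its S256 code challenge."""
    # 32 random bytes, base64url-encoded without padding -> 43 characters.
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    # The S256 challenge is the base64url-encoded SHA-256 of the verifier.
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge

verifier, challenge = make_pkce_pair()
# The challenge is sent with the authorization request; the verifier is
# presented later at the token endpoint to finish the flow.
print(len(verifier), len(challenge))  # 43 43
```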
All together now
So that’s MCP authorization in five OAuth specs — from the humble beginnings of localhost-only servers to a future where any LLM can discover, register with, and get authorized to talk to any MCP server automatically.
Each spec we’ve covered builds on the last:
- OAuth 2.0 gives us the core authorization flow.
- Protected Resource Metadata lets clients learn how to talk to a server.
- Authorization Server Metadata explains how to get tokens.
- Dynamic Client Registration removes the need for manual setup.
- PKCE lets those registered clients complete the flow securely without needing to store a secret.
Individually, each spec solves a specific problem. Together, they form a kind of lattice: a standardized, composable way to plug together tools, models, and services securely. That’s a huge win not just for developers, but for users — the ones who just want to connect their LLM to their favorite tool without dealing with tokens, scopes, or arcane config files.
In practice
At WorkOS we’ve spent long hours packaging all of these specs into a single API that applications can integrate with.
We then built mcp.shop, an online store where you can order some cool swag using an MCP client. All of it is powered by the WorkOS API. If you’re looking to build an MCP server of your own, check out our MCP Authorization guide.
Finally, if you’re just an OAuth nerd like us, take a look at our Careers page. We’d love to chat.