DPoP (RFC 9449) explained: How sender-constrained OAuth tokens make token theft a non-event
A practical walkthrough of RFC 9449 for engineers: the proof JWT, server-issued nonces, key storage in the browser, and where DPoP fits next to mTLS.
For most of OAuth 2.0's life, access tokens have been pure bearer tokens: whoever holds the string gets the access. If that token leaks through a compromised SPA, a malicious browser extension, an over-eager log line, or a TLS-terminating proxy, the attacker is indistinguishable from the legitimate client. That is the whole security model, and it is the reason token theft keeps showing up in post-mortems.
Demonstrating Proof-of-Possession (DPoP), standardized in September 2023 as RFC 9449, closes that gap at the application layer. It binds access and refresh tokens to a public/private key pair that the client holds, and requires the client to sign a fresh proof JWT for every token request and every resource request. A stolen token without the matching private key is inert.
DPoP is not theoretical. Bluesky's atproto OAuth profile requires it for every authorized request. FAPI 2.0, the open-banking security profile, names DPoP as one of two acceptable sender-constraining mechanisms. OAuth 2.1 and the Model Context Protocol (MCP) both list sender-constrained tokens as the recommended hardening for public clients, which now includes AI agents. If you are shipping a public OAuth client in 2026, the question is not whether to adopt DPoP, but when.
This article walks through what RFC 9449 actually requires, what the proof JWT looks like on the wire, how the nonce dance works, and the browser-side implementation details that are easy to get wrong.
What DPoP is, in one paragraph
DPoP is an application-layer mechanism for sender-constraining OAuth 2.0 access tokens and refresh tokens. The client generates an asymmetric key pair (typically P-256 for ES256), keeps the private key on the device, and presents proof of possession on every token request and every resource request via a short-lived JWT in a DPoP HTTP header. The authorization server binds issued tokens to the JWK SHA-256 thumbprint of the client's public key by adding a confirmation (cnf) claim. The resource server verifies that the thumbprint bound to the token matches the public key used to sign the proof. A token alone is useless; a token plus a correctly scoped, freshly signed DPoP proof is what gets you access.
The problem DPoP solves
RFC 6750 defines bearer tokens as exactly what they sound like: anyone who possesses the token can use it. That is fine until one of the following happens:
- Cross-site scripting (XSS): Malicious script reads the token out of memory, localStorage, or a cookie.
- Malicious or compromised browser extensions: Extensions with broad permissions can read network traffic or page memory.
- Logs and observability pipelines: Tokens leak through access logs, error trackers, and debugging endpoints.
- Proxies and intermediaries: TLS-terminating proxies, corporate MITM boxes, and mobile carriers that insert themselves between the client and the API.
- Device compromise: Refresh tokens sitting on disk in a mobile app's shared storage.
PKCE (RFC 7636) protects the authorization code exchange, but does nothing once an access or refresh token has been issued. Refresh token rotation reduces the blast radius, but still assumes the attacker does not win the race. Sender-constraining is the class of mitigations that makes stolen tokens unusable in the first place.
DPoP versus the alternatives
There are really only two mainstream sender-constraining mechanisms in the OAuth ecosystem today.
Mutual TLS (RFC 8705) binds the access token to the client certificate presented during the TLS handshake. The authorization server records a hash of the certificate in the cnf.x5t#S256 claim, and the resource server checks that the same certificate is presented on every request. It is robust and it is what most open-banking deployments around the world still run on. But mTLS is a transport-layer mechanism, which means it requires X.509 certificate issuance and rotation, it needs TLS termination under your control, and it is effectively unavailable to browsers and most mobile SDKs. SPAs cannot present client certificates. For confidential server-to-server clients that already have a PKI, mTLS is still the stronger choice.
DPoP (RFC 9449) works at the application layer, using asymmetric JWT signatures over an HTTP header. There is no PKI to run, no certificate lifecycle, no TLS reconfiguration. Any client that can use Web Crypto, a native cryptographic library, or a JWT library can participate. The tradeoff is that DPoP has more moving parts in the request path, and the server must track JWT identifiers (jti) to detect replay within the proof's validity window.
There was a third contender, Token Binding (RFC 8471), which tied tokens to the TLS connection. Key browser vendors dropped support and it is now effectively dead. DPoP is what filled that vacuum.
FAPI 2.0 allows both mTLS and DPoP. Most new OAuth profiles that explicitly require sender-constraining accept either one.
The DPoP flow, end to end
Here is the sequence, assuming authorization code with PKCE as the grant.
- The client generates an asymmetric key pair on first use and persists it securely (more on "securely" below).
- The client starts the authorization code flow normally. Optionally, during a pushed authorization request (RFC 9126), it sends the JWK thumbprint as the dpop_jkt parameter so the authorization server can bind the authorization code to the key from the start. Bluesky's profile, for instance, requires PAR for every client.
- When the client exchanges the authorization code at the token endpoint, it adds a DPoP header containing a proof JWT whose htm is POST and whose htu is the token endpoint URL.
- The authorization server verifies the proof, computes the SHA-256 thumbprint of the public key per RFC 7638, and issues an access token (and typically a refresh token) containing cnf.jkt set to that thumbprint. The token_type in the response is DPoP instead of Bearer.
- The client calls the resource server. Two things change from the bearer-token flow: the Authorization scheme is DPoP (not Bearer), and a new DPoP header carries a fresh proof JWT whose payload includes an ath claim: the base64url-encoded SHA-256 hash of the access token.
- The resource server verifies the proof's signature using the JWK embedded in the proof header, checks htm and htu against the actual request, checks ath against the presented access token, and checks that the thumbprint of the proof's JWK matches cnf.jkt inside the access token. If the access token is opaque, it pulls cnf.jkt from the introspection response instead.
A replayed access token without the matching private key produces an unverifiable proof. A replayed proof fails the jti and iat checks. A proof crafted for one endpoint fails the htm/htu check when pointed elsewhere.
Anatomy of a DPoP proof JWT
The proof JWT is the core artifact. It is not stored, it is not reused, and it is not the access token. It is a short-lived signed statement the client makes about a single request it is about to send.
Header:
- The typ value is mandatory (dpop+jwt) and is what resource servers key on to reject proofs that were actually id_tokens or generic JWTs.
- The alg must be asymmetric; symmetric algorithms like HS256 are forbidden, and none is forbidden.
- The jwk is the public half of the client's key, embedded directly in the header. The server derives the thumbprint from this JWK, which is why rotating the key effectively invalidates any tokens bound to the old one.
Payload:
Four claims are always present:
- jti is a unique identifier; the server uses it to detect replay within the proof's validity window.
- htm is the HTTP method of the request the proof covers.
- htu is the target URI, minus query string and fragment.
- iat is the issue time. Servers typically reject proofs older than a small number of seconds.
On resource server requests (not token endpoint requests), two more claims appear:
- ath is the base64url-encoded SHA-256 hash of the access token. This ties the proof to one specific token so the same proof cannot be reused against a different token.
- nonce is present when the server has demanded one. More on this next.
The signature is computed over the header and payload using the private key, in the usual JWS fashion. The server verifies the signature using the public key in the header itself, then checks that the public key's thumbprint matches the token binding.
Server-issued nonces, and why your client will see use_dpop_nonce
RFC 9449 Section 8 introduces an optional but important wrinkle: server-issued nonces. The motivation is that a client running in a hostile environment (SPA, mobile app, anywhere an attacker might grab a proof before it is sent) could have its proof captured and replayed in the same small time window. To close this, servers can require that the proof include a nonce claim whose value the server chose.
The flow is:
- Client sends a request without a nonce (or with a stale one).
- Server responds with either a 400 invalid_dpop_proof on the token endpoint or a 401 on the resource endpoint, with the error code use_dpop_nonce and a DPoP-Nonce response header containing the current nonce.
- Client retries the request with the nonce included in the proof.
- The server rotates nonces periodically and returns new values in the DPoP-Nonce header on subsequent responses. Clients persist the latest nonce and use it until the server gives them a new one.
Two gotchas. First, the authorization server's nonce and the resource server's nonce are separate, and the RFC expects clients to track them independently. Bluesky's implementation, for instance, explicitly documents that nonces are distinct between the authorization server and the PDS. Second, clients should not over-retry: if a second retry with the newly provided nonce still fails, there is a real error, not a nonce rotation.
Nonces are optional in the spec, but most deployments that take DPoP seriously turn them on.
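In client code, the dance reduces to a small retry wrapper. A sketch, with the HTTP call abstracted behind a caller-supplied function (withDpopNonce and sendRequest are my names; the response shape is assumed):

```javascript
// Retry a DPoP request once when the server demands a (new) nonce.
// sendRequest(nonce) performs the HTTP call with a proof minted for that nonce
// and resolves to { status, error, dpopNonce }; nonceStore keeps the latest
// nonce per server, because AS and RS nonces are tracked independently.
async function withDpopNonce(serverId, nonceStore, sendRequest) {
  let res = await sendRequest(nonceStore.get(serverId));
  if (res.dpopNonce) nonceStore.set(serverId, res.dpopNonce); // server rotated the nonce

  if (res.error === 'use_dpop_nonce' && res.dpopNonce) {
    // Exactly one retry with the fresh nonce; a second failure is a real error.
    res = await sendRequest(res.dpopNonce);
    if (res.dpopNonce) nonceStore.set(serverId, res.dpopNonce);
  }
  return res;
}
```

Keying the store by server is what keeps the authorization server's nonce and the resource server's nonce from contaminating each other.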
Browser key storage is where most DPoP implementations go wrong
DPoP's entire security argument rests on the private key being, in fact, private. Browsers and mobile platforms give you two tools for that.
In the browser, use Web Crypto's SubtleCrypto.generateKey with extractable: false.
A non-extractable CryptoKey can be used to sign but cannot be exported, so an XSS payload that gets a reference to the key cannot exfiltrate it. It can still use the key to mint proofs as long as the page is open, which is why short access token lifetimes matter even more under DPoP: you reduce the window during which an attacker with XSS can act.
Persist the CryptoKeyPair object in IndexedDB, not localStorage. IndexedDB can store live CryptoKey objects with their non-extractable flag intact. localStorage only stores strings, which means anything you put there has to be extractable, which defeats the point. The oidc-client-ts library ships an IndexedDbDPoPStore that does exactly this, and it is a reasonable reference implementation to study.
On mobile, use the platform keystore: iOS Secure Enclave via CryptoKit, Android's AndroidKeyStore with StrongBox where available. Do not roll your own key persistence on a mobile device.
What changes for your authorization server and resource server
On the authorization server, roughly:
- Verify the incoming DPoP proof signature using the embedded JWK. Reject anything that is not an asymmetric algorithm whose JWS alg value is in your allowed list (commonly ES256, ES384, EdDSA, or PS256 for RSA-PSS).
- Check typ === "dpop+jwt" before anything else.
- Check iat is within a small window (the RFC does not mandate a number; many servers use 60 seconds).
- Track jti values for replay detection across that window.
- Verify htm and htu against the actual request.
- Compute the JWK SHA-256 thumbprint and add it to the issued access token as cnf.jkt. If you issue refresh tokens, bind those to the same thumbprint. The spec explicitly requires that refresh token use also be DPoP-constrained for public clients.
- Advertise DPoP support via the dpop_signing_alg_values_supported metadata parameter on your .well-known/oauth-authorization-server document.
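The non-cryptographic half of that checklist is mechanical. A sketch of the structural checks, with signature verification against the embedded JWK and durable jti storage deliberately elided (names are mine):

```javascript
// Structural validation of a DPoP proof at the token endpoint.
// Signature verification against header.jwk is assumed to happen separately.
const ALLOWED_ALGS = new Set(['ES256', 'ES384', 'EdDSA', 'PS256']);
const IAT_WINDOW_SECONDS = 60;
const seenJtis = new Set(); // production: a shared store with TTL = iat window

function checkDpopProof(proofJwt, expectedHtm, expectedHtu, now = Date.now() / 1000) {
  const [h, p] = proofJwt.split('.');
  const header = JSON.parse(Buffer.from(h, 'base64url').toString());
  const payload = JSON.parse(Buffer.from(p, 'base64url').toString());

  if (header.typ !== 'dpop+jwt') throw new Error('wrong typ');
  if (!ALLOWED_ALGS.has(header.alg)) throw new Error('alg not allowed');
  if (!header.jwk || header.jwk.d) throw new Error('missing or private jwk'); // never accept a private key
  if (Math.abs(now - payload.iat) > IAT_WINDOW_SECONDS) throw new Error('stale proof');
  if (seenJtis.has(payload.jti)) throw new Error('replayed jti');
  if (payload.htm !== expectedHtm) throw new Error('htm mismatch');
  if (payload.htu !== expectedHtu) throw new Error('htu mismatch');

  seenJtis.add(payload.jti);
  return { header, payload };
}
```

The ordering matters less than the completeness: every one of these checks corresponds to a distinct replay or confusion attack.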
On the resource server, the extra work is:
- Look for Authorization: DPoP <token> instead of Bearer.
- Extract cnf.jkt from the access token (either from the JWT directly or via introspection if tokens are opaque).
- Verify the DPoP proof header the same way the authorization server does, plus the ath check against a SHA-256 hash of the presented access token.
- Confirm that the JWK thumbprint in the proof equals cnf.jkt.
A good OAuth library handles all of this. Spring Security, Nimbus OAuth 2.0 SDK, oidc-client-ts, and requests_oauth2client all have DPoP support. If you find yourself writing JWT signature verification by hand, stop.
Where DPoP is actually being used right now
Adoption has moved faster than the usual standards-track timeline.
- Bluesky / atproto. Bluesky's OAuth profile requires DPoP on every request, with server-issued nonces, PAR, and PKCE. It is the first large consumer platform to make DPoP non-optional, and the reasoning is instructive. Bluesky cannot police every client built against its protocol, and it cannot revoke tokens across the network if a client leaks them, so it pushes the cost of a leak toward zero with sender-constraining.
- FAPI 2.0. The Financial-grade API Security Profile 2.0, maintained by the OpenID Foundation, accepts mTLS or DPoP as equally valid mechanisms for sender-constrained tokens. Open-banking and high-assurance financial deployments that do not want to manage client certificates increasingly reach for DPoP.
- OAuth 2.1 and MCP. The Model Context Protocol, which governs how AI agents connect to external tools, builds on OAuth 2.1. OAuth 2.1 makes sender-constrained tokens the recommended hardening for public clients. AI agents are public clients by definition, and the agent-as-attacker-target threat model is severe enough that sender-constraining is a near-term expectation rather than a best-practice suggestion.
A short list of things that will bite you
- Symmetric algorithms are forbidden. HS256 will not work. Use ES256 unless you have a reason not to.
- Keys must be non-extractable in browsers. If you find yourself base64-encoding a private key into localStorage, you have defeated DPoP.
- jti uniqueness matters. Use a UUID or similar per proof; reusing identifiers gets rejected as replay.
- htu does not include the query string or fragment. Normalize before signing and before verifying.
- Nonces are per-server. Do not share authorization server and resource server nonces.
- Refresh tokens are DPoP-bound too. A confidential client still needs a valid proof on the refresh request.
- Clock skew is real. Allow a small iat tolerance on the server side, and consider signing with the client's corrected time rather than raw Date.now().
- Do not log proof JWTs. They are short-lived, but they contain keys and request metadata that do not need to end up in observability pipelines.
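The htu gotcha in particular is cheap to get right with the platform URL parser. A sketch:

```javascript
// htu covers scheme://host/path only: strip the query string and fragment
// before signing on the client and before comparing on the server.
function normalizeHtu(uri) {
  const u = new URL(uri);
  u.search = '';
  u.hash = '';
  return u.href; // URL also lowercases the scheme and host for us
}
```

Both sides running the same normalization is what makes the htm/htu comparison an exact string match rather than a source of spurious 401s.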
Further reading
DPoP is one piece of a larger shift. OAuth 2.1 consolidates a decade of lessons into a single profile, and nearly every hardening step it promotes (mandatory PKCE, elimination of the implicit grant, refresh token rotation, short-lived access tokens, exact redirect URI matching) composes with DPoP rather than replacing it. For the wider picture, see our OAuth 2.1 vs OAuth 2.0 guide and our summary of RFC 9700, the current OAuth security best-practices document.
The AI-agent angle is also worth tracking if you are building against MCP. Agents are public clients, their credentials are attractive targets, and the OAuth specs that underpin MCP already point toward sender-constraining as the next layer of defense. Our MCP Authorization in 5 easy OAuth specs and How to add OAuth to your MCP server both cover the ecosystem DPoP is about to become a default in.