April 20, 2026

DPoP (RFC 9449) explained: How sender-constrained OAuth tokens make token theft a non-event

A practical walkthrough of RFC 9449 for engineers: the proof JWT, server-issued nonces, key storage in the browser, and where DPoP fits next to mTLS.

For most of OAuth 2.0's life, access tokens have been pure bearer tokens: whoever holds the string gets the access. If that token leaks through a compromised SPA, a malicious browser extension, an over-eager log line, or a TLS-terminating proxy, the attacker is indistinguishable from the legitimate client. That is the whole security model, and it is the reason token theft keeps showing up in post-mortems.

Demonstrating Proof-of-Possession (DPoP), standardized in September 2023 as RFC 9449, closes that gap at the application layer. It binds access and refresh tokens to a public/private key pair that the client holds, and requires the client to sign a fresh proof JWT for every token request and every resource request. A stolen token without the matching private key is inert.

DPoP is not theoretical. Bluesky's atproto OAuth profile requires it for every authorized request. FAPI 2.0, the open-banking security profile, names DPoP as one of two acceptable sender-constraining mechanisms. OAuth 2.1 and the Model Context Protocol (MCP) both list sender-constrained tokens as the recommended hardening for public clients, which now includes AI agents. If you are shipping a public OAuth client in 2026, the question is not whether to adopt DPoP, but when.

This article walks through what RFC 9449 actually requires, what the proof JWT looks like on the wire, how the nonce dance works, and the browser-side implementation details that are easy to get wrong.

What DPoP is, in one paragraph

DPoP is an application-layer mechanism for sender-constraining OAuth 2.0 access tokens and refresh tokens. The client generates an asymmetric key pair (typically P-256 for ES256), keeps the private key on the device, and presents proof of possession on every token request and every resource request via a short-lived JWT in a DPoP HTTP header. The authorization server binds issued tokens to the JWK SHA-256 thumbprint of the client's public key by adding a confirmation (cnf) claim. The resource server verifies that the thumbprint bound to the token matches the public key used to sign the proof. A token alone is useless; a token plus a correctly scoped, freshly signed DPoP proof is what gets you access.
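Concretely, the binding lives in the access token's confirmation claim. A decoded DPoP-bound JWT access token might look like this (all values illustrative):

```json
{
  "iss": "https://auth.example.com",
  "sub": "user-123",
  "exp": 1745110800,
  "cnf": {
    "jkt": "0ZcOCORZNYy-DWpqq30jZyJGHTN0d2HglBV3uiguA4I"
  }
}
```

The jkt value is the RFC 7638 SHA-256 thumbprint of the client's public JWK; the resource server compares it against the key that signed the accompanying proof.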

The problem DPoP solves

RFC 6750 defines bearer tokens as exactly what they sound like: anyone who possesses the token can use it. That is fine until one of the following happens:

  • Cross-site scripting (XSS): Malicious script reads the token out of memory, localStorage, or a cookie.
  • Malicious or compromised browser extensions: Extensions with broad permissions can read network traffic or page memory.
  • Logs and observability pipelines: Tokens leak through access logs, error trackers, and debugging endpoints.
  • Proxies and intermediaries: TLS-terminating proxies, corporate MITM boxes, and mobile carriers that insert themselves between the client and the API.
  • Device compromise: Refresh tokens sitting on disk in a mobile app's shared storage.

PKCE (RFC 7636) protects the authorization code exchange, but does nothing once an access or refresh token has been issued. Refresh token rotation reduces the blast radius, but still assumes the attacker does not win the race. Sender-constraining is the class of mitigations that makes stolen tokens unusable in the first place.

DPoP versus the alternatives

There are really only two mainstream sender-constraining mechanisms in the OAuth ecosystem today.

Mutual TLS (RFC 8705) binds the access token to the client certificate presented during the TLS handshake. The authorization server records a hash of the certificate in the cnf.x5t#S256 claim, and the resource server checks that the same certificate is presented on every request. It is robust and it is what most open-banking deployments around the world still run on. But mTLS is a transport-layer mechanism, which means it requires X.509 certificate issuance and rotation, it needs TLS termination under your control, and it is effectively unavailable to browsers and most mobile SDKs. SPAs cannot present client certificates. For confidential server-to-server clients that already have a PKI, mTLS is still the stronger choice.

DPoP (RFC 9449) works at the application layer, using asymmetric JWT signatures over an HTTP header. There is no PKI to run, no certificate lifecycle, no TLS reconfiguration. Any client that can use Web Crypto, a native cryptographic library, or a JWT library can participate. The tradeoff is that DPoP has more moving parts in the request path, and the server must track JWT identifiers (jti) to detect replay within the proof's validity window.

There was a third contender, Token Binding (RFC 8471), which tied tokens to the TLS connection. Key browser vendors dropped support and it is now effectively dead. DPoP is what filled that vacuum.

FAPI 2.0 allows both mTLS and DPoP. Most new OAuth profiles that explicitly require sender-constraining accept either one.

The DPoP flow, end to end

Here is the sequence, assuming authorization code with PKCE as the grant.

  1. The client generates an asymmetric key pair on first use and persists it securely (more on "securely" below).
  2. The client starts the authorization code flow normally. Optionally, during pushed authorization request (RFC 9126), it sends the JWK thumbprint as the dpop_jkt parameter so the authorization server can bind the authorization code to the key from the start. Bluesky's profile, for instance, requires PAR for every client.
  3. When the client exchanges the authorization code at the token endpoint, it adds a DPoP header containing a proof JWT whose htm is POST and whose htu is the token endpoint URL.
  4. The authorization server verifies the proof, computes the SHA-256 thumbprint of the public key per RFC 7638, and issues an access token (and typically a refresh token) containing cnf.jkt set to that thumbprint. The token_type in the response is DPoP instead of Bearer.
  5. The client calls the resource server. Two things change from the bearer-token flow: the Authorization scheme is DPoP (not Bearer), and a new DPoP header carries a fresh proof JWT whose payload includes an ath claim: the base64url-encoded SHA-256 hash of the access token.
  6. The resource server verifies the proof's signature using the JWK embedded in the proof header, checks htm and htu against the actual request, checks ath against the presented access token, and checks that the thumbprint of the proof's JWK matches cnf.jkt inside the access token. If the access token is opaque, it pulls cnf.jkt from the introspection response instead.

A replayed access token without the matching private key produces an unverifiable proof. A replayed proof fails the jti and iat checks. A proof crafted for one endpoint fails the htm/htu check when pointed elsewhere.

Anatomy of a DPoP proof JWT

The proof JWT is the core artifact. It is not stored, it is not reused, and it is not the access token. It is a short-lived signed statement the client makes about a single request it is about to send.

Header:

```json
{
  "typ": "dpop+jwt",
  "alg": "ES256",
  "jwk": {
    "kty": "EC",
    "crv": "P-256",
    "x": "...",
    "y": "..."
  }
}
```
  • The typ value is mandatory and is what resource servers key on to reject proofs that were actually id_tokens or generic JWTs.
  • The alg must be asymmetric; symmetric algorithms such as HS256 are forbidden, as is the none algorithm.
  • The jwk is the public half of the client's key, embedded directly in the header. The server derives the thumbprint from this JWK, which is why rotating the key effectively invalidates any tokens bound to the old one.

Payload:

```json
{
  "jti": "c1d2e3f4-5678-9abc-def0-1234567890ab",
  "htm": "POST",
  "htu": "https://auth.example.com/oauth2/token",
  "iat": 1745107200
}
```

Four claims are always present:

  • jti is a unique identifier; the server uses it to detect replay within the proof's validity window.
  • htm is the HTTP method of the request the proof covers.
  • htu is the target URI, minus query string and fragment.
  • iat is the issue time. Servers typically reject proofs older than a small number of seconds.

On resource server requests (not token endpoint requests), two more claims appear:

  • ath is the base64url-encoded SHA-256 hash of the access token. This ties the proof to one specific token so the same proof cannot be reused against a different token.
  • nonce is present when the server has demanded one. More on this next.

The signature is computed over the header and payload using the private key, in the usual JWS fashion. The server verifies the signature using the public key in the header itself, then checks that the public key's thumbprint matches the token binding.

Server-issued nonces, and why your client will see use_dpop_nonce

RFC 9449 Section 8 introduces an optional but important wrinkle: server-issued nonces. The motivation is that a client running in a hostile environment (SPA, mobile app, anywhere an attacker might grab a proof before it is sent) could have its proof captured and replayed in the same small time window. To close this, servers can require that the proof include a nonce claim whose value the server chose.

The flow is:

  1. Client sends a request without a nonce (or with a stale one).
  2. Server responds with either a 400 invalid_dpop_proof on the token endpoint or a 401 on the resource endpoint, with the error code use_dpop_nonce and a DPoP-Nonce response header containing the current nonce.
  3. Client retries the request with the nonce included in the proof.
  4. The server rotates nonces periodically and returns new values in the DPoP-Nonce header on subsequent responses. Clients persist the latest nonce and use it until the server gives them a new one.
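The retry logic above is easy to get subtly wrong, so here is a minimal retry-once sketch. The sendRequest callback is a stand-in for your HTTP layer; it receives the nonce to embed in the proof (or null) and is assumed to return a response-like object with status, a headers map, and a parsed errorCode:

```javascript
// Cache one nonce per server; real code would key this by origin,
// and separately for the authorization server and the resource server.
let cachedNonce = null;

async function requestWithNonceRetry(sendRequest) {
  let res = await sendRequest(cachedNonce);
  const fresh = res.headers.get("DPoP-Nonce");
  if (fresh) cachedNonce = fresh;

  // 400 on the token endpoint, 401 on resource endpoints
  if ((res.status === 400 || res.status === 401) &&
      res.errorCode === "use_dpop_nonce" && fresh) {
    // Retry exactly once with the server-supplied nonce. If this also
    // fails, it is a real error, not a nonce rotation.
    res = await sendRequest(cachedNonce);
    const rotated = res.headers.get("DPoP-Nonce");
    if (rotated) cachedNonce = rotated;
  }
  return res;
}
```

Note that the nonce is not a secret; it only proves the proof was minted after the server handed it out, which shrinks the replay window to nothing.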

Two gotchas. First, the authorization server's nonce and the resource server's nonce are separate, and the RFC expects clients to track them independently. Bluesky's implementation, for instance, explicitly documents that nonces are distinct between the authorization server and the PDS. Second, clients should not over-retry: if a second retry with the newly provided nonce still fails, there is a real error, not a nonce rotation.

Nonces are optional in the spec, but most deployments that take DPoP seriously turn them on.

Browser key storage is where most DPoP implementations go wrong

DPoP's entire security argument rests on the private key being, in fact, private. Browsers and mobile platforms give you two tools for that.

In the browser, use Web Crypto's SubtleCrypto.generateKey with extractable: false:

```javascript
const keyPair = await crypto.subtle.generateKey(
  { name: "ECDSA", namedCurve: "P-256" },
  false,              // non-extractable
  ["sign", "verify"]
);
```

A non-extractable CryptoKey can be used to sign but cannot be exported, so an XSS payload that gets a reference to the key cannot exfiltrate it. It can still use the key to mint proofs as long as the page is open, which is why short access token lifetimes matter even more under DPoP: you reduce the window during which an attacker with XSS can act.

Persist the CryptoKeyPair object in IndexedDB, not localStorage. IndexedDB can store live CryptoKey objects with their non-extractable flag intact. localStorage only stores strings, which means anything you put there has to be extractable, which defeats the point. The oidc-client-ts library ships an IndexedDbDPoPStore that does exactly this, and it is a reasonable reference implementation to study.

On mobile, use the platform keystore: iOS Secure Enclave via CryptoKit, Android's AndroidKeyStore with StrongBox where available. Do not roll your own key persistence on a mobile device.

What changes for your authorization server and resource server

On the authorization server, roughly:

  • Verify the incoming DPoP proof signature using the embedded JWK. Reject anything that is not an asymmetric algorithm whose JWS alg value is in your allowed list (commonly ES256, ES384, EdDSA, or PS256).
  • Check typ === "dpop+jwt" before anything else.
  • Check iat is within a small window (the RFC does not mandate a number; many servers use 60 seconds).
  • Track jti values for replay detection across that window.
  • Verify htm and htu against the actual request.
  • Compute the JWK SHA-256 thumbprint and add it to the issued access token as cnf.jkt. If you issue refresh tokens, bind those to the same thumbprint. The spec explicitly requires that refresh token use also be DPoP-constrained for public clients.
  • Advertise DPoP support via the dpop_signing_alg_values_supported metadata parameter on your .well-known/oauth-authorization-server document.

On the resource server, the extra work is:

  • Look for Authorization: DPoP <token> instead of Bearer.
  • Extract cnf.jkt from the access token (either from the JWT directly or via introspection if tokens are opaque).
  • Verify the DPoP proof header the same way the authorization server does, plus the ath check against a SHA-256 hash of the presented access token.
  • Confirm that the JWK thumbprint in the proof equals cnf.jkt.

A good OAuth library handles all of this. Spring Security, the Nimbus OAuth 2.0 SDK, oidc-client-ts, and requests_oauth2client all have DPoP support. If you find yourself writing JWT signature verification by hand, stop.

Where DPoP is actually being used right now

Adoption has moved faster than the usual standards-track timeline.

  • Bluesky / atproto. Bluesky's OAuth profile requires DPoP on every request, with server-issued nonces, PAR, and PKCE. It is the first large consumer platform to make DPoP non-optional, and the reasoning is instructive. Bluesky cannot police every client built against its protocol, and it cannot revoke tokens across the network if a client leaks them, so it pushes the cost of a leak toward zero with sender-constraining.
  • FAPI 2.0. The Financial-grade API Security Profile 2.0, maintained by the OpenID Foundation, accepts mTLS or DPoP as equally valid mechanisms for sender-constrained tokens. Open-banking and high-assurance financial deployments that do not want to manage client certificates increasingly reach for DPoP.
  • OAuth 2.1 and MCP. The Model Context Protocol, which governs how AI agents connect to external tools, builds on OAuth 2.1. OAuth 2.1 makes sender-constrained tokens the recommended hardening for public clients. AI agents are public clients by definition, and the agent-as-attacker-target threat model is severe enough that sender-constraining is a near-term expectation rather than a best-practice suggestion.

A short list of things that will bite you

  • Symmetric algorithms are forbidden. HS256 will not work. Use ES256 unless you have a reason.
  • Keys must be non-extractable in browsers. If you find yourself base64-encoding a private key into localStorage, you have defeated DPoP.
  • jti uniqueness matters. Use a UUID or similar per proof; reusing identifiers gets rejected as replay.
  • htu does not include the query string or fragment. Normalize before signing and before verifying.
  • Nonces are per-server. Do not share authorization server and resource server nonces.
  • Refresh tokens are DPoP-bound too. A confidential client still needs a valid proof on the refresh request.
  • Clock skew is real. Allow a small iat tolerance on the server side, and consider signing with the client's corrected time rather than raw Date.now().
  • Do not log proof JWTs. They are short-lived, but they can be replayed within their validity window and carry request metadata that does not need to end up in observability pipelines.

Further reading

DPoP is one piece of a larger shift. OAuth 2.1 consolidates a decade of lessons into a single profile, and nearly every hardening step it promotes (mandatory PKCE, elimination of the implicit grant, refresh token rotation, short-lived access tokens, exact redirect URI matching) composes with DPoP rather than replacing it. For the wider picture, see our OAuth 2.1 vs OAuth 2.0 guide and our summary of RFC 9700, the current OAuth security best-practices document.

The AI-agent angle is also worth tracking if you are building against MCP. Agents are public clients, their credentials are attractive targets, and the OAuth specs that underpin MCP already point toward sender-constraining as the next layer of defense. Our MCP Authorization in 5 easy OAuth specs and How to add OAuth to your MCP server both cover the ecosystem DPoP is about to become a default in.
