April 9, 2026

Cryptographic origin binding: How passkeys make phishing structurally impossible

A deep dive into the FIDO2/WebAuthn protocol mechanics that tie every passkey to a specific domain, making credential phishing structurally impossible at the cryptographic layer.

Passwords are shared secrets. A user invents one, the server stores a derivative of it, and authentication succeeds because both parties know the same value. Every link in that chain is attackable: the secret can be guessed, intercepted in transit, leaked from a breached database, or surrendered on a convincing phishing page. Passkeys, built on the FIDO2/WebAuthn standards, eliminate this entire class of vulnerability by replacing shared secrets with asymmetric cryptography and enforcing a property known as cryptographic origin binding.

This article breaks down exactly how origin binding works at the protocol level, why it renders credential phishing mathematically unviable, and where the boundaries of its protection lie.

The problem origin binding solves

Traditional phishing attacks succeed because authentication secrets are portable. If an attacker stands up a replica of bank.example.com at bank-examp1e.com and a user types their password into the fake form, the attacker now possesses a credential that works on the real site. The password has no intrinsic relationship to the domain it was created for. It is just a string of bytes.

Multi-factor authentication raised the bar, but adversary-in-the-middle (AiTM) toolkits like Evilginx and Tycoon 2FA simply relay the second factor in real time. The user thinks they are authenticating to the real service. The proxy captures the session cookie that comes back. MFA in this model adds latency for the attacker, not impossibility.

Origin binding changes the game by making the credential itself aware of which domain it belongs to. A passkey created for bank.example.com is cryptographically fused to that origin. It cannot produce a valid signature for any other domain, and the enforcement happens below the application layer, in the authenticator and the browser, where neither the user nor a phishing page can interfere.

Relying party ID and origin: two layers of domain binding

WebAuthn introduces two related but distinct domain identifiers that together form the binding mechanism.

Relying party ID (rpId) is a domain string chosen by the service at registration time. It is typically the registrable domain of the site (for example, example.com rather than login.example.com). The rpId is hashed with SHA-256 and embedded directly into the authenticator data structure that the authenticator signs during both registration and authentication ceremonies. This hash, the rpIdHash, occupies the first 32 bytes of the authData binary.

Origin is the full scheme-plus-host-plus-port tuple recorded by the browser in the clientDataJSON object (for example, https://login.example.com). The browser captures this value from its own navigation context. Application-level JavaScript cannot override it. The serialized clientDataJSON is then hashed and incorporated into the data the authenticator signs.

It is worth noting that WebAuthn is restricted to secure contexts: in practice, origins served over HTTPS (with an exception for localhost during development). This means the request necessarily comes from a server that holds a valid TLS certificate for the corresponding domain, adding a baseline layer of transport security before any passkey cryptography is involved.

The crucial point is that both values end up under the authenticator's signature. The rpIdHash is inside authData. The origin is inside clientDataJSON. The authenticator signs the concatenation of authData and SHA-256(clientDataJSON). Tampering with either value after signing invalidates the signature, and the server will reject the response during verification.
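As a concrete sketch of this commitment (in Python, with fabricated byte values for illustration), the message the authenticator signs is simply authData concatenated with the SHA-256 of clientDataJSON, so changing even one byte of the origin yields a different message and therefore an invalid signature:

```python
import hashlib
import json

def signed_message(auth_data: bytes, client_data_json: bytes) -> bytes:
    """The authenticator signs authData || SHA-256(clientDataJSON)."""
    return auth_data + hashlib.sha256(client_data_json).digest()

# Fabricated authData: rpIdHash (32 bytes) + flags byte + 4-byte signCount.
rp_id_hash = hashlib.sha256(b"example.com").digest()
auth_data = rp_id_hash + bytes([0x05]) + (1).to_bytes(4, "big")

genuine = json.dumps({"type": "webauthn.get",
                      "challenge": "abc123",
                      "origin": "https://login.example.com"}).encode()
tampered = genuine.replace(b"login.example.com", b"bank-examp1e.com")

# A signature over the genuine message cannot verify against the tampered one,
# because the signed bytes themselves differ.
assert signed_message(auth_data, genuine) != signed_message(auth_data, tampered)
```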

Inside the registration ceremony

When a user creates a passkey, the following sequence takes place:

  1. The server generates a random challenge and sends it to the client alongside the relying party configuration, including rp.id and rp.name, the user's identifier, and the acceptable public key algorithms (typically ECDSA with P-256, identified as COSE algorithm -7, and optionally RSASSA-PKCS1-v1_5 with SHA-256 as -257).
  2. The browser validates that the requested rp.id is a valid registrable domain suffix of the current page's origin. If the page is hosted at https://login.example.com, an rpId of example.com is acceptable, but other.com would be rejected immediately. This check happens in the user agent before any authenticator interaction.
  3. The browser constructs a clientDataJSON object containing the challenge, the current origin as observed by the browser, and the ceremony type (webauthn.create). It serializes this to JSON and computes its SHA-256 hash, called the clientDataHash.
  4. The browser passes the clientDataHash, rpId, user information, and key parameters to the authenticator via the CTAP2 protocol (or internally, if it is a platform authenticator).
  5. The authenticator generates a new asymmetric key pair. The private key is stored in hardware-protected storage (a TPM, Secure Enclave, or equivalent) and never leaves the device. The authenticator constructs authData, which includes the rpIdHash (SHA-256 of the rpId), flags indicating user presence and user verification status, a signature counter, and the newly generated public key in COSE format. It then signs the concatenation of authData || clientDataHash using an attestation private key.
  6. The browser returns the clientDataJSON, the attestation object (containing authData, the attestation statement, and its format identifier), and the credential ID back to the server.
  7. The server decodes and verifies the response: it checks that the challenge matches, that the origin in clientDataJSON is an expected value, that the rpIdHash in authData corresponds to the configured rpId, and that the attestation signature is valid. If everything checks out, the server stores the credential's public key and credential ID, associated with the user account.

From this point forward, the public key on the server and the private key on the authenticator are bound to a specific relying party ID. The authenticator will refuse to use this key pair for any rpId other than the one recorded at creation time.
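The browser-side rpId check from step 2 can be sketched as follows. This is a deliberately naive model: real browsers also consult the Public Suffix List so that an rpId like com or co.uk is rejected even though it is technically a suffix.

```python
def rp_id_acceptable(rp_id: str, origin_host: str) -> bool:
    """Naive sketch of the browser's registration-time check: the rpId must
    equal the origin's host, or be a suffix of it along label boundaries.
    Real browsers additionally reject public suffixes (e.g. "com")."""
    if rp_id == origin_host:
        return True
    # The leading dot enforces a label boundary, so "ample.com" does not match.
    return origin_host.endswith("." + rp_id)

assert rp_id_acceptable("example.com", "login.example.com")
assert not rp_id_acceptable("other.com", "login.example.com")
assert not rp_id_acceptable("ample.com", "login.example.com")
```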

Inside the authentication ceremony

When the user returns to sign in, the protocol enforces origin binding a second time:

  1. The server generates a fresh random challenge and sends it to the client, along with the rpId and optionally a list of acceptable credential IDs (allowCredentials).
  2. The browser constructs a new clientDataJSON with the challenge, the current origin, and the type webauthn.get. It computes the clientDataHash.
  3. The browser passes the clientDataHash and the rpId to the authenticator. The authenticator looks up credentials scoped to the SHA-256 hash of this rpId. If the user is on a phishing page with a different origin, the browser will have captured that different origin in clientDataJSON, and the rpId validation step will prevent any matching credential from being found. Even if an attacker somehow manipulated the rpId in the request, the authenticator independently hashes the rpId it receives and compares it against the rpIdHash stored with each credential.
  4. The authenticator prompts the user for consent (a biometric scan, PIN, or physical touch). Upon successful verification, it increments the signature counter and signs authData || clientDataHash with the credential's private key.
  5. The server receives the signature, the authenticatorData, and the clientDataJSON. Verification involves multiple independent checks: confirming the challenge matches, confirming the origin in clientDataJSON is expected, confirming the rpIdHash in authenticatorData matches the expected rpId hash, and verifying the signature against the stored public key.

If any of these values are wrong, authentication fails. There is no fallback, no degraded mode, and no partial success.
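The rpId scoping that drives step 3 can be modeled with a toy authenticator that keys its credential store by SHA-256(rpId). Real authenticators store far more per credential; this sketch only shows why a phishing domain's lookup comes back empty:

```python
import hashlib

class ToyAuthenticator:
    """Toy model of credential scoping: credentials are stored keyed by
    the SHA-256 hash of the rpId they were created under."""

    def __init__(self) -> None:
        self._credentials: dict[bytes, list[bytes]] = {}

    def register(self, rp_id: str, credential_id: bytes) -> None:
        key = hashlib.sha256(rp_id.encode()).digest()
        self._credentials.setdefault(key, []).append(credential_id)

    def lookup(self, rp_id: str) -> list[bytes]:
        key = hashlib.sha256(rp_id.encode()).digest()
        return self._credentials.get(key, [])

auth = ToyAuthenticator()
auth.register("bank.example.com", b"cred-1")

assert auth.lookup("bank.example.com") == [b"cred-1"]
assert auth.lookup("bank-examp1e.com") == []  # phishing rpId finds nothing
```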

Why phishing cannot defeat this mechanism

Consider an attacker who sets up a convincing replica of bank.example.com at bank-examp1e.com. A user navigates to the phishing site and triggers a passkey authentication flow.

The browser captures https://bank-examp1e.com as the origin and writes it into clientDataJSON. The rpId, if the attacker specifies bank.example.com, will be rejected by the browser because bank.example.com is not a registrable domain suffix of bank-examp1e.com. The ceremony fails before the authenticator is even contacted.

If the attacker instead uses their own domain as the rpId (bank-examp1e.com), the authenticator will look for credentials scoped to SHA-256("bank-examp1e.com"). No such credentials exist, because the user registered their passkey under bank.example.com. The authenticator returns nothing. Authentication fails silently.

There is no path through which the attacker can obtain a valid signature. The private key never leaves the authenticator. The browser captures the origin independently. The authenticator enforces the rpId binding independently. These are two separate enforcement points, and neither is under application-level or attacker control.

Even an AiTM proxy that sits between the user and the real server cannot help. The proxy can forward the server's challenge to the victim's browser, but the browser will record the proxy's origin in clientDataJSON, not the real server's origin. When the real server checks the origin, it will find the proxy's domain and reject the response. The signature is valid only for the data it was computed over, and that data includes the wrong origin.
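The server-side origin check that defeats the proxy reduces to an exact-match comparison against an allowlist. A minimal sketch, where expected_origins is a hypothetical configuration value:

```python
def origin_accepted(client_data_origin: str, expected_origins: set[str]) -> bool:
    # Exact string comparison against the allowlist; never substring or regex
    # matching, which would let "https://bank.example.com.evil.net" slip through.
    return client_data_origin in expected_origins

expected_origins = {"https://bank.example.com"}

assert origin_accepted("https://bank.example.com", expected_origins)
assert not origin_accepted("https://bank-examp1e.com", expected_origins)  # AiTM proxy origin
assert not origin_accepted("http://bank.example.com", expected_origins)   # scheme matters too
```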

Authenticator types and credential storage

The ceremonies above repeatedly reference "the authenticator" as the entity that generates key pairs, stores private keys, and produces signatures. But where does that private key actually live, and how secure is that storage? The answer depends on which type of authenticator is in play, and the differences matter for both security modeling and the strength of origin binding guarantees in practice.

Platform authenticators are built into the user's device. Examples include iCloud Keychain, Google Password Manager, Windows Hello, and 1Password. They are convenient, often include cloud backup, and typically gate access behind a biometric or device PIN. Their drawback is that they are only as secure as the device itself. If the operating system is compromised, so is the authenticator.

Roaming authenticators are separate hardware devices such as YubiKeys, Titan Security Keys, and Feitian keys. They offer stronger isolation because the private key lives on dedicated, tamper-resistant hardware that is physically separate from the client device. The tradeoff is that losing the device means losing the credentials, and most roaming authenticators have no backup mechanism. Platform authenticators that support Bluetooth can also act as roaming authenticators through the hybrid transport (caBLE) protocol, allowing a phone to authenticate a session on a laptop.

Credential storage strategies also vary. Authenticators with ample internal storage keep every credential on-device. Storage-constrained authenticators use a different approach: they encrypt the private key material and return the ciphertext to the server as the credential ID during registration. When the server later provides that credential ID during authentication, the authenticator decrypts it and recovers the key. The server is effectively storing the passkey, but since it is encrypted with a key the server does not possess, a database breach yields nothing useful.

Attestation allows authenticators to cryptographically prove facts about their origin, such as the manufacturer and model. During registration, the authenticator can produce an attestation statement backed by a certificate chain signed by the manufacturer. This lets the server verify that a credential was created by a specific class of hardware, which is valuable for enterprise deployments that require authenticators meeting particular security standards. Attestation is optional in the WebAuthn specification, and many consumer deployments skip it, but it is an important tool for high-assurance environments.

The structure of clientDataJSON

Now let's look more closely at the two data structures that carry origin binding through the wire. The clientDataJSON object is deceptively simple in structure but plays an outsized role in security. A typical example during authentication looks like this:

  
{
  "type": "webauthn.get",
  "challenge": "dGhpcyBpcyBhIHJhbmRvbSBjaGFsbGVuZ2U",
  "origin": "https://bank.example.com",
  "crossOrigin": false
}
  

During registration, the type field is set to webauthn.create instead. The challenge is the base64url-encoded value provided by the server. The origin is captured from the browser's navigation context. An optional crossOrigin boolean indicates whether the ceremony was initiated from a cross-origin iframe, and if true, a topOrigin field may also be present.

The entire JSON object is serialized to a byte string and hashed with SHA-256. That hash becomes the clientDataHash, which is concatenated with authData to form the message that the authenticator signs. Because the hash commits to every byte of the JSON, any modification to the origin, challenge, or type will produce a different hash and therefore a different (invalid) signature.
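A server-side sketch of this parsing and hashing, with hypothetical helper names and a locally generated challenge standing in for the server's stored one:

```python
import base64
import hashlib
import json

def b64url_decode(s: str) -> bytes:
    # base64url without padding, as used for the challenge field.
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))

def check_client_data(raw: bytes, expected_challenge: bytes,
                      expected_origins: set[str], expected_type: str) -> bytes:
    """Validate the clientDataJSON fields, then return the clientDataHash."""
    data = json.loads(raw)
    assert data["type"] == expected_type, "wrong ceremony type"
    assert b64url_decode(data["challenge"]) == expected_challenge, "challenge mismatch"
    assert data["origin"] in expected_origins, "unexpected origin"
    # The hash is taken over the exact serialized bytes received.
    return hashlib.sha256(raw).digest()

challenge = b"this is a random challenge"
raw = json.dumps({
    "type": "webauthn.get",
    "challenge": base64.urlsafe_b64encode(challenge).rstrip(b"=").decode(),
    "origin": "https://bank.example.com",
    "crossOrigin": False,
}).encode()

client_data_hash = check_client_data(
    raw, challenge, {"https://bank.example.com"}, "webauthn.get")
assert len(client_data_hash) == 32
```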

The role of authData and rpIdHash

The authData binary is structured as follows:

Offset     Length     Field
0          32 bytes   rpIdHash (SHA-256 of rpId)
32         1 byte     flags (UP, UV, AT, ED bits)
33         4 bytes    signCount
37         variable   attestedCredentialData (registration only)
variable   variable   extensions (if ED flag is set)

The rpIdHash at the start of this structure means the authenticator's assertion is always explicitly scoped to a particular relying party. The server must independently compute SHA-256(rpId) and compare it byte-for-byte to the rpIdHash in the received authData. A mismatch is an immediate rejection.

The flags byte encodes whether user presence (UP) was confirmed (the user touched the authenticator or interacted with a biometric prompt) and whether user verification (UV) was performed (a PIN was entered or a biometric matched). These flags are also covered by the signature, so the server can trust them.

The signature counter (signCount) is incremented by the authenticator on each successful authentication. The server should compare it to the last stored value. A counter that has not increased (or has decreased) may indicate a cloned authenticator. Note that many synced passkey providers leave the counter permanently at zero; the WebAuthn specification permits skipping the comparison when both the received and stored values are zero.
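Putting the layout above together, here is a minimal Python parser for the fixed 37-byte prefix of authData (the sample bytes below are fabricated for illustration):

```python
import hashlib
import struct

def parse_auth_data(auth_data: bytes) -> dict:
    """Parse the fixed-length 37-byte prefix of authData."""
    rp_id_hash = auth_data[0:32]
    flags = auth_data[32]
    (sign_count,) = struct.unpack(">I", auth_data[33:37])  # big-endian uint32
    return {
        "rpIdHash": rp_id_hash,
        "UP": bool(flags & 0x01),   # user presence
        "UV": bool(flags & 0x04),   # user verification
        "AT": bool(flags & 0x40),   # attested credential data present
        "ED": bool(flags & 0x80),   # extension data present
        "signCount": sign_count,
    }

# Fabricated sample: rpIdHash + flags (UP|UV = 0x05) + signCount = 7.
sample = hashlib.sha256(b"example.com").digest() + bytes([0x05]) + struct.pack(">I", 7)
parsed = parse_auth_data(sample)

assert parsed["rpIdHash"] == hashlib.sha256(b"example.com").digest()
assert parsed["UP"] and parsed["UV"] and not parsed["AT"]
assert parsed["signCount"] == 7
```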

Signature verification on the server

The server-side verification process ties everything together. In pseudocode:

  
let clientData = JSON.parse(response.clientDataJSON)
assert clientData.type == "webauthn.get"
assert clientData.challenge == expectedChallenge
assert clientData.origin in expectedOrigins

let authData = parseAuthData(response.authenticatorData)
assert authData.rpIdHash == SHA256(expectedRpId)
assert authData.flags.UP == true
assert authData.signCount > storedSignCount
    || (authData.signCount == 0 && storedSignCount == 0)  // synced passkeys often report 0

let signedData = response.authenticatorData || SHA256(response.clientDataJSON)
assert verifySignature(storedPublicKey, signedData, response.signature)
  

Every check in this sequence is mandatory per the WebAuthn specification. Skipping the origin check would reintroduce phishing vulnerability. Skipping the rpIdHash check would allow cross-site credential reuse. Skipping the challenge check would enable replay attacks. The layered verification is the reason the protocol is considered phishing-resistant by specification rather than merely by intent.

Cross-domain credentials and related origin requests

One tension in the strict origin binding model is that many organizations operate across multiple domains. A company with shop.example.com, shop.example.de, and shop.example.co.uk ideally wants a single passkey to work across all three. Under the original WebAuthn specification, each domain requires its own credential.

The W3C addressed this with Related Origin Requests (ROR), introduced in recent revisions of the specification. ROR allows a primary domain to publish a /.well-known/webauthn file listing related origins that may share the same rpId. When a browser encounters a WebAuthn ceremony where the rpId does not match the current origin, it fetches this file and checks whether the current origin appears in the allowlist.

Critically, ROR does not weaken origin binding. The browser still records the true origin in clientDataJSON, and the server must still verify it against its list of expected origins. ROR simply extends the set of origins that are permitted to use a given rpId, under the explicit control of the domain owner. The trust anchor remains domain control, verified through the well-known URI path.

Browser support for ROR shipped in Chrome and Edge 128+ and Safari 18 during 2024, with a practical limit of five unique registrable domain labels in the allowlist.
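For the multi-domain example above, if shop.example.com serves as the shared rpId, it would publish a well-known file at https://shop.example.com/.well-known/webauthn listing the related origins. Per the Related Origin Requests proposal, the file is a JSON document with a single origins array; the exact domains here are illustrative:

```json
{
  "origins": [
    "https://shop.example.com",
    "https://shop.example.de",
    "https://shop.example.co.uk"
  ]
}
```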

What origin binding does not protect against

Origin binding is a powerful primitive, but it is not a universal defense. Several attack vectors fall outside its scope.

  • Browser compromise and malicious extensions. Many authenticators, particularly USB security keys without built-in displays, rely entirely on the browser to show the user which site they are authenticating to. If the browser itself is compromised by malware or a malicious extension, it could display one domain to the user while actually sending the authenticator a request scoped to a different domain. The authenticator would produce a valid signature for the attacker's target site because, from its perspective, the request is legitimate. Authenticators with their own display can mitigate this by showing the relying party information independently, but most consumer-grade platform authenticators do not have this capability.
  • Compromised or counterfeit authenticators. The security of the entire model depends on the authenticator protecting the private key. A counterfeit hardware key purchased from an untrustworthy source, a backdoored authenticator application, or malware that impersonates the operating system's built-in authenticator could extract or duplicate private keys silently. Attestation verification can help detect non-genuine authenticators, but only if the server enforces it and the attacker has not obtained a valid attestation certificate.
  • Session hijacking after authentication. Once the WebAuthn ceremony completes and the server issues a session token, that token is a bearer credential. If an attacker can steal it (via XSS, a compromised endpoint, or network interception of an unencrypted channel), they can impersonate the user. Origin binding protects the authentication ceremony itself, not the session that follows.
  • OAuth consent phishing. An attacker can trick a user into granting OAuth permissions to a malicious application through a legitimate consent screen. The user is not being phished for credentials. They are authorizing access through an intended mechanism. Passkeys do not help here because no authentication secret is being stolen.
  • Endpoint compromise. If malware on the user's device can intercept the session cookie or inject commands into the browser after authentication, origin binding offers no protection. The ceremony completed legitimately, and the attacker is operating after the fact. However, passkeys do serve as an effective rate limiter even in compromised-device scenarios: each signature requires a distinct user interaction with the authenticator (a biometric scan, PIN entry, or physical touch), which prevents an attacker from silently generating assertions in bulk.
  • Cross-device authentication abuse. WebAuthn supports a QR-code-based flow (caBLE/hybrid transport) for authenticating on one device using a passkey stored on another. Attackers have demonstrated spoofed QR codes that trick users into providing valid FIDO assertions to the wrong party. This is a social engineering attack on the transport layer, not a break in origin binding itself, but it is a real-world concern for high-security deployments.
  • Credential ID collisions. The WebAuthn specification requires credential IDs to be probabilistically unique, similar to UUIDs, but they are not guaranteed to be globally unique. If an attacker who knows a victim's credential ID (perhaps captured from network traffic) could register their own passkey with the same identifier, it could create authentication confusion on a poorly implemented server. A malicious authenticator could also deliberately generate duplicate credential IDs rather than following the protocol's randomness requirements. The mitigation is straightforward: servers should always reject registration attempts when the incoming credential ID already exists in the database, enforcing a first-come-first-served policy.
  • Malicious server-side JavaScript. For applications that use passkey-derived keys for client-side cryptographic operations (such as end-to-end encryption), a fundamental limitation of web cryptography applies. The server delivers the JavaScript that runs in the browser, which means a malicious or compromised server can serve tampered code that exfiltrates keys or decrypted data. This attack can be highly targeted, serving correct code to most users and malicious code to a specific victim. Subresource integrity checks and binary transparency techniques (publicly verifiable, tamper-evident logs of published code) are emerging mitigations, but this remains an open problem for browser-based cryptography.

Implementing origin binding correctly

For developers integrating WebAuthn, the protocol handles most of the cryptographic enforcement automatically, but server-side verification must be implemented without shortcuts:

  • Always validate the origin field in clientDataJSON against a strict allowlist of expected origins. Do not use substring matching or regex patterns. Compare full origin strings.
  • Always verify the rpIdHash in authenticatorData against the SHA-256 of your configured rpId.
  • Always verify the challenge to prevent replay attacks. Challenges should be cryptographically random, at least 16 bytes, and single-use.
  • Always verify the signature using the stored public key from registration.
  • Store and check the signature counter to detect potential authenticator cloning.
  • Reject registration attempts when the credential ID already exists in the database. Credential IDs are probabilistically unique, not guaranteed unique, and failing to deduplicate opens the door to authentication confusion attacks.
  • If using Related Origin Requests, verify the origin against both your primary domain and your published related origins list.

Libraries like SimpleWebAuthn (TypeScript), py_webauthn (Python), and webauthn-rs (Rust) handle much of this verification, but understanding the underlying checks is essential for auditing your implementation and reasoning about edge cases.
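As an illustration of the challenge requirements in the checklist above, here is a minimal single-use challenge store. It is a hypothetical helper, not part of any of the libraries named; a production deployment would back this with a shared store such as Redis rather than in-process memory.

```python
import base64
import secrets
import time

class ChallengeStore:
    """Sketch of single-use, expiring challenge management."""

    TTL_SECONDS = 300

    def __init__(self) -> None:
        self._pending: dict[str, float] = {}  # challenge -> issue time

    def issue(self) -> str:
        raw = secrets.token_bytes(32)  # CSPRNG output, well above the 16-byte minimum
        challenge = base64.urlsafe_b64encode(raw).rstrip(b"=").decode()
        self._pending[challenge] = time.monotonic()
        return challenge

    def consume(self, challenge: str) -> bool:
        # pop() makes every challenge single-use: a replayed value is gone.
        issued = self._pending.pop(challenge, None)
        return issued is not None and time.monotonic() - issued < self.TTL_SECONDS

store = ChallengeStore()
c = store.issue()
assert store.consume(c)      # first use succeeds
assert not store.consume(c)  # replay is rejected
```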

WebAuthn extensions: prf and largeBlob

The WebAuthn specification supports extensions that add cryptographic capabilities beyond basic authentication. Two are particularly interesting for developers building more sophisticated client-side systems.

The prf (pseudorandom function) extension, built on top of the CTAP hmac-secret extension, allows an authenticator to compute HMAC-SHA-256 using a fixed, randomly generated 32-byte key that was created alongside the credential. The input to the HMAC is the SHA-256 digest of a fixed WebAuthn prefix concatenated with the value provided by the relying party. This is not flexible enough to implement full HKDF, but it can implement HKDF Extract: the authenticator's random key serves as the salt, and the website-provided input (after hashing) serves as the input key material. The resulting pseudorandom key can then feed into HKDF Expand on the client side to derive multiple symmetric keys. This makes it possible to derive stable, per-site encryption keys from a passkey without ever storing those keys on the server.
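A client-side sketch of this derivation chain. The authenticator_prf function only models the hmac-secret computation (treat the prefix bytes as illustrative rather than the exact constants defined by CTAP), and hkdf_expand is standard RFC 5869 HKDF-Expand run on the prf output:

```python
import hashlib
import hmac

def authenticator_prf(credential_key: bytes, rp_input: bytes) -> bytes:
    """Model of the prf extension: HMAC-SHA-256, keyed with the credential's
    fixed random key, over the hash of a fixed prefix plus the site's input.
    This plays the role of HKDF Extract."""
    digest = hashlib.sha256(b"WebAuthn PRF" + b"\x00" + rp_input).digest()
    return hmac.new(credential_key, digest, hashlib.sha256).digest()

def hkdf_expand(prk: bytes, info: bytes, length: int) -> bytes:
    """RFC 5869 HKDF-Expand, run client-side to derive multiple keys."""
    okm, block, counter = b"", b"", 1
    while len(okm) < length:
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

# Hypothetical 32-byte credential key standing in for the authenticator's secret.
credential_key = hashlib.sha256(b"hypothetical credential key").digest()

prk = authenticator_prf(credential_key, b"encryption-keys-v1")  # Extract step
enc_key = hkdf_expand(prk, b"file-encryption", 32)              # Expand step(s)
mac_key = hkdf_expand(prk, b"file-mac", 32)

assert enc_key != mac_key and len(enc_key) == 32
```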

The largeBlob extension allows supporting authenticators to store an opaque blob of data that the relying party can read or write during authentication assertions. The intended use cases include storing certificates or cryptographic keys directly on the authenticator. Combined with prf, this opens the door to passkey-backed end-to-end encryption, where the authenticator both derives the encryption key and stores associated metadata.

Both extensions are optional in the specification, and support varies across browsers and authenticator hardware. Applications that depend on them must check for availability at registration time and degrade gracefully when the extensions are absent. As browser and authenticator support matures, these primitives could significantly improve key management for client-side cryptography on the web.

Recovery, backup, and the security tradeoff

Passkeys are randomly generated cryptographic key pairs. If the authenticator that holds the private key is lost, broken, or wiped, there is no mathematical path to recovery. This is an inherent property of asymmetric cryptography, and it creates a practical tension that every deployment must address.

Most platform authenticators mitigate this by synchronizing passkeys to a cloud account. iCloud Keychain syncs across Apple devices, Google Password Manager syncs across Android and Chrome, and third-party managers like 1Password provide cross-platform sync. This dramatically improves usability and protects against single-device loss, but it also expands the attack surface. An attacker who compromises the cloud account (or the recovery mechanism for that account) gains access to every synced passkey. The user's security is now bounded by the security of their cloud provider, not just their local device.

There are additional risks that users should understand. A platform ban or account suspension could lock a user out of all their synced passkeys. Platforms may support passkey sharing through family accounts or device sharing features, which means the credential is no longer truly single-holder. Accidental deletion by the platform, while unlikely, is not impossible.

For high-assurance deployments, the recommended approach is to have users register multiple passkeys: for example, one on a platform authenticator for daily use and one on a hardware security key stored securely as a backup. If a recovery flow must exist for users who have lost all their passkeys, it should require strong out-of-band identity verification (in-person verification or a recovery code generated at enrollment time). Falling back to email or SMS for recovery reintroduces exactly the phishing-susceptible channel that passkeys were designed to eliminate.

Skip the cryptography homework with WorkOS

For teams that want the security benefits of origin binding without building and maintaining the full WebAuthn ceremony stack, WorkOS provides passkey support through AuthKit, its customizable authentication UI.

AuthKit handles the complete lifecycle: key-pair generation, challenge issuance, attestation verification, credential storage, and signature validation during authentication. Passkeys are enabled via a toggle in the WorkOS dashboard and work alongside other authentication methods like passwords, social login, and enterprise SSO. And you get 1 million users per month for free, no credit card required.

WorkOS also supports progressive enrollment, a flow where existing password-based users are prompted to register a passkey on their next sign-in. Users who skip the prompt are reminded periodically and can permanently dismiss it if they prefer passwords. For users who do enroll, AuthKit treats the passkey as both a first and second factor by requiring user verification (a biometric or device PIN) at authentication time, which means a separate TOTP step is unnecessary.

Developers should configure a custom domain in AuthKit before enabling passkeys in production, as credentials are bound to the domain during registration. Changing the domain later would orphan any passkeys created under the previous one.

See the docs for the full configuration and integration path.
