April 1, 2026

MFA for AI agents: Why traditional authentication falls short

AI agents don't have phones, fingerprints, or sessions. The identity infrastructure they need looks nothing like what we built for humans.

Multi-factor authentication has always rested on a simple idea: prove you are who you say you are by combining something you know, something you have, and something you are. A password plus a fingerprint. A one-time code sent to your phone. A push notification you tap while half-awake at 7 AM.

The entire model assumes a human is on the other end.

But in 2026, a growing share of the entities accessing your APIs, querying your databases, and triggering actions across your infrastructure are not human at all. They are AI agents. They do not have phones. They do not have fingerprints. And the traditional MFA playbook has nothing useful to say about them.

This is not a theoretical problem for next year's roadmap. It is happening now, and the gap between how fast organizations are deploying agents and how slowly they are securing them is becoming one of the most significant identity risks in enterprise software.

The non-human identity explosion

According to CyberArk's 2025 Identity Security Landscape report, machine identities now outnumber human users by more than 80 to 1 in a typical enterprise. That number was growing before the current wave of agentic AI. Now it is accelerating.

Every AI agent that connects to an external service, whether through the Model Context Protocol, a direct API call, or an internal tool integration, needs some form of credential. In practice, that credential is usually an API key, a service account token, or an OAuth access token. These are often provisioned manually, scoped too broadly, and rarely rotated.

The result is a sprawling layer of machine identities that security teams cannot see clearly and cannot govern effectively. A recent scan of nearly 2,000 publicly accessible MCP servers found that every single verified server lacked authentication. Anyone could access internal tool listings and, in some cases, exfiltrate sensitive data.

If that does not worry you, consider the trajectory. Gartner predicts that 33% of enterprise applications will include agentic AI by 2028, up from less than 1% in 2024. The identity infrastructure to support that growth does not exist yet.

Why traditional MFA does not translate

When we talk about MFA for humans, we are really talking about a set of assumptions:

  • There is a person present who can respond to an interactive challenge.
  • That person has a physical device (phone, security key, laptop with biometrics).
  • The authentication event is discrete: you log in, you get a session, you work.

AI agents break all three assumptions.

First, there is no person to respond to a challenge. An agent running a multi-step workflow at 3 AM cannot tap "approve" on a push notification. It cannot scan a QR code. It cannot enter a six-digit code from an authenticator app. Any authentication mechanism that requires human interaction in real time is a dead end for autonomous agents.

Second, the concept of "something you have" gets complicated. A human has a phone that is physically bound to them. An agent has... a runtime environment? A container? A set of environment variables? The possession factor needs to be reinterpreted as something like infrastructure attestation, where the agent proves it is running in a known, trusted environment rather than holding a physical device.
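The idea can be sketched in a few lines: a trusted platform signs claims about where a workload is running, and a verifier checks the signature plus the claimed environment instead of a possession factor. This is a minimal illustration, not a real attestation protocol; in practice this role is played by mechanisms like AWS IAM role credentials or SPIFFE/SPIRE SVIDs, and the key handling and field names below are invented for the sketch.

```python
import hashlib
import hmac
import json

# Hypothetical signing key held by the trusted platform, not the agent.
PLATFORM_KEY = b"platform-signing-key"

def issue_attestation(claims: dict) -> dict:
    """Platform-side: sign claims describing where the agent runs."""
    payload = json.dumps(claims, sort_keys=True).encode()
    sig = hmac.new(PLATFORM_KEY, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "signature": sig}

def verify_attestation(doc: dict, expected_env: str) -> bool:
    """Verifier-side: check the signature, then the claimed environment."""
    payload = json.dumps(doc["claims"], sort_keys=True).encode()
    sig = hmac.new(PLATFORM_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, doc["signature"]):
        return False
    return doc["claims"].get("environment") == expected_env

doc = issue_attestation({"environment": "prod-cluster-7", "workload": "crm-agent"})
print(verify_attestation(doc, "prod-cluster-7"))  # True
```

The point of the sketch is the shape of the trust relationship: the agent never holds a long-lived secret of its own; it presents proof that a trusted platform vouches for its runtime.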

Third, agent sessions are not discrete login events. A human logs into Salesforce, works for an hour, and logs out. An agent might maintain persistent connections to a dozen services simultaneously, spawning sub-agents that inherit (or escalate) its permissions. The notion of a single authentication event granting a bounded session does not map to how agents actually operate.

This is why bolting existing MFA onto agent workflows tends to fail. It either breaks the automation (the agent cannot proceed because it is waiting for a human approval that never comes) or it gets bypassed entirely through long-lived tokens and over-permissioned service accounts, which is arguably worse than having no MFA at all.

What "multi-factor" looks like for agents

If we strip MFA down to its core principle, it is about requiring multiple independent proofs of identity before granting access. That principle still applies to agents. The factors just need to be different.

Here is how the industry is starting to think about it:

  • Workload identity attestation. Rather than "something you have" being a phone, it becomes a cryptographic proof that the agent is running in an expected environment. Cloud providers already offer mechanisms for this. AWS IAM roles, Azure managed identities, and GCP service accounts can issue environment-based attestation tokens. The challenge is that these create identity silos, and agents that need to work across providers still end up bridging them with static credentials.
  • Behavioral and contextual signals. Adaptive MFA for humans looks at factors like IP address, device fingerprint, and login patterns. The equivalent for agents includes: what tools is the agent requesting access to? Is this consistent with its declared purpose? Is it operating during expected hours, from an expected network, with expected request volumes? This is the agent equivalent of "something you are," a behavioral fingerprint rather than a biometric one.
  • Scoped, ephemeral tokens. Rather than issuing a long-lived API key with broad permissions, the emerging best practice is to mint short-lived tokens scoped to specific tasks. If an agent needs to read from a CRM and write to a ticketing system, it should get two narrow tokens, not one skeleton key. When the task is done, the tokens expire. This is not a factor in the traditional MFA sense, but it enforces the same underlying goal: limiting the blast radius of any single compromised credential.
  • Delegated human authorization. For high-stakes actions, the model is increasingly "human in the loop." The agent authenticates itself through workload identity and token-based mechanisms, but before executing a sensitive operation (transferring funds, modifying production infrastructure, sending communications on behalf of a user), it requires explicit human approval. OAuth 2.1's authorization code flow, now formally adopted in the MCP specification, supports exactly this pattern: the user grants consent through a familiar browser-based flow, and the agent receives a scoped token.
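The scoped, ephemeral token idea from the list above ("two narrow tokens, not one skeleton key") can be shown in miniature. This is a toy in-memory version, assuming invented names; a real system would mint signed JWTs from an authorization server rather than plain dictionaries.

```python
import time

def mint_token(scope: str, ttl_seconds: int = 300) -> dict:
    """Mint a token bound to a single scope with a short lifetime."""
    return {"scope": scope, "expires_at": time.time() + ttl_seconds}

def authorize(token: dict, required_scope: str) -> bool:
    """Reject expired tokens and any scope mismatch."""
    if time.time() >= token["expires_at"]:
        return False
    return token["scope"] == required_scope

# One narrow token per task, not one broad credential for both.
crm_read = mint_token("crm:read")
tickets_write = mint_token("tickets:write")

print(authorize(crm_read, "crm:read"))       # True
print(authorize(crm_read, "tickets:write"))  # False: wrong scope
```

Even in this toy form, a leaked `crm_read` token buys an attacker a few minutes of read-only CRM access rather than standing write access to every connected system.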

The MCP authentication story (so far)

The Model Context Protocol has become the de facto standard for connecting AI agents to external tools and services, with over 97 million monthly downloads as of March 2026. Its authentication story has evolved rapidly and is still a work in progress.

The current MCP spec uses OAuth 2.1 with PKCE for user-facing authorization flows. When an agent needs to access a protected resource on a user's behalf, it initiates a standard OAuth flow: the user is redirected to an authorization server, approves specific scopes, and the agent receives an access token. This works well for the "human delegates access to an agent" pattern.
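The PKCE piece of that flow is small enough to show directly. Per RFC 7636, the client generates a random verifier, keeps it secret, and sends only its SHA-256 challenge with the initial authorization request; the endpoint URLs and surrounding flow are omitted here.

```python
import base64
import hashlib
import secrets

def make_pkce_pair() -> tuple[str, str]:
    """Generate an RFC 7636 code_verifier and S256 code_challenge."""
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge

verifier, challenge = make_pkce_pair()
# The agent sends `challenge` (with code_challenge_method=S256) when
# starting the flow, then proves possession by sending `verifier` when
# exchanging the authorization code for a token.
print(len(verifier) >= 43)  # True: meets the RFC 7636 minimum length
```

Because the verifier never leaves the client until the token exchange, an attacker who intercepts the authorization code alone cannot redeem it.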

But for agent-to-agent and machine-to-machine scenarios, where no human is present, things get murkier. The client credentials grant was included in an earlier version of MCP, then removed, and is now being reintroduced through a draft extension. The core spec intentionally stays silent on server-to-server auth to keep things simple, but that silence has created a gap that organizations are filling with insecure workarounds: hardcoded API keys, over-permissioned service accounts, and tokens that never expire.
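For reference, the client credentials grant mentioned above is just a form-encoded POST to a token endpoint. The sketch below only builds the request shape; the endpoint URL and credentials are placeholders, and a real client would also send the credentials via HTTP Basic auth or a signed JWT rather than in the body.

```python
from urllib.parse import urlencode

def client_credentials_request(client_id: str, client_secret: str, scope: str) -> dict:
    """Build a standard OAuth 2.0/2.1 client_credentials token request."""
    return {
        "url": "https://auth.example.com/oauth2/token",  # placeholder endpoint
        "headers": {"Content-Type": "application/x-www-form-urlencoded"},
        "body": urlencode({
            "grant_type": "client_credentials",
            "client_id": client_id,
            "client_secret": client_secret,
            "scope": scope,
        }),
    }

req = client_credentials_request("agent-42", "s3cret", "tickets:write")
print("grant_type=client_credentials" in req["body"])  # True
```

The grant itself is simple; the hard part the MCP debate is working through is who issues these client credentials, how they are scoped, and how they are rotated.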

Google's Agent2Agent (A2A) protocol offers a more opinionated approach to this specific problem. And there are competing proposals for how MCP should handle enterprise scenarios like SSO integration and dynamic client registration. The 2026 roadmap includes enterprise authentication with OAuth 2.1 flows, SAML/OIDC integration, and audit trails. But for teams building agent systems today, the standards are still settling.

What to do right now

If you are building or deploying AI agents, you do not have the luxury of waiting for the specs to mature. Here is a practical starting point:

  • Treat every agent as a first-class identity. Do not let agents inherit a developer's personal credentials or share a single service account across multiple workflows. Each agent (or agent type) should have its own identity, with its own permissions, its own audit trail, and its own lifecycle. When the agent is retired, its credentials should be revoked.
  • Enforce least privilege aggressively. Scope tokens to the minimum permissions required for each task. If your agent framework supports it, use just-in-time access: provision credentials only when a task begins and revoke them when it ends. Avoid the temptation to give an agent broad access "just in case."
  • Eliminate long-lived secrets. Hardcoded API keys in environment variables are the agent equivalent of writing your password on a sticky note. Use workload identity federation where possible. Where you must use tokens, set short expiration times and automate rotation.
  • Require human approval for sensitive actions. Build approval gates into your agent workflows for anything that is irreversible or high-impact. The OAuth consent model is a good pattern here: the human authorizes a specific scope, and the agent operates within it.
  • Monitor agent behavior continuously. Authentication is not a one-time event. Implement runtime monitoring that watches for anomalous patterns: unexpected API calls, permission escalations, access outside normal parameters. This is where adaptive, risk-based approaches designed for human MFA can be repurposed for agents.
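The monitoring point above can be made concrete with a toy check that compares an agent's observed activity against its declared profile. The thresholds and field names are invented for illustration; a production system would learn baselines rather than hardcode them.

```python
# Hypothetical declared profile for a single agent identity.
EXPECTED = {
    "allowed_tools": {"crm.read", "tickets.write"},
    "active_hours": range(6, 22),        # 06:00-21:59 UTC
    "max_requests_per_minute": 60,
}

def anomalies(event: dict, profile: dict = EXPECTED) -> list[str]:
    """Return a list of behavioral flags for one observed event."""
    flags = []
    if event["tool"] not in profile["allowed_tools"]:
        flags.append("undeclared tool")
    if event["hour_utc"] not in profile["active_hours"]:
        flags.append("outside expected hours")
    if event["requests_per_minute"] > profile["max_requests_per_minute"]:
        flags.append("request volume spike")
    return flags

event = {"tool": "prod.deploy", "hour_utc": 3, "requests_per_minute": 240}
print(anomalies(event))
# ['undeclared tool', 'outside expected hours', 'request volume spike']
```

Each flag on its own might be benign; the value comes from treating combinations of them as a signal to step up authentication or pause the agent.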

The identity layer is the next battleground

The last decade of cloud security taught us that the perimeter does not matter when everything is an API. The identity layer became the new perimeter. The same shift is happening again with AI agents, but faster.

Organizations that get agent identity right will be able to deploy autonomous systems confidently, with clear accountability and bounded risk. Organizations that treat it as an afterthought will end up with an ungovernable mesh of over-permissioned, unmonitored machine identities that attackers will exploit.

MFA was built for humans, and it served us well. Now we need to carry its core principle forward, requiring multiple independent proofs of identity before trust is granted, and apply it to a new class of actors that are already inside the building.

The agents are here. The question is whether we will authenticate them before or after something goes wrong.

At WorkOS, we are building the identity infrastructure for this shift. Whether your agents authenticate on behalf of users through OAuth via WorkOS Connect, communicate with other services using short-lived M2M tokens, or connect to tools through the Model Context Protocol, AuthKit provides a standards-compliant authorization server that supports OAuth 2.1 and integrates with enterprise identity providers out of the box. For teams that need resource-level permission control for agents, WorkOS Fine-Grained Authorization lets you scope access to specific resources rather than granting broad tenant-wide roles. If you are building AI agents that need to operate inside enterprise environments, start here.
