June 13, 2025

Identity for AI: Who Are Your Agents and What Can They Do?

Why traditional authentication fails for AI agents and the new identity patterns—from persona shadowing to capability tokens—that will secure our agent-driven future.

You're building a SaaS product and decide to add an AI onboarding agent that talks to customers and creates user accounts. To do its job, this agent needs access to your production database—just like any other part of your application.

Now here's the million-dollar question: If this agent deletes your production database, who's responsible?

Sure, you could lock down database access. But what about Jira? Salesforce? Slack? Your agents need to work across these tools to be truly useful. Where exactly do you draw the line?

This scenario isn't hypothetical—it's happening right now as companies race to integrate AI agents into their workflows. And it's exposing a fundamental gap in how we think about identity and access management.

AI Agents Are Not Just Smart APIs

Traditional machine-to-machine (M2M) authentication was designed for predictable, narrow use cases: an API endpoint that processes payments, or a job that syncs data between known services. These integrations have well-defined scopes and a limited blast radius.

AI agents are fundamentally different. They're designed to act on behalf of users across wide ranges of tools and services.

Unlike traditional integrations that connect point A to point B, agents operate more like digital employees—they need broad access to do their jobs effectively, but their actions are inherently unpredictable.

This creates an urgent challenge: companies need to adopt AI to stay competitive, but most identity infrastructure isn't ready for this new paradigm.

Why Identity for Agents Is Fundamentally Hard

Authentication Without a Login Page

How do you verify an agent is authentic when it has no ability to complete a traditional login flow? Agents need something like API tokens, but they also need to use your actual application—not just hit backend endpoints. They need persistent sessions that can last across multiple invocations, sometimes for extended periods.

Where do these credentials live? How do agents store them securely? These aren't trivial questions when your agent might be running in multiple environments or even spawning sub-agents.

The Least Privilege Paradox

Security best practices demand least privilege access—grant only the minimum permissions needed for a specific task. But agents are non-deterministic by nature. How do you scope permissions for something when you can't predict what it will need to do?

You want to lock down access, but if an agent needs to escalate its permissions dynamically based on user requests, your authorization system needs to be far more flexible than traditional role-based access control (RBAC).
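One way to make authorization this flexible is to layer short-lived, dynamically granted scopes on top of static role grants. Here's a minimal sketch of that idea; the role table, `Elevation` record, and function names are illustrative assumptions, not any specific product's API:

```python
import time
from dataclasses import dataclass, field

# Hypothetical static role grants, as a plain RBAC system would hold them.
STATIC_ROLE_GRANTS = {"onboarding-agent": {"users:create", "users:read"}}

@dataclass
class Elevation:
    scope: str
    expires_at: float

@dataclass
class AgentContext:
    agent_id: str
    elevations: list = field(default_factory=list)

def is_authorized(ctx: AgentContext, scope: str) -> bool:
    """Check static role grants first, then any unexpired dynamic elevations."""
    now = time.time()
    if scope in STATIC_ROLE_GRANTS.get(ctx.agent_id, set()):
        return True
    return any(e.scope == scope and e.expires_at > now for e in ctx.elevations)

def grant_elevation(ctx: AgentContext, scope: str, ttl_seconds: int) -> None:
    """Grant a temporary scope, e.g. after a user confirms a request."""
    ctx.elevations.append(Elevation(scope, time.time() + ttl_seconds))

ctx = AgentContext("onboarding-agent")
print(is_authorized(ctx, "billing:refund"))   # False: not in the static role
grant_elevation(ctx, "billing:refund", ttl_seconds=300)
print(is_authorized(ctx, "billing:refund"))   # True: temporary elevation
```

The point is that the elevation expires on its own: the agent's baseline stays minimal, and anything beyond it is a time-boxed exception rather than a permanent grant.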

Compliance in a Black Box World

If observability and logging for human users are challenging, they're exponentially harder for agents. Agents will generate far more events and perform far more actions than human users. They can operate 24/7, make decisions in milliseconds, and interact with dozens of systems simultaneously.

Every agent-initiated transaction needs to be tied to an identifiable agent identity and the end-user who delegated authority to it. For compliance frameworks like SOX, HIPAA, and GDPR, this isn't optional—it's mandatory.

SIEMs will need to differentiate between human and agent actions, understand delegation chains, and provide audit trails that humans can actually parse and investigate.
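To make that concrete, an agent-aware audit record needs at minimum an actor type, the agent's own identity, the delegating end-user, and the full delegation chain. A minimal sketch of such a record (the field names here are assumptions, not a standard schema):

```python
import json
import uuid
import datetime

def audit_event(actor_type, actor_id, on_behalf_of, delegation_chain, action, resource):
    """Build a structured audit record that a SIEM can filter on actor_type
    and walk back through delegation_chain to the originating human."""
    return {
        "event_id": str(uuid.uuid4()),
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "actor_type": actor_type,          # "human" or "agent"
        "actor_id": actor_id,
        "on_behalf_of": on_behalf_of,      # the delegating end-user
        "delegation_chain": delegation_chain,
        "action": action,
        "resource": resource,
    }

event = audit_event(
    actor_type="agent",
    actor_id="agent:onboarding-bot",
    on_behalf_of="user:alice",
    delegation_chain=["user:alice", "agent:onboarding-bot"],
    action="user.create",
    resource="db:production/users",
)
print(json.dumps(event, indent=2))
```

With records like this, "who did it?" always has two answers: the agent that executed the action, and the human whose authority it was exercising.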

Architecture Patterns for Agent Identity

Rather than forcing agents into existing authentication patterns, we need new approaches designed for their unique characteristics:

Persona Shadowing

Instead of having agents impersonate users directly, give each agent an independent identity that "shadows" a specific user. This could be implemented as a secondary user account in your IdP, or as a service account with a carefully scoped subset of the user's privileges.

The key insight is separation: every action is explicitly tied to an AI agent identity that's linked to (but distinct from) the delegating user. The agent inherits role-based access from the user but operates under its own credentials.

Think of it like a legal power of attorney—the agent acts for the user but maintains its own distinct identity for accountability purposes.
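In code, the essential move is an intersection: the shadow identity gets only the scopes the user actually holds and that the agent was explicitly granted, never more. A minimal sketch, with a hypothetical in-memory permission table:

```python
# Hypothetical user grants, as they might come from an IdP.
USER_PERMISSIONS = {
    "alice": {"calendar:read", "calendar:write", "billing:refund", "users:create"},
}

def create_shadow_identity(user_id, requested_scopes):
    """Derive an agent identity from a user's grants. The shadow receives
    the intersection of what the user holds and what was requested for the
    agent -- it can never exceed the delegating user's own access."""
    user_scopes = USER_PERMISSIONS[user_id]
    granted = user_scopes & set(requested_scopes)
    return {
        "agent_id": f"agent:shadow-of-{user_id}",
        "delegated_by": user_id,
        "scopes": granted,
    }

# "admin:delete" is requested but alice doesn't hold it, so it's dropped.
shadow = create_shadow_identity("alice", ["calendar:read", "admin:delete"])
print(shadow["scopes"])  # {'calendar:read'}
```

Because the agent operates under `agent:shadow-of-alice` rather than as `alice`, every downstream log line distinguishes the agent's actions from the user's own.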

Delegation Chains

Agents often need to call other services or spawn sub-agents to complete complex tasks. This creates chains of delegated authority that need to preserve end-to-end trust and context.

For example: User delegates to Agent A, which calls Agent B to perform a sub-task, which in turn needs to access a third-party API. Each link in this chain must carry forward the original user's authorization in a verifiable way.

This can be implemented using JWTs passed between services, and is supported by emerging standards like UMA (User-Managed Access) and OIDC-A (OpenID Connect for Agents).
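The core mechanic can be sketched without a full JWT library: each hop signs over its own claims plus the previous hop's signature, so the chain can't be reordered, truncated, or tampered with. This sketch uses a single shared HMAC key for brevity; real systems would use per-service keys or asymmetric signatures:

```python
import hmac
import hashlib
import json
import base64

SECRET = b"shared-demo-secret"  # assumption: one demo key; not production practice

def sign(payload: dict, prev_sig: str = "") -> dict:
    """Append one delegation hop. The signature covers this hop's payload
    plus the previous hop's signature, linking the chain together."""
    body = json.dumps(payload, sort_keys=True) + prev_sig
    sig = base64.urlsafe_b64encode(
        hmac.new(SECRET, body.encode(), hashlib.sha256).digest()
    ).decode()
    return {"payload": payload, "sig": sig}

def verify_chain(chain: list) -> bool:
    """Re-derive every link's signature; any tampering breaks verification."""
    prev_sig = ""
    for link in chain:
        body = json.dumps(link["payload"], sort_keys=True) + prev_sig
        expected = base64.urlsafe_b64encode(
            hmac.new(SECRET, body.encode(), hashlib.sha256).digest()
        ).decode()
        if not hmac.compare_digest(expected, link["sig"]):
            return False
        prev_sig = link["sig"]
    return True

# User -> Agent A -> Agent B, each hop carrying the original subject forward.
hop1 = sign({"sub": "user:alice", "delegate": "agent:A", "scope": "calendar:read"})
hop2 = sign({"sub": "user:alice", "delegate": "agent:B", "scope": "calendar:read"}, hop1["sig"])
print(verify_chain([hop1, hop2]))  # True
```

Note that `sub` stays `user:alice` at every hop: however deep the chain gets, the original user's authorization travels with it verifiably.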

Capability-Based Tokens

Instead of relying solely on roles or attributes, issue unforgeable tokens that grant specific rights. A capability token might encode: "Agent X can read Bob's calendar for the next 60 minutes."

This approach aligns perfectly with how agents work—unlike humans who log in with broad permissions, agents can receive different capability tokens for each task, encoding only the minimal rights required. These tokens can be time-bound and self-contained, simplifying verification while limiting blast radius if compromised.
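The article's example ("Agent X can read Bob's calendar for the next 60 minutes") can be sketched as a self-contained, signed, time-bound token. The claim names and the single issuer key are assumptions for illustration:

```python
import hmac
import hashlib
import json
import base64
import time

SECRET = b"issuer-demo-secret"  # assumption: one issuer key for the sketch

def mint_capability(agent_id, action, resource, ttl_seconds):
    """Issue a self-contained capability token with a built-in expiry."""
    claims = {
        "agent": agent_id,
        "action": action,            # e.g. "calendar:read"
        "resource": resource,        # e.g. "user:bob/calendar"
        "exp": int(time.time()) + ttl_seconds,
    }
    body = base64.urlsafe_b64encode(json.dumps(claims, sort_keys=True).encode())
    sig = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return f"{body.decode()}.{sig}"

def check_capability(token, action, resource):
    """Verify the signature, the expiry, and that the token covers
    exactly this action on this resource -- nothing broader."""
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, sig):
        return False
    claims = json.loads(base64.urlsafe_b64decode(body))
    return (claims["action"] == action
            and claims["resource"] == resource
            and claims["exp"] > time.time())

token = mint_capability("agent:X", "calendar:read", "user:bob/calendar", ttl_seconds=3600)
print(check_capability(token, "calendar:read", "user:bob/calendar"))   # True
print(check_capability(token, "calendar:write", "user:bob/calendar"))  # False
```

Because the token is self-contained, the resource server verifies it locally with no lookup, and a stolen token is useful only for one narrow action until it expires.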

Human-in-the-Loop Escalation

For sensitive actions, require explicit human approval. The challenge is balancing security with usability—too many approval requests create consent fatigue (we've all seen this with mobile app permissions).

The solution is intelligent escalation: routine actions proceed automatically, while potentially risky operations trigger human review.

Machine learning can help identify which actions require escalation based on context, user behavior, and risk models.
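A toy version of that routing logic looks like this. The tags, weights, and threshold are arbitrary placeholders; a real system would score actions with a learned risk model over context and user history:

```python
# Illustrative risk weights -- the values are arbitrary assumptions.
RISK_WEIGHTS = {
    "destructive": 50,   # deletes, irreversible writes
    "financial": 40,     # payments, refunds
    "bulk": 30,          # touches many records at once
    "off_hours": 10,     # activity outside normal usage patterns
}

def risk_score(action_tags):
    """Sum the weights of every risk signal attached to an action."""
    return sum(RISK_WEIGHTS.get(tag, 0) for tag in action_tags)

def route_action(action_tags, threshold=50):
    """Auto-approve routine actions; escalate risky ones to a human."""
    if risk_score(action_tags) >= threshold:
        return "escalate_to_human"
    return "auto_approve"

print(route_action(["off_hours"]))            # auto_approve (score 10)
print(route_action(["destructive", "bulk"]))  # escalate_to_human (score 80)
```

The threshold is the dial for consent fatigue: raise it and humans see fewer prompts but more risk flows through unreviewed; lower it and the reverse.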

Emerging Standards and Protocols

The identity industry is actively developing standards to address these challenges:

OAuth 2.0 / OpenID Connect provide a foundation—they were designed for users to grant third-party apps limited access without sharing passwords, which maps well to agent scenarios. However, they need adaptation for headless authentication and machine-first workflows.

UMA (User-Managed Access) extends OAuth 2.0 to let users proactively set policies for what agents can access. Instead of baking consent into each OAuth dialog, UMA externalizes these policies to a centralized authorization server.

GNAP (Grant Negotiation and Authorization Protocol) generalizes OAuth's concepts, allowing clients like AI agents to negotiate for specific access rights and receive dynamically tailored tokens. Unlike OAuth's static scopes, GNAP was designed with non-human clients in mind.

OIDC-A (OpenID Connect for Agents) is an emerging proposal to extend OpenID Connect specifically for LLM-based agents, focusing on attestation and chains of trust.

Verifiable Credentials and Secure Credential Presentation enable agents to present cryptographically signed proof of identity or delegation. An agent could present a verifiable credential proving "Alice has delegated access X to Agent Y, valid until Z."

Industry Approaches and Current Tools

We're still in the early days, but the ecosystem is rapidly evolving:

At WorkOS, we're handling agent authentication today through AuthKit (our OAuth authorization server) and providing fine-grained authorization through WorkOS FGA (our Zanzibar-inspired authorization system). We're especially excited about securing MCP (Anthropic's Model Context Protocol), which is becoming a standard for LLM integrations.

Microsoft Entra is developing workload identities for agents, while Cloudflare is exploring Zero Trust architectures for AI systems. Identity governance vendors like ConductorOne are tackling the unique lifecycle challenges of agent identities—agents might exist for minutes, hours, or months, unlike traditional user accounts.

The audit and logging space is also heating up as companies realize that agent activity needs specialized observability tools.

What's Next: A World of Agents

We're heading toward a fundamental shift in how digital systems operate. Today's applications are roughly 95% human users and 5% automated traffic from APIs and integrations.

With AI agents, we'll first see a 50/50 split between human and agent users. Eventually, we may reach a world where 95% of system interactions are agent-driven. There will be billions of humans, but potentially trillions of agents—and entirely new categories of agent-only services.

This transformation brings both immense opportunity and significant responsibility. We need to work together as an industry to build standards that make this future safe and secure. User trust is paramount, and standards benefit everyone.

Trust used to be binary—you had trusted and untrusted apps and services. In an agent-driven world, everything becomes a spectrum. How do we verify that an agent is what it claims to be? How do we build systems robust enough to handle non-deterministic actors that even their developers can't fully predict?

These aren't just technical challenges—they're fundamental questions about how we want our digital future to work.

The Path Forward

The identity industry has always been at its best when facing paradigm shifts. We've successfully navigated the transition from on-premises to cloud, from passwords to multi-factor authentication, from perimeter security to zero trust.

The agent revolution is our next frontier. The companies and standards bodies that solve identity for AI agents will shape how the next generation of applications work.

At WorkOS, we're committed to being part of this solution. If you're grappling with these challenges, we'd love to collaborate. The ecosystem wins when we work together to build secure, interoperable standards.

The future is going to be fascinating—and we need to make sure it's secure.
