OAuth governance and consent phishing: What engineers need to know
How attackers turn legitimate consent prompts into persistent backdoors, and what your team can do about it.
Modern identity systems lean heavily on OAuth 2.0 and OpenID Connect to delegate access between services. That delegation model is powerful, but it also introduces an attack surface that doesn't require stealing passwords or exploiting software vulnerabilities at all. Consent phishing, sometimes called illicit consent grants or OAuth phishing, abuses the trust users place in familiar login and authorization screens to silently hand over persistent access to an attacker-controlled application.
This article breaks down how consent phishing works, why traditional defenses miss it, and what development and security teams can do to build a governance posture around OAuth that actually holds up.
The basics: How OAuth consent works
In a standard OAuth authorization code flow, a user is redirected to an identity provider (IdP), such as Microsoft Entra ID, Google Workspace, or Okta, where they authenticate and then approve a set of permissions (scopes) requested by a third-party application. Once the user consents, the IdP issues tokens that grant the application access to resources on the user's behalf.
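The two legs of this flow can be sketched in a few lines of Python. This is an illustrative sketch, not a production client: the endpoints are the Microsoft identity platform v2.0 endpoints, and the client ID, secret, and redirect URI are placeholders.

```python
import json
import urllib.request
from urllib.parse import urlencode

AUTHORIZE_URL = "https://login.microsoftonline.com/common/oauth2/v2.0/authorize"
TOKEN_URL = "https://login.microsoftonline.com/common/oauth2/v2.0/token"

def build_authorization_url(client_id: str, redirect_uri: str,
                            scopes: list[str], state: str) -> str:
    """First leg: the URL the user is sent to for login and consent."""
    params = {
        "client_id": client_id,
        "response_type": "code",
        "redirect_uri": redirect_uri,
        "scope": " ".join(scopes),
        "state": state,
    }
    return f"{AUTHORIZE_URL}?{urlencode(params)}"

def exchange_code_for_tokens(client_id: str, client_secret: str,
                             code: str, redirect_uri: str) -> dict:
    """Second leg: the app redeems the authorization code for tokens.
    With the offline_access scope, the response includes a refresh token."""
    body = urlencode({
        "client_id": client_id,
        "client_secret": client_secret,
        "grant_type": "authorization_code",
        "code": code,
        "redirect_uri": redirect_uri,
    }).encode()
    req = urllib.request.Request(TOKEN_URL, data=body, method="POST")
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Placeholder values; a real app registers its own client ID and redirect URI.
url = build_authorization_url("00000000-0000-0000-0000-000000000000",
                              "https://app.example.com/callback",
                              ["User.Read", "offline_access"], state="xyz")
```

Nothing in either leg is inherently malicious; the attack described below uses exactly this machinery with an attacker-registered client ID and redirect URI.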
This is the mechanism behind every "Sign in with Google" button and every Slack integration that asks to read your channels. The key detail is that the user, not an administrator, is often the one making the access decision. The IdP trusts the user to evaluate whether the requesting application should have the permissions it is asking for.
That trust assumption is where things break down.
What consent phishing actually looks like
An attacker registers an OAuth application with a cloud identity provider. The application itself is technically legitimate in the eyes of the platform; it has a client ID, a redirect URI, and a set of requested scopes. The attacker then crafts a phishing lure, typically an email, a Teams or Slack message, or a link on a compromised page, that redirects the victim to the real IdP authorization endpoint.
The victim sees a familiar login page. They authenticate with their real credentials through the real IdP. Then they see a consent prompt listing the permissions the malicious app is requesting. Because the entire flow happens on the legitimate identity provider's domain, there is nothing visually suspicious. No fake login page, no credential harvesting endpoint, no domain spoofing. The user clicks "Accept," and the attacker's application receives an access token and, critically, a refresh token.
From that point forward, the attacker has persistent, token-based access to whatever resources the scopes permit. That might mean reading email, accessing files on OneDrive or Google Drive, sending messages as the user, or enumerating directory information. The access survives password changes because it is not tied to the user's password. It persists until the consent is explicitly revoked or the refresh token expires, which in many configurations can be months or indefinitely.
Consent phishing example
An attacker registers an app called "Contoso Secure Document Viewer" in Microsoft Entra ID. They configure it to request Mail.Read, Files.ReadWrite.All, and User.Read as delegated permissions, and they set the redirect URI to a server they control. Then they send an email to employees at a target company: "Your IT department has enabled a new document review portal. Please sign in to activate your account: [Activate now]"
The link points to something like this (with a placeholder client ID):

https://login.microsoftonline.com/common/oauth2/v2.0/authorize?client_id=&lt;attacker-app-id&gt;&response_type=code&redirect_uri=https%3A%2F%2Fattacker-server.com%2Fcallback&scope=Mail.Read%20Files.ReadWrite.All%20User.Read
Notice that the domain is login.microsoftonline.com, which is the real Microsoft login page. The employee clicks through, signs in with their real credentials, completes MFA, and sees a consent screen asking them to grant "Contoso Secure Document Viewer" permission to read their email and access their files. They click Accept.
Microsoft now issues an authorization code to attacker-server.com/callback. The attacker exchanges that code for an access token and a refresh token. From that moment, the attacker can call the Microsoft Graph API and read the employee's entire mailbox, browse their OneDrive, and pull directory data, all without ever knowing the employee's password. If the employee changes their password the next day, the attacker's refresh token still works.
The critical thing to understand is that the tokens grant the attacker's app access to the victim's resources on Microsoft's platform. The scopes like Mail.Read are permissions to call Microsoft Graph on behalf of the user who consented. The attacker calls GET https://graph.microsoft.com/v1.0/me/messages with the token, and Microsoft returns the victim's inbox, because the victim authorized exactly that.
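The attacker's post-consent access is nothing more exotic than authenticated HTTPS calls to Microsoft Graph. A minimal sketch, assuming the attacker already holds a delegated access token from the flow above:

```python
import json
import urllib.request

GRAPH = "https://graph.microsoft.com/v1.0"

def graph_url(path: str) -> str:
    return GRAPH + path

def graph_get(path: str, access_token: str) -> dict:
    """Call Microsoft Graph with a delegated access token -- the same kind of
    call any legitimate integration makes, which is why it blends in."""
    req = urllib.request.Request(
        graph_url(path),
        headers={"Authorization": f"Bearer {access_token}"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# With a token carrying Mail.Read:
#   inbox = graph_get("/me/messages", access_token)          # victim's mailbox
# With Files.ReadWrite.All:
#   files = graph_get("/me/drive/root/children", access_token)  # OneDrive contents
```

Because the token was issued to the attacker's app by the real IdP, these requests are indistinguishable at the API layer from any sanctioned integration's traffic.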
Why this is hard to detect
Consent phishing sidesteps most of the controls that security teams rely on for traditional phishing. There is no malicious attachment. There is no credential submission to a spoofed domain. The authentication happens on a legitimate IdP endpoint, so URL filtering and safe link scanning see a trusted domain. Multi-factor authentication completes successfully because the user is logging into the real provider.
From a logging perspective, the consent grant itself may appear as a routine administrative event. In Microsoft Entra ID, for example, the event surfaces as a consent-to-application action in the audit logs, but in environments where users routinely approve integrations, it blends in. If the security team is not specifically monitoring for new OAuth grants or anomalous scope requests, the activity goes unnoticed.
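Those consent events can be pulled programmatically. The sketch below builds a Microsoft Graph query against the directory audit log, filtered to recent "Consent to application" events; reading these logs requires an app or admin token with the AuditLog.Read.All permission, and the exact activityDisplayName string should be confirmed against your tenant's logs.

```python
import json
import urllib.request
from datetime import datetime, timedelta, timezone
from urllib.parse import quote

def consent_audit_query_url(days: int = 7) -> str:
    """Graph query for recent 'Consent to application' directory audit events."""
    since = (datetime.now(timezone.utc)
             - timedelta(days=days)).strftime("%Y-%m-%dT%H:%M:%SZ")
    flt = (f"activityDisplayName eq 'Consent to application' "
           f"and activityDateTime ge {since}")
    return ("https://graph.microsoft.com/v1.0/auditLogs/directoryAudits"
            f"?$filter={quote(flt)}")

def fetch_consent_events(token: str) -> list[dict]:
    """Fetch the filtered events; each record names the app and the consenting user."""
    req = urllib.request.Request(
        consent_audit_query_url(),
        headers={"Authorization": f"Bearer {token}"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp).get("value", [])
```

Feeding this into a scheduled job or SIEM ingestion pipeline turns a blended-in audit event into a reviewable stream.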
The attacker also benefits from the fact that token-based access does not generate the same signals as interactive logins. API calls made with an OAuth token may not trigger conditional access policies, impossible travel alerts, or sign-in risk detections, depending on how the IdP and monitoring stack are configured.
The scope problem
The severity of a consent phishing attack is directly proportional to the scopes the attacker requests and the user is able to grant. In permissive environments, a user might be able to approve scopes like Mail.Read, Files.ReadWrite.All, User.Read.All, or Directory.Read.All without any administrator involvement.
Some attackers request broad scopes upfront, betting on user inattention. Others use incremental consent, starting with minimal permissions to avoid suspicion and later prompting the user to approve additional scopes. A few sophisticated campaigns use scopes that sound benign but provide significant access. User.Read, for instance, sounds harmless but can return detailed profile and organizational information depending on the IdP's implementation.
The fundamental governance question is: who is allowed to grant what, and under what conditions? Most identity platforms ship with defaults that favor user productivity over security, meaning users can consent to a wide range of scopes without administrator approval.
Building an OAuth governance framework
Defending against consent phishing requires a combination of policy controls, monitoring, and developer hygiene. No single measure is sufficient on its own.
Restrict user consent
The single highest-impact control is limiting or disabling user consent for OAuth applications. In Microsoft Entra ID, this means configuring the "User consent settings" to either block user consent entirely or restrict it to apps from verified publishers requesting only a predefined set of low-risk permissions. Google Workspace provides similar controls through the "API access" settings in the admin console, where administrators can block or scope down third-party app access.
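In Entra ID this setting is also exposed through the Microsoft Graph authorization policy. A hedged sketch, assuming a token with the Policy.ReadWrite.Authorization permission: an empty policy list blocks user consent entirely, while the built-in "microsoft-user-default-low" grant policy restricts it to verified-publisher apps requesting low-risk permissions.

```python
import json
import urllib.request

POLICY_URL = "https://graph.microsoft.com/v1.0/policies/authorizationPolicy"

# Built-in policy: user consent only for verified publishers + low-risk scopes.
RESTRICT_TO_LOW_RISK = ["ManagePermissionGrantsForSelf.microsoft-user-default-low"]

def build_consent_patch(policies: list[str]) -> bytes:
    """Request body for PATCHing the tenant's default user consent setting.
    Pass [] to block user consent entirely."""
    return json.dumps({
        "defaultUserRolePermissions": {
            "permissionGrantPoliciesAssigned": policies
        }
    }).encode()

def restrict_user_consent(token: str,
                          policies: list[str] = RESTRICT_TO_LOW_RISK) -> None:
    """Apply the restriction tenant-wide (requires Policy.ReadWrite.Authorization)."""
    req = urllib.request.Request(
        POLICY_URL, data=build_consent_patch(policies), method="PATCH",
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"})
    urllib.request.urlopen(req)
```

Test the change in a non-production tenant first; flipping this switch affects every future consent prompt in the organization.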
When user consent is restricted, users who want to connect a new application must submit a request that goes through an admin approval workflow. This introduces friction, which is exactly the point. The trade-off is operational overhead: someone has to review and approve or deny these requests in a timely manner, or users will find workarounds.
Implement an app approval workflow
Restricting consent only works if there is a functional process behind it. Build or configure an admin consent workflow that lets users request access to an application, routes the request to the appropriate reviewer (security, IT, or a platform team), and logs the decision. Most major IdPs support this natively.
The review process should evaluate the requesting application's publisher verification status, the specific scopes being requested, whether the application has a legitimate business purpose, and whether equivalent functionality already exists through an approved application.
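In Entra ID, the native workflow lives behind the admin consent request policy, which Graph exposes for configuration. The following is a sketch under the assumption of a token with Policy.ReadWrite.ConsentRequest; reviewer user IDs are placeholders.

```python
import json
import urllib.request

POLICY_URL = "https://graph.microsoft.com/v1.0/policies/adminConsentRequestPolicy"

def build_workflow_policy(reviewer_user_ids: list[str],
                          duration_days: int = 30) -> bytes:
    """Enable the admin consent request workflow with a named set of reviewers;
    requests that nobody actions expire after duration_days."""
    return json.dumps({
        "isEnabled": True,
        "notifyReviewers": True,
        "remindersEnabled": True,
        "requestDurationInDays": duration_days,
        "reviewers": [
            {"query": f"/users/{uid}", "queryType": "MicrosoftGraph"}
            for uid in reviewer_user_ids
        ],
    }).encode()

def enable_admin_consent_workflow(token: str,
                                  reviewer_user_ids: list[str]) -> None:
    """Apply the policy (requires Policy.ReadWrite.ConsentRequest)."""
    req = urllib.request.Request(
        POLICY_URL, data=build_workflow_policy(reviewer_user_ids), method="PUT",
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"})
    urllib.request.urlopen(req)
```

The reviewer set should map to the team named above (security, IT, or platform), so blocked users have a real path to approval instead of a dead end.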
Audit existing grants
Before tightening policies, inventory the OAuth grants that already exist in your environment. In Entra ID, the oauth2PermissionGrants and appRoleAssignments endpoints in the Microsoft Graph API provide this data. For Google Workspace, the admin SDK's token API and the security investigation tool can surface third-party app access.
Look for applications with broad scopes that were granted by individual users rather than administrators. Look for apps from unverified publishers. Look for applications that have not been used recently but still hold valid refresh tokens. Each of these is a potential indicator of either a past consent phishing compromise or simply accumulated risk from organic, unmanaged OAuth sprawl.
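A starting point for that inventory, sketched against the Graph oauth2PermissionGrants endpoint: in a grant record, the scope field is a space-separated list of delegated scopes, and consentType == "Principal" marks a grant made by an individual user rather than an admin. The high-risk scope list here is an illustrative assumption; tune it to your environment (and note that real tenants will need paging via @odata.nextLink).

```python
import json
import urllib.request

GRANTS_URL = "https://graph.microsoft.com/v1.0/oauth2PermissionGrants"

# Delegated scopes worth a closer look when granted by individual users.
HIGH_RISK_SCOPES = {"Mail.Read", "Mail.ReadWrite", "Files.ReadWrite.All",
                    "User.Read.All", "Directory.Read.All"}

def risky_scopes(grant: dict) -> set[str]:
    """Return the high-risk scopes present in a grant's space-separated scope field."""
    return set(grant.get("scope", "").split()) & HIGH_RISK_SCOPES

def audit_grants(token: str) -> list[dict]:
    """List delegated permission grants; keep user-consented grants with risky scopes."""
    req = urllib.request.Request(
        GRANTS_URL, headers={"Authorization": f"Bearer {token}"})
    with urllib.request.urlopen(req) as resp:
        grants = json.load(resp).get("value", [])
    return [g for g in grants
            if g.get("consentType") == "Principal" and risky_scopes(g)]
```

Each flagged grant carries a clientId that resolves to the service principal of the granted app, which is where the publisher verification and last-used checks come in.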
Monitor for anomalous consent events
Set up detection rules for new OAuth application consent events, especially those involving sensitive scopes. In a SIEM or XDR platform, alert on consent grants for permissions like Mail.ReadWrite, Files.ReadWrite.All, or any application-level (as opposed to delegated) permissions granted by non-admin users.
Correlate consent events with other signals. A consent grant that follows a phishing email delivery within a short time window is a strong indicator. Similarly, consent to an application with a recently created client ID or from a publisher domain that does not match your organization's known vendors is worth investigating.
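The two detection ideas above reduce to a simple predicate over normalized events. This sketch assumes you have already parsed consent events into dicts with a scope list and a timestamp, and have a feed of phishing delivery times for the tenant; the sensitive-scope set and one-hour window are illustrative thresholds.

```python
from datetime import datetime, timedelta

# Illustrative list; extend with any application-level permissions you track.
SENSITIVE = {"Mail.Read", "Mail.ReadWrite", "Files.ReadWrite.All"}

def is_suspicious(consent_event: dict,
                  phishing_deliveries: list[datetime],
                  window: timedelta = timedelta(hours=1)) -> bool:
    """Flag a consent grant if it requests sensitive scopes, or if it lands
    shortly after a phishing email was delivered to the tenant."""
    if set(consent_event["scopes"]) & SENSITIVE:
        return True
    t = consent_event["time"]
    return any(0 <= (t - delivered).total_seconds() <= window.total_seconds()
               for delivered in phishing_deliveries)
```

In practice this logic would live in a SIEM rule rather than standalone code, but the predicate is the same: sensitive scope, or tight temporal coupling to a lure.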
Enforce publisher verification and tenant restrictions
Identity providers increasingly support publisher verification, a mechanism where the application developer proves they control a verified domain through the Microsoft Partner Network or similar programs. Configuring your tenant to only allow consent to verified publisher apps eliminates a large class of attacker-registered applications.
Tenant restrictions can also help by ensuring that users can only authenticate to your organization's tenant and a defined set of partner tenants, preventing redirection to attacker-controlled tenants where consent policies are permissive.
Token lifetime and refresh token policies
Reduce the blast radius of a successful consent phishing attack by limiting token lifetimes. Short-lived access tokens (on the order of an hour) are standard, but the real risk is refresh tokens. Configure refresh token expiration and inactivity timeouts so that even if an attacker obtains a refresh token, it does not remain valid indefinitely. Entra ID's continuous access evaluation (CAE) and token revocation APIs can also force re-evaluation of token validity when risk signals change.
Developer-side hygiene
If your organization builds OAuth-reliant applications, adopt the principle of least privilege in scope requests. Request only the scopes your application actually needs, use incremental consent to defer scope requests until the feature that requires them is invoked, and clearly document in your authorization request why each scope is necessary.
Verify your application's publisher identity. Register redirect URIs tightly; avoid wildcard or overly broad redirect patterns that could be hijacked. Use PKCE (Proof Key for Code Exchange) in authorization code flows to prevent code interception. These are not direct mitigations for consent phishing, but they reduce the surface area for related OAuth attacks and signal trustworthiness to administrators reviewing consent requests.
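The PKCE pair mentioned above is a few lines with the standard library, per RFC 7636's S256 method:

```python
import base64
import hashlib
import secrets

def make_pkce_pair() -> tuple[str, str]:
    """Generate a PKCE code_verifier and its S256 code_challenge (RFC 7636).
    The verifier is random; the challenge is its base64url-encoded SHA-256."""
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge

# The challenge goes in the authorization request
# (&code_challenge=...&code_challenge_method=S256); the verifier is sent only
# in the token exchange, so an intercepted authorization code is useless alone.
```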
Responding to a consent phishing incident
If you suspect a consent phishing compromise, the response playbook differs from the one for credential theft. Resetting the user's password is not sufficient because the attacker's access is token-based.
The immediate steps are to identify and revoke the malicious application's consent grant, revoke all refresh tokens for the affected user, review audit logs for API activity performed by the malicious application's client ID, and assess what data was accessed or exfiltrated during the period the token was valid. In Entra ID, the Revoke-AzureADUserAllRefreshToken cmdlet (or its Microsoft Graph equivalent) handles token revocation. In Google Workspace, administrators can revoke third-party app access through the admin console or the Directory API.
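For Entra ID, both containment actions map onto Graph calls. A sketch, assuming an admin token with the relevant permissions, the grant ID recovered from the earlier audit, and the victim's user ID:

```python
import urllib.request

GRAPH = "https://graph.microsoft.com/v1.0"

def grant_delete_request(token: str, grant_id: str) -> urllib.request.Request:
    """DELETE the malicious app's delegated permission grant."""
    return urllib.request.Request(
        f"{GRAPH}/oauth2PermissionGrants/{grant_id}",
        method="DELETE",
        headers={"Authorization": f"Bearer {token}"})

def session_revocation_request(token: str, user_id: str) -> urllib.request.Request:
    """POST to revokeSignInSessions -- the Graph counterpart of
    Revoke-AzureADUserAllRefreshToken; invalidates the user's refresh tokens."""
    return urllib.request.Request(
        f"{GRAPH}/users/{user_id}/revokeSignInSessions",
        method="POST", data=b"",
        headers={"Authorization": f"Bearer {token}"})

# Containment, given an admin token:
#   urllib.request.urlopen(grant_delete_request(admin_token, malicious_grant_id))
#   urllib.request.urlopen(session_revocation_request(admin_token, victim_user_id))
```

Run both: revoking the grant stops new token issuance to the app, and revoking sessions kills refresh tokens the attacker already holds.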
After containment, investigate whether the same malicious application was consented to by other users in your tenant. Consent phishing campaigns typically target multiple users simultaneously, and one confirmed case should trigger a tenant-wide hunt.
The broader picture
Consent phishing is a symptom of a broader challenge: OAuth governance has not kept pace with OAuth adoption. Organizations that adopted cloud identity years ago may have thousands of third-party applications with user-granted access, many of which were approved without any security review. The attack surface grows organically every time a user clicks "Allow" on a consent prompt they did not fully evaluate.
Treating OAuth integrations with the same rigor as you would treat network access, endpoint software, or vendor relationships is the long-term answer. That means visibility into what applications have access, policies that control who can grant that access, detection when anomalous grants occur, and a response process tuned to the specific mechanics of token-based persistence.
The technical controls exist in most major identity platforms today. The gap is usually in awareness and operational adoption. For engineering and security teams, closing that gap starts with understanding that the consent prompt is an access control decision, and like any access control decision, it should not be left entirely to the end user without guardrails.