
May 03, 2026 · 10 min read

If You're Already Logged Into Microsoft 365, This New Phishing Attack Doesn't Need Your Password to Take Over

ConsentFix v3 turned a clever OAuth trick into a turnkey, fully automated phishing pipeline. It abuses Azure CLI's first-party trust inside Microsoft Entra ID, steals refresh tokens that bypass MFA and Conditional Access, and—if you already have an active Microsoft 365 session—doesn't even need to show you a login screen.

[Image: A glowing padlock icon dissolving into pixels on a darkened laptop screen with a small key floating outside, illustrating an authentication token being silently extracted from a Microsoft 365 account without a login prompt]

The Attack That Skips the Login Page

Most phishing kits in 2026 try to clone the Microsoft sign-in page well enough to fool the recipient. ConsentFix doesn't bother. The kit—first surfaced by Push Security in late 2025 and tracked through three iterations into May 2026—abuses a real, fully legitimate Microsoft OAuth flow and never asks the victim to type a password at all. The pitch on the lure page is "verify yourself" through what looks like a Cloudflare Turnstile challenge. The mechanic on the back end is much darker.

Push Security and subsequent reporting from BleepingComputer traced ConsentFix lures to compromised websites that surface in Google search results, with conditional loading that requires the visitor to enter a corporate email address before the page advances. That first filter weeds out researchers, scanners, and consumer Gmail accounts. Targets pass the filter, the page advances, and the OAuth flow opens.

Why Azure CLI Is the Perfect Cover

The brilliance and the danger of ConsentFix sit in a single design choice in Microsoft's identity stack. Azure CLI—the command-line tool every Microsoft cloud admin uses—is registered inside Microsoft Entra ID as a first-party application. First-party means the same trust class as Outlook, OneDrive, Teams, and the Microsoft 365 admin portal. Practically, that translates to four exemptions:

  • No admin consent required. A regular user can grant Azure CLI permission to act on their behalf without an admin in the loop.
  • Excluded from third-party app restrictions. Tenants that block "users may consent to apps from non-verified publishers" still allow Azure CLI.
  • Pre-consented to broad scopes. Azure CLI is implicitly authorized for tenant-wide service permissions and access to legacy and undocumented Microsoft Graph endpoints that third-party apps cannot touch.
  • Not blockable. A tenant administrator cannot disable Azure CLI's app registration without breaking real engineering teams. It is a permanent fixture.

If the attacker can convince Microsoft Entra ID that they are running Azure CLI, the rest of the security stack waves them through. They are no longer some suspicious third party "DocuSign Free Trial" app showing up in audit logs. They are Azure CLI, the same Azure CLI everyone in the tenant uses every day.

The Flow, Step by Step

Reduced to mechanics, ConsentFix takes the standard OAuth 2.0 authorization code flow and turns the victim's browser into the redirect endpoint:

  1. The lure page presents a fake Cloudflare verification step that requires a corporate email address.
  2. The page opens the real Microsoft sign-in URL in a separate window, with Azure CLI as the requesting client_id. Because Azure CLI is a public client that cannot store a secret, no client secret is required to complete the flow.
  3. Microsoft authenticates the victim normally. If the user is already signed into Microsoft 365 in the browser, this step happens silently—no password, no MFA prompt, no Conditional Access challenge—because the existing session is reused.
  4. Microsoft redirects to Azure CLI's localhost callback URL, appending a one-time authorization code to the redirect URL.
  5. The lure page instructs the victim to copy that localhost URL from the address bar and paste it into a "verification" box on the phishing page. (In v1 this was a manual paste; in v2 it became drag and drop; in v3 the page captures the URL automatically, without user intervention.)
  6. The phishing page POSTs the URL to a Pipedream webhook, the serverless workflow platform that ConsentFix v3 uses as its automation backbone.
  7. Pipedream extracts the authorization code, calls Microsoft's token endpoint, and exchanges the code for both an access token and a refresh token. The refresh token lives for up to 90 days, and every refresh produces a new access token without requiring the user to authenticate again.
  8. The tokens land in the operator's tooling—reportedly a "Specter Portal" client—and from that point on, as far as Microsoft Entra ID is concerned, the attacker is the user.
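The code-for-token exchange at the heart of the flow is ordinary OAuth. A minimal sketch of what the back-end step looks like, using Azure CLI's widely documented public client ID (the localhost port and scope string here are illustrative assumptions; Azure CLI picks its redirect port dynamically):

```python
from urllib.parse import urlencode

# Azure CLI's well-known public client ID. Public clients carry no
# secret, so possession of this ID is all a caller needs.
AZURE_CLI_CLIENT_ID = "04b07795-8ddb-461a-bbee-02f9e1bf7b46"
TOKEN_ENDPOINT = "https://login.microsoftonline.com/organizations/oauth2/v2.0/token"

def build_token_request(auth_code: str,
                        redirect_uri: str = "http://localhost:8400") -> tuple[str, str]:
    """Build the POST body that trades a one-time authorization code for
    an access token plus a refresh token. Note what is absent: there is
    no client_secret field, because Azure CLI is a public client."""
    body = urlencode({
        "client_id": AZURE_CLI_CLIENT_ID,
        "grant_type": "authorization_code",
        "code": auth_code,
        "redirect_uri": redirect_uri,  # must match the redirect Microsoft issued
        "scope": "https://graph.microsoft.com/.default offline_access",
    })
    return TOKEN_ENDPOINT, body
```

The point of the sketch is the missing secret: anyone holding the one-time code can complete this exchange, which is exactly why a copied localhost URL is enough.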

The whole sequence completes in under 30 seconds. The victim sees a "verification successful" message and goes about their day. The attacker now has read and send access to the victim's Outlook mailbox, OneDrive files, Teams chats, and—depending on the role—administrative scopes inside Azure subscriptions.

Why MFA and Conditional Access Don't Save You

The instinctive response to "phishing in 2026" is to point at MFA. ConsentFix is the cleanest case study of why MFA is no longer sufficient on its own. Three details break the assumption:

  • The MFA prompt happened earlier. If the victim was already signed in to Microsoft 365 when they hit the lure, the existing session covers the OAuth consent. No second factor is requested for the OAuth code issuance, because the user already proved possession when they first signed in that morning.
  • Refresh tokens survive password resets. Once issued, a refresh token is independent of the password. Rotating the user's password does not invalidate it. Only an explicit token revocation—or the natural 90 day expiry, or a Conditional Access policy that triggers reauthentication—closes the window.
  • Conditional Access can be configured around device, location, and app, but Azure CLI sits in a category that most tenants exempt. Many enterprises explicitly carve out Azure CLI from "block legacy authentication" and "require compliant device" rules, because doing otherwise breaks engineering workflows. ConsentFix takes that exemption and turns it into the attack surface.
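The second bullet is worth making concrete. Once a refresh token exists, minting fresh access tokens is a non-interactive grant that never touches the password or a second factor. A hedged sketch of that renewal request (endpoint and client ID as documented by Microsoft; the scope is an illustrative assumption):

```python
from urllib.parse import urlencode

TOKEN_ENDPOINT = "https://login.microsoftonline.com/organizations/oauth2/v2.0/token"
AZURE_CLI_CLIENT_ID = "04b07795-8ddb-461a-bbee-02f9e1bf7b46"

def build_refresh_request(refresh_token: str) -> tuple[str, str]:
    """A refresh-token grant: no password, no MFA, no interactive step.
    Rotating the user's password does not invalidate this request; only
    explicit revocation or token expiry does."""
    body = urlencode({
        "client_id": AZURE_CLI_CLIENT_ID,
        "grant_type": "refresh_token",
        "refresh_token": refresh_token,
        "scope": "https://graph.microsoft.com/.default",
    })
    return TOKEN_ENDPOINT, body
```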

This is the same structural reason that the Tycoon2FA and Bluekit AiTM kits have eaten so much of the phishing market in the last 18 months. MFA is a binary check at one point in time. Tokens persist. Sessions persist. Once the session cookie or the refresh token is in the wrong hands, the password is no longer load-bearing.

What v3 Actually Automated

The two earlier iterations of ConsentFix were already effective. What v3 adds is operational tempo. The kit now ships preconfigured with accounts on a stack of legitimate services—Outlook, Tutanota, Cloudflare, DocSend, Hunter.io, and Pipedream—each one playing a specific role:

  • Hunter.io and similar lookups resolve target email patterns at a domain. Operators upload a domain, get back the naming convention, and generate plausible inboxes for the lure list.
  • Outlook and Tutanota inboxes serve as sender accounts. Outlook's reputation among other Outlook tenants reduces the chance of getting filtered.
  • Cloudflare fronts the lure pages with valid TLS and country level routing rules. The Cloudflare Turnstile element on the front of the lure is genuine; the targeting filter sits on top of it.
  • DocSend hosts decoy documents. Many lures pretend to be a shared DocSend or SharePoint link.
  • Pipedream runs the back end automation: receives the authorization code, exchanges it for tokens, alerts the operator. No server to maintain. No fingerprint on the operator's own infrastructure.

Researchers have flagged specific phishing domains tied to ConsentFix campaigns including trustpointassurance.com, fastwaycheck.com, and previewcentral.com. The hosting infrastructure has rotated through IPs in the 12.75.0.0/16 and 182.3.0.0/16 ranges. Operators churn domains weekly; defenders chasing IOCs are always one step behind.

Detecting It After the Fact

Because the attack uses a real Microsoft OAuth flow, it leaves real telemetry in Entra ID. The signal exists; the question is whether anyone is parsing it. The high-value indicators in the Microsoft Entra sign-in and audit logs:

  • Azure CLI sign in events from new geographies or autonomous systems. A user in Boston whose Azure CLI suddenly authenticates from a Pipedream IP in Virginia—or from a Tor exit, or from any non corporate ASN—is the cleanest signature.
  • Refresh token issuance to Azure CLI for users who do not actually use Azure CLI. Most knowledge workers have never run az login. A finance VP whose account starts emitting Azure CLI tokens is anomalous on its face.
  • Microsoft Graph reads against unusual endpoints, particularly the legacy and internal scopes that only first party apps can reach. Most tenants do not log granular Graph activity by default; SIEM rules need to be added.
  • OAuth grants accumulating without corresponding interactive sign in events. A token without a recent interactive sign in trail in the same session is a red flag.
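The first two indicators lend themselves to a simple hunting rule. A hedged sketch over exported sign-in records (field names loosely mirror the Microsoft Graph signIn resource; the baseline sets of known CLI users and corporate ASNs are assumptions a real deployment would build from its own history):

```python
# Azure CLI's well-known application ID, as it appears in sign-in logs.
AZURE_CLI_APP_ID = "04b07795-8ddb-461a-bbee-02f9e1bf7b46"

def flag_suspicious_cli_signins(records, known_cli_users, corporate_asns):
    """Return sign-ins where Azure CLI was the client but the user has no
    history of using it, or the request came from a non-corporate ASN."""
    hits = []
    for r in records:
        if r.get("appId") != AZURE_CLI_APP_ID:
            continue  # only interested in Azure CLI as the client
        if r.get("userPrincipalName") not in known_cli_users:
            hits.append((r["userPrincipalName"], "no az-cli history"))
        elif r.get("autonomousSystemNumber") not in corporate_asns:
            hits.append((r["userPrincipalName"], "unfamiliar ASN"))
    return hits
```

In practice the same logic would live in a SIEM query rather than a script, but the shape of the rule is the same: pivot on the Azure CLI app ID, then compare against a per-user baseline.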

The mitigation steps Microsoft and the research community have converged on:

  • Block or scope down public client flows. Conditional Access policies can require token binding to a managed device for sensitive scopes, which defeats the "exfiltrated refresh token used from attacker infrastructure" pattern.
  • Reduce refresh token lifetime. The Microsoft default of 90 days is far too generous. Sensitive accounts should sit at hours, not months.
  • Enable continuous access evaluation on every supported workload. CAE forces token re evaluation when risk signals fire, instead of waiting for the token to expire naturally.
  • Phishing-resistant authentication for everyone, not just admins. Hardware security keys and platform passkeys make the upstream interactive sign-in unphishable, which means the OAuth code flow can no longer be hijacked at step three of the chain.
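When a compromise is suspected, the single most useful response call is session revocation. Microsoft Graph exposes this as the revokeSignInSessions action on the user resource; a minimal sketch of building that request (an admin bearer token, not shown, would be needed to actually send it):

```python
GRAPH_BASE = "https://graph.microsoft.com/v1.0"

def build_revocation_request(user_principal_name: str) -> tuple[str, str]:
    """Return the (method, url) pair for revoking all refresh tokens and
    session cookies issued to one user, forcing a fresh interactive
    sign-in on every device and app."""
    return "POST", f"{GRAPH_BASE}/users/{user_principal_name}/revokeSignInSessions"
```

This is the call that actually closes the 90-day window; a password reset alone does not.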

What This Means for Anyone Who Reads Email at Work

If you are not a security engineer, the takeaway is short. ConsentFix lures look like Microsoft "verify your account" or "review this shared document" pages reached from a search result or an email link. They never ask for your password. They ask you to copy a URL, click a button labeled "verify," or solve a Cloudflare challenge. Each of those gestures completes the attack.

Three rules of thumb cover most of the exposure:

  • Never paste a URL from your address bar into another web page. Legitimate verification flows do not require it. The localhost URL ConsentFix asks for contains a valid OAuth code that the attacker is one HTTP request away from cashing.
  • If a page asks you to "complete sign in" by clicking a Microsoft login link inside it, close the tab and navigate to the service directly. The lure relies on the existing session cookie in your browser. Closing the lure tab does not close that cookie, but it stops the consent step.
  • If you suspect you completed a flow like this, tell your IT team within minutes, not hours. Refresh tokens can be revoked from the Azure portal, but only if someone knows to revoke them. Every hour the token stays alive is another hour the attacker can read your mailbox.

For organizations, this is also a reminder that the perimeter for Microsoft 365 is not the password and is not the MFA prompt. It is the OAuth consent surface, and the OAuth consent surface—particularly for first party apps with implicit trust—has been a soft target for years. ConsentFix v3 is the first kit to weaponize that softness with full automation. It will not be the last.
