Feb 21, 2026 · 5 min read

Microsoft's Copilot Was Silently Reading Your Confidential Emails for Weeks

A code defect in Microsoft 365 Copilot Chat bypassed data loss prevention policies and sensitivity labels, letting the AI summarize confidential emails from Sent Items and Drafts folders without authorization.

[Image: A corporate laptop displaying an email inbox with a translucent AI interface hovering beside it in a glass-walled office setting]

What Happened

Microsoft has confirmed that a bug in its 365 Copilot Chat allowed the AI assistant to read and summarize emails marked as confidential, even when organizations had configured data loss prevention (DLP) policies specifically designed to prevent that access. The flaw persisted for weeks before Microsoft began rolling out a fix.

The issue, tracked internally as CW1226324, specifically affected the "Work Tab" in Copilot Chat. When users interacted with Copilot through the Outlook desktop client, the AI was pulling content from the Sent Items and Drafts folders and summarizing it, completely ignoring the sensitivity labels that should have blocked access.

Microsoft's own documentation acknowledged the gap: while sensitivity labels excluded content from Copilot in specific Office apps like Word and Excel, "the content remains available to Microsoft 365 Copilot for other scenarios, for example, in Teams and in Microsoft 365 Copilot Chat."

The Timeline

The vulnerability was first detected on January 21, 2026, when customers reported anomalous Copilot behavior. Microsoft acknowledged the issue on February 3 and began deploying a fix in early February; by February 11, it was rolling out patches and reaching out to affected users. However, as of mid-February, the fix had not reached full saturation across all environments.

That means for nearly a month, organizations that had invested in Microsoft's enterprise security stack, configured sensitivity labels, and deployed DLP policies were operating under a false sense of protection. Their confidential emails were being processed by an AI system they had explicitly instructed to leave that content alone.

Who Was Affected

Microsoft has not disclosed how many customers were impacted or how many confidential emails were exposed during the vulnerability window. What we do know is that the bug affected paying Microsoft 365 enterprise customers who use Copilot Chat, and that the impact was real enough for the UK's National Health Service to flag it internally as incident INC46740412.

When a national healthcare system is tracking your AI bug as a security incident, the scope is not trivial. Healthcare organizations handle some of the most sensitive data imaginable: patient records, treatment plans, internal communications about staffing and policy. Any AI system processing that content without authorization raises serious compliance questions under regulations like the UK's Data Protection Act and the EU's GDPR.

Microsoft's Response

A Microsoft spokesperson said: "This did not provide anyone access to information they weren't already authorized to see. While our access controls and data protection policies remained intact, this behavior did not meet our intended Copilot experience, which is designed to exclude protected content."

The statement attempts to draw a distinction between who could see the data and how the data was processed. The argument is that since users already had access to their own email, Copilot summarizing it was not a new exposure. But that framing misses the point: organizations deploy DLP policies precisely to control how sensitive data flows through systems, including AI systems. A user being able to read their own email is not the same as an AI ingesting, processing, and potentially storing that content.

A Pattern of AI Overreach

This is not the first time Microsoft's Copilot has overstepped its boundaries. In January 2026, researchers disclosed a "Reprompt" vulnerability that could expose sensitive files through malicious links embedded in documents. According to VentureBeat, Copilot has now ignored sensitivity labels twice in eight months, and no DLP stack caught either instance.

The pattern suggests a structural problem. As companies race to embed AI into every productivity tool, security controls are struggling to keep pace. According to Microsoft's own Cyber Pulse report, while over 80% of Fortune 500 companies deploy AI agents, only 47% have adequate security controls for managing generative AI platforms. That gap between deployment speed and security readiness is where incidents like this one live.

What This Means for Your Organization

If your organization uses Microsoft 365 with Copilot, there are several steps to consider:

  • Check the Microsoft admin center for service alert CW1226324 to confirm the fix has reached your environment
  • Audit your Copilot Chat logs to determine whether confidential content was processed during the vulnerability window (a triage sketch follows this list)
  • Review your DLP policies to understand which sensitivity labels are actually enforced across all Copilot surfaces, not just individual Office apps
  • Consider whether your organization's AI deployment has outpaced your security controls

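For the audit step, one practical starting point is simply to measure how much labeled mail was sitting in the two folders the bug was reading. The sketch below is a minimal, hypothetical example built on the Microsoft Graph mail API, not an official tool: it assumes a delegated token with the Mail.Read scope, and the msip_labels header check and $select behavior should be verified against current Graph documentation before relying on the results.

```python
# Hypothetical triage sketch (not an official tool): inventory sensitivity-labeled
# mail in the folders the Copilot bug was reading, via the Microsoft Graph mail API.
# Token acquisition (e.g. with MSAL) is out of scope; ACCESS_TOKEN is a placeholder.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
ACCESS_TOKEN = "<delegated token with Mail.Read>"  # placeholder
HEADERS = {"Authorization": f"Bearer {ACCESS_TOKEN}"}

def labeled_messages(folder: str) -> list[dict]:
    """Return messages in a well-known mail folder whose internet headers include
    msip_labels, the header sensitivity labels are stamped into on sent mail.
    Drafts may not have headers stamped yet, so this is a best-effort check."""
    url = (f"{GRAPH}/me/mailFolders/{folder}/messages"
           "?$select=subject,receivedDateTime,internetMessageHeaders&$top=50")
    flagged = []
    while url:
        resp = requests.get(url, headers=HEADERS, timeout=30)
        resp.raise_for_status()
        page = resp.json()
        for msg in page.get("value", []):
            headers = msg.get("internetMessageHeaders") or []
            if any(h.get("name", "").lower() == "msip_labels" for h in headers):
                flagged.append({"subject": msg.get("subject"),
                                "received": msg.get("receivedDateTime")})
        url = page.get("@odata.nextLink")  # follow Graph paging until exhausted
    return flagged

for folder in ("sentitems", "drafts"):
    hits = labeled_messages(folder)
    print(f"{folder}: {len(hits)} labeled message(s) found")
```
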
The broader lesson is that sensitivity labels and DLP policies are only as reliable as the systems that enforce them. When an AI assistant can bypass those controls due to a code defect, the entire trust model breaks down. Organizations need to treat AI access to sensitive data with the same rigor they apply to human access, with continuous monitoring, regular audits, and a healthy skepticism about vendor promises.
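
As a small example of the kind of continuous monitoring that paragraph describes, the hypothetical sketch below polls the Microsoft Graph service announcement endpoints for Copilot-related advisories, one way to catch alerts like CW1226324 without waiting to stumble across them in the admin center. Whether a given alert surfaces under service health issues or message center posts is an assumption here, so the sketch checks both; an app token with the relevant service health and message read permissions is assumed.

```python
# Hypothetical monitoring sketch: poll Microsoft Graph service announcements for
# Copilot-related advisories (e.g. the CW1226324 alert the article references).
# Assumes an app token with ServiceHealth.Read.All / ServiceMessage.Read.All.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
ACCESS_TOKEN = "<app token>"  # placeholder; acquire via MSAL or similar in practice
HEADERS = {"Authorization": f"Bearer {ACCESS_TOKEN}"}

def copilot_advisories(endpoint: str) -> list[dict]:
    """Fetch service health issues or message center posts and keep the ones
    that mention Copilot or the specific alert ID."""
    url = f"{GRAPH}/admin/serviceAnnouncement/{endpoint}"
    hits = []
    while url:
        resp = requests.get(url, headers=HEADERS, timeout=30)
        resp.raise_for_status()
        page = resp.json()
        for item in page.get("value", []):
            text = f"{item.get('id', '')} {item.get('title', '')}"
            if "copilot" in text.lower() or "CW1226324" in text:
                hits.append({"id": item.get("id"), "title": item.get("title")})
        url = page.get("@odata.nextLink")  # follow paging
    return hits

for endpoint in ("issues", "messages"):
    for advisory in copilot_advisories(endpoint):
        print(f"{endpoint}: {advisory['id']} - {advisory['title']}")
```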

The Bottom Line

Microsoft's Copilot was designed to make workers more productive. Instead, for nearly a month, it was silently processing the emails organizations had explicitly marked as off limits. The company's response, that no one gained access to data they could not already see, sidesteps the fundamental issue: AI systems are being deployed faster than security frameworks can adapt, and the controls meant to protect sensitive data are failing at the seams.

When 72% of S&P 500 companies cite AI as a material risk in their regulatory filings, incidents like this show why. The question is not whether AI will make mistakes with your data. It is whether your organization is prepared for when it does.