
Mar 15, 2026 · 5 min read

Anthropic Said No to Mass Surveillance. The Pentagon Called It a Threat.

The dispute between Anthropic and the US Department of Defense has revealed how AI could be used to build detailed dossiers on Americans' private lives, and what happens when a company tries to prevent it.

The Ultimatum

In late February 2026, Defense Secretary Pete Hegseth gave Anthropic CEO Dario Amodei an ultimatum: accept the Pentagon's demand to remove restrictions on how the military could use Claude, Anthropic's AI system, or be labeled a supply chain risk. The deadline was 5 PM on Friday, February 27. Amodei refused.

The Pentagon wanted Anthropic to renegotiate its government contracts to permit "all lawful use" of Claude. Amodei objected to two specific applications: fully autonomous weapons systems and mass surveillance of American citizens. The second objection is what turned a contract dispute into a privacy crisis that revealed the government's ambitions for AI-powered domestic surveillance.

[Image: The Pentagon viewed from a distance, with a subtle digital overlay in the foreground glass]

What the Government Wants to Do

According to Bloomberg's reporting, the specific surveillance concern centered on a practice that has been growing quietly for years: the US government purchasing massive commercially available datasets from data brokers and using AI to analyze them. These datasets contain information bought and sold on the open market, including location data from apps, browsing histories, purchase records, social media activity, and other digital traces that Americans generate every day.

Individually, these data points are mundane. But when processed by a large language model like Claude, they can be assembled into detailed dossiers on individual Americans' private lives. Anthropic's concern was that AI analysis could reveal political views, personal associations, sexual orientation, browsing habits, and other intimate details that no individual data point would expose on its own.

The legal framework, or lack of one, makes this possible. Federal agencies have argued that purchasing commercially available data is not a search under the Fourth Amendment. If the data is already being sold on the open market, the reasoning goes, there is no reasonable expectation of privacy. Courts have not definitively settled this question, and domestic privacy laws have not caught up with the reality of what AI can extract from aggregated commercial data.

The Retaliation

When Anthropic refused the Pentagon's terms, the Trump administration designated the company a supply chain risk, a label historically reserved for companies tied to foreign adversaries like Huawei. All of Anthropic's government contracts were canceled. The company was effectively blacklisted from working with any federal agency.

Anthropic filed a lawsuit on March 9 in the US District Court for the Northern District of California, calling the administration's actions unprecedented and unlawful and arguing that they threaten the company with irreparable harm. Time magazine described Anthropic as the most disruptive company in the world, based in part on this confrontation.

Meanwhile, OpenAI, Anthropic's largest competitor, has been more willing to work within the government's terms. The Intercept reported that OpenAI's position on military surveillance and autonomous weapons amounts to "you're going to have to trust us," with fewer explicit restrictions on how its technology can be used by government clients.

Why This Matters for Everyone

The Anthropic dispute is not really about one company's contract terms. It is about whether AI will be used to create a surveillance infrastructure that can profile any American citizen from commercially available data, without a warrant, without judicial oversight, and without the subject ever knowing.

This is the same advertising data ecosystem that tracks you across websites and apps every day. The location data from your weather app. The browsing history your ISP collects. The purchase records from your loyalty cards. Individually, you might not care about any single data point. But AI changes the equation: it can connect, correlate, and draw inferences from aggregated data in ways no human analyst could.

The EFF has been tracking this convergence of advertising surveillance and government surveillance, noting in a March 2026 report that the online advertising industry has built a massive surveillance machine and the government has figured out that it does not need to build its own. It can simply buy access to the one that already exists.

The Limits of Corporate Resistance

Anthropic's refusal to enable mass surveillance is notable, but it also highlights the fragility of relying on corporate ethics as a privacy safeguard. One company said no. Another said yes. The government's surveillance ambitions are not blocked; they are merely rerouted to a more compliant vendor.

The real protection against AI-powered mass surveillance cannot depend on which company wins a government contract. It requires legal restrictions: a warrant requirement for government access to commercially available data, limits on what can be inferred from aggregated datasets, and transparency about how AI is used in domestic intelligence operations.

Until those legal protections exist, the data that advertisers collect about you today could become the dossier that a government AI compiles about you tomorrow. Anthropic drew a line. But the underlying capability, and the government's desire to use it, remains.