Mar 07, 2026 · 6 min read
OpenAI Signed a Pentagon AI Deal After Anthropic Got Blacklisted for Saying No—The EFF Says It’s Full of Loopholes
The Pentagon ended its $200 million Anthropic contract when the company refused to drop restrictions on surveillance and autonomous weapons. Days later, OpenAI stepped in.
Two Companies, Two Very Different Answers
On February 27, 2026, news broke that the Pentagon had blacklisted Anthropic after the AI company refused to remove its restrictions on mass surveillance and autonomous weapons use. The $200 million contract was terminated when Anthropic would not agree to the Pentagon's terms.
Within days, OpenAI announced its own agreement with the Department of Defense. CEO Sam Altman acknowledged the timing was "definitely rushed" and that "the optics don't look good." But the deal went through. OpenAI's advanced AI systems would be deployed in classified environments, with three stated red lines: no mass domestic surveillance, no direction of autonomous weapons, and no high-stakes automated decisions such as social credit systems.
The contrast was stark. One AI company walked away from hundreds of millions of dollars to maintain its principles. The other signed on the dotted line.
The Fine Print Problem
The Electronic Frontier Foundation published a detailed analysis of the amended agreement on March 6, 2026, and the conclusions are damning. The EFF argues that nearly every safeguard in the contract contains language broad enough to drive a surveillance program through.
Take the phrase "consistent with applicable laws." The EFF points out that intelligence agencies have historically embraced expansive interpretations of what the law allows. Programs like the NSA's bulk metadata collection operated for years under legal frameworks that most Americans would not have recognized as permitting mass surveillance.
Then there is the word "intentionally." The agreement states that OpenAI's technology shall not be "intentionally used for domestic surveillance of U.S. persons." But intelligence agencies routinely claim that capturing Americans' communications happens "incidentally" while targeting foreign persons overseas. Under Section 702 of the Foreign Intelligence Surveillance Act, the NSA collects vast amounts of Americans' international communications and then searches through them, a practice critics call "backdoor surveillance."
The Commercially Available Data Loophole
Perhaps the biggest gap in the agreement involves commercially purchased data. The amended contract says the Pentagon will not engage in "deliberately tracking, surveillance, or monitoring of U.S. persons or nationals, including through the procurement or use of commercially acquired personal or identifiable information."
But federal agencies have argued for years that buying data from brokers does not constitute a "search" under the Fourth Amendment. Customs and Border Protection currently uses advertising data from everyday apps to track phones without warrants. ICE buys location data from data brokers. The FBI has purchased commercial databases of Americans' information.
Legal experts have noted that if the government buys massive commercial datasets and then uses AI to analyze them, the result is functionally identical to mass surveillance, even if no individual purchase or query counts as "deliberate tracking."
Corporate Promises Are Not Safeguards
The EFF's core argument is that private agreements between companies and intelligence agencies have never been sufficient to prevent surveillance overreach. The history of U.S. intelligence is a history of programs that technically complied with legal language while violating the spirit of constitutional protections.
As the EFF put it, "many of the world's most notorious human rights atrocities have historically been legal under existing laws at the time." Pointing to legal compliance as a safeguard only works if the laws themselves are adequate. In the case of AI powered intelligence analysis, they are not.
OpenAI staff have also pushed back internally. CNN reported that some employees are "fuming" about the Pentagon deal, viewing it as a betrayal of the company's founding mission to develop AI that benefits humanity broadly, not AI that enhances government surveillance capabilities.
What Actually Needs to Happen
The lesson from Anthropic's stand is that some companies are willing to sacrifice revenue for principles. The lesson from OpenAI's deal is that there will always be another company willing to step in. Corporate ethics alone cannot protect civil liberties when hundreds of millions of dollars are on the table.
What is needed, as the EFF argues, are enforceable legal limits on how AI can be used for surveillance, transparency requirements that go beyond corporate press releases, and genuine oversight mechanisms with teeth. Until Congress acts, the protection of Americans' privacy rests on contractual language that, by the EFF's analysis, was designed to be flexible enough to mean almost anything.
The question is not whether OpenAI's deal contains the right words. It is whether those words will mean anything when an intelligence agency decides it needs to use AI to analyze the communications of millions of Americans and calls it something other than "surveillance."