Why Anthropic wants the Pentagon to agree not to use its AI for autonomous weapons and mass surveillance — and why the Pentagon is refusing
Andrew RomanoFri, February 27, 2026 at 8:31 PM UTC
In recent days, the artificial-intelligence startup Anthropic — better known as the maker of Claude, a leading ChatGPT competitor — has been clashing with the Pentagon over the dangers of deploying its powerful AI model for two controversial military purposes: building fully autonomous weapons and conducting mass surveillance of Americans.
It’s a fight about more than just Anthropic’s $200 million defense contract, and more than just drones or surveillance. It’s a fight about the future of “the most transformative technology since the splitting of the atom” — and who gets to control it, especially when it comes to killing or spying on humans.
During a closed-door meeting on Tuesday, Defense Secretary Pete Hegseth issued a blunt ultimatum to Anthropic CEO Dario Amodei: give us unfettered access to your AI model for “all lawful uses” by 5:01 pm ET on Friday, or face severe consequences.
Amodei talks about safety more than any of his fellow AI titans, and he has been resisting such demands since January. It’s not that Anthropic wants to disrupt its relationship with the Defense Department, he argued; in fact, Claude is currently the only model that the department uses in its classified systems (an arrangement that dates to 2024).
But according to Amodei, Anthropic has long seen mass domestic surveillance as an ethical red line — and Claude isn’t ready to reliably and responsibly control fully autonomous weapons without any human safeguards (at least not yet).
On Thursday, Amodei officially rejected Hegseth’s “best and final offer,” writing that while he believes “deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries,” he also thinks that “in a narrow set of cases… AI can undermine, rather than defend, democratic values.”
The Defense Department lashed out in response. “It’s a shame that @DarioAmodei is a liar and has a God-complex,” Emil Michael, a top Pentagon official who oversees artificial intelligence, wrote late Thursday on X. “He wants nothing more than to try to personally control the US Military and is ok putting our nation’s safety at risk. The @DeptofWar will ALWAYS adhere to the law but not bend to [the] whims of any one for-profit tech company.”
The question now is how Hegseth will respond. On Tuesday, the defense secretary reportedly threatened two potential consequences. The government could force Anthropic to hand over its technology anyway by invoking the Defense Production Act — and/or it could blacklist the AI giant and block it from doing business with the Pentagon by declaring it a “supply-chain risk,” a penalty usually reserved for companies from adversarial countries such as China.
For the record, no American company has ever been declared a supply-chain risk.
The Pentagon’s position is relatively straightforward. Under President Trump, it has doubled down on using cutting-edge AI for military purposes. In July, the Defense Department awarded contracts worth up to $200 million each to four AI companies: Anthropic, OpenAI, Google DeepMind and Elon Musk’s xAI. The department’s goal? To transform the U.S. military into an “AI-first” force by rapidly integrating the top commercial AI models into warfighting, intelligence and support operations, according to a memo issued last month.
In the past, the government has typically held the upper hand in these sorts of public-private partnerships. As chief Pentagon spokesman Sean Parnell said in an X post on Thursday, “we will not let ANY company dictate the terms regarding how we make operational decisions.”
After all, Lockheed Martin doesn’t tell the Air Force how to fly its F-22s — so Parnell’s assurances that the Pentagon “has no interest” in using AI to “conduct mass surveillance of Americans” or “develop autonomous weapons that operate without human involvement” should be enough, right?
“Here's what we're asking: Allow the Pentagon to use Anthropic's model for all lawful purposes,” Parnell wrote. “This is a simple, common-sense request that will prevent Anthropic from jeopardizing critical military operations and potentially putting our warfighters at risk.”
But experts say AI differs from previous military technologies in two key ways.
First, it’s advancing because of commerce, not because of government. “From nuclear propulsion to stealth to GPS, the state was the primary engine of discovery, and industry was the integrator and manufacturer,” Rear Admiral Lorin Selby, former chief of naval research, recently told CNBC. “Today the commercial sector is the primary driver of frontier capability … and the Department of War is no longer defining the edge of what is technically possible in artificial intelligence — it is adapting to it.”
That shift gives private companies like Anthropic more leverage than they would have had in the past.
Second, even AI’s creators “do not understand how our own AI creations work,” as Amodei once put it — or what they’re capable of. The risk, then, is not just that powerful AI would enable the government to “make a mockery” of the Fourth Amendment’s right to privacy by assembling “scattered, individually innocuous data [about individual Americans] into a comprehensive picture of any person’s life—automatically and at massive scale,” according to Amodei. Or that, in matters of life and death, “fully autonomous weapons cannot be relied upon to exercise the critical judgment that our highly trained, professional troops exhibit every day.”
The risk is also that whatever the phrase “all lawful purposes” encompasses today, it can’t possibly keep up with what AI could do tomorrow.
“Demanding unconditional access before [these] systems are ready is not an assertion of authority. It is a wager that the unknowns will not matter,” Thomas Wright, a senior fellow at the Brookings Institution, explained in The Atlantic. “The danger is not that Silicon Valley will wield too much power over the military. It is that neither will fully understand the systems it is rushing to deploy — and that the consequences of that ignorance will be tested not in a laboratory, but on the world.”
Source: “AOL Breaking”