Pentagon Threatens Retaliation If Anthropic Bars Use of AI for Mass Surveillance | Truthout
The Pentagon, through Secretary of Defense Pete Hegseth, has threatened to blacklist Anthropic unless the AI company allows its tools to be used for autonomous drone attacks and mass surveillance, raising concerns about the implications for AI safety and military use. Anthropic's CEO, Dario Amodei, has voiced worries about autonomous weapons and privacy violations, and the company recently dropped a safety policy aimed at mitigating AI risks, prompting questions about whether the move is related to the Pentagon's demands. The dispute highlights tensions over the use of AI in warfare and surveillance, with experts warning about the dangers of AI-enabled military applications.
Secretary of Defense Pete Hegseth has threatened Anthropic with blacklisting if the AI company refuses to allow its tools to be used for autonomous drone attacks or mass surveillance, a chilling show of the Pentagon's priorities.
In a meeting with the company on Tuesday, Hegseth said that Anthropic must drop its demands for safety restrictions by Friday at 5:01 pm. Otherwise, officials warned, the Pentagon will declare the company a "supply chain risk" and effectively blacklist it. Alternatively, it could invoke the Defense Production Act to force Anthropic to comply.
Sources familiar with the meeting have said that the company's representatives expressed safety concerns over AI's ability to reliably control weapons. They also reportedly told officials that a lack of regulations on AI use in mass surveillance poses risks.
The company’s CEO, Dario Amodei, has repeatedly voiced concerns over these issues.
“I am worried about the autonomous drone swarm, right? The constitutional protections in our military structures depend on the idea that there are humans who would, we hope, disobey illegal orders. With fully autonomous weapons, we don’t really have those protections,” Amodei said in an interview with podcaster Wes Roth.
Amodei also worries that AI could access and process private conversations captured by devices in people's homes, and that this data could be used to label people politically and "undermine" the Fourth Amendment.
However, Anthropic announced after its meeting with Hegseth that it is dropping a central safety policy that would put guardrails on its AI development to mitigate risks posed to society by AI. It’s unclear if the changes are related to the Pentagon’s demands, but the timing raises suspicion.
Legal experts have said it’s unclear if the Trump administration could use the Defense Production Act to force Anthropic’s hand.
Anthropic is negotiating a contract with the Pentagon and has reportedly offered to allow its AI systems to be used for missile and cyber defense. The Pentagon, however, insists that the company permit use of its tools for all military purposes.
The company’s AI model Claude was reportedly used by the Pentagon during its operation to bombard Caracas and abduct Venezuelan President Nicolás Maduro, an operation that killed 83 people, including civilians. A Wall Street Journal report, citing sources familiar, said that the Pentagon made use of Claude through Anthropic’s partnership with Palantir, which has a contract with the U.S. government.
A Pentagon official said in a statement that Hegseth's demands have "nothing to do with mass surveillance and autonomous weapons being used," but the Trump administration has doggedly worked to overstep legal authorities in order to escalate violence against Americans and surveillance of them.
“I want to clarify what responsible AI means at the Department of War. Gone are the days of equitable AI, and other DEI and social justice infusions that constrain and confuse our employment of this technology,” Hegseth said during an address at SpaceX’s headquarters in January. “We will not employ AI models that won’t allow you to fight wars.”
Experts have warned that the use of AI models for warfare is dangerous. A recent study in which a researcher pitted ChatGPT, Claude, and Gemini models against each other in 21 war scenarios found that one of the models deployed a nuclear weapon in 95 percent of the simulated games.