Hegseth threatens to blackball Anthropic AI - Responsible Statecraft
The Pentagon has demanded that Anthropic drop its AI guardrails regarding autonomous military use or risk losing a $200 million contract, with Defense Secretary Pete Hegseth warning of potentially blacklisting its AI model, Claude. Anthropic advocates for restrictions on using its technology for autonomous targeting and surveillance, but the Pentagon seeks unrestricted access for lawful operations. A recent meeting between Anthropic CEO Dario Amodei and Pentagon officials highlighted disagreements over intended AI usage, especially after a joint operation in Venezuela and as competitors like xAI adopt fewer restrictions.
The Pentagon is demanding Anthropic drop its AI guardrails regarding its use in autonomous weapons systems and surveillance by Friday, or lose its $200 million contract.

Anthropic says that its AI technologies should not be used for completely autonomous military targeting or to “spy on Americans en masse.” But the Pentagon says it must be able to use Anthropic’s model for “all lawful use cases.”

Defense Secretary Pete Hegseth warned that if Anthropic does not back down, the Pentagon will render Anthropic’s AI model, Claude, a “supply chain risk,” giving it a black mark for other contracts.
In an apparent public jab at Anthropic back in January, Hegseth said, “We will not employ AI models that won’t allow you to fight wars.”
DoD officials have admitted, however, that gutting and replacing Claude within its current operations would be difficult.
Today’s ultimatum follows a meeting between Hegseth and Anthropic CEO Dario Amodei, in which Amodei held to the company’s stated guardrails on how it wants its technology to be used.
As part of a collaboration Anthropic has with Palantir, the U.S. military used Anthropic’s Claude model to prepare for an operation that removed Venezuelan leader Nicolas Maduro from power last month. Subsequent disagreement with Palantir over Claude’s use in the Maduro raid has led to a greater rift with the Pentagon.

Anthropic’s competitors have not all held to its guardrails. Yesterday, xAI agreed that its tech could now be used for classified purposes, under any lawful use by the DoD.
OpenAI, Anthropic, Google, and xAI all received $200 million DoD contracts in July 2025 to “develop prototype frontier AI capabilities to address critical national security challenges in both warfighting and enterprise domains.” Of those four companies, Anthropic was the first deployed for use in classified settings, where it can work with other partners like Palantir as part of that approval.