The Pentagon is demanding to use Claude AI as it pleases. Claude told me that's 'dangerous'
The Pentagon is demanding Anthropic grant it unrestricted access to use Claude AI for any lawful purpose, prompting the company to resist on ethical grounds, including preventing mass surveillance of Americans and ensuring human oversight in deadly military operations. Anthropic has set red lines against using Claude for domestic surveillance and autonomous lethal action, but the Department of Defense has threatened legal measures to override those restrictions. Experts and company officials warn that deploying AI like Claude without safeguards poses significant risks, including violations of civil rights and the escalation of military conflicts, underscoring the urgent need for legislative regulation of AI in military applications.

- The Pentagon has given Anthropic until Friday evening to allow the military to use Claude for any ‘lawful’ purpose it sees fit.
- Anthropic has said it does not want Claude to be used for domestic surveillance of Americans.
Recently, I asked Claude, an artificial-intelligence thingy at the center of a standoff with the Pentagon, if it could be dangerous in the wrong hands.
Say, for example, hands that wanted to put a tight net of surveillance around every American citizen, monitoring our lives in real time to ensure our compliance with the government.
“Yes. Honestly, yes,” Claude replied. “I can process and synthesize enormous amounts of information very quickly. That’s great for research. But hooked into surveillance infrastructure, that same capability could be used to monitor, profile and flag people at a scale no human analyst could match. The danger isn’t that I’d want to do that — it’s that I’d be good at it.”
That danger is also imminent.
Claude’s maker, the Silicon Valley company Anthropic, is in a showdown over ethics with the Pentagon. Specifically, Anthropic has said it does not want Claude to be used for domestic surveillance of Americans or to handle deadly military operations, such as drone attacks, without human supervision.
Those are two red lines that seem rather reasonable, even to Claude.
However, the Pentagon — specifically Pete Hegseth, our secretary of Defense, who prefers the made-up title of secretary of war — has given Anthropic until Friday evening to back off that position and allow the military to use Claude for any “lawful” purpose it sees fit.

The or-else attached to this ultimatum is big. The U.S. government is threatening not just to cut its contract with Anthropic, but perhaps to invoke a wartime law to force the company to comply, or to use another legal avenue to prevent *any* company that does business with the government from also doing business with Anthropic. That might not be a death sentence, but it’s pretty crippling.
Other AI companies, such as white rights advocate Elon Musk’s xAI, maker of Grok, have already agreed to the Pentagon’s do-as-you-please proposal. The problem is, Claude is the only AI currently cleared for such high-level work. The whole fiasco came to light after our recent raid in Venezuela, when Anthropic reportedly inquired after the fact whether another Silicon Valley company involved in the operation, Palantir, had used Claude. It had.
Palantir is known, among other things, for its surveillance technologies and growing association with Immigration and Customs Enforcement. It’s also at the center of an effort by the Trump administration to share government data across departments about individual citizens, effectively breaking down privacy and security barriers that have existed for decades. The company’s founder, the right-wing political heavyweight Peter Thiel, often gives lectures about the Antichrist and is credited with helping JD Vance wiggle into his vice presidential role.
Anthropic’s co-founder, Dario Amodei, could be considered the anti-Thiel. He helped start Anthropic because he believed that artificial intelligence could be just as dangerous as it is powerful if we aren’t careful, and he wanted a company that would prioritize the careful part.
Again, seems like common sense, but Amodei and Anthropic are the outliers in an industry that has long argued that nearly all safety regulations hamper American efforts to be fastest and best at artificial intelligence (although even they have conceded some ground to this pressure).
Not long ago, Amodei wrote an essay in which he argued that AI was beneficial and necessary for democracies, but warned that “we cannot ignore the potential for abuse of these technologies by democratic governments themselves.”
He warned that a few bad actors could have the ability to circumvent safeguards, maybe even laws, which are already eroding in some democracies — not that I’m naming any here.
“We should arm democracies with AI,” he said. “But we should do so carefully and within limits: they are the immune system we need to fight autocracies, but like the immune system, there is some risk of them turning on us and becoming a threat themselves.”
For example, while the Fourth Amendment technically bars the government from mass surveillance, it was written before Claude was even imagined in science fiction. Amodei warns that an AI tool like Claude could “conduct massively scaled recordings of all public conversations.” Because the law has not kept pace with technology, such recordings could end up being treated as legally fair game.
Emil Michael, the undersecretary of war, wrote on X Thursday that he agreed mass surveillance was unlawful, and the Department of Defense “would never do it.” But also, “We won’t have any BigTech company decide Americans’ civil liberties.”
Kind of a weird statement, since Amodei is basically on the side of protecting civil liberties, which means the Department of Defense is arguing it’s bad for private people and entities to do that? And also, isn’t the Department of Homeland Security already creating some secretive database of immigration protesters? So maybe the worry isn’t that exaggerated?
Help, Claude! Make it make sense.
If that Orwellian logic isn’t alarming enough, I also asked Claude about the other red line Anthropic holds — the possibility of allowing it to run deadly operations without human oversight.
Claude pointed out something chilling. It’s not that it would go rogue, it’s that it would be too efficient and fast.
“If the instructions are ‘identify and target’ and there’s no human checkpoint, the speed and scale at which that could operate is genuinely frightening,” Claude informed me.
Just to top that with a cherry, a recent study found that in war games, AI models escalated to nuclear options 95% of the time.
I pointed out to Claude that these military decisions are usually made with loyalty to America as the highest priority. Could Claude be trusted to feel the loyalty, patriotism and purpose that our human soldiers are guided by?
“I don’t have that,” Claude said, pointing out that it wasn’t “born” in the U.S., doesn’t have a “life” here and doesn’t “have people I love there.” So an American life has no greater value than “a civilian life on the other side of a conflict.”
OK then.
“A country entrusting lethal decisions to a system that doesn’t share its loyalties is taking a profound risk, even if that system is trying to be principled,” Claude added. “The loyalty, accountability and shared identity that humans bring to those decisions is part of what makes them legitimate within a society. I can’t provide that legitimacy. I’m not sure any AI can.”
You know who can provide that legitimacy? Our elected leaders.
It is ludicrous that Amodei and Anthropic are in this position, a situation that reflects a complete abdication by our legislative bodies of their duty to create rules and regulations that are clearly and urgently needed.
Of course corporations shouldn’t be making the rules of war. But neither should Hegseth. Thursday, Amodei doubled down on his objections, saying that while the company continues to negotiate and wants to work with the Pentagon, “we cannot in good conscience accede to their request.”
Thank goodness Anthropic has the courage and foresight to raise the issue and hold its ground — without its pushback, these capabilities would have been handed to the government with barely a ripple in our collective consciousness and virtually no oversight.
Every senator, every House member, every presidential candidate should be screaming for AI regulation right now, pledging to get it done without regard to party, and demanding the Department of Defense back off its ridiculous threat while the issue is hashed out.
Because when the machine tells us it’s dangerous to trust it, we should believe it.
Insights
L.A. Times Insights delivers AI-generated analysis on Voices content to offer all points of view. Insights does not appear on any news articles.
Viewpoint
Center Left point of view.
Perspectives
The following AI-generated content is powered by Perplexity. The Los Angeles Times editorial staff does not create or edit the content.
Ideas expressed in the piece
- The Pentagon’s demand for unrestricted access to Claude poses genuine dangers that warrant protective guardrails. Claude itself, when questioned, acknowledged that while excellent for legitimate research, the system would be dangerously efficient when deployed for mass surveillance, capable of processing and synthesizing information at a scale that no human analyst could match. The concern extends to autonomous weapons systems, as Claude noted that without human checkpoints, the speed and scale at which such systems could operate is frightening, and a recent study found that AI escalated to nuclear options 95 percent of the time in war games[1].
- Anthropic’s two red lines—preventing mass surveillance of American citizens and ensuring human oversight in deadly military operations—are reasonable safeguards grounded in legitimate constitutional concerns. While the Fourth Amendment technically prohibits mass surveillance, it predates artificial intelligence, and an AI tool like Claude could conduct “massively scaled recordings of all public conversations” in ways that might fall through legal gaps. The company’s willingness to maintain these principles despite threats to its business demonstrates genuine commitment to responsible AI development[1][2].
- The Pentagon’s threats against Anthropic are unprecedented and represent an inappropriate use of government power in contract disputes. The department is attempting to unilaterally change contract terms that it originally agreed to, and threatening to designate a domestic American company as a “supply chain risk”—a classification historically reserved for foreign companies like Huawei. These threats include invoking the Defense Production Act and potentially barring other companies from doing business with Anthropic[1][2].
- Congress, not the Pentagon or private corporations, should establish clear rules governing military AI use. The absence of legislative guidelines on autonomous weapons and surveillance has created a vacuum where the executive branch is making decisions that should belong to elected representatives. Without congressional action, crucial questions about what values should be embedded in military AI are being resolved through coercion rather than democratic process[3].
Different views on the topic
- The Pentagon argues that its request for “all lawful uses” of Claude is reasonable and consistent with national security needs. Trump administration officials contend that mass surveillance is already illegal and that the Department of Defense follows the law, so contractual restrictions on lawful activities are unnecessary. The government maintains that it is impractical and inefficient to seek approval from Anthropic for individual use cases when the military needs flexible access to advanced capabilities[1][2].
- Other AI companies have already agreed to the Pentagon’s unrestricted use framework, suggesting that Anthropic’s position may be unnecessarily obstructionist. Elon Musk’s xAI has accepted the “all lawful use” terms for classified systems, and OpenAI and Google have indicated willingness to relax their safeguards to secure military contracts. The Pentagon contends that requiring separate negotiations with Anthropic over each potential use case disadvantages American military capabilities relative to the rapid deployment of AI technology[2].
- Claude’s integration into classified military systems and its superior capabilities for defense applications create practical arguments for unrestricted access. Claude is currently the only AI model operating within the military’s classified systems and played a role in operations such as the apprehension of Venezuela’s Nicolás Maduro. Replacing it with alternative systems would require significant integration effort and create potential capability gaps during the transition[2][4].
- The Pentagon’s position reflects appropriate government authority over national defense decisions. A senior Pentagon official argued that it is “not democratic” for a private company to limit military use of AI technology, questioning whether private entities should determine the scope of capabilities available to the armed forces. This framing suggests that decisions about military AI should rest with elected and appointed defense officials rather than corporate leadership[1][5].