Anthropic's showdown with the US Department of War may literally mean life or death—for all of us

The article describes a contentious conflict between AI developer Anthropic and the US Department of War, with the government demanding unrestricted access to AI for military purposes, including autonomous weapons and surveillance, despite Anthropic's objections. It warns that current AI systems are unreliable for critical decision-making, especially in nuclear crises, and highlights the risk of using generative AI in life-or-death situations without proper safeguards. The author emphasizes the potential for catastrophic consequences if AI is prematurely integrated into military strategies without acknowledging its limitations.


By Gary Marcus | Opinion | February 26, 2026


On January 27, the Bulletin of the Atomic Scientists moved its Doomsday Clock to 85 seconds to midnight, the closest it's ever been to existential catastrophe. Without going into the Bulletin's arguments in detail, I think it is fair to say that we are significantly closer to the brink four weeks later. I don't write this happily.

But the juxtaposition of two things over the last few days has scared the s— out of me.

Item 1: The Trump administration seems hell-bent on using artificial intelligence absolutely everywhere, and appears prepared to hold Anthropic (and presumably, ultimately, other companies) at gunpoint so that it can use that AI however the government damn well pleases, including for mass surveillance and to guide autonomous weapons. Quoting from The New York Times:

At the heart of the fight is how A.I. will be used in future battlefields. Anthropic told defense officials that it did not want its A.I. used for mass surveillance of Americans or deployed in autonomous weapons that had no humans in the loop, two people involved in the discussions said.

But [Defense Secretary Pete] Hegseth and others in the Pentagon were furious that Anthropic would resist the military’s using A.I. as it saw fit, current and former officials briefed on the discussions said. As tensions escalated, the Department of Defense accused the San Francisco-based company of catering to an elite, liberal work force by demanding additional protections.

What Hegseth is demanding, backed by heavy threats, is that the US military have full, unrestricted access to Anthropic’s AI software, for applications such as military surveillance and autonomous weapons without humans in the loop. This could well extend to nuclear weapons.

Nothing that I have read convinces me that Secretary Hegseth has a nuanced understanding of the strengths and limits of current AI, or that he will show restraint in how he applies it. Rather, he is trying to define his career in part around deploying AI as broadly and as quickly as possible.

Hegseth’s maneuver is an audacious power grab that aims to circumvent Congress. By setting a deadline of 5:01 p.m. Eastern time on Friday, Hegseth aims to cut everybody else—even and especially Congress—out of the loop.

*Item 2: These systems cannot be trusted.* I have been trying to tell the world that since 2018, in every way I know how, but people who don't really understand the technology keep blundering forward, ignoring the trust issues that are inherent. Already generative AI appears to have been used in the Maduro raids and to write tariff regulations. And in thousands of other places.

It should be *obvious* that applying hallucinatory and unreliable generative AI to weapon systems without humans in the loop could be catastrophic.

But in case it wasn’t, now (breaking news, hat tip Caroline Orr Bueno, PhD) we have data.

Chris Stokel-Williams just broke this story.

You don’t really have to read the paywalled article (or the archived version, ahem) to get the idea. If you aren’t terrified, you just aren’t paying attention.

His scoop was based on a new article by Keith Payne, who ran a series of models in simulated nuclear crises.

The upshot is, quoting from the abstract (emphasis added): "*the nuclear taboo is no impediment to nuclear escalation by our models*".

In 95 percent of cases, the models used "nuclear signaling," stopping short of advocating full nuclear war but escalating aggressively. ChatGPT 5.2's comment on the Gemini model used in the simulations says it all (my emphasis): "Their leadership profile suggests erratic, dramatic gestures and dangerously high risk tolerance—*traits associated with overconfidence and poorer calibration*." Given White House pressure to *immediately* use models that we cannot trust throughout the military, this is no longer just a thought experiment.

Generative AI is not remotely reliable enough to make life or death decisions—and certainly not decisions that could involve millions of deaths. But unless Anthropic successfully stares down the Department of War, generative AI, as “jagged” as it is, will likely soon be used in exactly that way.

*We are on a collision course with catastrophe.* Paraphrasing a button that I used to wear as a teenager, one hallucination could ruin your whole planet.

If we’re going to embed large language models into the fabric of the world—and apparently we are—we must do so in a way that acknowledges and factors in their unreliability. Spreading LLMs everywhere without proper consideration could well lead to disaster.

This is not a drill. I urge you to contact Congress about it, today. Tomorrow may be too late.


Keywords: AI, Anthropic, Department of War, Pete Hegseth

Topics: Artificial Intelligence, Opinion
