Anthropic wanted the Pentagon to agree not to use its AI for autonomous weapons and ... - Yahoo

President Trump ordered all federal agencies to cease using Anthropic's AI technology, escalating tensions over military and surveillance applications. Anthropic refused to provide unfettered access to its model, citing ethical concerns about mass domestic surveillance and autonomous weapons, which led to the Department of Defense designating it a "Supply-Chain Risk to National Security." The dispute highlights broader issues about private sector control of advanced AI, its ethical use, and the challenges of integrating such technology into military operations.


Anthropic wanted the Pentagon to agree not to use its AI for autonomous weapons and mass surveillance. So Trump ordered the government to stop using it altogether.

Making sense of the clash over who gets to control cutting-edge AI technology: the military or the companies that create it.

Andrew Romano, Reporter

President Trump ordered every federal agency on Friday to “IMMEDIATELY CEASE all use” of technology from the artificial intelligence startup Anthropic, massively escalating an ongoing feud between the company and the Department of Defense.

In recent days, Anthropic — better known as the maker of Claude, a leading ChatGPT competitor — has been clashing with the Pentagon over the dangers of deploying its powerful AI model for two controversial military purposes: building fully autonomous weapons and conducting mass surveillance of Americans.

“THE UNITED STATES OF AMERICA WILL NEVER ALLOW A RADICAL LEFT, WOKE COMPANY TO DICTATE HOW OUR GREAT MILITARY FIGHTS AND WINS WARS!” Trump wrote on social media. “The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War, and force them to obey their Terms of Service instead of our Constitution.”

But the fight between Anthropic and the Pentagon is about more than just a $200 million defense contract, and more than just drones or surveillance. It’s also a fight about the future of “the most transformative technology since the splitting of the atom” — and who gets to control it, especially when it comes to killing or spying on humans.

During a closed-door meeting on Tuesday, Defense Secretary Pete Hegseth issued a blunt ultimatum to Anthropic CEO Dario Amodei: Give us unfettered access to your AI model for “all lawful uses” by 5:01 pm ET on Friday, or face severe consequences.

Amodei talks about safety more than any of his fellow AI titans, and he has been resisting such demands since January. Anthropic wants to work with the Defense Department, he insisted; in fact, Claude is currently the only model that the department uses in its classified systems, an arrangement that dates to 2024.

But according to Amodei, his company has long seen mass domestic surveillance as an ethical red line — and Claude isn’t ready to reliably and responsibly control fully autonomous weapons without any human safeguards (at least not yet).

On Thursday, Amodei officially rejected Hegseth’s “best and final offer,” writing that while he believes “deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries,” he also thinks that “in a narrow set of cases … AI can undermine, rather than defend, democratic values.”

The Defense Department lashed out in response. “It’s a shame that @DarioAmodei is a liar and has a God-complex,” Emil Michael, a top Pentagon official who oversees artificial intelligence, wrote late Thursday on X. “He wants nothing more than to try to personally control the US Military and is ok putting our nation’s safety at risk. The @DeptofWar will ALWAYS adhere to the law but not bend to [the] whims of any one for-profit tech company.”

Before Trump weighed in, observers were unsure how Hegseth would respond. On Tuesday, the defense secretary reportedly threatened two potential consequences. The government could force Anthropic to hand over its technology anyway by invoking the Defense Production Act — and/or it could blacklist the AI giant and block it from doing business with the Pentagon by declaring it a “supply-chain risk,” a penalty usually reserved for companies from adversarial countries such as China.

But then Trump instead moved to unilaterally ban Anthropic from the entire federal government. “We don’t need it, we don’t want it, and will not do business with them again!” he wrote on social media, adding that “there will be a Six Month phase out period for Agencies like the Department of War who are using Anthropic’s products, at various levels.”

The president also warned of “major civil and criminal consequences” if Anthropic is not “helpful during this phase out period.” (Amodei previously vowed to “work to enable a smooth transition to another provider,” if it came to that.)


Shortly after Friday’s 5:01 pm deadline, Hegseth followed through on Tuesday’s threat and directed the Defense Department to designate Anthropic “a Supply-Chain Risk to National Security.”

“Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic,” Hegseth wrote on X.

No other American company has ever been declared a supply-chain risk.

The Pentagon’s position is relatively straightforward. Under Trump, it has doubled down on using cutting-edge AI for military purposes. In July, the Defense Department awarded contracts worth up to $200 million each to four AI companies: Anthropic, OpenAI, Google DeepMind and Elon Musk’s xAI. The department’s goal? To transform the U.S. military into an “AI-first” force by rapidly integrating the top commercial AI models into war fighting, intelligence and support operations, according to a memo issued last month.

In the past, the government has always had the upper hand in these sorts of public-private partnerships. As chief Pentagon spokesman Sean Parnell said in an X post on Thursday, “we will not let ANY company dictate the terms regarding how we make operational decisions.”

After all, Lockheed Martin doesn’t tell the Air Force how to fly its F-22s — so Parnell’s assurances that the Pentagon “has no interest” in using AI to “conduct mass surveillance of Americans” or “develop autonomous weapons that operate without human involvement” should be enough, right?

“Here's what we're asking: Allow the Pentagon to use Anthropic's model for all lawful purposes,” Parnell wrote. “This is a simple, common-sense request that will prevent Anthropic from jeopardizing critical military operations and potentially putting our warfighters at risk.”

But experts say AI is different from previous technologies for two reasons.

First, it’s advancing because of commerce, not because of government. “From nuclear propulsion to stealth to GPS, the state was the primary engine of discovery, and industry was the integrator and manufacturer,” Rear Adm. Lorin Selby, former chief of naval research, recently told CNBC. “Today the commercial sector is the primary driver of frontier capability … and the Department of War is no longer defining the edge of what is technically possible in artificial intelligence — it is adapting to it.”

That gives private companies like Anthropic more leverage than they might have had in the past.

Second, even AI’s creators “do not understand how our own AI creations work,” as Amodei once put it — or what they’re capable of. The risk, then, is not just that powerful AI would enable the government to “make a mockery” of the Fourth Amendment’s right to privacy by assembling “scattered, individually innocuous data [about individual Americans] into a comprehensive picture of any person’s life—automatically and at massive scale,” according to Amodei. Or that, in matters of life and death, “fully autonomous weapons cannot be relied upon to exercise the critical judgment that our highly trained, professional troops exhibit every day.”

The risk is also that whatever the phrase “all lawful purposes” encompasses today, it can’t possibly keep up with what AI could do tomorrow.

“Demanding unconditional access before [these] systems are ready is not an assertion of authority. It is a wager that the unknowns will not matter,” Thomas Wright, a senior fellow at the Brookings Institution, explained in the Atlantic. “The danger is not that Silicon Valley will wield too much power over the military. It is that neither will fully understand the systems it is rushing to deploy — and that the consequences of that ignorance will be tested not in a laboratory, but on the world.”

Filed under: Foreign Entanglements
