Pentagon gives Anthropic ultimatum on AI technology: Sources - ABC News
The Pentagon has given AI company Anthropic an ultimatum to agree to its terms for military AI usage by Friday or face compelled compliance through legal measures, including potentially invoking the Defense Production Act. The dispute centers on Anthropic's red lines concerning autonomous weapons and domestic surveillance, although the Pentagon claims its actions are lawful. If Anthropic does not comply, the Pentagon may also designate it a "supply chain risk" to halt its business with military partners. The situation remains sensitive, with ongoing negotiations and concerns over AI's role in military operations.
The two are in a dispute over how the Pentagon uses Anthropic's technology.
The Defense Department is giving AI company Anthropic an ultimatum to agree to its terms for the use of AI technology in the military by a Friday deadline or the Pentagon will force it to comply, according to a Pentagon official and a source familiar with the matter.
The Pentagon has been using Anthropic's technology as part of a $200 million pilot program in the Department of Defense. But during a Tuesday meeting with Defense Secretary Pete Hegseth, Anthropic CEO Dario Amodei walked through Anthropic's red lines, which include autonomous weapons -- where AI, not humans, makes final targeting decisions -- and mass domestic surveillance of American citizens, the source said.
The Pentagon official said the dispute with Anthropic has nothing to do with mass surveillance or autonomous weapons, adding that the Pentagon has always followed the law.
"Legality is the Pentagon's responsibility as the end user," the official said.

Anthropic in a statement on the meeting said, "Dario expressed appreciation for the Department's work and thanked the Secretary for his service."
"We continued good-faith conversations about our usage policy to ensure Anthropic can continue to support the government's national security mission in line with what our models can reliably and responsibly do."
Axios was the first to report on the meeting.
Elon Musk's xAI, Google and OpenAI are also part of the program. The Pentagon source said Grok -- the AI tool developed by Musk's xAI -- is on board with being used in a classified setting, while the other companies are close.
If Anthropic doesn't comply, Hegseth said in the meeting he would invoke the Defense Production Act, a Cold War-era law that gives the president emergency authority to direct private companies to prioritize orders from the federal government, and force the company to allow its model to be used for the military's needs, the Pentagon official told ABC News.

The Defense Production Act was invoked by both President Donald Trump in his first term and then-President Joe Biden during the COVID pandemic.
The Pentagon official added that Hegseth said he would also label Anthropic a "supply chain risk," a penalty usually reserved for foreign adversaries that would halt the company's business with the Pentagon and its partners.
A source familiar with the meeting said it was respectful, with neither side raising its voice, but it ended with the Pentagon making clear it would punish Anthropic if the company didn't agree to its terms.
In response to ABC News' reporting last week that Anthropic and the Pentagon were at odds in negotiations over the continued and future use of Anthropic's AI on the Pentagon's classified and unclassified systems, chief Pentagon spokesperson Sean Parnell said in a statement: "The Department of War's relationship with Anthropic is being reviewed. Our nation requires that our partners be willing to help our warfighters win in any fight. Ultimately, this is about our troops and the safety of the American people."
The Wall Street Journal has reported that Anthropic's AI tool Claude was used in the U.S. military's operation to capture former Venezuelan President Nicolás Maduro, which included bombing several sites.
"We cannot comment on whether Claude, or any other AI model, was used for any specific operation, classified or otherwise," an Anthropic spokesperson told ABC News.