The Pentagon's AI Triangle: How Anthropic, OpenAI and the U.S. military landed in a tech ...
A dispute over surveillance, autonomous weapons and “lawful use” contracts has triggered a rare showdown between the U.S. military and leading AI labs, reshaping the balance of power between governments and artificial intelligence companies.
The battle over artificial intelligence is no longer confined to Silicon Valley boardrooms. It has now reached the highest levels of government and national security. In recent weeks, a dramatic confrontation between the U.S. military and some of the world’s most influential AI companies has exposed a fundamental question shaping the future of technology: who should ultimately control powerful AI systems, governments or the companies that build them?
At the centre of this unfolding conflict are two rival AI labs, Anthropic and OpenAI, and the U.S. defence establishment, led by the Department of Defense, which the Trump administration has recently begun referring to as the “Department of War.” What began as negotiations over how artificial intelligence could be used in military systems has quickly escalated into a government blacklist, a legal battle and a wider debate across the tech industry about ethics, national security and the limits of corporate control.
The origins of the dispute lie in the rapidly expanding role of artificial intelligence inside modern military operations. Over the past year, AI models have increasingly been integrated into defence workflows for tasks such as intelligence analysis, cyber operations, battlefield simulations and large-scale data interpretation. Among the systems used in these contexts was Claude, the flagship AI model developed by Anthropic. The company had built a reputation as one of the more safety-focused players in the industry and had introduced strict guardrails governing how its technology could be deployed.
These guardrails included contractual red lines that prohibited Claude from being used for mass domestic surveillance or for fully autonomous weapons systems that operate without meaningful human oversight. Anthropic’s leadership argued that advanced AI models should never be allowed to independently make lethal decisions and that their use in monitoring civilian populations raised serious ethical concerns.
Tensions began to rise when U.S. defence officials sought broader access to these systems. As military reliance on AI tools increased, the Pentagon pushed for contracts that would grant the government far greater flexibility in how these models could be used. According to reports, officials asked Anthropic to accept language allowing “any lawful use” of its AI systems. In practical terms, this would remove the explicit safeguards the company had insisted on, leaving it up to the government to determine what types of applications were legally permissible.
For Anthropic’s leadership, this demand crossed a clear boundary. Chief executive Dario Amodei publicly stated that the company could not allow its technology to be deployed in ways that might enable mass surveillance of American citizens or remove human oversight from lethal weapons systems. The company refused to modify its contractual protections, arguing that the risks associated with unrestricted AI deployment were too significant to ignore.
Anthropic’s refusal quickly escalated into a direct confrontation with the U.S. government. Defence officials reportedly issued a deadline for the company to reconsider its position. When Anthropic declined to change its terms, the Pentagon responded with an unprecedented move: it formally designated the company a “supply chain risk”, a classification typically reserved for foreign technology companies considered potential national security threats.
The decision had immediate consequences. Government agencies were instructed to stop using Anthropic’s technology, and defence contractors began removing the company’s AI tools from systems connected to military infrastructure. Companies such as Lockheed Martin began phasing out Anthropic-based technology in response to the directive, effectively pushing the AI developer out of the U.S. defence ecosystem.
Yet the space created by Anthropic’s exclusion was filled almost immediately. Within hours of the negotiations collapsing, OpenAI announced a new agreement with the U.S. military that would allow its AI systems to be deployed inside classified government networks. The deal positioned the company as a key partner in the military’s expanding AI strategy.
OpenAI’s stance on the issue differed significantly from Anthropic’s. Instead of insisting on strict contractual restrictions governing military applications, the company initially accepted language allowing “any lawful use” of its technology. Chief executive Sam Altman defended the decision by arguing that governments ultimately have legitimate authority over national security matters. In his view, it would be inappropriate for a private technology company to attempt to dictate how a democratically elected government uses its tools.
The timing of the announcement, however, quickly generated controversy. Critics argued that OpenAI had stepped into a deal that another company had refused on ethical grounds, and the optics of the agreement drew criticism across social media and the wider technology community. A consumer-driven backlash soon followed, with online campaigns encouraging users to cancel their subscriptions to OpenAI services.
The criticism also reached inside the company. Reports suggested that the deal sparked internal debate among OpenAI employees, some of whom questioned whether the partnership aligned with the organisation’s long-standing mission of ensuring artificial intelligence benefits humanity. Altman later acknowledged that the rollout of the announcement had been rushed and that the optics surrounding the deal had been poorly handled.
In response to the backlash, OpenAI moved to revise parts of its agreement with the government. Updated language in the contract introduced more explicit safeguards, including provisions prohibiting the use of its AI systems for domestic surveillance of U.S. citizens. The revisions effectively brought the agreement closer to some of the protections Anthropic had originally tried to secure in its own negotiations.
Anthropic, meanwhile, chose to challenge the government’s decision through legal action. The company announced that it would contest the “supply chain risk” designation in court, arguing that the label had been applied without a legitimate national security basis and was instead used as retaliation for refusing to alter its ethical safeguards. If the challenge succeeds, the case could establish an important precedent regarding how governments interact with domestic AI companies and whether they can effectively blacklist them over disagreements about military use.
Beyond the immediate dispute, the confrontation has revealed a deeper philosophical divide emerging within the AI industry. One view, represented by Anthropic, holds that companies developing powerful AI systems must maintain control over the ethical guardrails embedded in their technology. The opposing perspective, articulated by OpenAI’s leadership, argues that governments should ultimately determine how such technologies are deployed, particularly when national security is involved.
The stakes in this debate are enormous. Artificial intelligence is rapidly becoming a core component of military capability, intelligence gathering and geopolitical competition. Governments increasingly see access to advanced AI as essential to maintaining technological superiority, while the companies building these systems now wield unprecedented influence over how the technology evolves.
The Pentagon’s clash with Anthropic and OpenAI therefore represents more than a disagreement over one contract. It is an early signal of a much larger struggle emerging in the age of artificial intelligence, one that will determine whether the power to shape the world’s most transformative technology ultimately lies with governments, private companies or the legal frameworks that attempt to regulate both.