Contractor Takeaways from the Anthropic Case - GovWin IQ

The unfolding conflict between Anthropic and the federal government highlights questions surrounding AI innovation, national security and procurement regulations.

Contractor Takeaways from the Anthropic Case

Published: March 04, 2026

Federal Market Analysis | Artificial Intelligence/Machine Learning | Defense & Aerospace | Other Transaction Agreements (OTAs) | Procurement

The standoff between AI developer Anthropic and the Department of Defense/War escalated last Friday after weeks of negotiation. In a post on Truth Social, President Trump directed federal agencies to cease using Anthropic’s AI tools and begin a six-month phase-out period. Shortly after, Secretary Hegseth announced on X that Anthropic had been deemed a “Supply-Chain Risk to National Security,” declaring that “no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.”

The fallout originates from a $200M other transaction agreement (OTA) between Anthropic and DOD to pilot advanced AI capabilities for national security. The dispute centers on the boundaries of acceptable use. Anthropic maintains it cannot, in good conscience, allow its technology to support mass domestic surveillance or fully autonomous weapons missions. The Defense Department, however, argues that its use of AI will remain within lawful purposes and views Anthropic’s proposed limitations as overly restrictive.

Accordingly, several agencies, including Treasury, State and HHS, announced their intent to stop using Anthropic technology products. Likewise, GSA announced the removal of Anthropic from the AI federal sandbox tool, USAI.gov, as well as from the Multiple Award Schedule (MAS) program. Moreover, PSC officials confirmed during a briefing that DOD acquisition officials have been instructed to assess and wind down the use of Anthropic’s AI products.

The unprecedented nature of the situation raises several questions in the contractor community about the legality of the government’s actions and the future of AI procurement. Though I am no expert in procurement law, the question-and-answer format below draws on legal experts’ analysis of how the Anthropic vs. USG clash may impact contractors moving forward.

To what extent is Anthropic able to negotiate with the Department of Defense over how its products may be used in national security applications?

According to an article by Jessica Tillipman, because OTAs are exempt from FAR requirements, the terms of the agreement are set by negotiation rather than regulation. This acquisition method therefore gives Anthropic and DOD latitude to negotiate usage rights for the AI products.

AI technologies introduce entirely new challenges for defining and managing usage rights. Unlike traditional software, AI systems generate dynamic outputs, making authority, oversight and acceptable-use policies central to procurement decisions. The Anthropic situation brings to light longstanding gaps in federal acquisition frameworks around AI technologies, intellectual property and product liability. The outcome of the current dispute will likely influence how future AI procurement standards are shaped.

What does a supply-chain risk designation mean?

The Federal Acquisition Supply Chain Security Act of 2018 (FASCSA) defines “supply chain risk” – the language Secretary Hegseth used for Anthropic – as “the risk that any person may sabotage, maliciously introduce unwanted function, extract data, or otherwise manipulate the design, integrity, manufacturing, production, distribution, installation, operation, maintenance, disposition, or retirement of covered articles so as to surveil, deny, disrupt, or otherwise manipulate the function, use, or operation of the covered articles or information stored or transmitted on the covered articles.”

Under FASCSA, a removal order requires the elimination of the covered article from federal information systems and the exclusion of the source from agency procurement actions. Removal orders apply only to contractors’ federal work; they do not extend to internal enterprise or commercial use of the product.

A publication from Mayer Brown points out that a formal FASCSA order requires the Federal Acquisition Security Council (FASC) to “complete a supply chain risk assessment, consider less intrusive alternatives, and provide notice to the affected source before issuing a recommendation.” It is unclear whether DOD has submitted a formal FASCSA request to the FASC at this point.

What actions should my company take if we use Anthropic products under federal contracts?

Again, it is important to note that the supply chain risk designation applies strictly to the use of Anthropic products in federally contracted work. According to an article by HaystackID, federal contracting organizations should begin assessing their exposure to and use of Anthropic products in federal work. Should the supply chain risk designation on Anthropic be formalized, contractors will need a clear understanding of where and how Anthropic’s models are being used. The article also encourages contractors to map out “plausible disruption scenarios” in the event that Anthropic products are banned in an official capacity.

What is the significance of invoking the Defense Production Act to compel AI providers to relax safeguards?

Prior to last Friday’s fallout, Defense Secretary Hegseth threatened to invoke the Defense Production Act (DPA) during negotiations with Anthropic. The DPA gives the federal government authority to direct and use private industry products and services in the name of national defense. Invoking the DPA against Anthropic would essentially force the company to adapt its AI models to DOD needs without its safeguards in place. According to the HaystackID article, invoking the DPA in this context signals that advanced AI models are strategic assets at the government’s disposal. The article poses a key question contractors may face: “how will your assessments change if your supplier can be compelled by law to alter safeguards, prioritize certain use cases, or share access in ways that do not align with your current policy framework?”

Another viewpoint to consider: with the DPA up for reauthorization this coming September, lawmakers will certainly weigh AI governance in the reauthorization debate, according to a Federal News Network article by Wyatte Grantham-Philips.

What long-term effects could the government’s fallout with Anthropic have on the contractor community?

The government’s recent actions toward Anthropic have been notably assertive, and the situation raises the question of how far the federal government will press its authority over private AI capabilities moving forward. “It chills private companies’ ability to engage frankly with the government about appropriate uses of their technology, which is especially important in national security settings that so often have reduced public visibility…Retaliating against a company for setting tailored, principled conditions on its product’s use undermines basic market freedoms and makes us all less safe,” according to Alexandra Givens, CEO and president of the Center for Democracy and Technology, in an emailed statement to FedScoop.

Looking ahead, contractors will need to place even greater focus on AI documentation and auditability, according to the HaystackID article. Embedding AI-related clauses in vendor agreements that require ongoing notification of safety-policy changes, third-party audits and incidents can add transparency across the vendor base.

The Mayer Brown article also advises contractors to review any FASCSA-implementing clauses in their existing government contracts and subcontracts to understand and prepare for any ramifications that may result from the recent use of the supply chain risk designation on Anthropic.

Final Thoughts

This remains a fluid situation that could ultimately lead to litigation or a negotiated agreement between Anthropic and the federal government. Although the supply-chain risk designation is unprecedented and appears severe, its practical impact may be limited. The legal basis for the designation under FASCSA is uncertain, and much of the government’s strongest messaging has come through social media rather than formal channels. Regardless, contractors should take a cautious approach by reviewing how and where Anthropic tools are used within their portfolios to ensure preparedness for any potential developments.
