[2026-02-27] Van Hollen, Markey Demand Hegseth Stop Pressure Campaign Against Anthropic for Refusing to Enable Mass Surveillance and Autonomous Warfare
U.S. Senators Van Hollen and Markey criticized the Department of Defense for threatening to cancel its contract with AI company Anthropic and use the Defense Production Act, after the company sought safeguards against mass surveillance and autonomous weapons deployment. They argued the DOD's threats represent an abuse of government power and urged the department to engage in good-faith negotiations rather than coercive tactics. The senators highlighted concerns over expanding federal surveillance and autonomous weapons risks and called for protections to uphold civil liberties and safety.
Today, U.S. Senators Chris Van Hollen (D-Md.), member of the Appropriations Committee, and Edward J. Markey (D-Mass.), member of the Commerce, Science, and Transportation Committee, sent a letter to Secretary of Defense Pete Hegseth urging the Department of Defense (DOD) to drop its week-long intimidation campaign against Anthropic and engage in good-faith negotiations with all of its AI contractors. The Department of Defense has threatened to terminate its contract with Anthropic, label the company a “supply-chain risk,” and potentially invoke the Defense Production Act against it if, by 5:00 PM Friday, February 27, Anthropic does not drop its requests for safeguards against DOD’s potential use of Anthropic’s AI model for mass surveillance and autonomous-weapons deployment.
In the letter the senators wrote, “These DOD threats are an extraordinary and deeply alarming abuse of government power. DOD is deploying its strongest statutory tools to compel a private contractor to agree to its preferred contractual terms, using false and incoherent arguments about national security to justify its action — all because the contractor, Anthropic, has requested two reasonable safeguards in its contract with DOD. This is retaliation, and it is unacceptable. It is an attempt to bully an American company to surrender critical safety and security safeguards — the very protections the federal government should itself be mandating — and instead provide the Department unlimited power over a powerful AI model.”
The senators continued, “The Department’s resistance to restrictions on mass domestic surveillance is particularly alarming because the Trump administration is already weaponizing federal surveillance tools to chill speech and suppress dissent. … The Department’s threats against the one company willing to push back make clear that the Trump administration has no intention of curtailing its expanding surveillance state. … The Department should end this coercive campaign and instead commit to good-faith negotiations with all its AI contractors to establish enforceable safeguards that protect the American people.”
Text of the letter is below:
Dear Secretary Hegseth,
The Department of Defense’s (DOD) threats to punish an American AI company for refusing to surrender basic safeguards on the use of its AI model represent a chilling abuse of government power. DOD has demanded that its AI contractors replace their standard terms of service with sweeping authorization permitting DOD to use their models for “all lawful purposes.” When one company, Anthropic, sought to retain narrow limits against mass surveillance and autonomous weapons deployment, the Department threatened to cancel the contract, designate the company a “supply chain risk,” and invoke the Defense Production Act to compel compliance. But DOD has it backwards. Our Constitution guarantees these safeguards, and the federal government must uphold them, not weaponize national security designations to evade them. Rather than participate in coercive pressure campaigns, DOD should instead engage in good-faith negotiations with all its AI contractors to establish enforceable safeguards that protect the American people from the misuse of AI.
Over the past year, the DOD awarded contracts of up to $200 million each to several major AI companies, including Anthropic, OpenAI, Google, and xAI. Thereafter, the Department entered into negotiations over AI model usage terms, ultimately asking all four companies to replace their standard corporate terms of service with sweeping authorization that the Department could use their AI models for “all lawful purposes.” Anthropic — the only AI company that had already deployed its model within classified DOD systems — requested two limitations before signing: no use of its AI model for mass domestic surveillance or for autonomous kinetic operations (that is, weapons systems capable of engaging targets without meaningful human decision-making). These were not radical demands; they are baseline guardrails that responsible AI governance requires.
The Department’s rejection of Anthropic’s two modest requests and the threats levied against it are both outrageous and illogical. DOD threatened to discontinue its contract with Anthropic and designate the company as a “supply chain risk” — a label typically reserved for entities linked to foreign adversaries. Such a designation would carry severe consequences, potentially jeopardizing the company’s ability to maintain contracts with other firms that do business with the DOD and inflicting substantial reputational and financial harm. The Department further threatened to invoke the Defense Production Act (DPA) to compel Anthropic to accept sweeping usage terms under the guise of national defense necessity. Notably, the latter two threats — to label Anthropic a supply chain risk and use the DPA against Anthropic — illustrate DOD’s incoherent and extreme position. As one commentator explained: “Either Anthropic is a risk to the DOD and should be expelled from their systems because of that danger or it is so essential to the DOD that our national security would be at risk without unrestrained access to it. It cannot be both.” That DOD continues to make both threats demonstrates that its real goal is unrelated to national security. It’s to force Anthropic to surrender.
These DOD threats are an extraordinary and deeply alarming abuse of government power. DOD is deploying its strongest statutory tools to compel a private contractor to agree to its preferred contractual terms, using false and incoherent arguments about national security to justify its action — all because the contractor, Anthropic, has requested two reasonable safeguards in its contract with DOD. This is retaliation, and it is unacceptable. It is an attempt to bully an American company to surrender critical safety and security safeguards — the very protections the federal government should itself be mandating — and instead provide the Department unlimited power over a powerful AI model.
The Department’s resistance to restrictions on mass domestic surveillance is particularly alarming because the Trump administration is already weaponizing federal surveillance tools to chill speech and suppress dissent. Since the start of President Donald Trump’s second term, we have repeatedly sounded the alarm about the federal government’s expanding surveillance state. Over the past few months, we have written letters to DHS and the Department of State on their arsenal of surveillance technologies — including license plate readers, social media scanning, AI tools, facial recognition technology, and an alleged “domestic terrorist” database — yet we have received practically no answers. The pattern is unmistakable: the federal government is deploying powerful surveillance technologies with no transparency, no accountability, and no regard for civil liberties. The Department’s threats against the one company willing to push back make clear that the Trump administration has no intention of curtailing its expanding surveillance state.
The Department’s resistance to limits on autonomous weapons is equally alarming and places human lives at risk, particularly in the hands of this administration, which has credibly been accused of war crimes and extrajudicial killings. As lawmakers grapple with requirements for meaningful human control over AI used in warfare, the Department’s hostility to those very safeguards raises serious concerns about where it intends to deploy AI without human oversight. Life-and-death decisions to use deadly force must remain in human hands, and the Department must embrace autonomous weapons safeguards rather than threatening companies that try to require them.
The federal government has an obligation to uphold the public’s safety, privacy, and civil liberties. Retaliating against companies that seek to implement reasonable protective measures to those ends is inconsistent with that obligation and undermines the very standards the government should be enforcing. The Department should end this coercive campaign and instead commit to good-faith negotiations with all its AI contractors to establish enforceable safeguards that protect the American people. Thank you for your attention to this important matter.
Sincerely,