Trump Admin Seeks to Strip Contractors of Control Over Government AI Use
The Trump administration is pushing draft policy language that would limit private contractors' ability to dictate how their AI technology is used by federal agencies. This move aims to ensure government control over AI deployments, signaling a shift toward tighter oversight amid concerns over security and ethical use.
The Trump administration is quietly crafting policy language designed to curtail the influence private contractors have over how their artificial intelligence (AI) technologies are deployed within federal agencies. Sources familiar with the ongoing drafting process told Nextgov/FCW that the language emphasizes the government’s authority to determine lawful and appropriate uses of AI products it acquires, rather than leaving those decisions solely to the companies providing the technology.
While it remains uncertain whether this language will be formalized into an executive order or another policy directive, the effort reflects a growing desire within the administration—particularly the Department of Defense—to assert stronger control over AI applications in government missions. A White House official declined to confirm these developments, saying any policy announcements would come directly from the President.
This initiative emerges in the wake of a contentious dispute between the Pentagon and AI company Anthropic. The Department of Defense designated Anthropic as a supply chain risk, forcing federal agencies to remove the company’s AI products from their operations. This move sparked backlash from lawmakers who viewed it as retaliatory, prompting legislative efforts to prevent government blacklisting of contractors.
The administration’s new policy language appears aimed at preventing similar conflicts by explicitly establishing that the government, not vendors, has the final say in how AI tools are used—especially in sensitive areas like autonomous weaponry and domestic surveillance. Draft documents also address managing cybersecurity threats posed by advanced AI models such as Anthropic’s Mythos Preview and OpenAI’s GPT 5.5.
Pentagon officials, including Emil Michael, undersecretary of defense for research and engineering, have publicly acknowledged the need for guardrails on AI use that align with national security priorities. Michael emphasized that any restrictions must be consistent with government values and operational mandates, underscoring the administration’s intent to maintain tight control over AI deployed on defense networks.
This policy shift marks a notable departure from earlier Trump-era stances that favored a more permissive environment for AI innovation. Instead, the administration is signaling a more hands-on approach—one that prioritizes government oversight and control in the rapidly evolving AI landscape, even if that means sidelining contractor preferences.
As AI technology becomes increasingly central to national security and government operations, these developments highlight the administration’s focus on consolidating power over the tools shaping America’s future. For contractors and the tech industry, the message is clear: the government intends to call the shots on how AI is used within its walls.