Trump Administration Does a Full 180 on AI Oversight, Embracing Rules It Once Mocked
After years of mocking AI regulation as a burden, the Trump administration is suddenly pushing for government oversight of advanced AI models, citing national security fears. This pivot includes resurrecting and rebranding a Biden-era AI safety institute and proposing executive orders for pre-release AI evaluations — a stunning reversal that exposes the administration’s opportunistic approach to tech policy.
The Trump administration, long known for its hostility to AI oversight and regulation, is now embracing many of the very policies it once ridiculed. Reports reveal a sharp pivot driven by national security concerns around the latest AI technologies, especially Anthropic’s “Mythos” model, which can identify cybersecurity vulnerabilities and carries significant risks of misuse.
For years, Trump’s tech advisors, including former “AI and crypto czar” David Sacks, derided AI safety efforts as burdensome and anti-innovation. But now the administration is reportedly preparing an executive order to establish a government-industry working group tasked with evaluating frontier AI systems before they hit the public. This marks a dramatic reversal from the previous hands-off stance.
Central to this new effort is the Center for AI Standards and Innovation (CAISI), a rebranded version of the Biden-era United States AI Safety Institute. CAISI has inked partnerships with tech giants like Google, Microsoft, and xAI to conduct government evaluations of AI models both before and after deployment. According to agency statements, CAISI has already completed over 40 such assessments, including on unreleased cutting-edge models.
White House National Economic Council Director Kevin Hassett framed the move as creating “a clear road map” akin to FDA drug safety protocols to ensure AI systems are safe before release. “Mythos is the first, but it’s incumbent on us to build a system so U.S. AI can be the leader in AI and be safe at the same time,” Hassett said on Fox Business.
This sudden embrace of AI oversight echoes the Biden administration’s 2023 AI Executive Order, which created the original AI Safety Institute and invoked the Defense Production Act to compel safety testing transparency from major AI companies. Yet Trump’s team scrapped the “safety” language and ousted the institute’s inaugural director shortly after taking office in 2025. The new CAISI is now led by Chris Fall, a former Energy Department official and MITRE research VP, replacing Collin Burns, who was abruptly dismissed.
Experts see this as a stark about-face. Rumman Chowdhury, CEO of Humane Intelligence and former US Science Envoy for AI, called it “a 180” for an administration that actively blocked state AI regulations and opposed federal oversight. However, she cautions that “evaluations are a policy tool, not data-driven” and warns the administration may weaponize oversight as a political instrument.
The Trump White House’s renewed focus is less about AI ethics or existential risks and more about immediate national security threats. Anthropic’s refusal to grant the Pentagon unrestricted access to Mythos led to the company being designated a national security threat, a label Anthropic is now contesting in court. Ironically, Trump recently softened his tone toward the company, suggesting future cooperation.
Despite the fanfare, questions remain about whether CAISI has adequate funding and authority to effectively police frontier AI. A 2024 Washington Post investigation highlighted gaps in the government’s AI oversight infrastructure that have yet to be addressed.
In short, the Trump administration’s sudden pivot on AI oversight reveals a pattern of reactive, politically motivated policy shifts rather than consistent leadership. After years of dismissing AI safety as a regulatory nightmare, the administration now claims national security demands a tighter grip on advanced AI — a move that will shape the future of AI governance and U.S. tech dominance amid growing global competition.