White House AI Vetting Plan Is More Smoke Than Safety
The Trump administration is proposing a government review of AI models before release, but this move falls far short of real oversight. Handing the job to intelligence agencies and tech giants ensures the same conflicts of interest persist, leaving public safety on the chopping block.
The Trump White House is pushing an executive order to create a working group tasked with vetting new AI models before they hit the public. According to The New York Times, this group could include the NSA, the Office of the National Cyber Director, and the director of national intelligence. But don’t be fooled — this is not the kind of independent, rigorous safety check the public desperately needs.
This proposal marks a shift from the administration's earlier efforts to dismantle Biden-era AI safety frameworks. Yet reviewing AI models without any real enforcement power or transparency is a hollow gesture. Worse, the working group would be co-designed with the very companies it is supposed to police. That's like asking the foxes to guard the henhouse.
The real problem is that nearly 80 percent of global AI computing power and the bulk of AI research talent are locked inside private companies. These corporations decide what gets built, what gets tested, and what risks get disclosed. When Anthropic withheld its Mythos model, citing cybersecurity risks, it underscored how much we rely on the discretion of a few CEOs to protect public safety, a system that is inherently unstable and prone to conflicts of interest.
The administration's obsession with outpacing China in AI dominance has accelerated this consolidation of power. Last year's executive order blocking state-level AI regulations only tightens these tech giants' grip. Unlike industries where independent regulators ensure safety (think FDA clinical trials for drugs), the AI industry is left to police itself behind closed doors. Third-party audits are rare and hampered by a lack of access and resources.
The federal government already has a tool in place: the Center for AI Standards and Innovation (CAISI). But under Trump, CAISI has been sidelined and repurposed as a voluntary industry partner focused on narrow national security threats, not broad public safety. Its recent deals with Google DeepMind, Microsoft, and others show it remains too cozy with industry to be a true watchdog.
What’s needed is a network of genuinely independent research labs, funded and equipped to test AI systems comprehensively, transparently, and without industry strings attached. Groups like METR are trying to fill this gap, but without full access to models and data, their work is hamstrung.
The White House's plan is a band-aid on a bullet wound. Real AI safety demands independent oversight that puts the public interest ahead of corporate profits and geopolitical posturing. Until then, we're left trusting the same tech giants that created the problem to fix it, and that's a risk we can't afford.