US Defense Secretary Pete Hegseth sets deadline for Anthropic over military AI access
US Defense Secretary Pete Hegseth has issued a deadline to Anthropic to grant unrestricted military access to its AI technology, warning that failure to comply by February 27 could jeopardize the company's federal contract and its integration into the US military's AI network. Anthropic, founded by former OpenAI researchers, has refused to allow its technology to be used for autonomous weapons deployment or surveillance activities, citing ethical concerns. The dispute highlights broader tensions over AI's role in national security and the balance between technological advancement, ethics, and civil liberties.
In a major development tied to the expanding role of artificial intelligence in defense operations, US Defense Secretary Pete Hegseth has reportedly issued a firm deadline to Anthropic, pressing the AI company to permit unrestricted military access to its technology. During a meeting on Tuesday (February 24) with Anthropic CEO Dario Amodei, Hegseth made clear that failure to cooperate by Friday (February 27) could jeopardize the company’s standing in a key federal contract.
Sources familiar with the discussion described the conversation as cordial but resolute. Amodei maintained Anthropic’s long-standing ethical boundaries, particularly objecting to the deployment of fully autonomous weapons systems and the use of AI for surveillance targeting American citizens. He has previously voiced strong reservations about handing governments unchecked control over advanced AI capabilities.
Anthropic is currently the only major AI developer that has not integrated its systems into the US military’s newly established internal AI network. Defense officials have indicated that if the company refuses to participate, it could face serious consequences, including designation as a supply chain vulnerability. In more extreme circumstances, the Pentagon could invoke the Defense Production Act, granting authority to compel access to Anthropic’s technology for defense-related use.
The standoff reflects broader tensions surrounding AI’s expanding footprint in national security. Critics argue that the growing reliance on AI for battlefield decision-making and intelligence gathering demands stricter oversight. Hegseth, however, has consistently advocated for rapid modernization, emphasizing the need for AI systems that operate without what he describes as ideological or cultural constraints within defense institutions.
Founded by former OpenAI researchers, Anthropic has positioned itself as a proponent of responsible AI development. Amodei has frequently warned of the risks posed by autonomous weapons, AI-directed drones, and large-scale surveillance programs, urging policymakers to establish stronger regulatory safeguards before expanding military deployment. While Anthropic initially received approval to operate within classified defense systems, other firms, such as Google, OpenAI, and xAI, have so far operated in more limited, unclassified capacities. Recently, Pentagon leadership has shown increased preference for companies willing to align fully with its evolving AI strategy.
Last summer, the Department of Defense awarded contracts to four AI companies, each potentially valued at up to $200 million. The uncertainty surrounding Anthropic’s participation now places the company at a crossroads, balancing lucrative government partnerships against its publicly stated ethical commitments. The situation echoes earlier controversies, including backlash over Project Maven, when technology employees protested involvement in military drone and surveillance initiatives.
As defense agencies accelerate AI integration, experts continue to call for clearer regulatory standards to prevent misuse—particularly in areas involving lethal force and domestic monitoring.
The unfolding dispute underscores the complex intersection of technological innovation, military readiness, corporate ethics, and civil liberties, all at a time when AI is rapidly reshaping the global security landscape.