Pentagon and AI: A Failure of Values

The Pentagon ended its contract with AI developer Anthropic due to disagreements over the ethical use of AI in autonomous weapons and surveillance, with President Trump criticizing the company’s political stance. The dispute highlights broader issues of moral accountability in defense practices, as the department seeks to control AI applications amid concerns about legal and ethical implications. Additionally, recent military actions, such as using lasers to down drones without FAA approval, raise questions about coordination and adherence to legal standards. Overall, the article underscores ongoing tensions between national security interests, ethical considerations, and regulatory oversight in AI and military operations.


The dispute between the Pentagon and artificial intelligence developer Anthropic, which ended in divorce late yesterday, had been portrayed as a tug-of-war over who controls how weapons technology can be used, and over the ugly, often contradictory pressures being brought to bear on the company.

More correctly, however, it is a debate about both constraints on power and the department's continuing failure to take moral values into account in any of its practices.

The Defense/War Department has been using Claude, the basic Anthropic AI package, in an increasing number of ways since signing a contract in 2024. It has become enmeshed with projects ranging from surveillance by the National Security Agency to deployment as part of the recent live attacks by the military in Venezuela.

Anthropic’s leadership says it wants guarantees that its artificial intelligence won’t be aimed at domestic surveillance or deployed in autonomous weapons that have no humans in the loop. The Pentagon’s argument is simple: It says it can use whatever it buys however it chooses without comment from its developer, particularly one with “woke” concerns. Nevertheless, the Pentagon has said it currently does not plan either bad outcome.

So, in one corner we have a corporate developer, Dario Amodei, who wrote in an essay that he considers it "illegitimate" to use AI for "domestic mass surveillance and mass propaganda" and that AI-automated weapons could greatly increase the risks "of democratic governments turning them against their own people to seize power." In the other, we have the ever-angry Defense Secretary Pete Hegseth, who shuns all rules and accountability in the name of maintaining a "lethal" military.

The Dispute

Late yesterday, Trump provided the answer: The government will stop using Anthropic AI altogether, complicating its own entrenched use of the software in national intelligence and defense work. Trump said Anthropic was a “radical Left AI company run by people who have no idea what the real World is all about” led by “leftwing nut jobs.”

For Trump and Hegseth, it was about power and what should happen to anyone who bucks the Pentagon or Trump’s government.

Though there must have been a million ways to address the differences here, Hegseth and Trump came up only with threats – even contradictory ones. Hegseth wanted both to cut the company off from government business by declaring it a supply chain "threat" and to force it to provide its AI models without restrictions through the Defense Production Act.

But the bigger issue is that this Pentagon, this war/defense secretary, cannot seem to handle serious questions about values and morality.

This contract with Anthropic aside, we have seen the same stolid refusal to engage in any debate over values: in his efforts to oust women and non-White general officers and his disdain for anything smacking of "diversity" in a significantly mixed-race military; in the legal and moral ramifications of killing survivors at sea; in the deployment of armed troops into U.S. cities; in the dismissal of the value of shared alliances and treaties; and in efforts to demote Sen. Mark Kelly, D-Ariz., over remarks that mirror the military's own code of justice.

Perhaps like Donald Trump himself, Hegseth has decided that inviting debate over law and morality is beneath him, not an essential part of his job. He pursues an individualistic personal code that he insists must overrule questions from within the military, from Congress, from the public.

It’s an attitude that we have seen reflected in congressional appearances by Attorney General Pam Bondi and Justice officials, by Homeland Security Secretary Kristi Noem and Health and Human Services Secretary Robert F. Kennedy Jr. They simply do not recognize questions that they regard as challenges.

The AI Marketplace

Obviously, Anthropic is not the only AI developer around, but it is the developer the Pentagon chose as a partner, and changing developers could present significant technology problems and delays. Just yesterday, a competitor announced an infusion of investment money in the hundreds of millions of dollars.

What Anthropic is raising with the Defense Department could as easily be raised with a host of other federal agencies or in the general marketplace. To what degree are there any guarantees against a product being used for nefarious purposes, and what corporate responsibilities result?

Homeland Security, for example, wants to use artificial intelligence to identify and trace undocumented migrants and their families, combining vast troves of personal IRS information that a federal judge said this week was shared illegally among federal agencies. The FBI and policing agencies increasingly want to use artificial intelligence to advance their search for criminals.

The Trump administration sued five more states this week to obtain voter rolls replete with personal information for use in “election security” efforts that are never identified or outlined.

Meanwhile, Congress has failed completely to address regulations to govern AI or any of its uses in education, government, military, finance, health and medicine or immigration.

In this matter, Sen. Mark Warner, D-Va., top Democrat on the Senate Intelligence Committee, posted a video on social media in which he said companies need to make some concessions to the government, but indicated he thought Anthropic's concerns about surveillance and autonomous drones held merit.

Any action by the Pentagon to label the company a supply chain risk or to force it to comply with the Defense Production Act is sure to prompt a lawsuit. Plus, blocking Anthropic from doing business with the government could have consequences for intelligence agencies, because Anthropic's Claude has been the primary AI program used in classified systems.

Drone Idiocy

Is it too much to ask that the world’s foremost military use the phone?

For a second time, the Pentagon used a high-energy laser to down a drone belonging to Customs and Border Protection, forcing closure of the airspace above Fort Hancock, Texas, 35 miles from El Paso.

Earlier this month, it was Customs and Border Protection using a similar laser against what turned out to be a metallic balloon, resulting in a shutdown of nearby El Paso International Airport.

In neither case did anyone call or communicate with the FAA. Without FAA approval, the actions may well have violated federal law.

We’re already neck deep in a congressional division over communications between the military and the FAA or civilian airports after last year’s fatal collision of a military helicopter and an arriving commercial flight. The laser episode also comes as the Pentagon is in a crazy argument with an AI developer over whether it would allow machines to make decisions about firing weapons, among other issues.

Even word of the Texas incident came from news sources. The agencies involved merely said they were seeking better ways to coordinate.

Mark R. Ditlevson, nominated to become assistant secretary of defense for homeland defense and Americas security affairs, was pressed at a Senate hearing on why the Defense Department allowed high-energy lasers over the objections of the FAA. The top Democrats on three panels overseeing aviation and homeland security expressed outrage at the news that the Pentagon had shot down a drone belonging to another branch of government.
