On Friday, President Donald Trump ordered the entire federal government to stop using products from the AI company Anthropic, aiming to prevent what he called a “radical left, woke company” from encroaching on the military’s decision-making.
The public feud between the Pentagon and Anthropic, which resulted in the firm’s blacklisting, has effectively become a proxy for the larger battle over the future governance of AI.
Coverage has focused on Anthropic’s refusal to budge on its two “red lines” — no use of its products for mass domestic surveillance or to power fully autonomous weapons — and on whether Defense Secretary Pete Hegseth’s Pentagon can be trusted with powerful software under the looser requirement the administration demands: that it merely be used in a “lawful” manner.
But, according to reports this week, the confrontation that sparked the feud actually focused on a different but related issue: how AI might be used in the event of a nuclear attack on the United States.
Semafor and the Washington Post have reported that in early December, Under Secretary of Defense for Research and Engineering Emil Michael asked Anthropic’s Dario Amodei whether, in a scenario where nuclear missiles were flying toward the US, the company would “refuse to help its country due to Anthropic’s prohibition on using its tech in conjunction with autonomous weapons.” Administration sources say Michael was infuriated when Amodei said the Pentagon should reach out and check with Anthropic. Anthropic denies the story and says it was willing to create a carve-out for missile defense, but either way, the conversation poisoned relations between the two institutions. (Disclosure: Vox’s Future Perfect is funded in part by the BEMC Foundation, whose major funder was also an early investor in Anthropic; they don’t have any editorial input into our content.)
As I reported for Vox in November, there’s an active and ongoing debate over whether and how artificial intelligence should be integrated into nuclear command and control systems. We don’t know to what extent it already is, but we do know that the US military is actively looking at ways AI and machine learning can be used “to enable and accelerate human decision-making.”
Discussions around nuclear weapons and AI tend to focus on whether machines would ever be given control of the ability to launch nuclear weapons, and on the imperative to keep a “human in the loop” for decisions about the use of humanity’s deadliest weapons. But many experts and officials say that debate is the low-hanging fruit: Neither the US, nor any other country, is likely to ever hand over decisions on whether to order a nuclear strike to AI.
A much trickier question is the degree to which AI should be relied on for functions like “strategic warning” — synthesizing the massive amount of data collected by satellites, radar, and other sensor systems to detect potential threats as soon as possible.
This appears to be the sort of hypothetical use case Michael was proposing to Amodei. If the system is only being used to give us a better chance of shooting down an incoming missile, it might seem like a no-brainer.
But in a scenario where the US was under attack by ballistic missiles, the president would immediately be faced with a decision — which would have to be made in only minutes — about whether to retaliate, potentially setting off a full-blown nuclear war.
The lives of millions of people might rely on the system getting it right — and there are plenty of examples from the history of nuclear weapons of detection systems leading to near-misses that were only averted by human intuition.
The technology to do that kind of threat detection likely doesn’t exist yet, which, given the stakes, may have been one reason Amodei was reluctant to commit to this scenario.
Retired Lt. Gen. Jack Shanahan, who flew nuclear missions in the Air Force and was later the head of the Pentagon’s Joint Artificial Intelligence Center, told Vox that if nuclear threat detection and response were turned over to artificial intelligence agents, “I don’t want to say it’s certain that there’s going to be a catastrophe, but I think you’re heading down that path.”
He pointed to a widely reported study released this week from a researcher at King’s College London, which found that AI models including Claude, ChatGPT, and Google Gemini were far more likely than human participants to recommend nuclear options in simulated war games. In such a scenario, an AI might not be launching a weapon, but a president would have to overrule a panicked-sounding multibillion-dollar system’s prescription under extreme pressure.
One factor that makes military use of AI different from previous technologies with obvious national security uses is that in this case, much of the cutting-edge research was done by private firms that initially had an eye on the commercial market, rather than by companies responding to demand from the military. (An example of the latter case would be the internet, which evolved from Defense Department and academic projects long before companies found commercial uses for it.)
The new dynamic is bound to lead to culture clashes, particularly between Hegseth’s “anti-woke” Pentagon and a company like Anthropic, which — though it has been happy until now to let its products be used by the Pentagon — has built its public image around its concerns about AI safety.
“Boeing would never object to building anything the government would ask them to build,” said Shanahan, who led the Pentagon’s controversial 2018 partnership with Google, Project Maven, a previous DC-Silicon Valley culture clash. “It’s a defense-industrial base company. [AI is] being born in a very different world with a group of people who don’t see things the way employees of Lockheed may have seen the Cold War. It’s Mars-Venus to an extent.”
How the clash plays out, and whether other companies are willing to let their models be deployed with fewer questions asked, may go a long way toward determining what role AI might play in a hypothetical nuclear war.
This story was produced in partnership with Outrider Foundation and Journalism Funding Partners.
























