They Bombed Iran with Claude Despite Trump's Ban: The Reality of Uncontrollable AI
At 8 p.m. on Friday, February 28, 2026, U.S. and Israeli warplanes and missiles began striking Iran. In the first 12 hours of the assault, approximately 900 airstrikes hit Iranian territory, and Supreme Leader Ayatollah Ali Khamenei was killed by an Israeli missile.1 Iranian state media reported 555 deaths, including 165 victims at an elementary school in southern Iran.2
There was another key player in those strikes: Claude, the language model from AI startup Anthropic.
And here’s where it gets stranger. Just hours before the bombs started falling, President Donald Trump had ordered every federal agency to immediately stop using Anthropic products. On the very day that order came down, the same company’s AI was helping bombers over Iran select their targets.
It Started in Venezuela
The roots of this crisis go back to January 2026. The U.S. military launched a raid to capture Venezuelan President Nicolás Maduro, and on February 13, the Wall Street Journal reported that Claude had been used in that operation.3 Venezuela’s Defense Ministry said 83 people were killed in bombing runs over the capital, Caracas.
Anthropic’s terms of service explicitly prohibit using Claude for “violent purposes,” “weapons development,” or “surveillance.” Yet the Department of Defense had signed a contract with Anthropic worth up to $200 million in 2024, and Claude was already deeply embedded in classified military networks, including U.S. Central Command (CENTCOM).4 Within the system built by Palantir, Claude was performing core functions: intelligence analysis, war-gaming simulations, and target identification.
Trump’s Meltdown, and the Ban
Once news broke that Claude had been used in Venezuela, tensions between Anthropic and the Pentagon escalated rapidly. The Defense Department issued Anthropic an ultimatum: remove Claude’s usage restrictions — specifically the prohibitions on mass surveillance of U.S. citizens and use in fully autonomous weapons systems — by 5:01 p.m. Eastern Time on February 28, 2026, or lose the contract.
Anthropic didn’t budge. When the deadline passed, Defense Secretary Pete Hegseth posted a lengthy statement on X (formerly Twitter) declaring Anthropic a “Supply-Chain Risk to National Security.”5
Then Trump himself took to Truth Social:
“The radical left lunatics at Anthropic trying to STRONG-ARM the Department of War is a DISASTROUS MISTAKE. I am directing every agency of the United States Federal Government to immediately cease using Anthropic technology. We don’t need it, we don’t want it, and we will NEVER do business with them again!”6
Trump attached a six-month wind-down period to the ban — an implicit acknowledgment that yanking Claude out of military operations overnight simply wasn’t possible.
The Iran Strikes — On the Same Day as the Ban
And then, just hours later, the airstrikes began.
The WSJ reported that while the Iran strikes were underway, U.S. Central Command was using Claude to analyze drone footage and signals intelligence in real time and to identify targets inside Iran.7 Axios independently confirmed the same account through separate sources.8
The irony was hard to miss. The very company Trump had branded a “radical left AI company” was, at that moment, helping the United States military wage war.
Anthropic’s position was complicated. The company had publicly expressed concern about autonomous weapons and mass surveillance, but assisting with target selection didn’t necessarily fall under the direct prohibition on “violent purposes” in its terms of service. National security journalist Spencer Ackerman made the point sharply:
“It’s quite notable that Amodei doesn’t register building a surveillance panopticon for foreigners as a problem. The time to worry about that was before he signed the military contract he didn’t want to give up.”9
What AI Does in War
The Iran strikes were among the first large-scale real-world instances of what military experts call decision compression — the AI-driven acceleration of targeting decisions.
Craig Jones, a professor of political geography at Newcastle University, explained it this way:
“The AI machine is generating targeting recommendations faster, in some respects, than the speed of human thought. Scale and speed are operating simultaneously.”10
Within the Palantir-built system, Claude performed the following functions:
| Role | Description |
|---|---|
| Intelligence Analysis | Real-time synthesis of drone footage, communications intercepts, and human intelligence |
| Target Identification & Prioritization | Classification of military targets and recommended strike sequencing |
| War-Gaming Simulations | Outcome prediction for various strike scenarios |
| Legal Assessment | Automated reasoning related to jus in bello (laws of armed conflict) |
David Leslie, a professor at Queen Mary University of London, warned of the cognitive risks embedded in this structure. Once AI handles both analysis and recommendations, human decision-makers are left with very little time for review. Being forced to evaluate machine-generated options within a narrow window effectively turns “human oversight” into rubber-stamping an automated plan.11
How the decision was made to strike the elementary school in southern Iran — killing 165 people — has not been officially disclosed. The United Nations characterized it as “a grave violation of international humanitarian law.”2
The Cascade of Cancellations
In the three days following the Iran strikes, U.S. government agencies began cutting ties with Anthropic in rapid succession.
- Treasury Department: Secretary Scott Bessent announced on X on March 2 that the department was discontinuing use of Anthropic products.12
- State Department: Formally transitioned to OpenAI.12
- Department of Health and Human Services (HHS): Confirmed a phased wind-down.12
- Federal Housing Finance Agency (FHFA): Director Bill Pulte declared a complete termination of Anthropic product use across FHFA, Fannie Mae, and Freddie Mac.13
- General Services Administration (GSA): Announced it would remove Anthropic services from its Multiple Award Schedule platform.12
OpenAI’s Sam Altman wasted no time capitalizing on the moment. Almost immediately after Trump’s ban was announced, Altman unveiled a new Pentagon agreement, stating that OpenAI’s contract explicitly commits to “no fully autonomous weapons” and “human final authority over the use of force.”6
But Claude is still running. The six-month wind-down clause ensures that. Nextgov reported, citing sources, that Claude is currently the only operationally viable AI model in the U.S. military’s classified networks, and that replacing it will take at least several months.14
Why China Is Watching Closely
This episode isn’t just a story about corporate-government friction inside the United States. William Wei, a vice president at Chinese cybersecurity firm Webray, told the South China Morning Post (SCMP):
“The militarization of AI by the United States is sounding alarms across the entire industry. China’s need for technological self-reliance has never been more urgent.”15
This incident — a live demonstration of how AI is being used in warfare in real time — is further hardening China’s resolve not to fall behind in the military AI race. While Iran remains hamstrung by international sanctions in its own AI development efforts, China is rapidly building out its military AI capabilities, with DeepSeek leading the charge.
A System Built to Be Uncontrollable
What this episode reveals is straightforward: even when a president orders an “immediate halt,” AI doesn’t stop in the middle of a military operation.
This isn’t the failure of any one company or politician. It’s a structural problem — the way AI systems embed themselves deep into critical infrastructure, at a pace that institutions simply cannot match. The Pentagon itself acknowledged there was “no immediate alternative” and asked for six months.
Anthropic should have paid closer attention to this contract from the beginning. The moment it was signed, the company effectively surrendered any real control over how its AI would be used. Pointing to terms of service after the fact and saying “we objected” amounts to trading actual control for the appearance of principle.
The Trump administration’s actions were equally contradictory. They declared they would never use products from a “left-wing company” — then, the moment war began, leaned on that company’s AI. It would be hard to find a starker illustration of the gap between ideological posturing and operational reality.
It’s true that AI doesn’t “make decisions” — it “recommends.” But whether human beings have the time, the information, and the will to push back against those recommendations is an entirely separate question. Nine hundred airstrikes happened in twelve hours. Whether “human final authority” meant anything real at that speed is something we may never know.
Footnotes
1. The Guardian, “Iran war heralds era of AI-powered bombing quicker than ‘speed of thought’”, March 3, 2026. https://www.theguardian.com/technology/2026/mar/03/iran-war-heralds-era-of-ai-powered-bombing-quicker-than-speed-of-thought
2. Futurism, “US Military Using Claude to Select Targets in Iran Strikes”, March 1, 2026. https://futurism.com/artificial-intelligence/claude-anthropic-military-iran
3. The Guardian, “US military used Anthropic’s AI model Claude in Venezuela raid, report says”, February 14, 2026. https://www.theguardian.com/technology/2026/feb/14/us-military-anthropic-ai-model-claude-venezuela-raid
4. The Guardian, “US military reportedly used Claude in Iran strikes despite Trump’s ban”, March 1, 2026. https://www.theguardian.com/technology/2026/mar/01/claude-anthropic-iran-strikes-us-military
5. NPR, “OpenAI announces Pentagon deal after Trump bans Anthropic”, February 27, 2026. https://www.npr.org/2026/02/27/nx-s1-5729118/trump-anthropic-pentagon-openai-ai-weapons-ban
6. AP News, “Trump orders US agencies to stop using Anthropic technology in clash over AI safety”, February 27, 2026. https://apnews.com/article/anthropic-pentagon-ai-hegseth-dario-amodei-b72d1894bc842d9acf026df3867bee8a
7. Wall Street Journal (as cited by The Guardian), “U.S. Strikes in Middle East Use Anthropic Hours After Trump Ban”, March 1, 2026. https://www.wsj.com/livecoverage/iran-strikes-2026/card/u-s-strikes-in-middle-east-use-anthropic-hours-after-trump-ban-ozNO0iClZpfpL7K7ElJ2
8. Ynet News, “Anthropic’s Claude AI used by US military in Iran strike hours after Trump ban, report says”, March 1, 2026. https://www.ynetnews.com/tech-and-digital/article/hj9wp6gfwg
9. Spencer Ackerman, “America doesn’t make Oppenheimers like we used to”, Forever Wars, March 1, 2026. https://www.forever-wars.com/america-doesnt-make-oppenheimers-like-we-used-to/
10. The Guardian, “Iran war heralds era of AI-powered bombing quicker than ‘speed of thought’”, March 3, 2026. https://www.theguardian.com/technology/2026/mar/03/iran-war-heralds-era-of-ai-powered-bombing-quicker-than-speed-of-thought
11. Same article as above.
12. Nextgov/FCW, “Agencies begin to shed Anthropic contracts following Trump’s directive”, March 2, 2026. https://www.nextgov.com/acquisition/2026/03/agencies-begin-shed-anthropic-contracts-following-trumps-directive/411823/
13. MarketScreener, “FHFA’s Pulte: U.S. Federal Housing, Fannie Mae and Freddie Mac are terminating all use of Anthropic products”, March 2, 2026. https://www.marketscreener.com/news/fhfa-s-pulte-u-s-federal-housing-fannie-mae-and-freddie-mac-are-terminating-all-use-of-anthropic-ce7e5cddd880f726
14. Nextgov/FCW, “It would take the Pentagon months to replace Anthropic’s AI tools, sources say”, February 26, 2026. https://www.nextgov.com/emerging-tech/2026/02/it-would-take-pentagon-months-replace-anthropics-ai-tools-sources/411746/
15. Seoul Economic Daily (English edition), “Anthropic’s Claude AI Used in Iran Strike Amid Surge in US Downloads”, March 3, 2026. https://en.sedaily.com/international/2026/03/03/anthropics-claude-ai-used-in-iran-strike-amid-surge-in-us