ChatGPT Uninstalls Surge 295%: The Backlash Against OpenAI's Military Deal
On Saturday, February 28, 2026, smartphone users began deleting the ChatGPT app in droves. Data from market research firm Sensor Tower showed that U.S. mobile uninstalls of ChatGPT spiked 295% compared to the day before.1 Just 24 hours earlier, the app had been seeing a 14% uptick in new downloads — then it reversed in a single day. The trigger was OpenAI’s announcement of a military AI contract with the U.S. Department of Defense (rebranded as the “Department of War” under the Trump administration).
How It Started: Anthropic Out, OpenAI In
To understand the backlash, you have to go back one day — to Friday, February 27. At 5:01 PM, President Trump signed an executive order directing federal agencies to fully discontinue use of Anthropic’s AI technology within six months. Defense Secretary Pete Hegseth designated Anthropic a “supply chain risk.” The reason: Anthropic had held firm on two conditions during military contract negotiations — no use of AI for mass domestic civilian surveillance, and no deployment in fully autonomous weapons systems not yet deemed sufficiently safe. The DoD refused to accept those terms, and talks fell apart.2
That same Friday night, OpenAI announced it had signed a contract with the Department of Defense. The timing was perfect — perhaps too perfect. Within hours of Anthropic’s ouster, OpenAI had stepped into its place.
The Numbers Behind the Backlash
The surge in uninstalls was just one data point. Sensor Tower’s figures painted a more layered picture of the public response:1
| Metric | Change | Date |
|---|---|---|
| App uninstalls | +295% | Sat. Feb 28 vs. prior day |
| New downloads | −13% | Sat. Feb 28 vs. prior day |
| New downloads | −5% additional | Sun. Mar 1 vs. prior day |
| 1-star reviews | +775% | Sat. Feb 28 |
| Anthropic Claude downloads | +37% | Fri. Feb 27 |
| Anthropic Claude downloads | +51% | Sat. Feb 28 |
This wasn’t just reflexive outrage. That weekend, Claude climbed to the #1 free app on the U.S. App Store — a jump of more than 20 places compared to a week earlier (February 22).3 Claude also topped the free iPhone app charts in six countries, including Canada, Germany, and Switzerland. Users weren’t just deleting ChatGPT — they were making a deliberate choice to move elsewhere.
Sam Altman’s Apology: “It Looked Opportunistic and Sloppy”
As the backlash intensified, OpenAI CEO Sam Altman posted an internal memo to X (formerly Twitter) on Monday, March 3.4 The key passage read:
“We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy.”
He also acknowledged: “We should not have rushed to announce that contract on Friday.” The admission was telling: Altman was conceding to his own employees that, whatever the intent, the execution had left a bad impression.
Alongside the memo, Altman disclosed amendments to the contract. Three stood out: First, an explicit prohibition against OpenAI’s AI systems being “intentionally used for domestic surveillance of U.S. citizens and nationals” under the Fourth Amendment, the National Security Act of 1947, and FISA. Second, a provision barring defense intelligence agencies — including the NSA, NGA, and DIA — from using OpenAI systems without separate contract modifications. Third, explicit restrictions on the use of commercially purchased data such as location information and fitness app data.5
Legal experts quickly raised questions about how enforceable these provisions would actually be in practice.
What the Contract Said — and Why It Sparked Outrage
Two days after announcing the contract, on March 1, OpenAI released additional details.6 Under the agreement, OpenAI’s AI models would be deployed in classified environments. The company claimed the deal included “more safeguards than any previous classified AI deployment contract” and offered stronger protections than even Anthropic’s arrangement.
But critics pointed to a different problem. For years, OpenAI’s usage policies had explicitly prohibited use for “weapons development” and “military and warfare purposes.” Then, quietly, in early 2024, those prohibitions were removed from the policy. And in February 2026, the DoD contract was signed. The through-line was hard to miss.
What angered users wasn’t simply the fact that OpenAI had signed a military contract. It was that a company that had spent years building public trust on a platform of “AI safety” had moved swiftly to fill the exact vacancy created by a competitor that had refused to compromise on safety conditions. The timing said everything.
The NATO Angle: Military Expansion Continues
Less than a week after the Anthropic ban, the DoD contract signing, and the amended-contract disclosure, another story broke. On March 4, Sam Altman told an internal meeting that OpenAI was exploring deployment of AI across NATO’s “all classified networks.” An OpenAI spokesperson later walked that back, clarifying that Altman had “misspoken” and that the actual discussions involved “non-classified networks.”7
Classified or not, the direction was clear: OpenAI was now targeting not just the U.S. military but the network infrastructure of NATO, the 32-nation military alliance. Reuters reported the story, and user reaction flared again.
Anthropic’s Paradox: Banned, but Thriving
Amid all the turmoil, something ironic was unfolding. The company that had been publicly expelled by the Trump administration was having one of its best weeks ever.
On March 4, 2026, Anthropic CEO Dario Amodei announced at the Morgan Stanley TMT (Technology, Media & Telecommunications) Conference that Anthropic’s ARR (annual recurring revenue) had surpassed $19 billion.8 February alone had added $6 billion. That was more than double the $9 billion ARR reported just three months earlier, in December 2025; at that pace, $20 billion looked like only a matter of time.
App store rankings, revenue metrics — both pointed in the same direction. While OpenAI was charging toward the military market through its DoD deal, Anthropic was proving that saying “no” could itself become a brand asset.
Defense Contractors’ Dilemma: Removing Anthropic
But Anthropic’s troubles weren’t over. The DoD issued instructions to companies with military contracts to remove all Anthropic AI tools from their supply chains. Major defense contractors including Lockheed Martin began directing employees to stop using Claude and transition to other AI models.9
Lockheed Martin and others framed the move as “a compliance decision, not a reflection of Claude’s functional shortcomings.” In other words, it was a political call, not a performance one.
Anthropic pushed back immediately. The company’s legal team argued that “the DoD has no legal authority to prohibit its contractors from using Claude.”10 Legal experts in government contracting and technology law noted the dispute could potentially end up in court, given the conflicting interpretations of how far the DoD’s supply chain control authority actually extended.
AI Companies and Military Contracts: A Question of Trust
Framing the ChatGPT uninstall wave as a simple PR failure or bad timing misses the point. The episode exposed a fundamental tension now at the heart of the AI industry.
AI companies had spent years earning public trust through messages about “responsible AI” and “safety first.” OpenAI in particular had built its identity around this — safety was its founding principle, it had started as a nonprofit, and it had been one of the loudest voices warning about the dangers of AGI.
And yet here was that same company, in the very moment a competitor was expelled for refusing to compromise on safety conditions, signing a contract to supply AI for classified military systems. For users, this wasn’t just a business decision. It was a statement about values.
Google had faced a similar reckoning in 2018, when internal opposition to Project Maven — using AI to analyze drone imagery for the military — led more than 4,000 employees to sign a protest letter. Google ultimately chose not to renew that contract, and the decision was recorded as a landmark moment in AI ethics. Now, in 2026, the same debate was playing out again. Eight years on, the companies had changed, and the military applications of AI had grown incomparably more sophisticated.
Altman’s phrase in the internal memo — “looked opportunistic and sloppy” — was an honest piece of self-awareness. But the problem wasn’t one of appearances. What users were really asking was: “Is the AI tool I use every day being deployed for purposes I never agreed to?”
The Questions That Remain
The questions this episode raised aren’t going away.
Can the domestic surveillance prohibition OpenAI added to the contract actually be enforced? Legal experts were skeptical. The mechanism by which a private AI company can impose and enforce use restrictions in military contracts remained unclear.
If the NATO deal becomes reality, what role will AI play across the military networks of 32 nations? The “non-classified” qualifier was noted, but in practice, how meaningful is the line between classified and non-classified when it comes to military operational support?
Anthropic achieved short-term growth through refusal. But if DoD pressure to purge Anthropic from supply chains continues, will enterprise customers be able to keep choosing Claude? Military and intelligence agencies are becoming an increasingly significant customer segment across the AI industry.
And the most fundamental question of all: Is there a guaranteed separation between the service AI companies provide to everyday users and the service they provide to governments and militaries? OpenAI’s amended contract didn’t offer a clear answer to that one either.
The 295% figure from February 28 was users answering that question with their fingers — by deleting the app. Sam Altman said he should not have rushed. But the users who uninstalled? They weren’t rushing at all.
Footnotes
1. Aisha Malik, “ChatGPT uninstalls surged by 295% after DoD deal,” TechCrunch, March 2, 2026. https://techcrunch.com/2026/03/02/chatgpt-uninstalls-surged-by-295-after-dod-deal/
2. Cade Metz & Karen Weise, “OpenAI Reaches A.I. Agreement With Defense Dept. After Anthropic Clash,” The New York Times, February 27, 2026. https://www.nytimes.com/2026/02/27/technology/openai-agreement-pentagon-ai.html
3. Aisha Malik, “Anthropic’s Claude rises to No. 2 in the App Store following Pentagon dispute,” TechCrunch, March 1, 2026. https://techcrunch.com/2026/03/01/anthropics-claude-rises-to-no-2-in-the-app-store-following-pentagon-dispute/
4. Sam Altman (@sama), internal memo reposted on X, March 3, 2026. Original quote: “We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy.”
5. Jennifer Elias, “OpenAI’s Altman admits defense deal ‘looked opportunistic and sloppy’ amid backlash,” CNBC, March 3, 2026. https://www.cnbc.com/2026/03/03/openai-sam-altman-pentagon-deal-amended-surveillance-limits.html
6. Devin Coldewey, “OpenAI reveals more details about its agreement with the Pentagon,” TechCrunch, March 1, 2026. https://techcrunch.com/2026/03/01/openai-shares-more-details-about-its-agreement-with-the-pentagon/
7. Reuters, “OpenAI looking at contract with NATO, source says,” March 4, 2026. https://www.reuters.com/technology/openai-looking-contract-with-nato-source-says-2026-03-04/
8. Investing.com, “Anthropic ARR surges to $19 billion on Claude Code strength,” Yahoo Finance, March 4, 2026. https://uk.finance.yahoo.com/news/anthropic-arr-surges-19-billion-151210607.html
9. Reuters, “Defense contractors, like Lockheed, seen removing Anthropic’s AI after Trump ban,” March 4, 2026. https://www.reuters.com/sustainability/society-equity/defense-contractors-like-lockheed-seen-removing-anthropics-ai-after-trump-ban-2026-03-04/
10. Jennifer Elias, “Defense tech companies are dropping Claude after Pentagon’s Anthropic blacklist,” CNBC, March 4, 2026. https://www.cnbc.com/2026/03/04/pentagon-blacklist-anthropic-defense-tech-claude.html