President Trump bans Anthropic from use in government systems

2026-02-27 21:40 | www.npr.org

In a social media post calling the artificial intelligence lab "A RADICAL LEFT, WOKE COMPANY," President Trump ordered the federal government to stop using Anthropic's products.

The Pentagon is seen from an airplane, Monday, Feb. 2, 2026, in Washington.

President Trump ordered the U.S. government on Friday to stop using the artificial intelligence company Anthropic's products.

The decision, announced in a Truth Social post, follows a dispute between Anthropic and the Pentagon over whether the company could prohibit its tools from being used in mass surveillance of American citizens or to create autonomous weapon systems, as part of a military contract worth up to $200 million.

"The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War, and force them to obey their Terms of Service instead of our Constitution," Trump wrote in his post. "Therefore, I am directing EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic's technology. We don't need it, we don't want it, and will not do business with them again!"

He said there would be a six-month phaseout of Anthropic's products.

The announcement came about an hour before a deadline set by the Pentagon, which had called on Anthropic to back down.

And it happened as at least one other AI firm said it had similar concerns about the military uses of AI. Earlier in the day, OpenAI CEO Sam Altman said he shares Anthropic's "red lines" restricting military use of AI.

The Pentagon said it had nothing to add to the president's announcement. Anthropic didn't immediately respond to a request for comment.

Ban comes as Anthropic plans an IPO

Defense Department officials had given Anthropic a deadline of 5:01 p.m. ET on Friday to drop restrictions that bar its AI model, Claude, from being used for domestic mass surveillance or in fully autonomous weapons. The Pentagon has said it doesn't intend to use AI in those ways, but requires AI companies to allow their models to be used "for all lawful purposes."

The government had also threatened to invoke the Korean War-era Defense Production Act to compel Anthropic to allow use of its tools and, at the same time, warned it would label Anthropic a "supply chain risk," potentially blacklisting it from lucrative government contracts. President Trump made no mention of either threat in his Truth Social post.

The government ban comes at a time when Anthropic is under heightened scrutiny, since the company, which is valued at $380 billion, is planning to go public this year. While the Pentagon contract worth as much as $200 million is a relatively small portion of Anthropic's $14 billion in revenue, it's unclear how the friction with the administration will sit with investors or affect other deals the company has to license its AI model. Anthropic CEO Dario Amodei has pointed out that the company's valuation and revenue have only grown since it took a stand against Trump officials over how AI can be deployed on the battlefield.

Whether AI companies can set restrictions on how the government uses their technology has emerged as a major sticking point in recent months between Anthropic and the Trump administration.

On Thursday, Amodei said the company would not budge in the face of the Pentagon's threats. "We cannot in good conscience accede to their request," he wrote in a lengthy statement.

A 2024 file photo of Dario Amodei, CEO and cofounder of Anthropic.

"Anthropic understands that the Department of War, not private companies, makes military decisions. We have never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner," he said, using the Pentagon's "Department of War" rebranding. But, he added, domestic mass surveillance and fully autonomous weapons are uses that are "simply outside the bounds of what today's technology can safely and reliably do."

Emil Michael, the Pentagon's undersecretary for research and engineering, shot back in a post on X, accusing Amodei of lying and having a "God-complex."

"He wants nothing more than to try to personally control the US Military and is ok putting our nation's safety at risk," Michael wrote. "The @DeptofWar will ALWAYS adhere to the law but not bend to whims of any one for-profit tech company," he wrote.

In an interview with CBS News, Michael said federal law and Pentagon policies already bar the use of AI for domestic mass surveillance and autonomous weapons.

"At some level, you have to trust your military to do the right thing," he said.

OpenAI expressed similar concerns

Any effort by the Pentagon to replace Anthropic could be complicated by Altman's support for the safety guardrails Anthropic set. OpenAI, Google, and xAI also have Defense Department contracts and have agreed to allow their AI tools to be used in any "lawful" scenarios. This week, xAI became the second company after Anthropic to be approved for use in classified settings.

Altman told CNBC on Friday morning that it's important for companies to work with the military "as long as it is going to comply with legal protections" and "the few red lines" that "we share with Anthropic and that other companies also independently agree with."

Sam Altman, co-founder and CEO of OpenAI, testifying before a Senate committee in 2025.

In an internal note sent to staff on Thursday evening, Altman said OpenAI was seeking to negotiate a deal with the Pentagon to deploy its models in classified systems with exclusions preventing use for surveillance in the U.S. or to power autonomous weapons without human approval, according to a person familiar with the message who was not authorized to speak publicly. The Wall Street Journal first reported Altman's note to staff.

The Defense Department didn't respond to a request for comment on Altman's statements.

Independent experts say the standoff is highly unusual in the world of Pentagon contracting.

"This is different for sure," said Jerry McGinn, director of the Center for the Industrial Base at the Center for Strategic and International Studies, a Washington, D.C., think tank. Pentagon contractors don't usually get to tell the Defense Department how their products and services can be used, he notes, "because otherwise you'd be negotiating use cases for every contract, and that's not reasonable to expect."

At the same time, McGinn noted, artificial intelligence is a new and largely untested technology. "This is a very unusual, very public fight," he said. "I think it's reflective of the nature of AI."

NPR's Bobby Allyn contributed to this report.



Comments

  • By paozac 2026-02-27 22:14 (9 replies)

    This might actually make Anthropic very popular among those who do not support the current US presidency, a significant market share.

    • By esperent 2026-02-28 3:31 (1 reply)

      For the last couple of months all the top models have been from the US. I don't expect that to last - or even if it does the gap will gradually diminish to the point that "top" is largely irrelevant outside of marketing.

      But at the moment I must use a US model for the best results on complex queries. So I'm glad that there's one company I'm at least somewhat OK with supporting. I'm not even that picky. All I want is a reasonable guarantee that I'm not supporting a company whose tools are used for autonomous drone warfare in American wars, and a few other basic things like that.

      I guess someone might feel moved to respond to this by pointing out all the other companies outside of AI that I should be avoiding too. Please do! I'm actively trying to be more mindful of the companies I support rather than just chasing the lowest bills. I'm in the process of migrating my company away from MS 365 to Nextcloud on Hetzner, which is going slow but well.

      • By re-thc 2026-02-28 11:19

        > For the last couple of months all the top models have been from the US

        The commenter you responded to said presidency, not country. There are multiple parties and possibilities here.

    • By smalltorch 2026-02-27 22:21 (3 replies)

      Take notes, other AI companies. This is a totally fine line to draw, and it's not even that high a bar.

      Don't allow systems to be built with your AI that automate mass surveillance or automate kill decisions.

      • By monkpit 2026-02-28 3:38 (1 reply)

        The disappointing thing is someone _will_ do it; you know it's going to happen.

        • By ffsm8 2026-02-28 8:57 (1 reply)

          Someone? You say it like it's unknown that OpenAI got the contract

          • By monkpit 2026-02-28 17:10

            I meant in the general sense - you could have all the major players take a moral stance and say “no” but someone would say yes eventually.

      • By chii 2026-02-28 5:50 (1 reply)

        > Dont allow systems to be built with your AI

        those systems will be built regardless. That kind of boycott is essentially asking companies to forgo profit where there's profit to be made, when those doing the asking aren't making any sacrifices for the boycott themselves.

        Instead of asking companies to be altruistic, those wanting such systems to be illegal should be using the civic system we have today to make it so - yes, this costs effort, resources and time. Like all hard things.

        • By LarsDu88 2026-02-28 6:01

          The thing is, other companies can pick up that slack. Heck, even distillation attacks against Claude could be used to create such models.

      • By YZF 2026-02-28 6:05 (1 reply)

        With the price of tokens I think mass surveillance with AI is not a realistic use case.

        There already is mass surveillance. Presumably most electronic communication is monitored. I guess LLMs could do a somewhat better job, but probably not one worth the cost for the marginal benefit over existing technologies.

        Similarly for "Terminators" or other AI killing machines... Isn't it cheaper to use a human? We have autonomous weapons already, like cruise missiles... Other than the movies what does a reality with LLMs pulling triggers look like? Cars are also "killing machines" and we're letting computers drive them...

        Unfortunately if these things do start making sense for whatever reason they're probably going to happen. Private companies in general have no way to prevent their technology from being used for "defense" applications. Once that genie is out of the bottle it's not going back in.

    • By j_bizzle 2026-02-27 23:58 (2 replies)

      My first thought here was that Anthropic just became the first call for Europe to discuss AI services and infrastructure.

      I wonder if a US company has ever wholesale emigrated before?

      • By john-h-k 2026-02-28 5:18 (1 reply)

        Europe is not particularly anti surveillance

        • By EagnaIonat 2026-02-28 6:24

          Actually the EU passed an AI act back in 2024 that tackles that.

      • By Aperocky 2026-02-28 3:54 (2 replies)

        Please note that, going by Anthropic's statement, mass surveillance of Europeans is fine.

        • By mindslight 2026-02-28 5:33

          It's called political correctness. There is a longstanding undercurrent in American politics of treating Constitutional rights (aka natural rights) as only applicable to Americans [0]. Framing the issue in terms of lofty universal ideals would be politically suicidal. And with the current precarious situation, giving more energy to overly-simplistic jingoist chants is not what anybody needs.

          [0] this seems to be a bit of proto-fascism that helped set the stage for the overt dynamic we've now got

        • By riffraff 2026-02-28 5:38 (1 reply)

          Yeah, it's a shitty statement ("we're totally fine accidentally targeting foreigners, but come on, not 'mericans") because it's well known that once you have the capability it will be aimed at everyone anyway.

          • By salawat 2026-02-28 18:56

            Folks, there isn't a geopolity-agnostic equivalent of incorporation. There is no Anthropic without that grant of incorporation. Even if they wanted to take the most principled stance, they can only push so far. And for a system to be incompatible with operating a mass surveillance network, it has to be capable of monitoring itself to recognize whether it has been configured so that its activity amounts to mass surveillance. You'd have to build the mass surveillance oracle, which is, in and of itself, a mass surveillance machine.

    • By YZF 2026-02-28 5:56 (1 reply)

      Claude Code seems to be the best at programming right now. I think if Anthropic can maintain or increase their lead they'll have no shortage of customers. I imagine Anthropic's business is driven by business customers rather than individual paying customers at this point.

      • By braebo 2026-02-28 6:20

        It’s the best at everything. OpenAI models are dangerously stupid enough as it is. Not much can faze me these days, but a sycophantic ChatGPT in a kill chain is nightmare fuel.

    • By jsbisviewtiful 2026-02-28 2:27

      Personally, I removed ChatGPT from my phone for Claude last week. This reinforces my choice.

    • By camillomiller 2026-02-28 5:31

      Europe is a great market. To be fair, given Dario’s nationality, we should make a massive offer for Anthropic to relocate somewhere in Europe like San Marino or such. Waiving taxes and letting them have all they need. (Joking, but to a point)

    • By ErigmolCt 2026-02-28 11:25 (1 reply)

      It probably boosts their reputation with one segment of the market while making them much less attractive to another (just my thoughts)

      • By kivle 2026-02-28 12:04

        And one of those segments is about 50% of the US population, and the other is about 50% of the US population + the rest of the world.

        Source: A Norwegian that just cancelled his ChatGPT plus subscription and will consider Gemini or Claude instead.

    • By tombert 2026-02-28 5:54

      They got at least one more subscriber as of about twenty minutes ago since I just canceled my ChatGPT Pro subscription and moved to Anthropic.

      Sam Altman immediately capitulating to the Trump administration, after bragging like four hours ago about how he wouldn't, shows a distinct lack of integrity. It's not like ChatGPT is categorically better than Claude; I just hadn't bothered to change to Claude before, purely out of inertia.

    • By api 2026-02-28 4:40

      Does Anthropic make money yet, or like a lot of AI are they selling dollars for fifty cents each? Can they keep going without a lot of investment from administration-aligned oligarchs like the Saudis, or without these circular stock-for-compute deals?

  • By drivingmenuts 2026-02-27 22:38

    That's a solid gold recommendation for Anthropic, far as I'm concerned.

  • By motbus3 2026-02-28 7:13 (1 reply)

    Is it weird that, out of a failed negotiation, there is a public threat against a private entity?

    Isn't it supposed to be freedom of something? Does this trigger any laws or something? Just for curiosity's sake.

    • By croes 2026-02-28 7:41 (1 reply)

      > Isn't it supposed to be freedom of something?

      Oh, you fell for the ads

      • By motbus3 2026-02-28 16:21 (1 reply)

        To be fair, this level of weirdness hasn't been seen since the '60s/'70s.

        • By olyjohn 2026-03-01 21:31

          It's been there, but it's just been less public. The CEO of Qwest was ousted and sent to prison because he refused to allow the government to tap their phone systems for NSA mass surveillance. This was like 25 years ago.

HackerNews