Statement on the comments from Secretary of War Pete Hegseth

2026-02-28 1:20 | www.anthropic.com

Anthropic's response to the Secretary of War and advice for customers

Earlier today, Secretary of War Pete Hegseth shared on X that he is directing the Department of War to designate Anthropic a supply chain risk. This action follows months of negotiations that reached an impasse over two exceptions we requested to the lawful use of our AI model, Claude: the mass domestic surveillance of Americans and fully autonomous weapons.

We have not yet received direct communication from the Department of War or the White House on the status of our negotiations.

We have tried in good faith to reach an agreement with the Department of War, making clear that we support all lawful uses of AI for national security aside from the two narrow exceptions above. To the best of our knowledge, these exceptions have not affected a single government mission to date.

We held to our exceptions for two reasons. First, we do not believe that today’s frontier AI models are reliable enough to be used in fully autonomous weapons. Allowing current models to be used in this way would endanger America’s warfighters and civilians. Second, we believe that mass domestic surveillance of Americans constitutes a violation of fundamental rights.

Designating Anthropic as a supply chain risk would be an unprecedented action—one historically reserved for US adversaries, never before publicly applied to an American company. We are deeply saddened by these developments. As the first frontier AI company to deploy models in the US government’s classified networks, Anthropic has supported American warfighters since June 2024 and has every intention of continuing to do so.

We believe this designation would both be legally unsound and set a dangerous precedent for any American company that negotiates with the government.

No amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons. We will challenge any supply chain risk designation in court.

What this means for our customers

Secretary Hegseth has implied this designation would restrict anyone who does business with the military from doing business with Anthropic. The Secretary does not have the statutory authority to back up this statement. Legally, a supply chain risk designation under 10 USC 3252 can only extend to the use of Claude as part of Department of War contracts—it cannot affect how contractors use Claude to serve other customers.

In practice, this means:

  • If you are an individual customer or hold a commercial contract with Anthropic, your access to Claude—through our API, claude.ai, or any of our products—is completely unaffected.
  • If you are a Department of War contractor, this designation—if formally adopted—would only affect your use of Claude on Department of War contract work. Your use for any other purpose is unaffected.

Our sales and support teams are standing by to answer any questions you may have.

We are deeply grateful to our users, and to the industry peers, policymakers, veterans, and members of the public who have voiced their support in recent days. Thank you. Above all else, our priorities are to protect our customers from any disruption caused by these extraordinary events and to work with the Department of War to ensure a smooth transition—for them, for our troops, and for American military operations.



Comments

  • By lebovic 2026-02-28 2:42 (15 replies)

    I used to work at Anthropic, and I wrote a comment on a thread earlier this week about Anthropic's first response and the RSP update [1][2].

    I think many people on HN have a cynical reaction to Anthropic's actions because of their own lived experiences with tech companies. Sometimes that holds: my part of the company looked like Meta or Stripe, and it's hard not to regress to the mean as you scale. But not every pattern repeats, and the Anthropic of today is still driven by people who will risk losing a seat at the table to make principled decisions.

    I do not think this is a calculated ploy driven by making money. I think the decision was made because the people making it at Anthropic are well-intentioned, driven by values, and motivated by trying to make the transition to powerful AI go well.

    [1]: https://news.ycombinator.com/item?id=47174423

    [2]: https://news.ycombinator.com/item?id=47149908

    • By lich_king 2026-02-28 3:32 (8 replies)

      My lived experience with tech companies is that principles are easy when they're free - i.e., when you're telling others what to do, or taking principled stances when a competitor is not breathing down your neck.

      So, with all respect, when someone tells me that the people they worked with were well-intentioned and driven by values, I take it with a grain of salt. Been there, said the same things, and then when the company needed to make tough calls, it all fell apart.

      However, in this instance, it does seem that Anthropic is walking away from money. I think that, in itself, is a pretty strong signal that you might be right.

      • By clutter55561 2026-02-28 4:45 (3 replies)

        HN is pretty polarised about this - they are either “the good guys” or “doing it for positive marketing”.

        I’m in the camp of “the world is so fucked up, take the good when you can find it”.

        Beggars can’t be choosers when it comes to taking a stand against dictatorships.

        • By abustamam 2026-02-28 4:50 (2 replies)

          Yeah, the alternative is being OK with their product being used for surveillance.

          Not sure why it's controversial that they said no, regardless of the reasoning. Yeah there's a lot of marketing speak and things to cover their asses. Let's call them out on that later. Right now let's applaud them for doing the right thing.

          FWIW I do not think they are the "good guys" (if I had a dollar for every company that had a policy of not being evil...). But they are certainly not siding with the bad guys here.

          • By louisbourgault 2026-02-28 10:14 (1 reply)

            > Let's call them out on that later. Right now let's applaud them for doing the right thing.

            Yes, yes, yes. When I first read the stuff about this yesterday, my immediate thought was "wait, these are the only two things they have a problem with?"

            But they made a stand, and that still matters. We shouldn't let the perfect be the enemy of the good. At least it's not Grok.

            • By pklausler 2026-02-28 13:26 (1 reply)

              If one really wants to take a stand against this crazy administration, they shouldn’t start it by referring to Hegseth with his assumed title.

              • By sgarland 2026-02-28 14:08

                I thought that too, but then wondered if they thought better of deliberately antagonizing a very powerful bully.

          • By pcthrowaway 2026-03-01 22:22

            > the alternative is be OK with their product being used for surveillance.

            Their statement didn't indicate they object to their product being used for surveillance, just for domestic surveillance of U.S. citizens

        • By scrubs 2026-02-28 6:22

          It's gotta be thus.

          For if you don't, the next step is cynicism maximally operationalized: what, you're not doing game-playing/political BS to get ahead? What are you? A chump? An idiot?

          That kind of stuff spreads like wildfire, making corporate America ... something else, to put it politely.

          Doing the right thing has cost me big time here and there. I don't care. Simultaneously, orgs are not all bad; that's another distortion we can do without.

        • By twelvechairs 2026-02-28 5:52 (1 reply)

          No surprise that many people on YC's site align with Sam Altman's view of the world - right or wrong.

          • By peyton 2026-02-28 9:22 (1 reply)

            I’m just surprised the alignment guy is struggling with alignment. Dodged a bullet I guess.

            • By naruhodo 2026-02-28 11:13

              If I remember my D&D, Lawful Evil is an alignment.

      • By mkozlows 2026-02-28 4:08 (1 reply)

        I think it's definitely true that you should never count on a company to do principled things forever. But that doesn't mean that nothing is real or good.

        Like Google's support for the open web: They very sincerely did support it, they did a lot of good things for it. And then later, they decided that they didn't care as much. It was wrong to put your faith in them forever, but also wrong to treat that earlier sincerity as lies.

        In this case, Anthropic was doing a good thing, and they got punished for it, and if you agree with their stand, you should take their side.

        • By eucyclos 2026-02-28 7:50

          Google's support for the open web is a great example because it was obviously a good thing but also obviously built into their business model that they'd take that position. That made them a much more trustworthy company in those days, because abandoning that position would have required not just losing money for a while but changing their internal structure.

      • By Davidzheng 2026-02-28 8:08

        How much value is there in individual values?

        Many of us remember that OpenAI was also started by people with strong personal values. Their charter said that they would not monetize after reaching AGI, their fiduciary duty is to humanity, and the non-profit board would curtail the ambitions of the for-profit incentives. Was this not also believed by a sizeable portion of the employees there at the time? And what is left of these values after the financial incentives grew?

        The market forces from the huge economic upside of AI devalue individual values in two ways. First, they reward those who choose whatever accelerates AI the most over individuals who are more careful and act on their values--the latter simply lose power in the long run until their virtue has no influence. As Anthropic says in its mission statements, it is not of much use to humanity to be virtuous if you are irrelevant. Second, as is true for many technologies, economic prosperity is deeply linked to human welfare, and slowing or limiting progress leads to real, immediate harm to the human population. Thus any government regulation that works against AI progress will always be unpopular, because the values warning of the future harms of AI are fighting against the values of saving people from disease and starvation today.

      • By ajam1507 2026-02-28 4:31 (5 replies)

        > However, in this instance, it does seem that Anthropic is walking away from money.

        The supply chain risk designation will be overturned in court, and the financial fallout from losing the government contracts will pale in comparison to the goodwill from consumers. Not to mention that giving in would mean they lose lots of their employees who would refuse to work under those terms. In this case, the principles are less than free.

        • By Stratoscope 2026-02-28 5:12 (2 replies)

          > ...the financial fallout from losing the government contracts will pale in comparison to the goodwill from consumers.

          In fact, a friend heard about this and immediately signed up for a $200/year Claude Pro plan. This is someone who has been only a very occasional user of ChatGPT and never used Claude before.

          I told my friend "You could just sign up for the free plan and upgrade after you try it out."

          "No, I want to send them this tangible message of support right now!"

          • By layer8 2026-02-28 6:09 (4 replies)

            Still, you’d need a million people to do that to compensate for the $200M military contract.

            • By AznHisoka 2026-02-28 10:49

              As an aside, there are probably lots of companies that serve the government seriously considering cutting the government as a customer.

              Simply because the money/efficiency they will lose from cutting Claude will surpass the revenue they get from the gov

            • By Tepix 2026-02-28 6:47 (1 reply)

              Does the military pay $200m per month?

              • By layer8 2026-02-28 6:50 (2 replies)

                As the parent stated, the Claude Pro plan is $200 per year, not per month.

                • By Tepix 2026-02-28 6:51

                  Gotcha, mixed it up with the Max plan.

                • By pineaux 2026-02-28 9:17

                  Is the government contract 200m per year? Or for a longer period?

            • By blurbleblurble 2026-02-28 7:44

              Not all that many people

            • By pcthrowaway 2026-03-01 22:28

              I don't think it's easy to compare how this might affect their bottom line.

              Anthropic may gain customers, but OpenAI may lose customers also (or they may even gain customers).

              Maybe OpenAI also has to pay their employees more now for "moral flexibility". Or maybe right-wing devs are more inclined to work there, I don't know.

          • By yencabulator 2026-03-01 17:38

            I'm seeing a lot of "QuitGPT" posts. It seems your friend has friends.

        • By tom_m 2026-03-10 15:26

          I wouldn't be so sure about the courts overturning it. This is yet another opportunity for this administration to test its power. Even if the courts do, it'll be very time consuming and expensive.

          Unfortunately this is really bad for Anthropic. Given how quickly the other providers jumped on the opportunity, you can tell how fast things move here, and ultimately that could mean the difference between surviving in this industry or not.

          I hope something changes, but it can get a lot worse. Individual developers signing up won't help Anthropic. If things get worse, you can rule out Anthropic in most enterprise situations. Supply chain risk means you can't even build software with the thing. Forget about using AI as part of the product, as a user facing feature - people won't be able to build with it as it's part of the supply chain.

        • By aoeusnth1 2026-02-28 5:14 (1 reply)

          Unclear how much damage the designation will do to their dealmaking ability in the meantime. How long will it take for the court to reverse order?

          • By altairprime 2026-02-28 5:20

            The longer it takes, the better the impact on their reputation.

        • By mech422 2026-02-28 4:58 (1 reply)

          The consumer goodwill is working then - it pushed me to upgrade my plan on March 1st... (do they bill on a rolling 30-day cycle? or calendar-month to calendar-month?)

          • By heroh 2026-02-28 5:10 (1 reply)

            It’s not rolling 30 days. Lost 2 days of use by subscribing in February.

            • By mech422 2026-02-28 8:11

              Thanks! I appreciate the heads up!

        • By skissane 2026-02-28 5:46 (1 reply)

          > The supply chain risk designation will be overturned in court,

          I'm honestly uncertain how the courts will rule. You could be right, but it isn't guaranteed. I think a judicial narrowing of it is more likely than a complete overturn.

          OTOH, I think almost guaranteed it will be watered-down by the government. Because read expansively, it could force Microsoft and AWS to choose between stopping reselling Claude vs dropping the Pentagon as a customer. I don't think Hegseth actually wants to put them in that position – he probably honestly doesn't realise that's what he's potentially doing. In any event, Microsoft/AWS/etc's lobbyists will talk him out of it.

          And the more the government waters it down, the greater the likelihood the courts will ultimately uphold it.

          > and the financial fallout from losing the government contracts will pale in comparison to the goodwill from consumers.

          Maybe. The problem is B2B/enterprise is arguably a much bigger market than B2C. And the US federal contracting ban may have a chilling effect on B2B firms who also do business with the federal government, who may worry that their use of Claude might have some negative impact on their ability to win US federal deals, and may view OpenAI/xAI (and maybe Google too) as safer options.

          I guess the issue is nobody yet knows exactly how wide or narrow the US government is going to interpret their "ban on Anthropic". And even if they decide to interpret it relatively narrowly, there is always the risk they might shift to a broader reading in the future. Possibly, some of Anthropic's competitors may end up quietly lobbying behind the scenes for the Trump admin to adopt broader readings of it.

          • By ajam1507 2026-02-28 5:55 (1 reply)

            > OTOH, I think almost guaranteed it will be watered-down by the government. Because read expansively, it could force Microsoft and AWS to choose between stopping reselling Claude vs dropping the Pentagon as a customer.

            A tweet does not have the force of law. Being designated a supply chain risk does not mean that companies who do business with the government cannot do business with Anthropic. Hegseth just has the law wrong. The government does not have the power to prevent companies from doing business with Anthropic.

            • By skissane 2026-02-28 6:16 (1 reply)

              The issue is, even if the Trump admin is misrepresenting what the law actually says, federal contractors may decide it is safer to comply with the administration’s reading. The risk is the administration may use their reading to reject a bid. And even if they could potentially challenge that in court and win, they may decide the cheaper and less risky option is to choose OpenAI (or whoever) instead

              • By ajam1507 2026-02-28 9:05 (1 reply)

                They would have a very good case against the government if that were to happen. I suspect that the supply chain risk designation will not last long (if it goes into effect).

                • By skissane 2026-02-28 10:06 (1 reply)

                  Some vendors will decide to sue the government. Others may decide that switching to another LLM supplier is cheaper and lower risk.

                  And I'm not sure your confidence in how the courts will rule is justified. Learning Resources Inc v Trump (the IEEPA tariffs case) proves the SCOTUS conservatives – or at least a large enough subset of them to join with the liberals to produce a majority – are willing sometimes to push back on Trump. Yet there are plenty of other cases in which they've let him have his way. Are you sure you know how they'll judge this case?

                  • By ajam1507 2026-02-28 10:53

                    > Are you sure you know how they'll judge this case?

                    I'm not even sure it will get that far. There are a million different ways this could go that mean it won't ever come before the Supreme Court. The designation isn't even in effect yet.

                    I do think that if it goes into effect it will eventually be overturned (by the Supreme Court or otherwise). There just isn't a serious argument that Anthropic qualifies as a supply chain risk, and there is no precedent for it.

      • By rustystump 2026-02-28 4:35

        I call this being ethically convenient. I think Anthropic is playing to the crowd. This admin will be gone soon enough, so there's no need to drag the brand into the mud. Just need to hold out. They have enough money that walking away from the money isn't impressive. But pissing off the gov is pretty fun and far more interesting.

      • By conartist6 2026-02-28 4:44 (2 replies)

        That's what worries me so much about the development that OpenAI is stepping in. OpenAI's claim is that they have the same principles as Anthropic, but that claim is easy because it's free right now, when the govt wants to sell the "old bad, new good" story.

        Surely OpenAI cannot but notice that those values, held firmly, make you an enemy of the state?

        • By mojoe 2026-02-28 5:16

          My reading is that OpenAI is paying lip service. Altman is basically saying "OF COURSE we don't want to spy on Americans or murderdrone randos, but OF COURSE the government would never do that, they just told me so (except for the fact that they just cut ties with Anthropic because Anthropic wouldn't let them do that)"

        • By AznHisoka 2026-02-28 10:52

          It's much simpler than that. OpenAI is losing significant market share and this is a Hail Mary that the government will force troves of companies to leave Anthropic.

      • By cperciva 2026-02-28 4:13

        > principles are easy when they're free

        Indeed. If everything is a priority, nothing is a priority; you only know that something is a real priority when you get an answer to the question "what will you sacrifice for this".

    • By msla 2026-02-28 11:12 (1 reply)

      If you're going to be cynical, at least credit them with some brains:

      MAGA isn't going to last forever, and when it collapses, the ones who publicly stood up to it will be better positioned to, I don't know, not face massive legal problems under whatever administration comes next. A government elected by middle-aged moms who use "Fuck ICE" as a friendly greeting isn't going to have any incentives to go easy on Palantir and Tesla.

      • By dust42 2026-02-28 11:18

        Cynical or not, I think it was an absolutely brilliant move: "Mass domestic surveillance of Americans constitutes a violation of fundamental rights". I think they placed their bets on Sama signing a contract with the DoD and here we are, one day later the news that OpenAI signed a contract is out. An absolute PR disaster for OpenAI. And an absolute PR victory for Anthropic.

        I think OpenAI's IPO will be interesting. Not even the conservative media will be happy about mass surveillance of Americans.

        For non-Americans, not much changes: they don't really care about your rights any more than about a pile of dog poo.

    • By BatFastard 2026-02-28 3:12 (1 reply)

      I applaud Anthropic's choice. Choosing principle over money is a hard choice. I love Anthropic's products and wish them success!

      • By randunel 2026-02-28 12:07

        You applaud Anthropic's choice to enhance mass surveillance of non-US people? If Anthropic wants mass surveillance, they should limit it to their own country, not extend it to all other countries, IMO.

    • By array_key_first 2026-02-28 9:06 (1 reply)

      Anthropic's principles are extraordinarily weak from an absolute point of view.

      Don't surveil the US populace? Don't automate killing, make sure a human is in the loop? No, sorry, don't automate killing yet.

      Yeah dude, I'm sure just about any burglar I pull out of prison will agree.

      Listen, yes, it's good compared to like 99% of US companies. But that really speaks more to the absolute moral bankruptcy of most companies than to Anthropic's principles.

      That being said, yes we should applaud anthropic. Because yes this is rare and yes this is a step in the right direction. I just think we all need to acknowledge where we are right now, which is... not a good place.

      • By pcthrowaway 2026-03-01 22:30

        > Because yes this is rare and yes this is a step in the right direction

        ehh.. I'd say it's a smaller step in the wrong direction than it could be.

    • By stouset 2026-02-28 4:47 (1 reply)

      > I think the decision was made because the people making this decision at Anthropic are well-intentioned, driven by values, and motivated by trying to make the transition to powerful AI to go well.

      The entire problem is that this lasts as long as those people are in charge. Every incentive is aligned with eventually replacing them with people who don’t share those values, or eventually Anthropic will be out-competed by people who have no hesitation to put profit before principle.

      • By scrubs 2026-02-28 6:29

        I mean, yah. How else could it be? Xerox, GE, IBM (1990, Gerstner) and zillions of other rock stars fell hard and had to be overhauled. That's why continuous improvement is a thing, and why a platonic take on the world was never a thing.

    • By sensanaty 2026-02-28 10:49 (1 reply)

      The funniest, or perhaps saddest, part (depending on your view) is that the "principles" we're talking about and apparently celebrating here are that they don't want to do DOMESTIC surveillance, and that they don't want FULLY autonomous kill bots yet, because according to the CEO the models aren't there yet.

      Meaning, they're a-okay with:

      - Mass surveillance of non-US peoples (and let's be completely real here, they're in bed with Palantir already, so they're obviously okay with mass surveillance of everyone as long as they're not the ones that will be held culpable)

      - Autonomous murder bots. For now they want a human in the loop to rubberstamp things, but eventually "when the models improve" enough, they're just fine and dandy with their AIs being used as autonomous weapons.

      What the fuck are the principles we're talking about here? Why are they being celebrated for this psychotic viewpoint, exactly?

    • By eh-tk 2026-02-28 3:47

      I also think this will ultimately benefit anthropic in the long run. Outlined in this article: https://open.substack.com/pub/zeitgeistml/p/murder-is-coming...

    • By maddmann 2026-02-28 12:45

      This is an absolute rarity these days. Very appreciative of the true leadership on display here

    • By 19123127 2026-02-28 3:23 (7 replies)

      Why did they work with Palantir then, which is the integrator in the DoD? It does not take a genius to figure out where this was going.

      I don't know why a personal testimony to the effect that "these are the good guys" needs to be at the top of every Anthropic thread. With respect to astroturfing and stealth marketing they are clearly the bad guys.

      • By margalabargala 2026-02-28 3:47

        Anthropic's stance is "we believe in the use of our tools, with safeguards, to assist the defense of the US".

        So of course they would work with Palantir to deploy those tools.

        The issue we're seeing is because the DoW decided they no longer like the "with safeguards" part of the above and is trying to force Anthropic to remove them.

      • By oefrha 2026-02-28 5:02

        They are pretty clear about this:

        > the mass domestic surveillance of Americans

        This they say they don’t like. The qualifiers tell you they’re totally fine with mass surveillance of Palestinians, or anyone else really, otherwise they could have said “mass surveillance”.

        > fully autonomous weapons

        And they’re pretty obviously fine with killing machines using their AI as long as they’re not fully autonomous (at the moment, they say the tech is not there yet).

        All things considered they’re still a bit better than their competitors, I suppose.

      • By lebovic 2026-02-28 4:42 (1 reply)

        Others have addressed the first half of your comment, so I'll focus on the astroturfing claim.

        While I've talked a lot about Anthropic this week, if I was astroturfing for a positive image, I'd be very bad at it [1][2][3].

        [1]: https://news.ycombinator.com/item?id=47150170

        [2]: https://news.ycombinator.com/item?id=47163143

        [3]: https://news.ycombinator.com/item?id=47174814

        • By jrflowers 2026-02-28 5:00

          It doesn’t seem like anybody has addressed “If they are the good guys with principles why did they work with Palantir?”

          There’s a comment that’s sort of handwaving and saying “because America”, but I would imagine that someone with direct knowledge of the people involved would have something more substantive than “thems the breaks” when it comes to working with Palantir

      • By prescriptivist 2026-02-28 4:03

        Anthropic makes it kind of clear in all of their statements that they are not opposed to working with the surveillance state, with the military industrial complex, etc. Their central philosophy, it seems, is not incongruent with working with entities, public or private, that can be construed as imperialist or capitalistic or a combination of both. I actually appreciate their honesty here.

        They exist within the regime of capital and imperialism that all of us who are American citizens exist within. This isn't a cop-out or cope. It's just the reality of the world that we live in. If you are an American and somehow above it, let me know how you live.

      • By jimmydoe 2026-02-28 3:38 (2 replies)

        The further away from God, the more need to believe there are good guys.

        • By Sl1mb0 2026-02-28 5:43

          God has been used as a justification for a lot of human suffering.

          My personal belief is that the closer to god you are, the more easily you can justify evil. How could you not? If my entire belief system is derived from faith, then there are *no* conclusions I could not come to, and therefore anything can be justified.

        • By scrubs 2026-02-28 6:32

          >further away from god

          What is that? Some new bit you're working on?

    • By qsera 2026-02-28 4:14 (1 reply)

      > driven by values

      Would the people who have invested in the company like that? Or would they like the company to make some money? Are they going to piss off their investors by being "driven by values"?

      I mean, please explain to me how "driven by values" can be done when you are riding investor money. Or maybe I am wrong and this company does not take investments.

      So in the end you are either

      1. Funding yourselves: then you are in control, so there is at least a justification for someone to believe you when you say that the company is "driven by values".

      2. Or you have taken investments: then you are NOT in control, and anyone who trusts you when you say the company is "driven by values" is plain stupid.

      In other words, when you start taking investment, you forgo your right to claim virtue. The only claim you can expect anyone to believe is "MY COMPANY WILL MAKE A TRUCKLOAD OF MONEY !!!!"

      • By cornel_io 2026-02-28 4:31 (1 reply)

        As an investor in Anthropic, I'd say that anyone who wasn't aware of where they stood on various values issues the whole time should not have been putting money in; it was not hidden.

        • By qsera 2026-02-28 4:44 (1 reply)

          How much is your investment (you don't have to be exact)?

          The bottom line is that if the investment is not profitable, then there will be less and less investment, because fewer and fewer can afford to lose money and stick to their values, until no one will be investing, however high your values might be...

          Sticking to your values when it costs growth is not sustainable for publicly traded companies...

          • By Kim_Bruning 2026-02-28 12:04 (1 reply)

            Anthropic is a public benefit corporation. Investors who put money in knew this. It's in the corporate charter. The corporate charter is a public document.

            Fiduciary duty means the board and officers must act in accordance with the governing documents of the corporation.

            Even a regular corporation doesn't need to be just for the purpose of "money goes up". The board has discretion on how they create value.

            • By qsera 2026-02-28 13:54

              > public benefit corporation

              > The board has discretion on how they create value.

              It does not make much of a difference. If the investors don't get their investment returned with interest (as $$$), the majority of them are not going to invest further. The exception is the set of investors who invest based on the company's ethical stand, which is probably only a small fraction of all the investment it has received.

    • By jmount 2026-02-28 2:49

      So many tech companies have the "high values" screed that it really just seems like a standard step in the money plan.

      • By ParentiSoundSys 2026-02-28 3:04

        Practically the entire tech industry, including many of the higher ups currently camping out on the right, used to be firmly in a sort of centrist-with-social-justice-characteristics camp. Then many of those same people enthusiastically stood with Trump at his inauguration. It's completely reasonable that people have their doubts now.

        It's also completely reasonable to expect that if Anthropic is the real deal and opposed to where the current agenda setters want to take things, they'll be destroyed for it.

        • By rustystump 2026-02-28 4:38

          Destroyed? No. But a new sheriff is gonna show up while the existing one exits stage left with big bags of nuts.

        • By teiferer 2026-02-28 5:31

          > enthusiastically stood with Trump

          I think "enthusiastically" looks different. They had to choose between kissing Trump's butt to do good business for 4 years or seeing their companies at a severe disadvantage. I'm not saying what they did was good, nor do I support it. But from a business angle it's not hard to see why they chose to do that. If you asked them privately, off the record, I'm sure most of them would tell you that Trump is an idiot and dangerous.

          • By ParentiSoundSys 2026-02-28 6:24

            Mark Zuckerberg was in a big hurry to call Trump a "badass" in the wake of the Butler hoax, and is clearly trying to appeal to the right with his cultivated jiu jitsu Chad image. It doesn't mean a damn thing what these CEOs are willing to say behind closed doors when their public decisions are to remain in lockstep with the agenda and fire anyone who asks questions about whether it's the right one.

    • By white_dragon88 2026-02-28 3:08

      [dead]

    • By Rapzid 2026-02-28 2:45

      [flagged]

      • By educasean 2026-02-28 2:49

        All corporations are to an extent. It’s a question of magnitude, not absolutes.

        You, too, are driven by money. Yet I’m certain you maintain a set of principles and values. Let’s keep the discussion productive yeah?

        • By Rapzid 2026-02-28 2:52

          Sure, where is your productive output? Cause that's drivel.

          Anthropic kept referring to Hegseth as "Secretary of War" and the DoD as "Department of War". Which is horseshit. This whole thing is Anthropic flailing.

          • By solenoid0937 2026-02-28 2:59

            Come on. That is because this is a negotiation between Anthropic and the DoD and they understandably don't want to burn bridges.

            Do you just expect Anthropic to totally blow up all bridges to the government? What do you actually want them to do?

            Reading your comment history I'm not sure they could do anything to satisfy you.

            • By Rapzid 2026-02-28 3:04

              I'm not the one claiming they have principles so.. No? I expect them to do whatever they think they need to at any given moment to enrich themselves.

              Their "moat" is nothing more than momentum at this point. They are AOL on an accelerated timeline.

          • By ParentiSoundSys 2026-02-28 3:01

            Even as someone pretty staunchly opposed to this stupid "Gulf of America" Jahr Null bullshit from the Trump administration, I actually think the new labels are more honest about these institutions and their intended purpose.

    • By arjie 2026-02-28 2:52

      This is a pretty classic mistake most people who are in high-profile companies make. They think that some degree of appealing to people who were their erstwhile opponents will win them allies. But modern popular ethics are the Grim Trigger and the Copenhagen Interpretation of Ethics. You cannot pass the purity test. One might even speculate that passing the purity test wouldn't do anything to get you acceptance.

      Personally, I wish that the political alignment I favour was as Big Tent as Donald Trump's administration is. I think he can get Zohran Mamdani in the room and say "it's fine; say you think I'm a fascist" and then nonetheless get what he wants. But it just so happens that the other side isn't so. So such is life. We lose and our allies dwindle since anyone who would make an overture to us, we punish for the sin of not having been born a steadfast believer.

      Our ideals are "If you weren't born supporting this cause, we will punish you for joining it as if you were an opponent". I don't think that's the path to getting what one wants.

      • By fladrif 2026-02-28 3:05

        > political alignment I favour was as Big Tent as Donald Trump's administration is

        I'm not sure how accurate this sentiment is. Your desire is to embrace your enemy without resolving the differences, and get what you want. It's not clear whether you're advocating compromise and negotiation, or just embracing for the sake of embracing while doing what you wanted all along.

        And evaluating Trump's actions against this sentiment doesn't support the negotiation-and-compromise interpretation. Given the situation with tariffs and ICE enforcement, there is no indication of negotiation or compromise, only complete fealty/domination.

        So as grandiose and noble as your sentiment is, Donald Trump is hardly the epitome of it, as you seem to suggest.

        • By arjie 2026-02-28 3:16

          I think the differences in this situation were that I do not want AI used in domestic surveillance or autonomous weapons, and Anthropic holds to that position.

          I think Donald Trump has pretty much let Zohran Mamdani operate without applying the kind of political pressure he has applied to other people, notably his predecessor Eric Adams. Also, I think saying "let people be your allies when they take your position" is less "grandiose and noble" than demanding someone agree on all counts before you will accept any political alignment. But it's fine if everyone else disagrees. It's possible there really just isn't a political group which will accept my views and while that's unfortunate because it means I can't get all that I want, I think it'll be okay.

          One could reasonably argue that the meta-position is to either join the Republicans full-bore (somewhat unavailable to me) or to at least play the purity test game solely because that's the only way to have any influence on outcomes. If it comes to that, I'll do it.

          • By dralley 2026-02-28 3:45

            You are making a mistake in thinking that Trump thinks of these things in political terms. Trump sees a charismatic and popular politician and he wants to associate with them on that basis alone, because Trump has a deep psychological need to be liked. Mamdani understands his psychology and is able to exploit it well by playing his own attributes to his advantage.

            Politically, it's not as if Trump tolerates dissent within the Republican party; he constantly threatens and berates into submission anyone who shows defiance. It's precisely because Mamdani is not in his tent, and not really much of a threat to his power, that he is willing to deal with him that way.

          • By fladrif 2026-02-28 3:23

            I don't understand: your position is the same as Anthropic's, yet you disagree with their stance?

            And I wouldn't take the case of Trump and Mamdani as the exemplar of Trump's overall behavior towards opponents. The amount of evidence to the contrary is overwhelming.

            • By arjie 2026-02-28 3:47

              Anthropic's adherence to their stated principles was never tested, and their willingness to work with the DoD made it seem like they didn't stand by those principles strongly, so I wasn't happy with that. This action shows that they are willing to lose big contracts in order to stand by their stated principles. I like that.

              In any case, I think I've said all there is for me to say on the subject and everyone seems to disagree. I'll take the hint.

      • By ParentiSoundSys 2026-02-28 3:18

        Zohran Mamdani has yet to demonstrate that he poses any serious impediment to Trump and the agenda of Trump's owners.

        • By arjie 2026-02-28 3:20

          I think there is a marked difference in Trump's rhetoric toward Mamdani before the meeting at the White House and after.

          • By bigyabai 2026-02-28 3:25

            I think you are extrapolating a bit too far from an outlier data point. Trump has had several other meetings (eg. Zelenskyy) go sideways for no apparent reason.

            • By dnautics 2026-02-28 3:34

              And he has had several meetings change his opinion of the other party for no apparent reason (e.g. Zelensky).

              Extrapolation is futile.

      • By zephen 2026-02-28 3:22

        Your contention that Trump's administration is big tent is risible.

        Political witch hunts, women and minorities forced out of the military, and kicking out all the allied countries that used to be in the tent with us?

        Bullshit of the finest caliber.

        • By ragall 2026-02-28 6:34

          Yes, the Trump administration is a big tent of politicians who hold incompatible opinions and are allowed to stay as long as they display personal allegiance to Trump.

      • By moogly 2026-02-28 6:37

        > he can get Zohran Mamdani in the room and say "it's fine; say you think I'm a fascist" and then nonetheless get what he wants.

        Is your perception that warped? Mamdani is the one who knows how to play Trump as a fiddle, and the one who walks away with something from the exchange.

    • By prng2021 2026-02-28 5:53

      “I think the decision was made because the people making this decision at Anthropic are well-intentioned, driven by values, and motivated by trying to make the transition to powerful AI to go well.”

      Every single one of these CEOs happily pirated unimaginable amounts of copyrighted content. That directly hurt millions of real human beings, not just the prior creators but also future ones, whose potential for success it crushed.

      https://www.susmangodfrey.com/wins/susman-godfrey-secures-1-...

  • By parl_match 2026-02-28 1:44

    Anthropic's stance here is admirable. If nothing else, their acknowledgement of not being able to predict how these powerful technologies can be abused is a bold and intelligent position to take.

    • By dmix 2026-02-28 2:23

      It’s not just admirable, it’s the obvious position to take, and any alternative is head-scratching.

      It’s clear that this is mostly a glorified loyalty test over a practical ask by the administration. Strangely reminiscent of Soviet or Chinese policies where being agreeable to authority was more important than providing value to the state.

      • By kyle-rb 2026-02-28 2:54

        If it's a loyalty test then you'd think the DoD would be willing to let them "fail" and simply drop the contract, but instead they're threatening to label Anthropic a supply chain risk.

        If we're going by Occam's razor: it's Friday so Pete probably started drinking ~10:30-11am.

        • By dmix 2026-02-28 3:01

          This administration has repeatedly shown it will bully or take an outrageous negotiating position just to gain fealty. Whether they get anything, or whether the dispute is actually what the label says, should always be treated with skepticism, especially these days with social media information wars. That’s the benefit of realpolitik when you’re a superpower: you often don’t actually need anything, you can just make an example of people to keep the flock in check.

          • By kyle-rb 2026-02-28 3:19

            It seems like they'd have a stronger negotiating position if they had an alternative contractor waiting in the wings before they accused Anthropic of being woke traitors, as opposed to a threat to migrate away over the next 6 months.

            But again, the sophistication of their strategery might also have a negative correlation with Hegseth's BAC.

            • By ModernMech 2026-02-28 5:47

              No one accused them of being competent negotiators. Remember, the secret behind the "Art of the Deal" is to be obstinate and abusive until everyone settles just to stop dealing with you.

            • By Grimburger 2026-02-28 4:32

              Grok was approved for DoD work only a few days ago, they have an alternative if they want.

              The Pentagon, much like everyone else, will only want to use the best model available though.

        • By jart 2026-02-28 4:29

          They're not threatening to do that. They just did. Read the tweet linked in the article.

          > In conjunction with the President's directive for the Federal Government to cease all use of Anthropic's technology, I am directing the Department of War to designate Anthropic a Supply-Chain Risk to National Security. Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic. Anthropic will continue to provide the Department of War its services for a period of no more than six months to allow for a seamless transition to a better and more patriotic service. https://x.com/SecWar/status/2027507717469049070?s=20

          This has never happened before. It just goes to show how overextended the USG is these days. America is broke. Anthropic is about to IPO. Most stock market money comes from foreign countries like Japan these days. All those people are going to trust Anthropic more if they believe the company is neutral among nations and acting as a check and balance to power.

          • By politician 2026-02-28 4:49

            "This has never happened before." US could compel Anthropic to act; simply not doing business with them is restraint, not escalation.

            • By jart 2026-02-28 6:09

              U.S. authorities labeled them a supply chain risk. The military went on Twitter and basically labeled Anthropic an enemy of the state. The most popular company on Earth. They did that. If the USG were able to issue some kind of secret court order compelling them to act, and keep it covert, they would have done it.

        • By ordu 2026-02-28 4:27

          > If it's a loyalty test then you'd think the DoD would be willing to let them "fail" and simply drop the contract, but instead they're threatening to label Anthropic a supply chain risk.

          It is not just a test, it is PR of sorts. They want to bully everyone into loyalty.

          > If we're going by Occam's razor: it's Friday so Pete probably started drinking ~10:30-11am.

          If we're going by Occam's razor, then we should cut away the drinks. The USSR started its terror not because someone was drunk; it was a deliberate action to make everyone afraid to do anything. They targeted people at random and executed them, accusing them of counterrevolution or espionage. The goal was to instill fear.

          Now Putin's regime does the same: they are instilling fear in people. It is a basic authoritarian reflex to make people afraid of being marked as disloyal. They prefer to do it in unpredictable ways, to create uncertainty about where the red lines are, so people don't even try toeing them.

          Trump is not very skilled in the mechanics of terror. He is predictable, which is unfortunate for a would-be dictator. It is incompetence, and when a hypothesis resorts to incompetence, that is a bad sign for the hypothesis. But AFAIK no hypothesis explaining Trump can avoid introducing his incompetence into the picture. In this light, a hypothesis's reliance on incompetence loses its discriminatory power.

        • By TheGRS 2026-02-28 5:27

          Everyone in the administration is completely drunk on power. They truly believe the government should be allowed to do whatever it pleases, despite having been vehemently against previous governments telling their constituents what to do. Such nonsense; they hold no values, they only want complete power.

          I don't know how the business leadership community could watch this whole affair and still be in support of them AT ALL. This is well past getting a crappy twitter rant from Trump on the weekend that maybe one could ignore until the next rant.

      • By ilaksh 2026-02-28 6:06

        My interpretation is that this is what happens when you make a Fox News host Secretary of Defense.

        I think he is just too dumb to figure out a way to "finesse" the situation so the NSA etc. can use it however they want, or at least to know that it's politically intractable to make it a public fight.

    • By stavros 2026-02-28 1:57

      I'd admire them if they took a principled or moral stance on AI. As it stands, they're saying "we don't want fully autonomous weapons because they might kill too many Americans by accident while trying to kill non-Americans" and "we don't want AI to surveil Americans, but anyone else, sure".

    • By by364 2026-02-28 2:51

      [flagged]

    • By Rapzid 2026-02-28 2:28

      [flagged]

  • By hank2000 2026-02-28 1:38

    Stay strong Anthropic. We just like you more for this.

    • By abtinf 2026-02-28 2:21

      I don't know if I like Anthropic more, but I certainly like their competitors much less now.

      The new thing that I know about the leading AI companies that aren't Anthropic (i.e. OpenAI, Google, Grok, etc.) is that they knowingly support using their tools for domestic mass surveillance and in fully autonomous weapon systems.

      • By SilverElfin 2026-02-28 2:32

        Exactly - the implication is that every other company is absolutely open to surveilling you and killing you. They’re complicit. They participate in whatever the regime calls for.

      • By phpnode 2026-02-28 2:33

        Is that actually the case? Or are they just not supplying LLMs to the DoW while Anthropic is?

HackerNews