Dario Amodei calls OpenAI’s messaging around military deal ‘straight up lies’

2026-03-04 23:51 | techcrunch.com

Anthropic gave up its contract with the Pentagon over AI safety disagreements — then OpenAI swooped in.

Anthropic co-founder and CEO Dario Amodei is not happy — perhaps predictably so — with OpenAI chief Sam Altman. In a memo to staff, reported by The Information, Amodei referred to OpenAI’s dealings with the Department of Defense as “safety theater.”

“The main reason [OpenAI] accepted [the DoD’s deal] and we did not is that they cared about placating employees, and we actually cared about preventing abuses,” Amodei wrote.

Last week, Anthropic and the U.S. Department of Defense (DoD) failed to come to an agreement over the military’s request for unrestricted access to the AI company’s technology. Anthropic, which already had a $200 million contract with the military, insisted the DoD affirm that it would not use the company’s AI to enable domestic mass surveillance or autonomous weaponry.

Instead, the DoD — known under the Trump administration as the Department of War — struck a deal with OpenAI. Altman stated that his company’s new defense contract would include protections upholding the same red lines that Anthropic had asserted.

In a letter to staff, Amodei refers to OpenAI’s messaging as “straight up lies,” stating that Altman is falsely “presenting himself as a peacemaker and dealmaker.”

Amodei might not be speaking solely from a position of bitterness here. Anthropic specifically took issue with the DoD’s insistence that the company’s AI be available for “any lawful use.” OpenAI said in a blog post that its contract allows use of its AI systems for “all lawful purposes.”

“It was clear in our interaction that the DoW considers mass domestic surveillance illegal and was not planning to use it for this purpose,” OpenAI’s blog post stated. “We ensured that the fact that it is not covered under lawful use was made explicit in our contract.”


Critics have pointed out that the law is subject to change, and what is considered illegal now might end up being allowed in the future.

And the public seems to be siding with Anthropic. ChatGPT uninstalls jumped 295% after OpenAI made its deal with the DoD.

“I think this attempted spin/gaslighting is not working very well on the general public or the media, where people mostly see OpenAI’s deal with the DoW as sketchy or suspicious, and see us as the heroes (we’re #2 in the App Store now!),” Amodei wrote to his staff. “It is working on some Twitter morons, which doesn’t matter, but my main worry is how to make sure it doesn’t work on OpenAI employees.”



Comments

  • By mrandish 2026-03-05 1:37

    When @sama announced within hours that OAI was replacing Anthropic with the "same conditions," it was clear that either the DoW or OAI (or both) were fudging. DoW balked at Anthropic's conditions, so OAI's agreement must have made the "conditions" basically unenforceable.

    And sure enough, my reading of it left the impression the OAI conditions were basically "DoW won't do anything which violates the rules DoW sets for itself."

    • By _heimdall 2026-03-05 3:02

      I'd put money on OpenAI hiding behind the "all lawful use" phrasing to claim high levels of protection.

      He also claimed that they would build rules into the model the DoD would use to prevent misuse. In other words, he claims OpenAI will quickly solve alignment and build it right in... I wouldn't hold my breath.

      • By conception 2026-03-05 3:10

        All lawful use. And then they followed up with “intentionally doing illegal things.” If they happen to accidentally do illegal things, OpenAI is ok with it.

        • By aardvarkr 2026-03-05 3:15

          I hate this so much. The NSA’s spying on everyone in 2010 was “legal,” and I can only imagine how much worse it is now, with AI to follow your digital footprint around everywhere. Too bad we don’t have any more whistleblowers like Snowden.

          • By lukan 2026-03-05 8:33

            "Too bad we don’t have any more whistleblowers like Snowden"

            Probably because most don't want to end up in Russia?

            • By Imustaskforhelp 2026-03-05 11:33

              I feel so sad about Snowden sometimes. I tried reading the first few pages of his book, about how when he was growing up, he could be anyone on a forum and there was this sense of anonymity and, at the same time, just freedom. And later on, when he saw just how far the government's overreach went, he did what others couldn't.

              It wasn't as if there weren't any other contractors like Snowden, but there were no other whistleblowers like Snowden

              And where did that leave him? In a country far from his motherland, worried about his safety, called god knows what by the country at home, while most people don't even care.

              Snowden didn't do it for the money, he did it for what he felt was right and that's so rare.

              It's so sad that when I searched for Snowden on YouTube, the first thing I found was an ex-CIA agent claiming Snowden wasn't innocent and had to befriend Russia, when in reality that was only because the US would have literally killed him and made an example out of him for blowing the whistle on such large-scale mass surveillance.

              “What kind of asshole reveals the fact we’re the assholes, then doesn’t let us kill him!” is one heck of a comment I found.

              Also: we will charge the whistleblower with death, but we will not take any action against the act that was whistleblown in the first place (:

              • By Departed7405 2026-03-05 12:16

                I agree. What people forget is that Snowden didn't intend to end up in Russia. He wanted to go from Hong Kong (where he thought he would be safe, but realised extradition was still an option) to Ecuador. But he feared the US would intercept his plane if he flew through US or US-allied airspace. So his plan was to go from HK to Russia, then to Cuba, and finally Ecuador.

                Russia stopped him because US had cancelled his passport.

                • By veidr 2026-03-05 16:33

                  That fear proved well-grounded. While it probably doesn't seem as big of a deal now — in this era when we just serially assassinate heads of state we don't like without any pretense otherwise — the US indeed did direct its European allies to intercept the plane of Bolivian president Evo Morales, based on the (incorrect, as it turned out) suspicion that Snowden was on board.

                  https://en.wikipedia.org/wiki/Evo_Morales_grounding_incident


      • By thisisit 2026-03-05 4:25

        The most likely scenario is that if it does something “unlawful” and is found out, they claim “these machines are black boxes and we don’t know what went wrong,” then set up an investigative committee to find out.

        • By nso 2026-03-05 5:37

          * spawn 8 investigative agents

        • By genxy 2026-03-05 4:54

          When shit hits the fan they are going to blame AI, but then not even use hand sanitizer. They will 100% be using OAI as a scapegoat, although I'd like to see the OAI goat stay and someone else run into the woods.

          All Lawful Use is a tautology with fascists because they cannot break laws by definition.

          • By delaminator 2026-03-05 8:08

            Yeah, here are some examples of all these fascists doing exactly that:

            Soviet Union - The show trials of the 1930s were conducted with full legal apparatus: confessions, judges, verdicts. Stalin's purges operated through legally constituted troikas. Entirely "lawful" by Soviet law.

            East Germany (DDR) - The Stasi's surveillance and harassment programmes were codified in law. When the wall fell, many Stasi officers genuinely argued their conduct was legal under GDR statute: a defence that West German courts largely rejected.

            Castro's Cuba - Mass executions after the revolution were conducted by legally constituted revolutionary tribunals. Castro explicitly defended this on legality grounds when challenged by foreign press in 1959.

            Chavez/Maduro's Venezuela - Suppression of opposition media, jailing of political opponents was consistently defended as operating within Venezuelan law, which was progressively rewritten to make it so. Classic self-referential legality.

            Mao's Cultural Revolution - The revolutionary committees had legal standing. Persecution of intellectuals and landlords proceeded through formal (if kangaroo) legal processes.

            • By oscaracso 2026-03-05 10:02

              You should ask the language model that output this text for the definition of 'whataboutism,' and whether the comment you've posted responds meaningfully to the discussion at hand.

              • By abustamam 2026-03-05 10:38

                I think that, similar to how AI-generated comments are frowned upon, "this comment was generated by AI" comments should also be frowned upon. It's really annoying to see a well-written comment and replies that don't address it but just accuse the poster of having used AI to generate it.

              • By delaminator 2026-03-05 10:20

                [flagged]

                • By sebastiennight 2026-03-05 11:28

                  > you should ask the GP about his use of the word fascist on everything he doesn't like.

                  If mirror dot org actually existed, you might want to look into it, because your long list of examples has one related to 1930s Germany, and the rest has nothing to do with the political definition of "fascism"?

                  Your point about legality was valid, but you're undermining it with the sarcasm.

                • By mikkupikku 2026-03-05 11:57

                  Nothing deep going on there. Fascism in modern informal parlance is a synonym for authoritarianism. Those who object most loudly to Stalin being called a fascist are usually themselves actually fascists, or stalinists. Everybody else gets it.

                • By genxy 2026-03-05 16:20

                  “Everything I don't like” is a pretty broad brush. I have only used it for the Trump regime.

                  https://en.wikipedia.org/wiki/Ur-Fascism

                  https://www.rollingstone.com/politics/politics-news/trump-su...

        • By n6hdhf 2026-03-05 6:53

          More like they will feed the machine bullshit like “WMDs exist in Fiji. My gut says so. My mom always believes me.” The machine will call it out. Then they'll want an override. The machine will log it. Then they'll want an erase-log button, etc. Institutions and rules didn't fall from the sky; they evolved to dampen the damage caused by exactly this behavior.

      • By SoftTalker 2026-03-05 5:01

        OpenAI: Is that... legal?

        DoD: I will make it legal.

      • By vasco 2026-03-05 13:54

        Alignment is to the user of the LLM, not to some fuzzy interpretation of human rights. So solving alignment for the DoW is just "don't refuse to bomb people when I ask you".

        • By _heimdall 2026-03-05 14:03

          That's absolutely not the definition people use for alignment. Safety discussions often circle around alignment because people are worried about AI doing things that are bad for humanity as a whole, not because it goes off track from any one user's goals. It would be terrible for safety if alignment meant I could ask to hack the TSA and the LLM would do it.

          Definitions aside, what would be required for individual alignment is exactly the same as for collective alignment. The only difference is the goals and who writes them; either way, the LLM is somehow forced to follow those rules no matter what.

          • By vasco 2026-03-05 21:55

            That's safety, not alignment. Alignment is necessarily to the user.

    • By JumpCrisscross 2026-03-05 4:40

      For consumer ChatGPT accounts, go to their privacy portal [1] and, first, delete your GPTs, and then, second, delete your account.

      [1] https://privacy.openai.com/policies?modal=take-control

      • By Towaway69 2026-03-05 5:04

        How do I cancel my subscription to the DoW?

        The bigger picture is that the DoW got what it wanted and it got it by threatening one company while the other did its bidding.

        • By davidw 2026-03-05 5:25

          By voting.

          • By raincole 2026-03-05 8:04

            Voting changes the name of the department. It doesn't change if the government wants mass surveillance.

            See PRISM.

            • By davidw 2026-03-05 14:35

              This is my Senator:

              https://www.wyden.senate.gov/issues/domestic-surveillance-re...

              He may not be perfect on everything, but elect more people like him and it starts moving the needle. Or elect some more that are even more opposed to some of these things. It doesn't happen overnight. Change is difficult.

              • By mmooss 2026-03-05 16:24

                > Change is difficult.

                I agree, though notice that the GOP/MAGA have made, and continue to make, enormous changes. The difference is that they believe they can do it while others sit around talking about hopelessness and powerlessness. The only difference is belief.

            • By mmooss 2026-03-05 16:23

              > Voting changes the name of the department.

              You're conceding that the name has already changed, without voting.

              > It doesn't change if the government wants mass surveillance.

              That can be prevented by Congress with enough political will.

          • By don_esteban 2026-03-05 5:52

            Did the NSA's spying on everyone change between Democratic and Republican governments?

            • By ori_b 2026-03-05 6:03

              Did you vote in the primaries for a candidate that might change it?

              • By don_esteban 2026-03-05 7:28

                Did democrats offer primaries in the last elections?

                Did voting for Bernie Sanders in the last two primaries (especially the ones when Trump won for the first time) amount to anything?

                I wonder how long the American public can keep up the self-delusion that the elections are anything but a theater for the naive, to keep up the pretense that the public has any say in things that matter.

                How much has the current administration asked the public about going to war with Iran?

                • By wilg 2026-03-05 7:32

                  https://en.wikipedia.org/wiki/2020_Democratic_Party_presiden...

                  https://en.wikipedia.org/wiki/2024_Democratic_Party_presiden...

                  Skill issue. Run your candidate. Convince people to vote for them.

                  > How much has the current administration asked the public about going to war with Iran?

                  THE ELECTIONS are how the public weighs in.

                  • By tremon 2026-03-05 21:11

                    > THE ELECTIONS are how the public weighs in

                    That's the second box only. There's also the soapbox (that you also referred to), the jury box and ultimately the ammo box.

                  • By don_esteban 2026-03-05 7:51

                    Re: skill issue. Money issue. This is not a level playing field; it is severely tilted, and the referee is bought.

                    But you are saying: You lost fair and square, wait 4 years to have any say in what is going on.

                    Re: THE ELECTIONS are how the public weighs in.

                    When the choice is between Tweedledee and Tweedledum, the public's choice is meaningless.

                    To say nothing about politicians outright shamelessly lying (e.g. Trump campaigning on 'no more wars').

                    • By astrange 2026-03-06 1:45

                      Bernie had the most money. Political campaigns get money from people using ActBlue to give them $25.

                      (Also, political donors are unusually engaged upper-middle-class people which means they are left of the average voter.)

                    • By wilg 2026-03-05 7:58

                      Money issue is also a skill issue, but I have no doubt in the era of free media someone could figure it out.

                      Sorry I didn't invent the idea that there are federal elections every two years, I'm just telling you that you have to win them. Bonus points: this is also how you can change the election schedule or political system!

                      If you're saying both candidates were bad when one was Trump, and the other was Hillary, Kamala, or Joe, then you don't have very good judgement. I agree Trump lying about not starting a war was bad. Many of us have said for years that he is a terrible liar. Please help us.

                      • By don_esteban 2026-03-05 8:55

                        I agree that Clinton/Harris/Biden are not equally bad as Trump.

                        Trump is monstrously bad (= force the shit hitting the fan NOW), the democratic alternatives were just 'normally' bad (= continue the same old crap driving the shit closer to the fan, ignoring the looming disaster).

                • By JumpCrisscross 2026-03-05 12:17

                  > How much has the current administration asked the public about going to war with Iran

                  Here is the 2026 Senate map [1]. Do you suggest any of them will flip over Iran? (I don’t. The folks who regularly vote simply don’t show any sign that this is a priority. Folks who stay at home grumbling don’t matter.)

                  [1] https://en.wikipedia.org/wiki/2026_United_States_Senate_elec...

                • By wasabi991011 2026-03-05 7:48

                  > Did voting for Bernie Sanders in the last two primaries (especially the ones when Trump won for the first time) amount to anything?

                  He didn't win the primaries though. It would have amounted to something if he got enough votes.

                  • By don_esteban 2026-03-05 8:11

                    1) He did not win the primaries, in significant part because the DNC was heavily against him. The level-playing-field thing.

                    2) If he won the primaries, there is still no guarantee that that would have amounted to anything.

                    First, he might not have won the elections (mainstream media and the whole ruling elites were heavily against him). And even if he won, he might not have been able to do much against the permanent state.

                    I still think the main cause of Trump's wins is the deep disillusionment of Democratic voters with Obama's failure (inability/unwillingness) to effect meaningful change.

                    • By wilg 2026-03-05 8:29

                      Everything you're saying here is the exact delusional cynicism that got us here. Stop.

                      • By don_esteban 2026-03-05 8:59

                        Yes, my stance is cynical.

                        Sadly, it is also factually correct (i.e. not delusional).

                        Which of my statements are you contesting?

                        From my point of view, your stance (play fairly, according to the rules set by your stronger opponent) is delusional. Note that the opponent is not 'republicans', but the whole ruling elites.

                        And no, I can't help you, I am not USian, just an outside observer. Sadly, due to its weight, whatever USA does, heavily influences everybody else as well.

                        • By JumpCrisscross 2026-03-05 12:15

                          > it is also factually correct

                          No, it isn’t. Sanders’ supporters didn’t have the votes. That’s a fact.

                          If people believe in something, they should call their electeds and vote. The fact that a lot of people with a certain confluence of views (privacy, anti-war, et cetera) are too lazy to do either (regardless of post rationalization), but not self aware enough to not complain about it, is delusional cynicism.

                          • By don_esteban 2026-03-05 15:23

                            Note that I did not say he won the primaries.

                            I said the leadership of the democratic party did dirty tricks to prevent him winning.

                            The mainstream media was also against him.

                            Not anywhere close to a level playing field.

                            Note, that I am not against voting or calling your elected officials and all the related stuff. That is necessary. But, sadly, far from sufficient. If you think that that is sufficient, you are delusional.

                            Your subsequent generalizations are lazy and unsubstantiated, in fact they fit the classical smear patterns established by the mainstream media.

                            • By JumpCrisscross 2026-03-05 18:44

                              > Not anywhere close to a level playing field

                              But still, ultimately, turnout was turnout. Media saying mean things about your side isn’t a real excuse, Trump has been saying the same for a decade.

                              > they fit the classical smear patterns established by the mainstream media

                              Of course they must. In the meantime, the issues I care about seem decently reflected (outside privacy and war, where I concede most Americans who share my views are lazy, delusional and nihilistic). I’ve even had the opportunity to help write some state and federal legislation. So I guess I should be okay with the lack of political competition.

                • By davidw 2026-03-05 15:22

                  > Did democrats offer primaries in the last elections?

                  Uh, yeah? I voted for Biden/Harris.

                  And in any case, focusing almost exclusively on one race is part of the problem. Where I live, we also had a Dem primary for the house district, and a more electable candidate won - and then went on to win in the general. It was one of the very few red->blue flips in 2024.

                  Our former congresswoman, incidentally:

                  https://newrepublic.com/post/207234/trump-labor-secretary-ch...

                  Then there are all the races for school boards, city council, county commission and all those things that provide the base and the bench to build off of.

              • By jrflowers 2026-03-05 7:04

                I like that I can’t tell if this is some sort of admonition for not voting centrist enough in a primary that didn’t happen or for not voting left enough in a primary that did not happen. It seems like if you’re going to be so bold as to do a callout you might as well say what for (and why you either picked or specifically skipped a primary that did not happen)

            • By vkou 2026-03-05 8:22

              No.

              ... But the government flooding cities with thousands of masked thugs with a license to do whatever they want... has so far been an entirely Republican thing.

              There are more colours to the world than pure black and pure white. There are also a million shades of grey in between, and most of us have the ability to distinguish between them.

        • By delaminator 2026-03-05 8:10

          Here's a simple unsubscribe guide:

          https://usa.gov/renounce-lose-citizenship

          • By abustamam 2026-03-05 10:40

            Unless we move out of the country though, we are still technically subscribed to the DoW (still need to pay taxes etc)

      • By teruakohatu 2026-03-05 4:59

        Why?

        If you have so little faith that they'll honour the privacy controls, you should delete your non-consumer account too.

    • By oxdgd38 2026-03-05 2:18

      We know how this story will end for Dario. See Oppenheimer, Turing, Lavoisier, Galileo, Socrates, etc. Power does not reside in the hands of people with knowledge or even wealth. And most technical people have not taken a political philosophy course, or even a philosophy course. The Ring of Gyges story is 4000 years old.

      • By beepbooptheory 2026-03-05 2:29

        I do not believe the Ring of Gyges preceded Plato making it up for The Republic... Where are you getting 4000 years?

        Also, maybe I'm not seeing the message or connection here... That myth isn't really about who has power or not, right? It's kind of just a trite little "why you should do good even when no one is watching" thing. It just serves Socrates for his argument with Thrasymachus, and leads us into book 2, where it really gets going with Glaucon and all that. This is from memory, so I might be a little off.

        • By oxdgd38 2026-03-05 2:55

          I got it from Tamar Gendler's philosophy and human nature course on Open Yale Courses. She says it was a popular folk story, passed down orally well before it was written down in a book. Plato used it because people grew up hearing the story.

          The story is asking: what's the source of morality? Who decides where the lines are? And it's not scientists. Science produces the Ring.

          • By beepbooptheory 2026-03-05 4:09

            I was wrong, it's in Book II. This is "Socratic irony": it's Glaucon speaking, assuming the position of an argument from earlier. Socrates himself of course doesn't believe this conclusion... we are going to learn later that justice is a form, based on the Good! This is all the doxa of one still in the cave.

            > According to the tradition, Gyges was a shepherd in the service of the king of Lydia; there was a great storm, and an earthquake made an opening in the earth at the place where he was feeding his flock. Amazed at the sight, he descended into the opening, where, among other marvels, he beheld a hollow brazen horse, having doors, at which he stooping and looking in saw a dead body of stature, as appeared to him, more than human, and having nothing on but a gold ring; this he took from the finger of the dead and reascended. Now the shepherds met together, according to custom, that they might send their monthly report about the flocks to the king; into their assembly he came having the ring on his finger, and as he was sitting among them he chanced to turn the collet of the ring inside his hand, when instantly he became invisible to the rest of the company and they began to speak of him as if he were no longer present. He was astonished at this, and again touching the ring he turned the collet outwards and reappeared; he made several trials of the ring, and always with the same result—when he turned the collet inwards he became invisible, when outwards he reappeared. Whereupon he contrived to be chosen one of the messengers who were sent to the court; where as soon as he arrived he seduced the queen, and with her help conspired against the king and slew him, and took the kingdom. Suppose now that there were two such magic rings, and the just put on one of them and the unjust the other; no man can be imagined to be of such an iron nature that he would stand fast in justice. No man would keep his hands off what was not his own when he could safely take what he liked out of the market, or go into houses and lie with any one at his pleasure, or kill or release from prison whom he would, and in all respects be like a God among men. Then the actions of the just would be as the actions of the unjust; they would both come at last to the same point. And this we may truly affirm to be a great proof that a man is just, not willingly or because he thinks that justice is any good to him individually, but of necessity, for wherever any one thinks that he can safely be unjust, there he is unjust.

            https://gutenberg.org/cache/epub/1497/pg1497.txt

      • By tmule 2026-03-05 3:22

        Oppenheimer? Really? Quoting a review of an Oppenheimer biography:

        “Oppenheimer was clearly an enormously charming man, but also a manipulative man and one who made enemies he need not have made. The really horrible things Oppenheimer did as a young man – placing a poisoned apple on the desk of his advisor at Cambridge, attempting to strangle his best friend – and yes, he really did those things – Monk passes off as the result of temporary insanity, a profound but passing psychological disturbance. (There’s no real attempt by Monk to explain Oppenheimer’s attempt to get Linus Pauling’s wife Ava to run off to Mexico with him, which ended the possibility of collaboration with one of the greatest scientists of the twentieth, or any, century.) Certainly the youthful Oppenheimer did go through a period of serious mental illness; but the desire to get his own way, and feelings of enormous frustration with people who prevented him from getting his own way, seem to have been part of his character throughout his life.”

        Seems more like Sam Altman, who is known to get his way, than Dario.

        • By toraway 2026-03-05 6:29

          The source for the poisoned apple story is Oppenheimer himself, and it is otherwise uncorroborated, to be clear. He spent his life clearly racked by feelings of inadequacy, guilt and self-doubt.

          When combined with a somewhat paradoxical large ego and occasionally fanciful reshaping of his own life story or exaggeration, it's entirely plausible (if not likely) that this was in reality a brief intrusive thought or a partially realized fantasy blown up into a catchy anecdote that better fit his self-image of being unable to control his typically human qualities of anger and envy.

          If it was Sam Altman, we'd have heard the story from the guy he tried to poison, who instead of filing a police report thought it showed Sam was a real go-getter and offered him his first job on the spot as VP at the company he founded (later forced out by Sam replacing him as CEO, but still considers him a friend with no hard feelings).

        • By CamperBob2 2026-03-05 5:36

          The idea isn't that Oppenheimer was a saint, but that the government he served well and faithfully -- at the expense of his soul, some would argue -- turned on him viciously as soon as he dared to question their agenda.

          As you suggest, it is easy to imagine Altman in the same hot seat. Never mind his sexual orientation, which the Republican theocrats will eventually use against him as surely as the knives came out for Ernst Röhm.

          • By msabalau 2026-03-05 16:08

            It's a bit simplistic to personify complex organizations of millions of people, like "The Government" or "The Market", as if they were a living, breathing person with a single mind.

            There were people working in government who successfully attacked Oppenheimer for personal and/or policy reasons, people who stood by, and people who unsuccessfully supported him, voted to clear him, or condemned the proceedings.

            Oppenheimer still paid the price, and arguably the risks to someone like him today are considerably higher, as the current administration isn't exactly like Eisenhower's.

            Nevertheless it's reductionist, reifying sentimentality to talk about "the government" turning "viciously" on someone who "served them well" because they are defying its agenda. The government isn't a character in Game of Thrones. The responsibility lies with the specific individuals who attacked him, and those who stood by.

            • By CamperBob2 2026-03-05 18:31

              Nevertheless it's reductionist, reifying sentimentality to talk about "the government" turning "viciously" on someone who "served them well" because they are defying its agenda. ... The responsibility lies with the specific individuals who attacked him, and those who stood by.

              I'm sure that was of great comfort to Oppenheimer, as it will be to Altman and/or Amodei. "It's not you, it's us."

      • By adriand 2026-03-05 3:17 · 7 replies

        I think Amodei is widely underestimated. The consensus viewpoint on the deal that OpenAI struck with the Pentagon is that Anthropic got played. I disagree. I'm certain that Amodei and his team gamed this out. In doing so, I think there are at least two conclusions they would have drawn:

        1. Some other AI company would cut a deal with the Pentagon. There's no world in which all the labs boycott the Pentagon. So who? Choosing Grok would be bad for the US, which is a bad outcome, but Amodei would have discounted that option, because he knows that despite their moral failures, the Pentagon is not stupid and Grok sucks.

        That leaves Gemini or OpenAI, and I bet they predicted it would be OpenAI. Choosing OpenAI does not harm the republic - say what you will about Altman, ChatGPT is not toxic and it is capable - but it does have the potential to harm OpenAI, which is my second point:

        2. OpenAI may benefit from this in the short term, and Anthropic may likewise be harmed in the short term, but what about the long game? Here, the strategic benefits to Anthropic in both distancing themselves from the Trump administration and letting OpenAI sully themselves with this association are readily apparent. This is true from a talent retention and attraction standpoint and especially true from a marketing standpoint. Claude has long had much less market share than ChatGPT. In that position, there are plenty of strategic reasons to take a moral/ethical stand like this.

        What I did not expect, and I would guess Amodei did not either, is that Claude would now be #1 in the app store. The benefits from this stance look to be materializing much more quickly than anyone in favour of his courage might have hoped.

        • By hedora 2026-03-05 3:57 · 2 replies

          > Choosing Grok would be bad for the US

          They chose Grok and OpenAI. The story was drowned out by the Anthropic controversy, but an xAI deal was signed the same week.

          • By dolphinscorpion 2026-03-05 4:25 · 1 reply

            Grok was chosen because Musk spent $250+ million to elect Trump and is expected to underwrite the 2026 elections. Also, a lot of Trump's family and friends are invested in SpaceX. So they give xAI money too, but actually use OpenAI or Claude. I have a feeling the military likes Claude more.

            • By thegreatpeter 2026-03-05 12:05 · 2 replies

              Didn’t they choose Anthropic first and then all of this happened so they were forced to go with Grok?

              Not adding up

              • By scottyah 2026-03-06 22:07

                Also, I imagine this is partly due to intra-military power struggles. I'm sure there are a lot of people in the DoW who like Anthropic, both for its models and for all that it stands for. The supply chain thing was a way to take the power from them, though petty.

                Pete is also facing a lot of risk from AI; power structures will be forced to change once a few teams can take over entire departments of people. The military ecosystem is very much like the private sector in that the number of butts in seats is a metric of importance. The dynamics will change if your group can just hand-roll what it relied on others for.

              • By sigwinch 2026-03-05 13:50

                We must conclude that they’re wary of Grok. Maybe it’s the incentive for bias and sabotage.

          • By xvector 2026-03-05 4:20

            They "chose Grok" for political optics, but they don't seriously intend to use it because it's actually just benchmaxxed garbage, which is why they worked with OpenAI.

        • By oxdgd38 2026-03-05 3:34 · 1 reply

          The mistake here is thinking they can take on Power without actually sitting in any official position of Power.

          Wikileaks and Assange got popular too. What happened to them?

          The State Dept and CIA do exactly what Assange did. They pick and choose who to target with leaks. They get away with it (mostly even when exposed) because they officially are in power. Assange was not in power. If you take a moral position do it when you have real power.

          • By generic92034 2026-03-05 8:35

            > If you take a moral position do it when you have real power.

            If the condition for getting real power is having no morals, this is hard to accomplish.

        • By xvector 2026-03-05 4:21 · 1 reply

          There is also:

          3. Talent migration to Anthropic. No serious researcher working towards AGI will want it to be in the hands of OpenAI anymore. They are all asking themselves: "do I trust Sam or Dario more with AGI/ASI?" and are finding the former lacking.

          It is already telling that Anthropic's models outperform OAI's with half the headcount and a fraction of the funding.

          • By kelnos 2026-03-05 10:12 · 2 replies

            I think that's wishful thinking. Just because someone is a "serious" researcher (careful, sounds like a No True Scotsman coming up), it doesn't mean that they care about AI guardrails or safety, or think our current administration is immoral.

            • By squidbeak 2026-03-05 13:57

              I don't - idealistic motives seem to be common among leading AI developers and researchers. It's totally realistic that Anthropic sticking to principle and taking a hit for it will give it an edge in recruiting those idealistic types.

            • By xvector 2026-03-05 15:51

              I've hung out with this crowd and they are very idealistic, they care deeply about guardrails and safety, and definitely find the idea of handing the current administration AGI/ASI repulsive.

        • By derwiki 2026-03-05 3:31

          Lyft was briefly number one ahead of Uber, too

        • By techpression 2026-03-05 4:35 · 1 reply

          They still need a lot of money, and what their VCs think is going to matter more than what Amodei does. Nothing more profitable than war and government.

          App Store rankings are meaningless. I have Claude, ChatGPT and Gemini all in the top five, with an email app at #1 and a postal tracking app (for a very small provider) at #3.

          • By internet101010 2026-03-05 5:41 · 1 reply

            The value of the hyperscalers' equity in Anthropic alone dwarfs their contracts with the government, not to mention the revenue from hosting Anthropic's models, which helps justify the insane capex. Anthropic going to $0 would be a huge haircut to all of their balance sheets.

            • By techpression 2026-03-05 6:33

              They've only invested a couple of billion apiece, 20 or so split between them. Not really something that hurts them long or even medium term. Microsoft has multiple multi-billion-dollar government deals; I think Amazon is the only one that doesn't. Google also has a lot of government contracts, especially outside of cloud.

        • By panta 2026-03-05 6:11

          > Choosing OpenAI does not harm the republic

          If we consider AIs "force multipliers," as we do with coding agents, it's easy to see how any AI company can harm the republic if the government it serves is unethical and amoral.

        • By veidr 2026-03-05 16:58

          Nobody gives a shit about jumping to #1 in the app stores, at this scale.

          If US & A really goes full-Huawei on Anthropic, they can't IPO. It's an existential crisis for them. I think they can survive in some form, somehow, because their model is really good, probably the best.

          And in other times, I would think the US government had sufficient intellectual horsepower to not cut off its own dick, and the golden goose's head, over some idiotic morning-drinker road-rage type beef. But these are not other times. These are these times.

    • By hn_throwaway_99 2026-03-05 6:16 · 3 replies

      Agree with this completely.

      But besides Sam Altman, this whole episode has made me totally and completely lose all respect for Paul Graham. I used to really idolize pg, and I really used to like his essays, but over the years I've found his essays increasingly displayed a disturbing lack of introspection, like they'd always seem to say that starting a startup is the best thing anyone can do, and if you're not good at startups then you kind of suck.

      But his continued support of Altman in this instance (see https://x.com/paulg/status/2027908286146875591, and the comment in that thread where he replies "yes") is just so extra disappointing and baffling. First, his big commendation for Altman is that he's doing an AMA? Give me an f'ing break. When someone is a great spin doctor, I'm not going to commend them for doing more spinning. It's like he has total blinders on and is unwilling to see how disgusting and duplicitous sama's actions in this instance are. Maybe he subconsciously knows he's responsible for really launching sama into the public consciousness, so now he's just incapable of seeing the undeniably shitty things sama has done.

      Oh well, I guess it's just another tech leader from the late 90s/early 00s who has just shown me he's kind of a shitty person like a lot of us.

      • By jahnu 2026-03-05 10:33

        Billions of dollars is a hell of a drug.

      • By causal 2026-03-05 14:56

        Yeah he has some great essays but also some that I find really dumb. Reading “Founder Mode” is when I realized he’s just as susceptible to fallacy as the rest of us.

      • By nextaccountic 2026-03-05 18:42

        Never meet your heroes

    • By filoeleven 2026-03-05 17:03 · 1 reply

      It's not the Department of War. Don't call it that.

      > However, only an act of Congress can legally and formally change the department's name and secretary's title, so "Department of Defense" and "secretary of defense" remain legally official.

      https://en.wikipedia.org/wiki/United_States_Department_of_De...

    • By hollosi 2026-03-05 14:55

      Enforcement is the real issue, not the specific red lines, regardless of what Anthropic claims and news outlets repeat.

      Verification requires access to classified logs. These logs would attract the spies of the whole world. Even if these logs are in principle for "past actions", in practice past logs (for war games, for example) would compromise future strategy.

      Since these manual audits are too risky, the only alternative is to hard-code limits into the AI. But are we ready to trust an AI to "judge" a mission and refuse to execute during a crisis?

      Anthropic wanted technical enforcement, the Pentagon wanted trust.

      It’s a choice between two bad options: an unaccountable military and an unreliable AI kill switch. They are both very dangerous, just in different ways.

    • By sakesun 2026-03-05 1:46 · 1 reply

      > it was clear that either the DoW or OAI (or both) were fudging.

      This is my first thought as well. It's too obvious. He should have consulted ChatGPT before the announcement.

      • By shigawire 2026-03-05 4:22

        More likely he assumed (perhaps rightfully) that there would be no consequences anyway.

    • By cobbzilla 2026-03-05 5:32

      Per the other Snowden comments, "all lawful use" means whatever we want it to mean.

      Secret FISA court decisions will say the use is lawful, but you’ll never get to read or challenge those decisions.

    • By LarsDu88 2026-03-05 5:06

      Greg Brockman donated 25 million dollars, and the DoW gives OpenAI a 200 million dollar contract.

      Just good ol' fashioned grifting mixed with a bit of government corruption.

      This country has been boiling the frog of graft, grift, and corruption for too long.

    • By qwertox 2026-03-05 10:33

      @sama did say: "[..] will not be used to independently direct autonomous weapons in any case where law, regulation, or Department policy requires human control". Law is what Trump decides.

    • By cheema33 2026-03-05 1:50 · 5 replies

      > OAI conditions were basically "DoW won't do anything which violates the rules DoW sets for itself."

      I believe this understanding is correct. The issue many people have these days with the Dept. of War, and with most of the Trump admin, is that they have little respect for laws. They follow only the ones they like and openly ignore the ones that are inconvenient.

      Dept of "War" should have zero problems agreeing to the two conditions Anthropic outlined, if they were honest brokers. But I think most of us know that they are not. Calling them dishonest brokers seems very charitable.

      • By aardvarkr 2026-03-05 3:18 · 1 reply

        I don't care who is in the White House. Snowden revealed the crimes of the NSA in 2013, when Obama was president. They're all going to want to use AI for mass surveillance.

        • By Tanjreeve 2026-03-05 7:37 · 2 replies

          AI doesn't add anything to the ability to do mass surveillance. That genie was already out of the bottle with clouds and big data systems. At best AI might take on some of the gruntwork of drawing conclusions from profiles, but it's doing its usual thing of being a powerful interface built on top of other systems.

          • By mithr 2026-03-05 15:19 · 1 reply

            > AI doesn't add anything to the ability to do mass surveillance

            I recommend reading Yuval Noah Harari's Nexus for a deep discussion around this.

            He makes the point that what makes this AI age much more dangerous for mass surveillance isn't just the collection of data, which has indeed been possible for a while, but the new ability to have AI sift through that enormous volume of information, an ability which until recently has not been possible in a meaningful way without a ton of manual work to support it.

            Older attempts at mass control of a population already involved mass surveillance, even in a large amount of detail, but even when capturing in detail all citizens' activities, there were just not enough people around to be able to dig through that and analyze it. This has been somewhat true even with the help of computers, though computers have certainly already been making this easier.

            But now you can just give all that data to an AI with your instructions, and it'll apply some sort of "judgement" on your behalf, completely autonomously, and even perform actions against those folks it finds, again autonomously, without needing to manually build a whole infrastructure to do that with manual rules. That's a very meaningful upgrade for someone wanting to control a population.

            • By Tanjreeve 2026-03-05 18:09

              That's still actuating through infrastructure that already existed. I agree the summarise-and-decide part may sometimes be quicker, but the bottleneck remains the collection, collation, and actioning infrastructure.

          • By veidr 2026-03-05 17:10 · 1 reply

            crazy take

            like saying kids having internet-connected devices with built-in cameras doesn't increase the probability of sexting, they could do the same with film cameras and a fax machine

            • By Tanjreeve 2026-03-05 18:07 · 1 reply

              The difference from your camera metaphor is that AI doesn't increase the amount of data captured or the processing throughput. As I said, at best it can sometimes summarise things better.

              • By 1718627440 2026-03-05 22:36

                I would say AI is very much increasing the processing throughput of labeling surveillance data.

      • By reactordev 2026-03-05 1:52

        I haven’t seen them follow a law yet

      • By lmeyerov 2026-03-05 3:03 · 1 reply

        I find it confusing in most directions.

        Ex: For the above statement, if they're truly dishonest brokers and openly ignore the rules that are inconvenient, they would have zero problems agreeing to Anthropic's terms and then violating them. So what you say may be quite true, but there would still need to be more to the story for it to make sense.

        Ex: DoW officials are stating that they were shocked that their vendor checked in on whether signed contractual safety terms were violated: They require a vendor who won't do such a check. But that opens up other confusing oversight questions, eg, instead of a backchannel check, would they have preferred straight to the IG? Or the IG more aggressively checking these things unasked so vendors don't? It's hard to imagine such an important and publicly visible negotiation being driven by internal regulatory politicking.

        I wonder if there's a straighter line for all these things. Irrespective of whether folks like or dislike the administration, they love hardball negotiations and to make money. So as with most things in business and government, follow the money...

        • By 3eb7988a1663 2026-03-05 3:18

          I have no idea what exactly Anthropic was offering the DoD, but if there were an LLM product, it's possible that the existing guardrails prevented the model from executing on the DoD's vision.

          "Find all of the terrorists in this photo." "Which targets should I bomb first?"

          Even if the DoD wanted to ignore the legal terms, the model itself would not cooperate. The DoD required a specially trained product without limitations.

      • By tqwhite 2026-03-05 14:43 · 1 reply

        Mass surveillance is completely legal. It's just stupid to say it's not.

        • By veidr 2026-03-05 17:19

          I don't think that's what is being said, mainly? Like that's why Anthropic wants to have it in the contract(s) with the government?

          At the same time, it is expressly illegal in some circumstances; that was the whole core of the Snowden revelations. The NSA and CIA are expressly curtailed from doing that by law — there are cases where they may surveil citizens with a court order, but not "mass" surveillance. There are some restrictions on the military along those same lines.

          Keywords: Executive Order 12333, FISA, National Security Act, Posse Comitatus Act

      • By ExoticPearTree 2026-03-05 3:50 · 7 replies

        [flagged]

        • By sfink 2026-03-05 4:47 · 2 replies

          There's a reason it's unpopular.

          If your company makes an herbicide that happens to be very good at killing off anyone who drinks it at a high concentration in their water supply, you're saying that there should be no way for your company to resist being used for mass murder (including unavoidable collateral damage)?

          Also, the core mission of the military is not "killing its adversaries through any means necessary." It is to defend state interests.

          Some people believe that mass killing is the best mechanism for accomplishing that. I do not agree with, nor do I want to associate with, those people. They are morally and objectively wrong. Yes, sometimes killing people is the most effective -- or more likely, the quickest -- way. In practice, it doesn't work very well. The threat of violence is much more powerful than actually committing violence. If you have to resort to the latter, you've usually screwed up and lost the chance to achieve the optimal outcome.

          It is true that having no restrictions whatsoever on your ability to commit violence is going to be more intimidating, but it also means that you have to maintain that threat constantly, for everyone, because nobody has any other reason to give you what you want.

          The actual military is not evil. Your conception of it is.

          • By palmotea 2026-03-05 5:04 · 2 replies

            >> Unpopular opinion around here, but no company should have the ability to stop the military from its core mission: killing its adversaries through any means necessary.

            > The actual military is not evil. Your conception of it is.

            You're right, but there's a real question here: should a company have the ability to control or veto the decisions of the democratically elected government?

            To give a different hypothetical example: should Microsoft be allowed to put terms in its Windows contracts with the government stipulating that Windows cannot be used to create or enforce certain tax policies or regulations that Microsoft disagrees with? Windows is all over, and I'm sure pretty much every government process touches Windows at some point, so such a term would have a lot of power.

            • By sfink 2026-03-05 5:37

              > You're right, but there's a a real question here: should a company have the ability to control or veto the decisions of the democratically-elected government?

              I don't think "control or veto" is fair. Anthropic is not trying to prevent the US government from creating full autonomous killbots based on inadequate technology. They are only using contract law to prevent their own stuff from being used in that way.

              But that aside, my opinion is that to a first order approximation, yes a company should very much be able to have say in its contract negotiations with any party including the government. It's very similar to the draft. I don't believe a draft is ethical until the situation is extreme, and there ought to be tight controls on what it takes to declare the situation to be that extreme. At any other time, nobody should be forced to join the military and shoot people, and corporations (that are made of people) should not be forced to have their product used for shooting people.

              A corporation is a legal fiction to describe a group of people. Some restrictions can be placed on corporations in exchange for the benefits that come from that legal fiction, but nothing that overrides the rights of its constituent people.

              Governments are made of people too. Again, a subset of people are given some powers in order to better achieve the will of the people, but with tight controls on those powers to keep the divergence to a minimum. (Of course, people will always find the cracks and loopholes and break out of their constraints, but I'm talking about design not real-world implementation here.)

              So to look at your hypothetical, first I'd say it's not very different from the question of whether an individual person should be forced to personally enforce tax policy. Normally, I'd say no. There are many situations where the government needs more say and authority in such things, but that must only be achieved via representatives of the people passing laws to allow such authority. Other than that, yes: I believe a company should be able to negotiate whatever contract terms it wants. In a democracy, we are not subjects of a controlling government; the government is an extension of us.

              In practical terms, if Microsoft were to insist on that contract stipulation, the government would not agree to the contract and would award its business to someone else. If the government were especially out of control and/or unethical, it might punish Microsoft with regulations or declarations of supply chain risk or whatever, but that is clearly overstepping its bounds and ought to be considered illegal if it isn't already. The usual fallback would be that the people would throw the people perpetrating that out on their asses. That's the "democratically-elected part".

              Obviously, Microsoft would be stupid to insist on such a thing in their contract, and its employees would probably lose all confidence in the corporate leadership. Most likely, they'd leave and start Muckrosaft next door that rapidly develops a similar product and sells it to the government under a reasonable contract.

              Basically, I'm always going to start from people first, and use organizations and laws only in order to achieve the will of the people. The fact that the people are stupid does make that harder, but the whole point of democracy is that we'll work out the right balance over time.

          • By ExoticPearTree 2026-03-05 5:09 · 1 reply

            My conception is that the world would be a much simpler place if war were total. No one would start one unless they were 200% sure they could win. And we would all go through military training, just in case, you know, a neighbor drank too much last night and thinks he can win against you.

            > The threat of violence is much more powerful than actually committing violence.

            While I agree with this statement, the threat only works if from time to time you apply violence, to reinforce that you are both capable of and willing to actually do it. And the US is really good at actually being violent, so others don't even think about doing something against it -- at least the majority of countries, anyway.

            • By don_esteban 2026-03-05 6:04 · 1 reply

              Re: My conception is that the world would be a much simpler place if war were total. No one would start one unless they were 200% sure they could win.

              Now apply the same logic to the current Iran war.

              • By ExoticPearTree 2026-03-05 7:16 · 3 replies

                I do not see Iran winning this. The current government is also hated by its people, who would very much like to see all of them dead.

                Al Jazeera has some very good insights into this, and the gist is: the Iranian regime is in a fight for its life with nothing to lose. If they are degraded enough, a revolution will start in Iran and they will be killed by the people -- or by US/IL bombs, whichever comes first. There is no way they get out of this alive. They are trying to delay the inevitable.

                • By Tanjreeve 2026-03-05 7:55

                  This is the same mistake media policy pundits made in Iraq and Syria. Dictatorial regimes collapse pretty quickly only when they lack a base of support significant enough to stop a revolution from happening. They might not have a majority of people supporting them, but it isn't a democracy. Dictatorial regimes will always have one or more of the military, business, or sub-groups of citizens in their pockets as clients.

                  Whenever we say "the regime is hated by its people, it will collapse," we should ask, "then why didn't it collapse already?" In Iran, metropolitan areas are where you see opposition. That's also where people have cameras and where media orgs tend to be, so we get a warped depiction of opposition in Iran even without our own media's baggage. Meanwhile, the regime's power base is everywhere but the metropolitan cities, and there are a lot of clients who benefit from the regime. I think this might be worse than the sectarian violence that came out of the Hussein regime's collapse, because the Sunni sect his base was built around was still a minority. This time it's the majority, and the people being fought against are the Americans, the Israelis, and the Arabs, so their backs are against the wall; this is already a total war from their side.

                • By don_esteban 2026-03-05 7:42 · 1 reply

                  Regarding Iran's future:

                  You are describing a Libya scenario, not a "lived prosperously ever after." There is no credible opposition in Iran to take up the mantle.

                  • By ExoticPearTree 2026-03-05 9:32 · 1 reply

                    No. Almost all of Iran's population belongs to the same ethnic group, which was not true in Libya: all the tribes started fighting each other.

                    It does not have an established opposition because the current regime has a habit of killing anyone it doesn't like or who goes against the official line. Now there is a chance for an opposition to form.

                    • By don_esteban 2026-03-05 15:46

                      Iran has significant Kurdish, Azeri, Baluchi and Arab minorities; Persians form circa 2/3 of the population.

                      With the US and Israel supporting the minorities (most likely offering them independence) in the hope of toppling the regime, and bombing mostly Persians, the most likely outcome (assuming they can actually force regime change, which is far from guaranteed) is fragmentation and general lawlessness.

                      Note that whoever inherits the regime would have to deal with wholesale destruction of the country, traumatized population and hate for those who bombed them and killed their relatives and children. Slavishly obeying the new foreign overlords will not be very popular. Have we not learned anything from Iraq and Afghanistan? How can you still believe the fairy tales of welcoming the liberators?

                • By don_esteban 2026-03-05 7:38 · 1 reply

                  OK, slowly:

                  The wars are already total for the weaker sides -- see Ukraine and Iran. That did not stop the stronger side from attacking.

                  You are advocating no constraints (total war) on the stronger side. Taken literally, that means genocide of the losers. Really, that's what you want?

                  But yes, you are right, the world would be much simpler in that case: there would be no humans left. OK, maybe some hunter-gatherers.

                  • By thaumasiotes 2026-03-05 7:48

                    > You are advocating for no constraints (total war) on the stronger side. Taken literally, that means genocide of the losers. Really, that's what you want?

                    Taken literally, it means genocide of the losers is an option the winning side has. It always has been.

                    Note that Genghis Khan's explicit plan when he conquered China was to wipe out the Chinese to make room for Mongols. He wasn't stopped from doing that; there was no constraint to block him.

                    But he was persuaded not to.

        • By saghm 2026-03-05 7:05 · 1 reply

          The way you've phrased it, the government could nuke the entire world; all of the adversaries would be dead, along with literally everyone else. I don't really see why it's an issue if a company doesn't want to sell them the tools to do that.

          • By ExoticPearTree 2026-03-05 7:09

            On the flipside, housing prices would go down significantly. Lots of room to expand.

        • By xrd 2026-03-05 4:00 · 2 replies

          If I start a small business that sells apples and the US government comes to me and says "we want to buy your apples and fire them at high speed to" -- these are now your words -- "kill adversaries through any means necessary."

          If I say, no, then am I stopping the military?

          I feel like it is reasonable that I can say "no, I don't want to sell you my apples."

          I cannot for the life of me figure out why that means I am stopping the military from killing people. The US Military will definitely still be able to kill people for centuries. I'm just saying I don't want to participate in it.

          • By throwaway173738 2026-03-05 5:09

            More to the point, if everyone stopped selling anything to the military they would still be able to kill people with their bare hands. People are arguably very good at killing people and it takes civilization to train us not to kill each other.

          • By ExoticPearTree 2026-03-05 5:03 · 1 reply

            In the context of the larger discussion, if you already sold apples to the military, you cannot go to them and say you don't like how they're using the apples you sold them.

            • By sfink 2026-03-05 5:40

              In the context of the larger discussion, Anthropic thought of that ahead of time and put the restrictions into the contract that the government agreed to. So "already sold" is a non-sequitur; that's not the situation under discussion.

        • By Cantinflas 2026-03-055:41

          That's not their mission, in any country, ever.

        • By sixothree 2026-03-055:301 reply

The problem here is that this department claims its adversaries are Americans. Do you think Anthropic should aid in the killing of Americans?

          • By ExoticPearTree 2026-03-057:172 reply

            I don’t believe for a second the Pentagon sees Americans as adversaries.

            • By ImPostingOnHN 2026-03-0514:50

              Unfortunately, reality is not determined by what you personally don't believe for a second.

              Evidence (the Commander in Chief calling the opposition terrorists, and celebrating their government executions, for example) indicates that reality indeed reflects the things you personally don't believe.

            • By uxcolumbo 2026-03-059:36

Trump sees many Americans as adversaries (i.e. the 'radical left', like Alex Pretti, an ER nurse, and Renee Nicole Good, a mother). In his first term he asked whether protestors could be shot in the legs.

So in short, it doesn't matter what the Pentagon thinks: Trump is the commander in chief, and as far as I know the Pentagon has to follow his orders.

        • By throwaway290 2026-03-054:021 reply

Any company is free to choose its business partners and set terms for them. "Don't like our terms, don't partner with us."

If the government can force any private company to work specially for it, then the US is no better than the PRC.

          • By SoftTalker 2026-03-054:561 reply

            You might want to read about the War Production Board during World War II. Established by a presidential executive order no less.

            • By throwaway290 2026-03-055:001 reply

              Wasn't that for defense during an actual war started by another country?

Legit wartime measures can be a thing (that's why it's fucked if the president can just start a war and then use that as an excuse for any wartime measures they like).

              • By ExoticPearTree 2026-03-055:232 reply

"Legit war time measures" is not a thing. If Congress declares war on Cuba or Venezuela, for example, people who do not support it will not see the measures as "legit". The US has decades of precedent of bombing/invading other countries at the whim of presidents without actually calling it a war.

And for better or worse, it is actually good that it works this way. Otherwise, if Congress declared war on Iran or China or whatever, the whole country would be put on a war footing, companies would be directed to build whatever the Pentagon says it needs, drafts would be enforced, and so on. It would be pretty ugly.

                • By ithkuil 2026-03-056:49

If Congress had declared an actual war and invoked wartime laws to force a private company to comply with the war effort, we wouldn't be having this conversation.

What happened was different: a private company decided to enforce some terms, as it is entitled to do in peacetime, and it has been bullied in a way that is disgraceful precisely because this didn't happen during wartime, nor was it done using the existing laws for such situations.

                  What is the purpose of having laws in the first place if we accept that the government can rule by intimidation?

                • By throwaway290 2026-03-056:33

If you didn't notice, we are talking about WWII.

The USA was not the aggressor.

Fat chance of Congress declaring a war of aggression on a peaceful country.

        • By hedora 2026-03-054:02

          Yes, Musk is guilty of treason for exactly that reason. He directly sabotaged a major US military operation in Ukraine.

          However, the military is bound by US and international law. It's clear they're not going to obey either of those with respect to this contract.

          On top of that, Anthropic has correctly pointed out that the use cases Trump was pushing for are well beyond the current capabilities of any of Anthropic models. Misusing their stuff in the way Trump has been (in violation of the contract) is a war crime, because it has already made major mistakes, targeted civilians, etc.

    • By fmajid 2026-03-053:451 reply

      Or, as is likely, OpenAI models have no guardrails, Anthropic's did and the DoD was bumping into them.

      • By galangalalgol 2026-03-054:331 reply

Does anyone else notice Claude is just plain better at reasoning? It may not just be post-training guardrails. It would not surprise me if it were something Anthropic couldn't simply disable, whether from reinforcement or even training corpus curation. Of all the models, Claude is the only one that makes me wonder if they have figured out something beyond stochastic language generation and aren't telling anyone.

        • By solenoid0937 2026-03-054:48

I have noticed this too: despite the close benchmark results, Claude just works better. It knows when to push back, it has an "agency"... there is something there that I don't see with Gemini or OpenAI's best paid models.

    • By spwa4 2026-03-058:02

      Except if there's one defining property of the last 4-5 administrations it is that they definitely and constantly violate the rules they set for themselves. With every new administration it gets worse and worse.

      And while this administration is brazen about this, it's not really a drastic change anywhere.

In fact most EU laws (GDPR, AI regulation, Chat Control) directly, up front, declare that the state itself won't be bound by them. They very directly have one set of rules for states, government employees, etc., and ANOTHER set of rules for everyone else. And they're incredibly brazen. For private individuals and companies it goes very far: it's essentially impossible to even know what does and does not violate the GDPR, and you can't ask the courts, that's not allowed. You also cannot use the courts to compel the government to do anything under these laws.

For governments, what's allowed goes incredibly far. Governments can declare any action legal under the GDPR, before and after the fact, without parliament's involvement. It does not matter whether that action was taken by the government itself or by a private company (so the government can use subcontractors for any violation of the GDPR).

This means that, for THE example given for GDPR protection, medical information, the law does the exact opposite of what it appears to do. Medical insurance in the EU is either state-owned or exempted, so all your medical information is available to medical insurers. And to the police (e.g. to find you). And the tax office. And courts. And medical institutions themselves (to deny transplants to smokers). And... While doctors (and priests) used to be huge no-nos when it came to information gathering, that's no longer the case. If a doctor uses the state-required medical file, your medical information flows straight into a state database, immediately searchable by everyone the GDPR supposedly protects you against.

    • By jaredklewis 2026-03-056:001 reply

      > DoW balked at Anthropic's conditions so OAI's agreement must have made the "conditions" basically unenforceable.

      I think it’s also possible DoW didn’t care about the conditions but just wanted some pretext to punish Anthropic because Dario isn’t a Trump boot licker like the rest of the SV CEOs.

      • By sigwinch 2026-03-0513:59

        I think this is supported. Hegseth has said numerous things about curly hair. I read that his reaction to Dario started with his hair.

  • By 6Az4Mj4D 2026-03-050:0813 reply

Leaving autonomous weapons aside, how does Anthropic justify that it signed up with the surveillance company Palantir and is now raising concerns about the same surveillance by the DoD?

    It doesn't match.

    • By pfisherman 2026-03-051:224 reply

      This is very easy to explain. Anthropic outlines some limitations in their terms of service. Palantir accepted those terms. The DoD did not.

OpenAI claims their terms of service for the DoD contain the same limitations as Anthropic's proposed service agreement. Anthropic claims this is untrue.

Now given that (a) the DoD terminated their deal with Anthropic, (b) stated that they terminated because Anthropic refused to modify their terms of service, and (c) then signed a deal with OpenAI, I am inclined to believe that there is in fact a substantial difference between the terms of service offered by Anthropic and OpenAI.

      • By stingraycharles 2026-03-051:361 reply

Yeah, it never made sense that Sam immediately said they had the same constraints, yet the DoW immediately agreed with that.

From what I can see, OpenAI's terms basically say "need to comply with the law", which gives them plenty of wiggle room with executive orders and whatnot.

        • By ExoticPearTree 2026-03-054:021 reply

          I think they said they will comply with the law and Pentagon policies.

          And:

          1. there is no law currently prohibiting autonomous weapons platforms

          2. the Pentagon can create policies overnight allowing all kinds of stuff

          So yeah, OpenAI is going to make a lot of money from actually doing what the military asks from them.

          • By cobbzilla 2026-03-055:36

            Secret FISA court decisions are also law, the public just can’t see or challenge them. So we really have no idea what is considered lawful.

            If the contract says “all lawful use” it’s a blank check to the state.

      • By felipeerias 2026-03-051:551 reply

Are you sure about that? All the information I’ve seen suggests that the DoD has been using Anthropic’s models through Palantir.

        My understanding is that Anthropic requested visibility and a say into how their models were being used for classified tasks, while the DoD wanted to expand the scope of those tasks into areas that Anthropic found objectionable. Both of those proposals were unacceptable for the other side.

        • By stingraycharles 2026-03-052:171 reply

Wasn’t the trigger for all this what happened with Maduro earlier this year? From what I understood, Anthropic wasn’t very happy with how their systems were being used by the DoW through Palantir, which kicked off this whole feud.

          • By felipeerias 2026-03-052:531 reply

Reportedly, Anthropic didn't know about Claude's role in capturing Maduro until they saw it in the headlines.

            • By ExoticPearTree 2026-03-054:046 reply

And why would they have an objection to that? They sold a product to a customer. It should be no business of theirs how that customer uses their software.

              • By ImPostingOnHN 2026-03-0515:00

                > And why would they have an objection to that? They sold a product to a customer. They should have no business in how that customer uses their software.

They sold a service to a customer, contractually subject to terms they both agreed upon. How do people keep missing this? The government changed its mind after agreeing to the restrictions and tried to alter the deal with Anthropic ex post facto.

              • By felipeerias 2026-03-057:181 reply

                It’s a bit more complex than that, but to be fair I don’t know what they were expecting after they integrated a purpose-built model with Palantir to be deployed in high-security networks to carry out classified tasks.

                • By stingraycharles 2026-03-0511:19

                  TBH I don’t know what they were expecting when closing that $200 million DoD contract last year.

              • By warkdarrior 2026-03-054:27

                Licensing is a thing. See requirements that, for example, GPL3 places on customers.

              • By sixothree 2026-03-055:341 reply

                I'd hate to break it to you, but companies do have a right to determine how their products are used. You were subject to that when you wrote that comment. Did you not notice that?

                • By ExoticPearTree 2026-03-057:063 reply

No, I do not think they do. If I buy a car and run somebody over on purpose, the manufacturer has no right to come take my car away, even if that were written in a contract.

                  • By sixothree 2026-03-0918:34

                    If you tell the car dealership that your plan is to run someone over with the car you are buying, they 100% have the right to refuse selling the car to you.

                    If you tell a gun dealer you're going to kill someone when you walk out of the shop, they have a right and an obligation to refuse the sale.

                    Please feel free to tell me how these analogies are incorrect.

                  • By ethbr1 2026-03-0513:141 reply

                    You're confusing physical goods transactions with subscription access to a service.

                    One of the many reasons every company has tried to shift their business model to the latter: greater control over users.

                    • By tremon 2026-03-0521:15

                      The GGP did not make that distinction, they made a statement about all companies and all products.

                  • By wsng 2026-03-0512:13

It's different with services. If you sign a mobile phone contract and use it for spamming, the supplier can cancel your contract.

              • By mcmcmc 2026-03-055:381 reply

                So firearms dealers should be fine with their customers going on mass murder sprees?

      • By Loquebantur 2026-03-051:371 reply

        “We’ve actually held our red lines with integrity rather than colluding with them to produce ‘safety theater’ for the benefit of employees (which, I absolutely swear to you, is what literally everyone at [the Pentagon], Palantir, our political consultants, etc, assumed was the problem we were trying to solve),” Amodei reportedly wrote.

“The real reasons [the Pentagon] and the Trump admin do not like us is that we haven’t donated to Trump (while OpenAI/Greg have donated a lot),” he wrote, referring to Greg Brockman, OpenAI’s president, who together with his wife gave $25m to a PAC supporting Trump.

        https://www.theguardian.com/technology/2026/mar/04/sam-altma...

        • By mrandish 2026-03-053:192 reply

          > we haven’t donated to Trump

          Another reason is that Sam Altman has been willing to "play ball" like providing high-profile (though meaningless) big announcements Trump likes to tout as successes. For example:

> "The Stargate AI data center project worth $500 billion, announced by US President Donald Trump in January 2025, is reportedly running into serious trouble. More than a year after the announcement, the joint venture between OpenAI, Oracle, and SoftBank hasn't hired any staff and isn't actively developing any data centers, The Information reports, citing three people involved in the 'shelved idea.'"

          https://the-decoder.com/stargates-500-billion-ai-infrastruct...

          • By scottyah 2026-03-0622:17

Reminds me of when they cut the camera to Zuck and he made the $600 billion deal announcement, but was caught on a hot mic afterward saying "I'm sorry, I wasn't ready... I wasn't sure what number you wanted to go with." I will be extremely surprised if half of these deals actually go through.

      • By waterproof 2026-03-053:42

        Sam donated $1M to Trump's inaugural fund. Dario did not.

        http://magamoney.fyi/executives/samuel-h-altman/

    • By dmix 2026-03-050:467 reply

      > signed up with surveillance company Palantir

Just to nitpick, Palantir isn't doing surveillance the way Flock is. They do data integration under contract for governments, the way IBM does. Some data pipelines include law enforcement surveillance data, which gets integrated with other software/databases to help police analyze it. There's no evidence they are collecting it themselves, despite recent headlines. It's a relatively minor but important distinction, IMO.

      https://www.wired.com/story/palantir-what-the-company-does/

      • By trinsic2 2026-03-051:072 reply

They are providing the software to do surveillance. They are definitely bad actors; you can dance around this all you want, but they are in it.

      • By SirensOfTitan 2026-03-051:381 reply

Their data integration and sale allows the government to surveil citizens without probable cause or warrants.

        • By dmix 2026-03-053:491 reply

The solution is still no different than a decade ago: far stricter laws on intelligence and on federal and local police surveillance, and a reduction in executive power that oversteps checks and balances.

          There will always be another IT company willing to do integrations even if Palantir dies. Software isn’t going away.

          • By SirensOfTitan 2026-03-0513:31

Right. But this is about Anthropic -- a company that frames itself as a responsible and ethical steward of LLM technology. They can't pretend that OpenAI is somehow morally bankrupt here while continuing to deal with companies that undermine people's civil liberties.

            I'm also a little unsure what you're saying here. Are you saying that it's futile to rely on corporate leaders to commit to ethical acts, as there's always someone else who will debase themselves to make money? I think that solely relying on the state to regulate itself with respect to civil liberties is a fast path to despotism. The well-regulated state was always a partnership between ordinary people bravely standing up for their rights and the norms of the rules and laws that made it socially acceptable to do so.

            If I'm grasping you correctly, I think you're right; however, this points to the rottenness of our culture's way of organizing labor: the optimization of the shareholder over everyone else leads to some really awful effects.

      • By ImPostingOnHN 2026-03-051:19

I think a company which provides a sensor-fusion dragnet for a government-run mass domestic civilian surveillance system is at least as culpable (and odious) as the ones supplying the data.

      • By hedora 2026-03-054:311 reply

        It's funny you'd pick IBM:

        https://en.wikipedia.org/wiki/IBM_and_the_Holocaust

        Though, I guess IBM did get away with lots of stuff that... Actually, did any supply companies in the WWII German war machine actually get in trouble for war crimes, or did they just go after officers and the people actually working in the camps?

        The company selling punchcards that were used for logistics was apparently fine. What about the people making the gas canisters, or supplying plumbing fixtures? The plumbers? Where's the line?

        Wondering, since this is increasingly becoming a current events question instead of an academic concern.

        • By DrSAR 2026-03-055:12

There were the so-called Subsequent Nuremberg Trials (12 of them). Among them were the trials of IG Farben (gas chamber supplies, Zyklon B) and Krupp (armament of the German military in preparation for a war of aggression).

          I'm under no illusion that all the perpetrators of war crimes were held accountable but it's not a bad model.

      • By _jab 2026-03-051:351 reply

Sure, but it's not as if the DoD was planning on using Anthropic to _collect_ the data either. I assume that the hypothetical DoD use case Anthropic shied away from dealt with processing surveillance data, just like what Palantir does.

        • By roywiggins 2026-03-051:44

          https://www.washingtonpost.com/technology/2026/03/04/anthrop...

          > The military’s Maven Smart System, which is built by data mining company Palantir, is generating insights from an astonishing amount of classified data from satellites, surveillance and other intelligence, helping provide real-time targeting and target prioritization to military operations in Iran, according to three people familiar with the system...

          > As planning for a potential strike in Iran was underway, Maven, powered by Claude, suggested hundreds of targets, issued precise location coordinates, and prioritized those targets according to importance, said two of the people.

      • By clipsy 2026-03-051:121 reply

        > They do data integration the way IBM does under contract for the governments

        Good thing IBM's data integration was never used for ill!

        Oh, wait https://en.wikipedia.org/wiki/IBM_and_World_War_II

        • By dmix 2026-03-053:51

          Oracle started by building databases for the CIA

      • By gjsman-1000 2026-03-051:221 reply

        Basically it’s glorified Excel.

        Take it out on the database purveyors, not Palantir.

        • By arduanika 2026-03-056:26

          Sure, Palantir is just one tool in the chain, and it's a lot more boring than people make it out to be.

          On the other hand, a comment like yours does smack a bit of "Once the rockets are up, who cares where they come down."

    • By ekjhgkejhgk 2026-03-050:341 reply

      It might match. The red line was domestic surveillance. You don't know what deal they had. Giving Anthropic the benefit of the doubt, perhaps Palantir said "Deal, we won't use your tool domestically".

      • By taurath 2026-03-051:242 reply

Every single time the box is flipped over, what's inside is "more domestic surveillance". Who in their right mind would give the benefit of the doubt?

        • By xvector 2026-03-054:29

          Well, I think a company that stood their ground knowing full well they'd be designated a SCR deserves the benefit of the doubt.

        • By ekjhgkejhgk 2026-03-0513:14

          > Who in their right mind would give the benefit of the doubt?

          I'm saying that we should give Anthropic the benefit of the doubt that when they say "our deal with Palantir doesn't cross our red line", we should believe Anthropic, that they have gotten an assurance from Palantir that they wouldn't use it domestically. I'm NOT saying we should give Palantir the benefit of the doubt.

          I wasn't commenting on "is giving AI to Palantir a good idea" (I don't think it is), I was commenting on "should we conclude that Anthropic is being dishonest because they claimed they have red lines but work with Palantir" (I think it's unclear, but there's a plausible explanation in which they're not being dishonest, but possibly naive, so give them the benefit of the doubt).

    • By tbrockman 2026-03-050:42

Whether or not you agree that it truly aligns with their stated values, in their partnership with Palantir (making Claude available within Palantir's AI platform) they requested consistent restrictions:

      > “[We will] tailor use restrictions to the mission and legal authorities of a government entity” based on factors such as “the extent of the agency’s willingness to engage in ongoing dialogue,” Anthropic says in its terms. The terms, it notes, do not apply to AI systems it considers to “substantially increase the risk of catastrophic misuse,” show “low-level autonomous capabilities,” or that can be used for disinformation campaigns, the design or deployment of weapons, censorship, domestic surveillance, and malicious cyber operations.

      Source: https://techcrunch.com/2024/11/07/anthropic-teams-up-with-pa...

    • By sigmar 2026-03-051:14

      Why do you assume the contract with palantir doesn't have similar terms? Weird assumption.

    • By elevation 2026-03-051:121 reply

      The moral disposition of the Anthropic leaders doesn't matter because they don't own the company. Investors won't idly watch them decimate billions in ROI by alienating the largest institutional customers on the planet.

      • By bryant 2026-03-051:261 reply

        > The moral disposition of the Anthropic leaders doesn't matter because they don't own the company. Investors won't idly watch them decimate billions in ROI by alienating the largest institutional customers on the planet.

        Anthropic is a Public Benefit Corporation chartered in Delaware, with an expressed commitment to "the responsible development and maintenance of advanced AI for the long-term benefit of humanity."

        So in theory (IANAL), investors can't easily bully Anthropic into abandoning their mission statement unless they can convince a court that Anthropic deliberately aimed to prioritize the cause over profit.

        • By freejazz 2026-03-0514:00

          > So in theory (IANAL), investors can't easily bully Anthropic into abandoning their mission statement unless they can convince a court that Anthropic deliberately aimed to prioritize the cause over profit.

So why were they ever working with the military in the first place, if that's the case? If you didn't glean it from OpenAI: it doesn't matter. Everyone is greedy and will jump ship for money if Anthropic does not get it for them.

    • By Madmallard 2026-03-051:13

      They are all guilty.

    • By throwaway613746 2026-03-050:18

      [dead]

    • By EA-3167 2026-03-050:57

      [flagged]

    • By freejazz 2026-03-050:431 reply

      [flagged]

      • By xvector 2026-03-054:311 reply

        I wish people like you would actually talk to people at Anthropic, maybe interview with the company, actually engage with the real humans there before making blithe comments like this.

        Seriously, you're on HN, you can't possibly be that many degrees removed from someone at the company.

In any case it's absolutely not "just marketing"; it suffuses their whole culture, and it is genuine.

        • By freejazz 2026-03-0513:581 reply

          [flagged]

          • By xvector 2026-03-0517:031 reply

            Just have an actual, good faith conversation with a real human working there instead of fighting/making assumptions about a strawman in your head.

            • By freejazz 2026-03-0518:461 reply

I'm not talking about employees, I'm talking about the CEO. The fact that employees believe it means the marketing works. Everything about your posts makes my point. Anthropic is a business, and if you believe they have a serious commitment to the PBC or any of that other stuff, then you have drunk the kool-aid, full stop.

              • By xvector 2026-03-066:491 reply

                Really not sure how you can reconcile that with them making decisions that got them designated a SCR.

                • By freejazz 2026-03-0615:34

Then you're not thinking very hard, when this thread is full of people saying "I'm deleting my OpenAI account RIGHT NOW." Which isn't a surprise, because you too are buying this hook, line, and sinker.

    • By bko 2026-03-051:315 reply

      [flagged]

      • By jfengel 2026-03-052:021 reply

        "The law" is the contract. The Pentagon agreed to terms of service. The law is not on the Pentagon's side. The contract did not change; what changed is the Pentagon breaking the contract.

        Perhaps you think the law shouldn't allow such a contract; that's a valid position. But that's not what the law currently says.

        • By bko 2026-03-053:151 reply

I'm saying they shouldn't write into their contract that they have some veto power over how their software is used, so long as that use is within the law of the land (i.e. laws written by Congress).

Is that more clear?

          • By jfengel 2026-03-0513:131 reply

            Sure. And since they can't reach a contract they do agree on, there is no sale. They cannot be compelled to sign a contract that they do not agree to.

            • By bko 2026-03-0515:36

Agreed. Anthropic shouldn't have required that in their contract (it is stupid). I'm glad the government resisted what was an insane overreach. But since Anthropic insisted, there should be no contract.

      • By mullingitover 2026-03-051:441 reply

        > if its within the law.

        The current administration has been caught flouting court orders in dozens of cases, to the point that courts are no longer even granting them the assumption that they’re operating in good faith.

        I can think of a million good reasons not to give these people the tools to implement automated totalitarianism. Your proposal that they simply refuse service to the government entirely would be ideal.

        • By bko 2026-03-053:161 reply

          Yes we obv need large corporations to exert some kind of control over our elected officials.

          • By mullingitover 2026-03-054:031 reply

            Our elected officials shouldn’t violate contracts. This isn’t rocket surgery.

            • By bko 2026-03-0510:47

They can have a contract that says whatever they want. My argument is that they shouldn't try to push one of these contracts, and the government shouldn't agree to such a contract.

              Nowhere did I say elected officials should violate contracts.

      • By trinsic2 2026-03-051:381 reply

        The government works for the people, not the other way around. For the people, by the people and of the people.

        If you don't question people in positions of power they will just do whatever they want. Democracy is sustained by action, not by acquiescence.

        And with the lawlessness of this administration, I would make it a point to hold them accountable. I'm not going to let them do mass surveillance when they decide to change the law.

Are you naive, or just ignoring what is going on?

        • By bko 2026-03-053:17

I want people to question people in power. That's kind of the point of democracy. But it's good to remember corporations aren't people :-)

      • By Spooky23 2026-03-051:351 reply

        It’s a service. Democracy doesn’t give the government the right to force you to perform a service.

        The technology isn’t suitable for the purposes the regime wants.

        • By bko 2026-03-053:191 reply

They can choose to sell to government agencies or not. But selling to them and then trying to retain some veto power is wrong. So it sounds like we're in agreement.

          I would like western Democratic powers to have the most advanced technology personally but you may disagree.

          • By Spooky23 2026-03-0517:02

            Basically, yes.

            I've worked in government outside of the Federal level. The government has a moral and often legal incentive to do inefficient things for the simple reason that the work they do needs to be safe, controlled and deterministic.

            Any US state maintains a birth registry, death registry and DMV. But firewalls exist so that live links don't exist between these and other programs. It's inefficient, but avoids many hazards and conflicts in regulatory or legal compliance. For example, income tax information is secret, and cannot be shared outside of the tax processing scenario. Police investigatory data should not be linked to your unemployment claim. Fundamentally, those are examples of why the stuff that Palantir is doing is problematic.

            With military applications, it's even more fraught, and human life is in peril by design. It's important for a professional army like the US Army that strict discipline and rules of engagement are followed. Soldiers may find themselves in situations where people are shooting at them, and they are ordered to take no action.

            AI is not capable of functioning in that environment.

My point is these are complex issues, and we are in a political environment where people seeking simple answers are looking to technology like AI to disconnect them from accountability. There's a nuance there, and a reason why Anthropic is willing to partner with Palantir for their work, but hesitant to power drones that are dropping Hellfire missiles on people.

      • By jheimark 2026-03-05 1:41 | 1 reply

        That is crazy. You are suggesting that corporations should have no power over their own IP.

        Are you really saying that if Anthropic sells a limited version of their product to Palantir at a certain price, the government should be able to demand access to an unlimited version of Anthropic's product for free because they are a customer of Palantir?

        That would effectively mean the government gets an unlimited license to all IP of companies that do business with government suppliers... that would be terrible.

        • By bko 2026-03-05 3:22 | 1 reply

          Imagine if a gun manufacturer sold weapons to the military but said "don't use them in wars we deem unjustified." Seems wrong, as we don't want gun manufacturers setting our foreign policy. Choose not to sell to them, sure, but this isn't "ownership of IP". If the feds were to ask for the weights and torrent them out, sure, that's IP. But this ain't that.

          • By mullingitover 2026-03-05 4:01

            Guns aren’t a service, which is what Anthropic sells.

            Anthropic has a contract for how their service is to be used, and the government committed itself to following that contract by signing it. Then it violated the contract.

            Basically the government committed fraud by signing a contract that it clearly intended to violate. Then they tried to bully Anthropic into not doing anything about their breach of contract.

            It’s mobster behavior. You’re saying Anthropic should just not sell services if it’s going to enforce the terms of service. You have it backwards: the government shouldn’t enter into contracts that it intends to violate.

    • By spaghetdefects 2026-03-05 1:04 | 1 reply

      [flagged]

      • By jfengel 2026-03-05 1:59 | 2 replies

        If they're doing it against the terms of service (and publicly so), I can't pin that one on Anthropic.

        They've done plenty wrong, and maybe they shouldn't have gotten in bed with the military to begin with, but this illegal war is not theirs. It rests squarely with the President who declared it. (And with the military officers who are going along with it despite the violation of international law.)

        • By nmfisher 2026-03-05 3:12 | 1 reply

          > If they're doing it against the terms of service (and publicly so), I can't pin that one on Anthropic.

          Anthropic claims that superintelligence is coming, that unaligned AI is an existential threat to humanity, and that they are the only ones responsible enough to control it.

          If that's your world view, why would you be willing to accept someone's word that they'll only Do Good Things with it? And not just "someone", someone with access to the world's most powerful nuclear arsenal? A contract is meaningless if the world gets obliterated in nuclear war.

          • By jfengel 2026-03-05 13:19

            Anybody who works with the military has to deal with that moral dilemma. Many people believe that the military has some legitimate use. They have to figure out for themselves how to deal with the possibility that it can also be used illegitimately.

            So I don't blame Anthropic for getting into bed with the military, and getting out when it got bad for them. A lot of military suppliers are facing a similar dilemma, I suspect. The army runs on its stomach, and I do not envy the people delivering pizzas to the Pentagon, knowing what room those pizzas are consumed in.

        • By spaghetdefects 2026-03-05 2:02 | 1 reply

          I don't think any AI company should get in bed with the military. That being said, if the terms of service have been violated, the account should be canceled.

          • By hedora 2026-03-05 4:40

            They basically are cancelling the contract, but there are some nuances on Anthropic's side. The contract probably has stipulations that prevent them from terminating it overnight, so it might be illegal (but ethical) for them to just turn off the API keys.

            Also, doing that might have bad second order effects with bad ethical implications.

            For example, when Musk decided to pull the plug on a bunch of starlink terminals, he (intentionally and knowingly) blocked a US-funded attack that would have sunk a big chunk of the Russian navy, which certainly prolonged the Ukraine war. That was clearly an act of treason (illegal).

            Anyway, just turning off Claude could kill a bunch of civilians in the region or something. It depends on how deeply it's integrated into military logistics at this point.

            Your point certainly holds for OpenAI, though:

            They walked into a "use ChatGPT for war crimes, and illegal domestic surveillance / 'law enforcement'" deal with open eyes, and pretty obviously lied about it while the deal was being signed. I don't see any ethical nuance that would even partially excuse their actions.

    • By trinsic2 2026-03-05 1:05

      This exchange between Anthropic and OpenAI feels a lot like theater. If I were really trying to stop abuses, I wouldn't go out of my way to talk about it. The "public sees us as the heroes" bullshit feels like a smoke screen. I'd make one statement, keep silent, let the public do the math, and not get involved.

    • By sgustard 2026-03-05 3:19 | 1 reply

      Here's the extracted text https://pastebin.com/LS2LpLZ7

      • By cousin_it 2026-03-05 11:55

        Did you use AI to extract the text? It rephrased the text along the way. I'm too lazy to point out all the differences, but if you search, for example, for the word "suspicious" (which is in the image but not in the extracted text), you should start to get suspicious yourself.

    • By blueblisters 2026-03-05 5:03 | 3 replies

      Wow. Surprising to see open hostilities between the leaders of the big AI labs. The differences appear to be not just competitive but also ideological.

      Edit: Also, openly calling OpenAI employees "gullible" and "Twitter morons" seems sub-optimal if you'd like that talent to work for you at some point.

      Example - https://x.com/tszzl/status/2029334980481212820

      • By utopiah 2026-03-05 10:55

        > if you like that talent to work for you at some point.

        They might not, if they think that everybody who stayed after Sam Altman was reinstated may be excellent technically but not have the culture they want, which seems to be the case judging by all the recent communication.

      • By sinity 2026-03-05 11:35

        "Twitter morons" wasn't referring to OpenAI employees, I think.

      • By fatata123 2026-03-05 5:07

        [dead]

    • By tkgally 2026-03-05 2:54

      Thanks for posting that link. Interesting reading, especially the closing:

      “I think this attempted spin/gaslighting is not working very well on the general public or the media, where people mostly see OpenAI’s deal with DoW as sketchy or suspicious, and see us as the heroes.... It is working on some Twitter morons, which doesn’t matter, but my main worry is how to make sure it doesn’t work on OpenAI employees. Due to selection effects, they’re sort of a gullible bunch, but it seems important to push back on these narratives which Sam is peddling to his employees.”

HackerNews