
From that same X thread: Our agreement with the Department of War upholds our redlines [1]
OpenAI has the same redlines as Anthropic based on Altman's statements [2]. However somehow Anthropic gets banished for upholding their redlines and OpenAI ends up with the cash?
[1]: https://xcancel.com/OpenAI/status/2027846013650932195#m
[2]: https://www.npr.org/2026/02/27/nx-s1-5729118/trump-anthropic...
OpenAI is playing games.
When Anthropic says they have red lines, they mean "We refuse to let you use our models for these ends, even if it means losing nearly a billion dollars in business."
When OpenAI says they have red lines, they mean "We are going to let the DoD do whatever the hell they want, but we will shake our fist at them while they do it."
That's why they got the contract. The DoD was clear about what they wanted, and OpenAI wasn't going to get anywhere without agreeing to that. They're about as transparent as Mac from It's Always Sunny in Philadelphia when he's telling everyone he's playing both sides.
I am going to stop using ChatGPT immediately.
Delete your account. I did. https://www.reddit.com/r/OpenAI/comments/1rh30rw/great_day_t...
Deleting my account today once I import my data to Claude
I'm also waiting on my ChatGPT data export. I started it last night and I'm still waiting. I would say there's huge opportunity here for Claude to offer direct import tooling.
Literally a feature being advertised as of today.
I just deleted my account. The other LLMs are so good that I don't even feel like I'm sacrificing much.
Good. More of this. I did.
No no no, use it more; make sure to use up as many tokens as possible. They do inference at a loss
This makes no sense, their value in the marketplace is in usage and inflated promise, not actual revenues
> They do inference at a loss
They don't, inference is cheap, especially for agents because of cache hits. The API prices are just inflated.
I've got a 'Claw interfacing with OpenAI and generating garbage questions and responses. I have an 8k context on mine.
Deletion with OpenAI isn't really deletion. So I'll waste their resources AND train on low-quality slop on my side.
My work degrades theirs.
> but we will shake our fist at them while they do it
Not even that. They are not shaking anything except their booty.
"Red lines" does not mean some philosophical line they will not cross.
"Redlines" are edits to a contract, sent by lawyers to the other party they're negotiating with. They show up in Word's Track Changes mode as red strikethrough for deleted content.
They are negotiating the specifics of a contract, and Anthropic's contract was overly limiting to the DoD, whereas OpenAI's was not.
That’s not how the term is being used here.
In this case “red lines” is being used to mean “lines that cannot be crossed”.
Anthropic wanted guardrails on how their tech was used. DOD was saying that wasn’t acceptable.
Only Congress can change the name of a federal department, so the Department of Defense is still properly called that.
Only Congress can declare war, but here we are with the Department of War bombing a foreign country and capturing and assassinating foreign leaders.
That policy changed a long time ago. The last declaration of war was June 4, 1942.
After Vietnam, Congress passed the War Powers Resolution to limit the ability of Presidents to conduct military action without Congressional approval, but it still allows military action for up to 60 days. Every President since then has used that power.
That 60 day limit was ignored so frequently in the past it might as well not exist.
Pretty much every attempt at stopping the president (from Clinton onwards) ends the same way: the House votes on it, the Senate might agree with the slimmest of majorities, it reaches the president's desk, the president vetoes it, it goes back to the Senate where it needs a 2/3 majority to override the veto, and it never gets that 2/3 majority.
Yep, it's a case of whether they are willing to impeach the president over this. And the answer is likely no. Some of the America First lot might vote against on 'how does this help America' grounds, but I don't see them getting near the threshold.
Same as it always has. The senate has to vote on whether to convict. And they always vote no.
Even your link doesn't say what you imply.
> It provides that the president can send the U.S. Armed Forces into action abroad only by Congress's "statutory authorization", or in case of "a national emergency created by attack upon the United States, its territories or possessions, or its armed forces".
There was not an attack on the United States.
I don't know why we're getting mired in the details here. The administration certainly isn't. We all work for trump now. Lawyers, journalists, universities, tech companies, state, local and foreign governments. Anything trump or one of his designated people wants, you need to do. If you start sputtering about your agency or your rights or your sovereignty, then expect as much shit thrown at you as the trump organization can muster. That's it, there is no legal justification. There are no fine points to argue. Obey or be punished.
The point is that someone claimed the law was changed, and then linked to something that didn't support the claim.
Yes, Trump is ignoring the law, but you have to be aware that he is crossing the line rather than gaslighting that there wasn't a line at all.
So the president can wage war without Congress, but he can't officially rename the department that supports these wars autocratically. That's interesting.
Iraq war was the last declared war. Afghanistan war was also declared.
Incorrect. The only times America has formally declared war were the War of 1812, the Mexican-American War, the Spanish-American War, World War I, and World War II.
In the case of the Barbary Wars, Vietnam War, the Iraq War and War on Terror / Afghanistan War, etc... congress approved military engagement but DID NOT issue a formal Declaration of War.
You mean that they were special military operations? j/k
Interesting though, I never knew this.
However, if you go to https://www.usa.gov/agencies/u-s-department-of-defense and click on the link to https://www.defense.gov/ you get redirected to https://www.war.gov/
So I think a bit of confusion on my part is justified.
'Power is the perception of power'
That part isn't cited. It is likely not true.
The EO itself agrees with this and says that the War title is secondary. It explicitly doesn’t truly rename the department.
The Department of Defense was established by the National Security Act of 1947. If the Congress wanted to change the name then they would pass another law to do so.
An executive order is not law.
Even though the DoD was created via an act of Congress, as POTUS is the head of the Executive Branch and the CiC of the armed forces, could you make an argument that a name change can be done by executive order? (setting aside whether or not the new proposed name is stupid)
And when it was created it was DOW.
because most americans do not want war, at least i'd hope, so calling it that seems pretty short-sighted (maybe until you continually do that 'war' thing). if you want the citizens to look positively on your spending, it should probably be for defense, not war. again, at least i should hope. im just a dumb "lib", whatever that means
On the other hand calling it "Department of Defense" seems quite whitewashing of what it actually does.
It spends the defence budget...
You see how that's a circular reasoning? The defense budget should by all accounts be called the war budget if we look at how it's spent (Iran).
I'm pretty sure the amount of money spent on offensive actions is significantly less than what is spent on defense
This resembles anti-vax logic. We haven’t been invaded because our military maintains a strong deterrence and strategic depth.
I think you're the one applying anti-vax logic here. Imagine beating a guy up for looking at you wrong, then getting into a semantic argument with the judge about how you shouldn't be charged with assault because it was actually an act of defense: you see, if you hadn't assaulted them, they surely would have assaulted you, so it's defense.
You're basically saying the US doesn't need a Department of Defense because the Department of War is doing such a good job.
Yeah, otherwise the USA would have been invaded by Cuba, Iraq, Vietnam, Syria, Afghanistan, Yemen and a hundred more, and they all would have a fight over who can have it. Thank god the US defended themselves against those terrible guys. Especially the WMDs were quite the close call, the Iraqis were minutes away from nuking the land of the mart.
Cuba's an odd rhetorical choice given https://en.wikipedia.org/wiki/Cuban_Missile_Crisis
> otherwise the USA would have been invaded
Yes, invading Hawaii was part of imperial Japanese planning. If you don’t understand that defense spending is still worthwhile even if you don’t blow anything up with it, I’m not sure how we connect.
But... you ARE blowing things up with it?
Maybe.
I was just saying that the purpose of the Department of Defence is to spend the "defence budget".
Gulf of Mexico.
DOW was already taken, and that is the one to watch when it all comes crashing down?
>Why DoD and not DoW?
Reddit/Bluesky brigade is in full force here, that's why
Actually that was the original name. And it was a more honest name.
It's always been the MoD in the UK afaik, but there was the War Office I suppose.
It was the War Office from 1857 to the mid-1960s.
Different entities, but yes I said 'there was the War Office'.
What is currently considered the DoD was built after WW2 as the "National Military Establishment" by the "National Security Act of 1947" which restructured and reformed significant war and military assets under the "Secretary of Defense" and the NME was very quickly renamed the "Department of Defense".
The "Department of War" during WW2 was in control of the Army, and was separate from the Department of the Navy and eventual Department of the Air Force (spun off from the Army) and was headed by the "Secretary of War".
Changing the name to "Defense" was an intentional act by a President and government who wanted to reduce the power of the Military Industrial Complex and reduce the "War" focus of a subset of the government, and force the different departments to work together and share toys.
The reorganization was desired for many reasons but Truman made lots of talk about how this was about the national defense and made gestures to the Pearl Harbor attack as something relevant. Different departments failing to work together was a huge problem during WW2, and other wars. Putting them all under one single cabinet position, the Secretary of Defense, was a significant point.
This vocab was used during the war, about the reorganization being about the defense of the nation.
Similarly, NATO is a defense only pact, in very clear terms.
There was tons of debate in the US government at the time as to whether we had viable intelligence of the Japanese attack at Pearl Harbor ahead of time and it wasn't properly utilized or disseminated. In fact, there were many such instances in the Pacific Theater early on, where poor intelligence handling resulted in worse battle outcomes.
The point of the Department of Defense is to Defend America, and they do that by being in control of our Military. Letting our defense assets bully the world is the Utter Failure of the American voting public over the past 100 years.
https://en.wikipedia.org/wiki/United_States_Department_of_De...
Stopping to question why somebody uses DoD or DoW is way more telling than using either of them. Especially since both are perfectly fine, even officially.
A square was renamed in my home city about 20 years ago. We still usually use the original name; even teens know that name. I use a form of the original name of our main stadium, which was renamed almost 30 years ago. Heck, some people use names of streets that haven't been official for almost 40 years now. Btw, the same goes for government departments. Nobody follows what they are called at the moment, because nobody really cares. That's what's strange: when somebody does care.
Or it could have just been a genuine question. I'm not American and I've seen DoW used in newspapers and thought the name change was official. Personally I've thought it a more apt and honest name for what they do.
But the backlash in the comments here shows how ideologically charged the question seems to be.
I wasn't aware of how ideologically charged the question was. I'm also not American, but I'm glad I asked the question. It's a clear sign for us non-Americans to just leave them be.
> It's a clear sign for us non-Americans to just leave them be.
Depending on where you live in the world that might be quite hard to do soon.
I agree. I live in Brazil and even though tariffs and interventions weren't directed at us, they influence the economy and political decisions. Also, Venezuela is right next to us, so instabilities there do tend to affect the whole region.
> Or it could have just been a genuine question.
Yes, exactly. That's why I wrote several examples to support why the chance of that is very, very slim.
Easier to work in hypotheticals than to do a bit of research, like reading the other comments. I just explained that it was an honest question, and why.
Do you really trust random comments on the internet that claim something whose probability is slim? Literally nobody cares what name somebody uses when that somebody knows both names and it's not political. I don't think that's optimal, and that's a hefty understatement, of course.
By using the actual legal and official name of the department (which Trump didn’t and couldn’t change)?
Because using DoW is woke when the legal name is DoD.
Pretty ironic given their anti-woke agenda
Personally I think OpenAI is intending to infiltrate their political enemy's stronghold and look for ways to leak data to "get Trump" as per usual.
They'll say "oops" and then we'll spend the next few years listening to pointless Congressional hearings.
https://en.wikipedia.org/wiki/Executive_Order_14347
> Among U.S. federal agencies, the order authorized the official use of the secondary title "Department of War" for the United States Department of Defense. The department now refers to itself as the "Department of War" for non-statutory purposes, but is still technically named the Department of Defense, as only an act of Congress can formally change the name of a federal department.
Isn't it simpler to say that Anthropic adopted a values-based use approach and OpenAI adopted a legal one?
Or, in other words, there are two ways you can decide to use a lucrative property:
1. Designate it private and draft terms for how you allow it to be used, per your value system (as long as those values don't violate any laws).
2. In the face of competition, give up some values and agree to a legal definition of use that favors you.
What does 'a legal approach' mean where there is no rule of law? The USA just bombed another country without having a domestic legal basis for it. I can't imagine they're holding back on AI use that is illegal: even textbook-clear war crimes (like blowing up shipwrecked people) do not give Hegseth and Trump pause.
That goes for domestic actions too: happy to arm a paramilitary and set them loose against citizens who are not politically aligned with Trump, and the Republican Senate barely even blinks. Hard to imagine they'd care about AI use in mass surveillance, or AI use in automated anti-personnel weapons. The Senate will say, 'Oh no, they unlawfully killed US citizens, again... Welp, let me check my insider trading gains... yeah, seems fine'.
Anthropic wanted to put those restrictions in the contract. OpenAI said they'll just trust their own "guardrails" in the training, they don't need it in the contract. (I'm not sure I believe "guardrails" can prevent mass surveillance of civilians?)
Very gracious of OpenAI to say Anthropic should not be designated a supply chain risk after sniping their $200 million contract by being willing to contractually let the government do whatever they like without restrictions.
Guardrails cannot really oversee this. If you can decompose a problem into individual steps that are not, in themselves, against the agent's alignment, it's certainly possible to have the aggregate do so.
How confident are we, given OpenAI's recent very large contribution to Trump's PAC, that OpenAI wasn't working behind the scenes to get Anthropic designated a supply chain risk? I don't want to be too paranoid here, but given Sam's reputation, and cui bono, I don't think we can really rule this out either.
>(I'm not sure I believe "guardrails" can prevent mass surveillance of civilians?)
Right, wouldn't they need a moderation layer that could, for example, fire if it analyzed & labeled too many banal English conversations?
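A minimal sketch of what such a tripwire might look like (everything here is hypothetical: the class, the thresholds, and the upstream classifier are assumptions for illustration, not anything OpenAI or Anthropic has described):

```python
# Hypothetical moderation tripwire: fire if too large a share of recent
# requests were classified as bulk analysis of banal civilian conversations.
from collections import deque


class SurveillanceTripwire:
    def __init__(self, window: int = 100, max_fraction: float = 0.05):
        # Rolling window of boolean labels from some upstream classifier.
        self.window = deque(maxlen=window)
        self.max_fraction = max_fraction

    def record(self, is_banal_conversation: bool) -> bool:
        """Record one classified request; return True if the tripwire fires."""
        self.window.append(is_banal_conversation)
        # Only judge once the window is full, to avoid noisy early readings.
        if len(self.window) < self.window.maxlen:
            return False
        return sum(self.window) / len(self.window) > self.max_fraction


tripwire = SurveillanceTripwire(window=100, max_fraction=0.05)
fired = False
for i in range(200):
    # Simulate a workload where 10% of traffic is labeled banal conversation.
    fired = tripwire.record(i % 10 == 0) or fired
print(fired)  # True: a 10% rate exceeds the 5% ceiling once the window fills
```

The hard part, of course, is the classifier feeding it, and who controls the threshold; the mechanism itself is trivial.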
They really gave training credit for guardrails? I mean, it could perhaps reject prompts about designing social credit systems sometimes, but I can't imagine realistic mitigations to mass domestic surveillance generally.
OpenAI's post about their contract has the "redlines" described and they don't match what Anthropic wanted. (even if the text tries to imply they do)
https://openai.com/index/our-agreement-with-the-department-o...
This is a good comment detailing the differences: https://news.ycombinator.com/item?id=47200771
> However somehow Anthropic gets banished for upholding their redlines and OpenAI ends up with the cash?
The current administration is so incompetent that I find this perfectly believable.
I imagine the government signed with OpenAI in order to spite Anthropic. The terms wouldn't actually matter that much if the purpose was petty revenge.
I don't know if that's actually what happened here, I just find it plausible.
Absolutely incompetent, but I don’t think that’s the cause here. I think Anthropic’s sin was publicly challenging the administration. They’re huge on optics. You can get away with anything as long as you praise and bow in public.
That and possibly OpenAI cofounder Brockman donating 25 million to a Trump-aligned PAC: https://www.sfgate.com/tech/article/brockman-openai-top-trum...
Grifters gonna grift.
same. this is about losing a negotiation and saving face / exacting revenge.
Sam Altman has no scruples. Dark Triad personality. No reason to believe anything he says.
The same goes for anybody still working at OpenAI past Monday morning 9 am.
People's need for food and shelter doesn't go away because their employer is unethical.
I don't think you could find a single person working for OpenAI that couldn't find employment elsewhere within a month that pays more than enough for food and shelter. This is a ridiculous statement.
These people are now dependent on their level of income. And they don't like financial uncertainty, just like anyone else.
But yeah, I'd expect them to change jobs in the coming year or otherwise I'm going to agree with you.
>These people are now dependent on their level of income.
Kinda sucks if you take a seven-figure-per-annum job and are now dependent on that level of income. Quick question: is this true for everyone? If I take a job that pays twice what I earn now, is my food spending going to double, for instance? Or is this an American thing?
I used to live on $20K/yr working a restaurant job; now in tech making six figures I'm still living check to check. It's a lifestyle/personal-choice thing in my case: I'm dumb and waste money.
It's actually a bit of a shibboleth in some verticals of the American industrial/business sector, I think. There's a certain mentality around visible, conspicuous consumption that signals to those in the upper class that you're a prime candidate for leaning on. You're hungry, will do anything to stay where you are, and can be relied upon to "play ball with the big boys", in part because if you don't, and try to take them down, they know what you did to get there. Someone who doesn't participate is something to be wary of: less purchase for manipulation, possibly an indication of a lesser degree of skin in the game, an indication of different priors, I guess. I've often wondered if there's a similar distrust between the nouveau riche and old money for similar reasons. Wouldn't know myself, though; I haven't bumped around in quite those high circles.
There's a (maybe new?) focus on health in America, and it is tied directly with dollars spent for the most part. If I made $50k more a year, it would disappear down the drain on purchases like lean bison meat instead of fatty ground beef. I'd get more expensive, locally grown and better tasting vegetables. I'd get a home where I could have a cold plunge, sauna, and whatever new toys that data or rumor backs that promise to reduce brain fog, increase energy, etc. As always, America has so much diversity in what you can buy and food/health is no different.
I imagine it'd be hard to take your family from healthy meals back to rice and beans.
Or they all just get fancy cars and big houses to show off, who knows. I certainly can't be sure how my life would change with 7 figures.
There are many employers. OpenAI employees that quit on account of this will be in high demand at the other AI companies, especially the ones that don't bend over in 30 seconds when Uncle Donald comes calling.
there’s always someone in the world that will defend anything.
Like the people working at OpenAI had no other choice than to pick this cushy job (some have salaries of 500k per year), instead of anything else.
It's an extreme personal opinion, but: all people working at OpenAI after this debacle are more than happy to make AI for war, because Food and Shelter.
I find your comment fitting for this forum; it is where all this enabling started anyway.
Indeed, it is worth noting that Sam Altman got his chance through PG/YC and that YC was totally fine with both Musk and Zuckerberg giving them a platform long after it became evident that they had some screws loose in the ethics department.
Effectively the message is 'we don't mind you being an asshole, as long as you're rich'.
Per levels.fyi, median salary of most openAI positions are above 300k. Even "technical writers" have a median pay of 197k. I searched around the internet and it seems like even entry level positions receive well above 150k. Apart from people with severe lifestyle bloat or an unholy number of dependents I doubt too many people working there will face immediate financial difficulties if they quit.
Anyway, it is also amusing to hear tech people defend their right to earn some of the fattest salaries on this planet using the smol bean technique after a decade of "why wouldn't the West Virginian coal miner just learn to code." It was always about maintaining the lifestyle of yearly Japan vacations and MacBook upgrades and never about subsistence.
> OpenAI hires "technical writers"
Mind blown. Isn't documentation a prime use case for "AI"?
As a technical writer who's spent a great deal of time recently editing AI-drafted documentation, this use case is not going to go as well as AI boosters think it is. :)
Have you ever seen the back of your head, without a mirror? Without two mirrors, actually?
How can AI accurately describe itself in full?
The problem it has describing itself isn't the lack of a metaphorical mirror, tool use is there and it can grep whatever code or research is written; the problem is that all machine learning is surprisingly slow to update with new info.
Ask ChatGPT to describe itself, you may get valid documentation and API calls, or you may get the API for GPT-3 (not ChatGPT, before that). I have had both happen.
Elephant.
Did it in one word, easy
What's next?
No, it's prone to assuming or falsifying details even when it has the tools at hand that could verify the true details. Even when explicitly instructed to perform a specific tool call that would load the correct information into its context. Sometimes the pull of the training data is too strong and it will just not make the call and output garbage, all the while claiming otherwise.
I don't think everyone working for OpenAI is unethical. But it is ridiculous to frame highly paid people, working for companies quite a few of their peers avoid for ethical reasons, as poors with no choice.
Great comedy line, you're very funny!
It already started:
https://www.bbc.com/travel/article/20240222-air-canada-chatb...
> the airline said the chatbot was a "separate legal entity that is responsible for its own actions".
What an utterly pathetic, cowardly, spineless and defeatist statement
Anthropic demanded defining the redlines. OpenAI and others are hiding behind the veil of what is "lawful use" today. They aren't defining their own redlines and are ignoring the executive branch's authority to change what is "lawful" tomorrow.
Or the increasing impunity all three branches of government are giving themselves with regard to bad faith interpretations of the law, and a lack of government accountability when they color outside the lines.
Much of the impunity is now Supreme Court settled law.
We see clearly unconstitutional behavior every day, and there is no systematic, timely or effective, push back from any constitutionally enabled oversight.
Checks and balances don't work, when players are more loyal to party than branch or constitution.
Unfortunately, there are no constitutional checks, balances or limits on single party control. And single party control negates all the others. That one party can majority control all three branches is a serious failure mode in political incentives (bipartisanship is highly disincentivized) and governance (even temporary or shaky full control incentivizes making full control permanent over all other "policies").
Until the last few decades, diverse concerns across states avoided tight centralization within parties, and therefore across branches.
What exactly is considered "settled" law when the SCOTUS can unilaterally overturn Roe v. Wade overnight after almost 50 years of precedent?
In this case, "settled" means for everyone else, unfortunately.
However, there's one overriding concern that has got America to this point: "anti-woke". That is, reinstating the load-bearing racism and sexism.
A lot of that turned out to be pushed by Epstein and his associates. It's not hard to figure out why they would enjoy a world with lots of racism, sexism and general inequity. It's really disturbing when you consider how much power this network still has.
I mean yeah... one of his co-conspirators is the President.
Anthropic's whole worry with mass surveillance was that current law is too loose in the age of AI to offer enough restraint.
My understanding of the difference, influenced mostly by consuming too many anonymous tweets on the matter over the past day so could be entirely incorrect, is: Anthropic wanted control of a kill switch actively in the loop to stop usage that went against the terms of use (maybe this is a system prompt-level thing that stops it, maybe monitoring systems, humans with this authority, etc). OpenAI's position was more like "if you break the contract, the contract is over" without going so far as to say they'd immediately stop service (maybe there's an offboarding period, transition of service, etc).
Brockman donating $25 million in January might have a little something to do with it..
> more stringent safeguards than previous agreements, including Anthropic's.
Except they are not "more stringent".
Sam Altman is being brazen to say that.
In their own agreement as Altman relays:
> The AI System will not be used to independently direct autonomous weapons in any case where law, regulation, or Department policy requires human control
> any use of AI in autonomous and semi-autonomous systems must undergo rigorous verification, validation, and testing
> For intelligence activities, any handling of private information will comply with the Fourth Amendment, the National Security Act of 1947 and the Foreign Intelligence and Surveillance Act of 1978, Executive Order 12333, and applicable DoD directives
> The system shall also not be used for domestic law-enforcement activities except as permitted by the Posse Comitatus Act and other applicable law.
I don't think their take is completely unreasonable, but it doesn't come close to Anthropic's stance. They are not putting their neck out to hold back any abuse - despite many of their employees requesting a joint stand with Anthropic.
Their wording gives the DoD carte blanche to do anything it wants, as long as it adopts a rationale that it is obeying the law. That is already the status quo. And we know how that goes.
In other words, no OpenAI restriction at all.
That is not at all comparable to a requirement the DoD agree not to do certain things (with Anthropic's AI), regardless of legal "interpretation" fig leaves. Which makes Anthropic's position much "more stringent". And a rare and significant pushback against governmental AI abuse.
(Altman has a reputation for being a Slippery Sam. We can each decide for ourselves if there is evidence of that here.)
Yep. It's the difference between "Don't do these things, regardless of what the law says." and "Do whatever you want, but please follow your own laws while you do it".
As Paul Graham said, "Sam gets what he wants" and "He’s good at convincing people of things. He’s good at getting people to do what he wants." and "So if the only way Sam could succeed in life was by [something] succeeding, then [that thing] would succeed"
Easy way to summarize it: "You're not allowed to do these things, except for all of the laws that allow you to do these things."
It’s a non-clause that is written to sound like they are doing something to prevent these uses when they aren’t. “You are not allowed to do illegal things” is meaningless, since they already can’t legally do illegal things. Plus the administration itself gets to decide if it meets legal use.
> “You are not allowed to do illegal things” is meaningless, since they already can’t legally do illegal things.
That's not quite right.
First off, I don't expect that "you used my service to commit a crime" is in and of itself enough to break a contract, so having your contract state that you're not allowed to use my service to commit a crime does give me tools to cut you off.
Second, I don't want the contract to say "if you're convicted of committing a crime using my service", I want it to say "if you do these specific things". This is for two reasons. First, because I don't want to depend on criminal prosecutors to act before I have standing. Second, because I want to only have to meet the balance of probabilities ("preponderance of evidence" if you're American) standard of evidence in civil court, rather than needing a conviction secured under "beyond a reasonable doubt" standard. IANAL, but I expect that having this "you can't do these illegal things except when they aren't illegal" language in the contract does put me in that position.
I don’t think the language does, or is intended to, give OpenAI any special standing in the courts.
They literally asked the DoD to continue as is.
There is no safety-enforcement standing created, because there is no safety enforcement intended.
It is transparently written, as a completely reactive response to Anthropic’s stand, in an attempt to create a perception that they care. And reduce perceived contrast with Anthropic.
If they had any interest in safety or ethics, Anthropic’s stand just made that far easier than they could have imagined. Just join Anthropic and together set a new bar of expectations for the industry and public as a whole.
They could collaborate with Anthropic on a common expectation, if they have a different take on safety.
The upside safety culture impact of such collaboration by two competitive leaders in the industry would be felt globally. Going far beyond any current contracts.
But, no. Nothing.
Except the legalese and an attempt to misleadingly pass it off as “more stringent”. These are not the actions of anyone who cares at all about the obvious potential for governmental abuse, or creating any civil legal leverage for safe use.
Let me clear it up
The Trump administration acts cartoonish and fickle. They can easily punish one group, and then agree to work with another group on the same terms, to save face, while continuing to punish the first group. It doesn't have to make consistent sense. This is exactly what they have done with tariffs, for example.
Secondly, the terms are technically different because "all lawful uses" is preserved in this OpenAI deal, but that's just lawyering to the public. Internally at the DoD, I'm sure it was really about the phrase "all lawful uses". So the lawyers were able to agree to it and the public gets this mumbo-jumbo.
I thought mass surveillance of Americans was unlawful by the DoD, CIA and NSA? We have the FBI for that, right? :)
Sure, but OpenAI is also being disingenuous here, pretending they’re operating under the same principles Anthropic is. They’re not, and the things they’re comfortable doing are things Anthropic said they won’t do.
> except for all of the laws that allow you to do these things.
It's even worse than that, because this administration has made it clear they will push as hard as possible to have the law mean whatever they say it means. The quoted agreement literally says "...in any case where law, regulation, or Department policy requires human control" - "Department policy" is obviously whatever Trump says it is ("unitary executive theory" and all that), and there are numerous cases where they have taken existing law and are stretching it to mean whatever they want. And when it comes to AI, any after-the-fact legal challenges are pretty moot when someone has already been killed or, you know, the planet gets destroyed because the AI system decides to go WarGames on us.
Brings to mind the infamous line from Nixon:
"When the president does it, that means it is not illegal".
This was during the Frost/Nixon interviews, years after he had already resigned. Even after all that, he still believed this and was willing to say it into a camera to the American people. It is apparent many of the people pushing the excesses going on today in government share a shameless adherence to this creed.
If only Nixon had had the current supreme court, which actually agrees with him.
Nixon's issue wasn't a lack of support in the courts but in Congress[1]:
> On August 7, Nixon met in the Oval Office with Republican congressional leaders "to discuss the impeachment picture," and was told that his support in Congress had all but disappeared. They painted a gloomy picture for the president: he would face certain impeachment when the articles came up for vote in the full House, and in the Senate, there were not only enough votes to convict him, but no more than 15 or so senators were willing to vote for acquittal. That night, knowing his presidency was effectively over, Nixon finalized his decision to resign.
The contrast with how compliant the majorities in Congress are today to the whims of the White House cannot be overstated. The past decade has pretty much completely eliminated any semblance of a Republican Party that stood for anything other than the whims of Trump. Everyone either got on board or was exiled from power; the third highest member of House leadership got driven from Congress for taking a stand on the events of January 6, whereas the senator who in a debate in 2016 alleged that Trump's small hands implied a similar proportion for one of his less-visible body parts faded into the background for the next eight years and was rewarded with a prominent position in the cabinet this time around.
> https://en.wikipedia.org/wiki/Presidency_of_Richard_Nixon#Re...
Each of those clauses has a DoD policy carve-out as an exception, which basically means they can do whatever they want to do, and are only restricted from the things they didn't want to do anyway.
This is the same government caught spying on its citizens by Snowden so I don’t trust them at all.
This implies that OpenAI must build and release and maintain a model without any safeguards, which is probably the big win and maybe something Anthropic never wants to do.
So you want OpenAI to create “laws”?
I for one do not want ai labs to designate what is legally ok to do.
I much prefer the demos to take care of that.
Who said anything about OpenAI passing laws? (Where did that come from?)
Civilians are allowed to put conditions on working for, or supplying, the DoD or any governmental customer.
Tremendous good comes from those that are not willing to facilitate harms, simply because they are legal.
Equating legal with ethical or safe makes no sense. [0]
[0] All of human history.
No, I want the government to prove that they understand the limitations of software that they're purchasing to use to kill people, and I want AI companies to be clear about what they think those limitations are. Just because it would be legal to use software for something doesn't mean it's capable of doing it safely, and there's absolutely no reason that the government is in a better position to judge that than the people who make it and literally have a financial incentive to lie about it. If they're that confident it wouldn't be safe to use it for that, maybe it's a good idea to consider that.
So we should just do whatever unethical thing we want, as long as it isn't strictly illegal/we won't get caught? Actually that does seem to be the Silicon Valley mantra. I don't know how people think this is going to play out well, particularly when the entities in question are often powerful enough to change the laws themselves.
I have never used AI to generate an answer for HN but just this once I thought it would be good to hit ChatGPT specifically and ask it for 'a list of times Sam Altman has gone against his word.' Here was its response:
Shift from Nonprofit Mission to For-Profit Orientation – OpenAI was founded as a nonprofit with a charter focused on “benefit to humanity,” but under Altman it created a capped-profit subsidiary, accepted large investments (e.g., from Microsoft), and critics (including Elon Musk in a 2024 lawsuit) argue this departed from that original mission. A federal judge allowed Musk’s claim that Altman and OpenAI broke promises about nonprofit governance to proceed to trial.
Nonprofit Control Reorganization Drama (2023) – In November 2023, the original nonprofit board cited a lack of transparency and confidence in Altman’s candor as a reason for firing him. He was reinstated days later after investor and employee pressure, highlighting internal conflict over governance and communication.
Dust-Up Over Military Usage Policies – OpenAI initially had explicit public policies restricting AI use in “military and warfare” contexts, but those clauses were reportedly removed quietly in 2024, allowing the company to pursue Department of Defense contracts — a turnaround from earlier language that appeared to preclude such use.
Statements on Pentagon Deal vs. Prior Positioning – In early 2026, Altman publicly said OpenAI shared safety “red lines” (e.g., prohibiting mass surveillance and autonomous weapons) similar to some competitors, but hours later OpenAI signed a deal to deploy its models on classified military networks, leading critics to argue this contradicts earlier positioning on limits for military use.
Regulation Stance Shifts in Congressional Testimony – Altman has advocated for strong regulation of AI in some public settings but in later congressional hearings opposed specific regulatory requirements (like mandatory pre-deployment vetting), aligning more with industry concerns about overregulation — a shift in tone compared with earlier support of regulatory frameworks.
I found this interesting. But the best approach is start with LLM, then check every point yourself, and summarize with real links. The moment we are ok with LLM output just once, it won't be just once, and things get too murky.
The purpose of the exercise was to see what OpenAI thinks of itself to a large degree. I hope nobody takes the answers at face value considering they clearly have a conflict of interest at their very core. It has turned into an interesting social experiment though. There is a very real instant negative reaction to saying 'an LLM generated this' no matter the context or intent.
And the powerful win even more
That seems exactly what it should be. The United States military should be able to do what the law allows. If we don't think they should be allowed to do something, we should pass laws. Not rely on the goodness of Sam Altman.
So don’t stand up for ethics and safety where there isn’t a law for it? Backwards day?
Nobody is prosecuting the DoD with non-laws here. But one company is using their legal right to refuse to facilitate great harms.
> Not rely on the goodness of Sam Altman.
(Who said anything about that? Where did that come from?)
Nobody wants to rely on Altman!
For anything. But it would be better if he would stand up for safety, instead of undermining it.
Your logic is backwards.
If we don’t want to rely entirely on a centralized government alone, increasingly interested in giving its leaders unfettered power, with all three branches increasingly willing to bend our laws and give itself impunity, then a widespread civilian culture of upholding safety by many and all actors is a necessity.
The need for the latter is always a necessity. But the risks of power consolidation, with the help of AI, are rising.
No. Sam Altman should have no say in what the government can or can't do for national security. Sorry.
The red lines are not the same.
Anthropic refuses to allow their models to be used for any mass surveillance or fully-automated weapons systems.
OpenAI only requires that the DoD follows existing law/regulation when it comes to those uses.
Unfortunately, existing law is more permissive than Anthropic would have been.
Altman donated a million to the Trump inauguration fund. Brockman is the largest private maga donor. You don't have to be a rocket scientist to understand what's going on here.
Agreed. These guys are traitors.
It's probably a combination of "Altman is simply lying" (as he has been repeatedly known to do) and "the redlines in OpenAI's contract are 'mass surveillance' and 'autonomous killbot' as defined by the government and not the vendor". Which, of course, effectively means they don't exist.
> 'mass surveillance' and 'autonomous killbot' as defined by the government and not the vendor
Ah, so they’ll be applying the good ol’ Three-Fifths Rule[0], a classic.
The OpenAI PR implies that Anthropic had a "usage-policy" clause with no actual enforcement.
Whereas OpenAI won their contract on the ability to operationally enforce the red lines with their cloud-only deployment model.
Exactly. What are we not being told? There is some missing element in the agreement, or the reasoning for the action against Anthropic is unrelated to the agreement.
The demand was that Anthropic permit any use that complied with the law. They refused. OpenAI claims to have the same red lines but in reality has agreed to permit anything that complies with the law.
In other words OpenAI is intentionally attempting to mislead the public. (At least AFAICT.)
Turns out both companies ran the agreement through their legal departments (Claude and GPT), and one of them did a poor summary. I (think I) jest, but this is probably going to be a thing as more and more companies use LLMs for legal work.
president of openai donated $25 mil to trump last month, openai uses oracle services (larry ellison), kushners have lots invested in openai, altman is pals with peter thiel
One nuance I've noticed: the statement from Anthropic specifically stated the use of their products for these purposes was not included in the contract with DoD but it stops short of saying it was prohibited by the contract.
Maybe it's just a weak choice of words in anthropic's statement, but the way I read it I get the impression that anthropic is assuming they retain discretion over how their products are used for any purposes not outlined in the contract, while the DoD sees it more along the lines of a traditional sale in which the seller relinquishes all rights to the product by default, and has to enumerate any rights over the product they will retain in the contract.
Punish one, teach a hundred (companies).
The reasoning is that one company is ‘left and woke’ and the other gives money to Trump.
$25 million to be exact, one of Trump's largest individual donors. From a guy who "doesn't consider himself political", lol. [0]
[0]: https://www.wired.com/story/openai-president-greg-brockman-p...
“I think there's no decision ever that everyone at OpenAI agrees with,” Brockman says when I ask what his team thinks about the donations. “Even when we were 10 people. We’ve always been a truth-seeking culture. We have this scientific mission of discovery, and reality kind of doesn't care for your own opinion. It cares about what's true.”
After our interview, Brockman declined WIRED’s request for comment on the ICE shootings. Separately, he offered a more general statement clarifying his thoughts on the conversation with WIRED. "AI is a uniting technology, and can be so much bigger than what divides us today,” he said.
His justifications are just an ever-changing, rambling mess of word salad that never even comes close to addressing the MAGA Inc donation specifically. Who is this even for?
We're talking about a pretty straightforward donation to the incumbent President's Super PAC, not ASI solving world hunger or whatever.
[dead]
OpenAI donated $25,000,000 to Trump, that's why. Now people are cancelling ChatGPT subscriptions, so he needs to walk back the optics.
To what end?
It's called corruption.
They can say it on X. But will they refuse to do work?
They are obviously lying. OpenAI is not to be trusted anymore.
The difference is Anthropic wants contractual limitations on usage, explicitly spelling out cases of Mass Surveillance.
OpenAI has more of an understanding that the technology will follow the law.
There may not be explicit laws about the cases Anthropic wanted to limit. Or at least it’s open for judicial interpretation.
The actual solution is Congress should stop being feckless and imbecilic about technology and create actual laws here.
Between Anthropic, the military, and Congress, I have the least faith in Congress to make knowledgeable policy around tech.
We used to have nice things
https://en.wikipedia.org/wiki/Office_of_Technology_Assessmen...
> based on Altman's statements
The dude is notorious for being a compulsive liar; even his supporters have to admit as much.
It's almost like the Trump administration wanted to switch providers and this whole debate over red lines was a pretext. With this administration, decisions often come down to money. There are already reports that Brockman and Altman have either donated or promised large sums of money to Trump/Trump super pacs
Can't recall the source right now (it would've been on one of the several podcasts I listened to on Friday, I think), but there's a story/rumor to the effect that at some point during Claude's earlier deployment at the Pentagon — might well have been in the context of the Venezuela/Maduro operation — someone at Anthropic had in one way or another flagged some kind of legality concerns regarding the relevant operation (and/or perhaps Anthropic's role in it) with Palantir, who was maintaining the Claude deployments for the DoD. The story goes that after Palantir then relayed this information to the DoD, Hegseth had a major fit over how Anthropic's hippie-ass Northern California woke bros should have no say in matters relating to national security, that of Hegseth's "warfighters" or whatever, etc...
Also, in the latest Hard Fork episode, Casey or Kevin mentions how the DoD undersecretary in charge of this contract apparently doesn't get along with, or pretty much hates, Amodei for some reason. I think this might be the same undersecretary dude who actively commented on the whole contract-term controversy on X yesterday. Too bad I can't recall his name either.
Exactly. This is very shady. Too many OpenAI investors in Trump’s orbit. And it could be that OpenAI will say it’s their policy, but whereas Anthropic wanted oversight to ensure their redlines were enforced, OpenAI I think will just turn a blind eye. It’s doublespeak. It’s disingenuous. It’s the kind of business play Trump likes because it’s nefarious and screws someone over, like Trump’s contractors and staff, who were paid very late if paid at all.
[dead]
[dead]
The problem with "Any Lawful Use" is that the DoD can essentially make that up. They can have an attorney draft a memo and put it in a drawer. The memo can say pretty much anything is legal - there is no judicial or external review outside the executive. If they are caught doing $illegal_thing, they then just need to point to the memo. And we've seen this happen numerous times.
Did you guys really think that the jurisprudential issues that became endemic after 9/11 suddenly disappeared because we discovered LLMs?
Let’s put pressure on our government to fix the FISA issues. Let’s rein in the executive branch. But let’s do it through voting. Let’s not give up on our system of government because we have new shiny technology.
You were naive if you thought developing new technologies was the solution to our government problems. You’re wrong to support anyone leveraging their control over new technology as a potential solution or weapon of the weak against those governmental issues.
That is not how you effect change in a democracy.
And, to be clear, the way you effect change in a democracy is coalition building, listening to others, supporting your allies in their aims, and in turn having them support you, even when you don’t fully agree or understand. There’s no magic wand, none of us are right, there’s no big picture; just a bunch of people working together.
> But let’s do it through voting.
You don't get a successful vote without a tremendous amount of coordination and activism preceding it.
Laws that constrain government from bad things are very difficult things to get the government to pass.
In the meantime, using completely legal civil power to push back on legally allowed harms seems beyond sensible.
But if you just vote and it works without all that, please let us know how you did it!
While I agree that we should be voting in people who will respect the power and authority they're given, I can't imagine we will vote away all these problems.
We would need to vote in a president and 60%+ into congress that is willing to throw away their own power and authority. I just don't see that happening, especially not in a political system so corrupted already.
The US needs an organization doing the equivalent of the National Popular Vote Interstate Compact, but for candidates and for fixing the US voting system. Get running politicians to sign up so that if 60% of them are in office, they'll table and vote for a specific, already-spelled-out constitutional reform for more representative voting.
The goal being more than two parties in government, so that Democrats and Republicans can fracture into more functional bodies (MAGA, RINOs, neo-liberal, progressive, etc.), people can vote closer to their issues/beliefs, and with multiple parties, one party isn't running roughshod over the others.
Take a step back: Americans voted for this. They want unaccountable police and courts for the Dirty Harry legal system: maximum indiscriminate violence against those designated as criminals.
I've never seen this on a ballot and, maybe with the exclusion of Trump, never heard a candidate campaign on anything similar.
You probably could make the case that Trump did campaign on it so I'll grant that, but this problem started well before he was even firing people on TV.
Off the top of my head: Joe Arpaio. George Wallace. Rudy Giuliani. Paul Gosar. Louie Gohmert.
George Wallace has been dead for something like 30 years, but yes he was very blatant. I have family that knew him in Montgomery, friends of friends kind of a situation. They don't have good things to say about him.
I don't remember Rudy running on such ideas but maybe he did. Arpaio was running as a sheriff; I would never have voted for him, but agreed, people did absolutely vote for him in a law enforcement capacity with pretty clear views.
I don't know enough about Gosar or Gohmert to comment well about either.
You are right that this happens in practice (e.g. John Yoo torture memo). However, it is not how the system was intended to function, nor how it ought to function. I don’t want to lose sight of that.
We shouldn't be stacking up so many incentives for it to happen though.
This is all happening in secret. That don't need any memo.
In the unlikely case anyone finds out, those acting in the interests of the administration will have "absolute immunity", as they are "great American Patriots".
That's what "all lawful use" means.
It's lawful use with specific laws called out though? New laws won't supersede what is agreed in the contract at the time of signing.
It's the DoW now, not DoD
Not to mention that the government is already bound against using things it buys for unlawful uses. It's a totally redundant clause in a contract that OpenAI is touting to confuse people.
Or, best case, by the time it's found out it's years later; there's a "committee" that releases a big report, everyone shrugs their shoulders, and moves on. It's a playbook.
Exactly, and it's easy to hide behind things like the Patriot Act if challenged legally.
It's interesting to see the parties flip in real time. The Democrats seem to be realizing why a small federal government is so important, a fact that for quite a few years they were on the other side of.
I think the problem is exactly the opposite. The federal government has the total combined power and scale that it does because we are a massive and complex modern nation. That's inevitable. The problem that we are seeing is that the reins to that power can be held by too few people, it turns out. The checks and balances have ceased to exist. No one is held accountable and people are allowed to be above the law.
The power and scale of governments doesn't have to be correlated with the scale of the society. The concept of nations themselves isn't even a necessity.
I get that this is what we have today and all we've had in recent history, but we are ignoring a huge number of possibilities to assume that being human means always inventing new things, using more resources, creating more weapons, and needing larger and larger governments because someone had to be in charge.
> The federal government has the total combined power and scale that it does because we are a massive and complex modern nation. That's inevitable.
Perhaps massive and complex (I'd say complicated) nation-states inevitably create industrial complexes, but it's certainly not inevitable that nation-states grow so large (or even exist) in 2026.
The idea that we still need sovereign-esque entities across entire continents, when we can now communicate and coordinate instantly across them, and use cameras to document truth all around us at all times, is just downright silly.
We can reduce states to the size that you can walk across in a day or two, and everybody will be much happier and healthier.
I don’t see the connection to a small federal government here. Mind connecting the dots?
The government is forcing a company to change their terms of service, and "threatening" to have them effectively shut down. I say threat, because the SecWar issued an illegal command that no employees or contractors of the federal government could use any Anthropic product at all. He does not have that power.
He has power over DoD and his boss has power over the whole federal government.
From what I can tell, the key difference between Anthropic and OpenAI in this whole thing is that both want the same contract terms, but Anthropic wants to enforce those terms via technology, and OpenAI wants to enforce them by ... telling the Government not to violate them.
It's telling that the government is blacklisting the company that wants to do more than enforce the contract with words on paper.
I think it's dumber than that; the terms of the contract, as posted by OpenAI (https://openai.com/index/our-agreement-with-the-department-o...), are basically just "all lawful purposes" plus some extra words that don't modify that in any significant way.
> The Department of War may use the AI System for all lawful purposes, consistent with applicable law, operational requirements, and well-established safety and oversight protocols. The AI System will not be used to independently direct autonomous weapons in any case where law, regulation, or Department policy requires human control, nor will it be used to assume other high-stakes decisions that require approval by a human decisionmaker under the same authorities. Per DoD Directive 3000.09 (dtd 25 January 2023), any use of AI in autonomous and semi-autonomous systems must undergo rigorous verification, validation, and testing to ensure they perform as intended in realistic environments before deployment.
> For intelligence activities, any handling of private information will comply with the Fourth Amendment, the National Security Act of 1947 and the Foreign Intelligence and Surveillance Act of 1978, Executive Order 12333, and applicable DoD directives requiring a defined foreign intelligence purpose. The AI System shall not be used for unconstrained monitoring of U.S. persons’ private information as consistent with these authorities. The system shall also not be used for domestic law-enforcement activities except as permitted by the Posse Comitatus Act and other applicable law.
So it seems that Anthropic's terms were 'no mass domestic surveillance or fully autonomous killbots', the government demanded 'all lawful use', and the OpenAI deal is 'all lawful use, but not mass domestic surveillance or fully autonomous killbots... unless mass domestic surveillance or fully autonomous killbots are lawful, in which case go ahead'.
> will not be used to independently direct autonomous weapons in any case where law, regulation, or Department policy requires human control
That says it all. Those laws get issued the same way the tariffs did.
That isn't my understanding. OpenAI and others are wanting to limit the government to doing what is lawful based on what laws the government writes. Anthropic is wanting to draw their own line on what is allowed regardless of laws passed.
I’m so confused by the focus on “all lawful use.” Yea of course a contract without terms of use implicitly is restricted by laws. But contracts with terms of use are incredibly common, if not almost every single contract ever signed.
The administration objected to those terms of use. Anthropic refused to compromise on them. OpenAI agreed to permit "all lawful use" but claims to have insisted on what at first glance appears to be terms of use in their contract. But in reality those terms permit all lawful use and thus are a noop.
"All lawful use" is the weasel word that makes the whole contract useless for the purposes of safety.
That is why it is the focus of this debate.
If the president does it, it's not illegal.
These were words issued by the president - which means at face value, if Trump orders it, it's not illegal - that was the fight that was lost today.
Not just the president — the Supreme Court agreed.
"All lawful USS" in the hands of those that decide what is lawful is effectively a blank check. They want a terms of use that says "I do what I want."
more based on what the government permits by not litigating rather than written law
The key difference is that Anthropic aired their disagreement with the DoD publicly, and the DoD is not going to work with a company that tries to exert any amount of control over their relationship via the public sphere. Same goes for Trump.
I think Anthropic knew full well that by publishing their disagreement, it would sink the deal and relationship, and I think they also calculated (correctly) that that act of defiance would get them good publicity and potentially peel away some of OpenAIs user base. I think this profit incentive happened to align with their morals, and now here we are.
No, it’s significantly worse than that. OpenAI has required zero actual guarantees from the government and Sam. The psychopath is lying to you. All the government has to do is have a lawyer say it’s legal, and most of the government’s lawyers are folks who were involved in attempting to overthrow the last election and should’ve been convicted of treason, so that means very little.
Sam stands for nothing except his own greed
I thought the key difference was that Brockman is top Trump donor, with USD 25M total [1]. I know it's technically not allowed, but do you think such a large amount of money would have swayed Trump in his decision?
[1] - https://the-decoder.com/openai-co-founder-greg-brockman-dona...
Anthropic wants to enforce them via language of the contracts and take a hands off approach. OpenAI has a contract that is paired with humans in the room (FDEs) that can pull the plug.