OpenAI resets spending expectations from $1.4T to $600B

2026-02-24 18:22 · www.cnbc.com

OpenAI told investors it's targeting roughly $600 billion in total compute spend by 2030, trying to clarify its plans after touting a $1.4 trillion figure.

Sam Altman, CEO of OpenAI, is pictured on Sept. 25, 2025, in Berlin.

OpenAI is telling investors that it's now targeting roughly $600 billion in total compute spend by 2030, months after CEO Sam Altman touted $1.4 trillion in infrastructure commitments.

The artificial intelligence company is providing a lower number and a more defined timeline for its planned spending, sources told CNBC, amid broader concerns that its expansion ambitions were too great for the potential revenue that would follow.

OpenAI is projecting that its total revenue for 2030 will be more than $280 billion, with nearly equal contributions from its consumer and enterprise businesses, said the sources, who asked not to be named because the information is private. The spending plan the company is offering is meant to more directly tie to its expected revenue growth, the people said.

In the back half of last year, OpenAI announced a flurry of multibillion-dollar infrastructure deals, partnering with leading chipmakers and cloud companies.

OpenAI is finalizing a massive funding round that could total more than $100 billion, with about 90% coming from strategic investors, one person said. Nvidia is in discussions to invest up to $30 billion in OpenAI as part of the round that could value the company at a $730 billion pre-money valuation, CNBC has confirmed.

In addition to Nvidia, strategic investors in the funding include SoftBank and Amazon.

OpenAI generated $13.1 billion in revenue in 2025, the sources said, ahead of its $10 billion target. The company burned through $8 billion, less than its $9 billion target, they said.

The startup was founded as a nonprofit research lab in 2015, and it rocketed into the mainstream following the launch of its chatbot ChatGPT in 2022. ChatGPT now supports more than 900 million weekly active users, the people said, up from 800 million as of October.

OpenAI declared a "code red" in December to focus on improving the chatbot in the face of competition from rivals Google and Anthropic. ChatGPT had a dip in growth in the fall, but is back to record highs in both weekly active and daily active users, the people said.

The company's coding product, Codex, has surpassed 1.5 million weekly active users, the people said. Codex competes directly with Anthropic's Claude Code, which has seen a wave of adoption over the last year.

WATCH: CNBC's full interview with OpenAI CEO Sam Altman

Comments

  • By paxys 2026-02-24 19:22

    > OpenAI is projecting that its total revenue for 2030 will be more than $280 billion

    For context, that is more than the annual revenue of all but 3 tech companies in the world (Nvidia, Apple, Google), and about the same as Microsoft.

    OpenAI meanwhile is projected to make $20 billion in 2026. So a casual 1300% revenue growth in under 4 years for a company that is already valued in the hundreds of billions.

    Must be nice to pull numbers out of one's ass with zero consequence.
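    The arithmetic behind that growth figure can be sketched quickly (a back-of-the-envelope check using the ~$20B 2026 and $280B 2030 figures quoted above; the variable names are just for illustration):

```python
# Growth implied by OpenAI's projection: ~$20B in 2026 vs. >$280B in 2030.
projected_2026 = 20e9
projected_2030 = 280e9
years = 2030 - 2026

multiple = projected_2030 / projected_2026   # revenue multiple over the period
growth_pct = (multiple - 1) * 100            # total growth in percent
cagr = multiple ** (1 / years) - 1           # compound annual growth rate

print(f"{multiple:.0f}x, {growth_pct:.0f}% growth, {cagr:.0%} CAGR")
# → 14x, 1300% growth, 93% CAGR
```

    So the 1300% figure corresponds to roughly doubling revenue every year for four straight years.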

    • By raincole 2026-02-24 19:48

      > a casual 1300% revenue growth in under 4 years for a company that is already valued in the hundreds of billions.

      Such a weird sentence. The correct causality should be: It's valued in the hundreds of billions because the investors expect a 1300% revenue growth.

      • By AvAn12 2026-02-24 19:58

        And if we all buy umbrellas, then it will start to rain??

        • By tibbar 2026-02-24 20:01

          The metaphor for the original post was more like "You're already wearing a raincoat and umbrella, and you're forecasting a flood warning?" So, the flood warning (projected revenue) may be completely incorrect, but it's not incongruous with the fact that I'm wearing a raincoat and umbrella (current investor valuation). :-)

        • By jonas21 2026-02-24 20:02

          I mean, if you go outside and everyone else is carrying an umbrella, it's probably going to rain.

          • By camdenreslink 2026-02-24 20:12

            Or the town has been hoodwinked by a smooth talking umbrella salesman.

          • By quxbar 2026-02-24 20:15

            If you go outside and they are burning witches, it's best to go along with it.

          • By throwaway27448 2026-02-24 20:53

            This greatly overestimates the rationality of markets.

          • By irthomasthomas 2026-02-24 20:28

            perhaps tis not the rain but the sun they fear.

          • By Imustaskforhelp 2026-02-24 20:19

            If you go outside and see people buying tulips, it doesn't mean that tulips are great investments.

            Another example is how Isaac Newton lost money on another bubble as well: https://www.smithsonianmag.com/smart-news/market-crash-cost-... [The market crash which cost Newton a fortune]

            So even Newton, the legendary Isaac Newton, could lose money in a bubble and be left holding umbrellas when there was no rain.

            From the book The Intelligent Investor, here is a quote (I opened the book from my shelf; the page number is 13):

            The great physicist muttered that he "could calculate the motions of the heavenly bodies, but not the madness of the people"

            This quote seems so applicable in today's world, I am going to create a parent comment about it as well.

            Also, for the rest of Newton's life, he forbade anyone to speak the words "South Sea" in his presence.

            Newton lost more than $3 million in today's money because of the South Sea Company bubble.

            • By tchalla 2026-02-24 23:04

              > So even Newton, the legendary Isaac Newton, could lose money in a bubble and be left holding umbrellas when there was no rain.

              The moral of that story is that being a legend or smart doesn’t count for much in investing.

            • By seanhunter 2026-02-24 20:55

              People often use that example, but Newton, for all he was unquestionably a giant of physics, was a bit of a weird dude and not 100% rationalist[1]. Additionally, just because he was a great physicist doesn't mean he knew anything at all about investment. You can be an expert in one field and pretty dumb in others. Linus Pauling (a giant in chemistry) held beliefs about medicine that were basically pseudoscience.

              Intelligent investor is a great book though.

              [1] eg he wrote more than a million words on alchemy during his lifetime https://webapp1.dlib.indiana.edu/newton/project/about.do

              • By roenxi 2026-02-24 21:32

                > ...was a bit of a weird dude and not 100% rationalist...

                That covers everyone. Especially and including the rationalists. Part of being highly intelligent is being a bit weird because the habits and beliefs of ordinary people are those you'd expect of people with ordinary intelligence.

                Anyone involved in small-time investing should be considering that they aren't rational when setting their strategy. Larger investment houses do what they can but even then every so often will suffer from group-think episodes.

              • By duskwuff 2026-02-24 23:57

                > Newton, for all he was unquestionably a giant of physics, was a bit of a weird dude and not 100% rationalist

                The norms of "rational" science hadn't really been established yet. There wasn't really a clear line drawn between alchemy and what we would consider chemistry today.

                • By seanhunter 2026-02-25 07:49

                  That is what I used to think, but if you dig a little deeper I'm not sure it's quite that simple. If you read the link I posted, all that work on alchemy was not printed after his death because people examined it and deemed it "not fit to print". So it definitely seems that even at the time, there may not have been a clear line, but people felt that his alchemical writings were on the wrong side of whatever line might in future be drawn.

                  Newton was also definitely in favour of an empirical/axiomatic basis for science in general. If you read the Principia he proves almost everything[1], and of course he famously deformed his own eyeballs with wooden gadgets to do his experiments in optics.

                  [1] In fact, pretty much the one thing he doesn't prove is the calculus. Alex Kontorovich once said in a YouTube lecture that he has a pet theory that the reason Newton never published the calculus was not the usual one about his rivalry with Hooke etc., but that he wanted a rigorous proof first (which of course didn't come about until much later, with Cauchy, Weierstrass, Dedekind etc. for standard calculus, and the 1960s for non-standard analysis to prove Newton's fluxions rigorously).

              • By bbatha 2026-02-25 13:04

                Newton knew a lot about investing for the time. He was a master of the mint for much of his adult life.

                • By seanhunter 2026-02-25 13:11

                  As I understand it, Master of the mint was more about knowing enough metallurgy to not be ripped off by people using weak alloys to smelt coins. It wasn’t like a modern central banker or anything like that.

          • By cyanydeez 2026-02-25 02:38

            Hint: it's a classic LLM misunderstanding; they're protecting from sunshine. They're parasols.

      • By paxys 2026-02-24 20:32

        Investors are valuing it at ~$500B, which already projects massive revenue growth. OpenAI is saying "actually we are going to grow 10x faster than that". And all of this is without bringing up the “profit” word.

      • By rchaud 2026-02-24 20:16

        How much money was WeWork supposed to bring in when they were valued at $50 billion? That dropped to $10B when they put out their S-1 and faced some public scrutiny for the first time. This happened before Covid and the switch to WFH. Were their investors unaware of their actual finances?

        • By lerchmo 2026-02-24 21:44

          This is important. Is the $20 billion just a rumor? I would love to see the breakdown in the form of audited financials.

      • By Guvante 2026-02-25 04:48

        If investors expect Microsoft-level profitability, that means their stock today is worth 1/6th of what it will be in 5 years.

        That is a cost-of-capital estimate of about 40%.

        Which points to investors not believing the company will be that profitable.

        I am not saying investors don't think the company will be profitable, just that they certainly don't believe it will be that profitable.
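        The discount-rate arithmetic above can be sketched in two lines (illustrative only: the ~40% is the annualized rate implied by a 6x return over 5 years):

```python
# If the stock is worth 1/6 today of its value in 5 years, the implied
# annual discount rate r satisfies (1 + r)^5 = 6.
value_ratio = 6   # value in 5 years relative to today
years = 5
implied_rate = value_ratio ** (1 / years) - 1

print(f"implied annual discount rate: {implied_rate:.1%}")
# → implied annual discount rate: 43.1%
```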

      • By mandeepj 2026-02-24 21:17

        Oracle said something very similar a short while ago. Besides a short-lived peak, it didn't do any good for their stock or market valuation.

      • By scoofy 2026-02-25 03:39

        >investors

        The marginal investor does.

      • By jwolfe 2026-02-24 20:08

        They said casual, not causal.

        • By raincole 2026-02-24 20:10

          I didn't read it wrong. And the illogical part isn't 'casual.' It's the whole sentence, especially 'already.'

    • By 0cf8612b2e1e 2026-02-24 19:34

      I like the little blurb at the end which said that Codex had 1.5 million users. So, if you can get them to pony up a mere $186k apiece, they can hit those revenue numbers.

      • By lm28469 2026-02-24 19:42

        > Codex had 1.5 million users

        I'm three of them and I've never spent a cent on any LLMs. I doubt I'm the only one.

      • By sunaookami 2026-02-24 19:35

        Don't forget that Codex is free until March so the numbers are heavily inflated.

    • By parliament32 2026-02-24 21:13

      I, too, can make $280B in revenue by 2030 (by selling $10 bills for $5 (as long as I bamboozle enough investors into giving me sufficient capital, of course)).

    • By TimPC 2026-02-24 21:10

      OpenAI is a bet on LLMs replacing a large chunk of the labour force in whatever sector it’s best at replacing. It’s essentially looking to get companies to pay $5k-$10k a month to have coding agents replace the output of a single software engineer.

      If the S-curve levels off below that level OpenAI will be an unsuccessful company.

    • By re-thc 2026-02-24 19:57

      > and about the same as Microsoft

      > Must be nice to pull numbers out of one's ass with zero consequence.

      Seems accurate?

      What they are saying is that if Microsoft ends up buying the rest of their shares, then Microsoft's total revenue by 2030 will be more than $280 billion.

    • By akudha 2026-02-24 20:13

      I have used AI a bit, like it for a bunch of use cases. But god damn, these numbers are so big. Gotta wonder, are the returns even worth it? RAM prices up, electricity prices up, hard disk prices up… Maybe this is the price to pay for “progress”, but it sure is wild

      • By m4rtink 2026-02-24 20:36

        Simple: the returns are not worth it. :-)

      • By chrisandchris 2026-02-24 22:11

        You're missing one point: they are just talking about revenue. Nobody said anything about making a profit.

    • By guelo 2026-02-24 22:43

      Doubling every year gets you to 1300% in 4 years. They have exceeded that for the past 5 years.

    • By fxtentacle 2026-02-25 00:03

      If all the AI revenue projections were correct, then 1% of worldwide GDP would end up at AI companies. Or said differently: you buy a sandwich for $5 and somehow AI gets $0.05 out of that transaction.

      • By jononor 2026-02-25 08:21

        This is basically what happens with the advertising/social-media giants (Facebook, Google, etc.) because everyone needs marketing, and the mobile companies (Apple, Google) because they handle payments.

    • By avrionov 2026-02-24 23:09

      Don't forget that Anthropic also predicts similar revenue in the next 4 years.

    • By ActionHank 2026-02-24 19:56

      Consequences come later, friendo.

    • By paul7986 2026-02-24 20:08

      I was a paying customer ($20 a month) until AI prompted a layoff in my dying field, which is web design and front-end coding. Now every time ChatGPT yells at me about memory I tell it fine, I'm just going to use Gemini! I bet a lot of people are doing the same thing, as both sit at the top of the iPhone charts.

    • By Betelbuddy 2026-02-24 19:34

      It's a circular economy... he is talking about the money moving from Nvidia to OpenAI and back to Nvidia. You've got to go with the flow...

      He is counting on hundreds of husbands: https://xkcd.com/605/

      • By mirekrusin 2026-02-24 20:47

        1.4T was an estimate by GPT-4/5, 600B by GPT-5.3?

        they'll probably fix it just like they fixed strawberry

        their estimates will drop by ~20x, which will be their max

        as the underdog in the race they'll grab a fraction of even that

        where are they planning to get that much money from? by showing adverts for 14h before you can prompt?

      • By YetAnotherNick 2026-02-24 19:37

        How will Nvidia give revenue to OpenAI?

        • By AtheistOfFail 2026-02-24 19:41

          Nvidia gives money to OpenAI so they can buy GPUs that don't exist yet with memory that doesn't exist yet so they can plug them into their datacenters that don't exist yet powered by infrastructure that doesn't exist yet so they can all make profit that is mathematically impossible at this point - Stolen from someone else.

        • By ceejayoz 2026-02-24 19:41

          There are other forms of money transfer than revenue.

          • By YetAnotherNick 2026-02-25 04:54

            Yes but we are talking about OpenAI's projected revenue here, not funding.

      • By AtheistOfFail 2026-02-24 19:40

        > Its a circular economy

        Garbage in, garbage out, same as before.

    • By tempodox 2026-02-25 16:17

      I don’t believe one word of what they say publicly.

    • By mirekrusin 2026-02-24 20:37

      I think he meant for Anthropic?

    • By rprend 2026-02-25 01:12

      Honestly seems reasonable to me. ChatGPT alone will be at least as valuable as Google, plus all the enterprise adoption around API use cases.

    • By crystal_revenge 2026-02-24 21:42

      I honestly don't think that sounds terribly outrageous.

      OpenAI and Anthropic aren't building companies that aim to be API endpoints or chatbots forever, their vision is clearly: you will do everything through them.

      The gamble is that this change is going to reach deeper into every business it touches than Microsoft Office ever did, and that this will happen extremely quickly. The way things are headed I increasingly think that's not a terrible bet.

    • By tibbar 2026-02-24 20:10

      Today I got a feature request from another team on a call. I typed it into our Slack channel as a note. Someone typed @cursor and moments later the feature was implemented (correctly) and ready to merge.

      The tools are good! The main bottleneck right now is better scaffolding so that they can be thoroughly adopted and so that the agents can QA their own work.

      I see no particular reason not to think that software engineering as we know it will be massively disrupted in the next few years, and probably other industries close behind.

      • By nemooperans 2026-02-24 21:35

        The anecdote is compelling, but there's an interesting measurement gap. METR ran a randomized controlled trial with experienced open-source developers — they were actually 19% slower with AI assistance, but self-reported being 24% faster. A ~40 point perception gap.

        Doesn't mean the tools aren't useful — it means we're probably measuring the wrong thing. "Prompt engineering" was always a dead end that obscured the deeper question: the structure an AI operates within — persistent context, feedback loops, behavioral constraints — matters more than the model or the prompts you feed it. The real intelligence might be in the harness, not the horse.

        • By rodonn 2026-03-01 04:56

          There's been a huge amount of improvement in coding agent effectiveness since they ran that experiment. In a more recent follow up experiment, METR found 20% speed up from AI assistance and says they believe that is likely an underestimate of the impact. https://metr.org/blog/2026-02-24-uplift-update/

          They are working on making a new measurement approach that will be more accurate.

        • By tibbar 2026-02-24 22:55

          Respectfully, was this comment AI generated? It has all the signs.

          And scaffolding does matter a lot, but mostly because the models just got a lot better and the corresponding scaffolding for long running tasks hasn't really caught up yet.

          • By nemooperans 2026-02-25 01:48

            Ha, fair call. I use Claude a lot and it's definitely rubbed off on how I write and even think (which is something to explore in itself sometime). The scaffolding point is from building though, not prompting. Been doing AI-integrated dev for about a year and the gap between "better model" and "actually useful in production" is almost entirely the surrounding architecture. You're right the infrastructure hasn't caught up yet, that's kind of the whole problem right now. Most teams are building fancier autocomplete when the real problems are things like persistent memory and letting learned patterns earn trust over time.

      • By JohnMakin 2026-02-24 20:49

        It really doesn't matter how "good" these tools feel, or whatever vague metric you want - they hemorrhage cash at a rate perhaps not seen in human history. In other words, that usage you like is costing them tons of money - the bet is that energy/compute will become vastly cheaper in a matter of a couple of years (extremely unlikely), or that they find other ways to monetize that don't absolutely destroy the utility of their product (ads, an area we have seen Google flop in spectacularly).

        And even say the latter strategy works - ads are driven by consumption. If you believe 100% openAI's vision of these tools replacing huge swaths of the workforce reasonably quickly, who will be left to consume? It's all nonsense, and the numbers are nonsense if you spend any real time considering it. The fact SoftBank is a major investor should be a dead giveaway.

        • By df2dd 2026-02-25 01:14

          Indeed. Many of the posts I see on here are hilarious.

          Have any of you tried reproducing an identical output, given an identical set of inputs? It simply doesn't happen. It's like a lottery.

          This lack of reproducibility is a huge problem and limits how far the thing can go.

          • By tvbusy 2026-02-25 07:08

            LLMs have randomness baked into every single token they generate. You can try running an LLM locally with the temperature set low, and it immediately feels boring to always get the same reply every time. It's the randomness that makes them feel "smart". Put another way, randomness is required for the illusion of intelligence.

            • By df2dd 2026-02-25 13:46

              I'm fully aware of that. However, this illusion is a dangerous mirage. It doesn't equate to reality. In some cases that's OK. But in most cases it's not, especially in the context of business operations.

          • By tibbar 2026-02-25 07:05

            Determinism in agents is a complex topic because there are several different layers of abstraction, each of which may introduce its own non-determinism. But yeah, it is going to be difficult to induce determinism in a commercial coding agent, for reasons discussed below.

            However, we can start by claiming that non-determinism is not necessarily a bad thing - non-greedy token sampling helps prevent certain degenerate/repetitive states and tends to produce overall higher quality responses [0]. I would also observe that part of the yin-yang of working with the agents is letting go of the idea that one is working with a "compiler" and thinking of it more as a promising but fallible collaborator.

            With that out of the way, what leads to non-determinism? The classic explanation is the sampling strategy used to select the next token from the LLM. As mentioned above, there are incentives to use a non-zero temperature for this, which means that most LLM APIs are intentionally non-deterministic by default. And, even at temperature zero LLMs are not 100% deterministic [1]. But it's usually pretty close; I am running a local LLM as we speak with greedy sampling and the result is predictably the same each time.

            Proprietary reasoning models are another layer of abstraction that may not even offer temperature as a knob anymore[2]. I think Claude still offers it, but it doesn't guarantee 100% determinism at temperature 0 either.[3]

            Finally, an agentic tool loop may encounter different results from run to run via tool calls -- it's pretty hard to force a truly reproducible environment from run to run.

            So, yeah, at best you could get something that is "mostly" deterministic if you coded up your own coding agent that focused on using models that support temperature and always forced it to zero, while carefully ensuring that your environment has not changed from run to run. And this would, unfortunately, probably produce worse output than a non-deterministic model.

            [0] https://arxiv.org/abs/2007.14966
            [1] https://thinkingmachines.ai/blog/defeating-nondeterminism-in...
            [2] https://learn.microsoft.com/en-us/azure/ai-foundry/openai/ho...
            [3] https://platform.claude.com/docs/en/about-claude/glossary
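            The sampling layer described above can be sketched in a few lines (an illustrative toy, not any lab's actual stack; real inference servers add top-p/top-k filtering, batching effects, and the other non-determinism sources this thread discusses):

```python
import math
import random

def sample_token(logits, temperature=1.0, rng=random):
    """Pick a token index from raw logits.

    temperature == 0 means greedy argmax (fully deterministic);
    higher temperatures flatten the softmax distribution, so
    repeated calls can return different tokens.
    """
    if temperature == 0:
        return max(range(len(logits)), key=lambda i: logits[i])
    m = max(logits)  # subtract max for numerical stability
    weights = [math.exp((x - m) / temperature) for x in logits]
    return rng.choices(range(len(logits)), weights=weights)[0]

logits = [2.0, 1.0, 0.5, -1.0]
print(sample_token(logits, temperature=0))    # greedy: always index 0
print(sample_token(logits, temperature=1.0))  # stochastic: usually 0, sometimes not
```

            Even this toy shows why temperature 0 gets you close to determinism but higher temperatures don't, before floating-point and batching effects enter the picture.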

            • By df2dd 2026-02-25 13:59

              Appreciate the response. I agree that non-determinism isn't a bad thing. However, LLMs are being pushed as the thing to replace many of the deterministic systems that exist in the world, and anyone seen to be thinking otherwise gets punished, e.g. in the stock market.

              This world of extremes is annoying for people who have the ability to think more broadly and see a world where deterministic systems and non-deterministic systems can work together, where it makes sense.

              • By tibbar 2026-02-25 21:58

                Yeah, I think you're right that LLMs are overused. In most cases where a deterministic system is feasible and desirable, it's also much faster and cheaper than using an LLM, too..

        • By nfg 2026-02-24 21:00

          > In other words, that usage you like is costing them tons of money

          Evidence? I’m sure someone will argue, but I think it’s generally accepted that inference can be done profitably at this point. The cost for equivalent capability is also plummeting.

          • By JohnMakin 2026-02-24 21:04

            I didn't think there would need to be more evidence than the fact they are saying they need to spend $600 billion in 4 years on $13bn revenue currently, but here we are.

            Here you go: https://www.wsj.com/livecoverage/stock-market-today-dow-sp-5...

            • By tibbar 2026-02-24 21:12

              Right, but if OpenAI wanted to stop doing research and just monetize its current models, all indications are that it would be profitable. If not, various adjustments to pricing, ads, etc. could get it there. However, it has no reason to do this, and like all the other labs it is going insanely into debt to develop more models. I'm not saying that it's necessarily going to work out, but they're far from the first company to prioritize growth over profitability.

              • By mike_hearn 2026-02-25 09:07

                This meme needs to go in the bin. Loss making companies love inventing strange new accounting metrics, which is one reason public companies are forced to report in standardized ways.

                There's no such thing as "profitable inference". A company is either profitable or it isn't.

                Let's for a second assume all the labs somehow manage to form a secret OPEC-style cartel that agrees to slow training to a halt, and nobody notices or investigates. This is already hard to imagine with the amount of scrutiny they're under and given that China views this as a military priority. But let's pretend they manage it. These firms also have lots of other costs:

                • Staffing and comp! That's huge!

                • User subsidies to allow flat rate plans

                • Support (including abuse control and handling the escalations from their support bots)

                • Marketing

                • Legal fees and data licensing

                • Corporate/enterprise sales, which is expensive as hell even though it's often worth it

                • Debt servicing (!!)

                • Generating returns for investors

                Inferencing margins have to cover all of those, even if progress stops tomorrow and the RoI to investors has to be likewise very large, so margins can't be trivial. Yet what these firms have said about their margins is very ambiguous. As they're arriving at this statement by excluding major cost components like training, it's not clear what they think the cost of inferencing actually is. Are they excluding other things too like hw depreciation and upgrades? Are they excluding the cost of the corporate sales/support infrastructure around the inferencing?

                • By tibbar 2026-02-25 16:37

                  To be clear, it's absolutely impossible for OpenAI and the others to stop. The valuation and honestly the global markets depend on them staying leveraged to the hilt. So they're not going to stop. However, the point is that the models are genuinely useful and people pay for them, and if we reset the timeline with a company that has just the current proprietary models, they could turn a profit. That might involve charging more than they do now, etc. But this is much different than OpenAI, specifically, trying to turn a profit today, which wouldn't work for many reasons.

                  But also, "profitable inference" IS a thing! "Gross margin" is important and meaningful, even if a company has other obligations that mean it's overall not profitable.

                • By rodonn 2026-03-01 04:59

                  "profitable on inference" means "marginal costs of inference are lower than revenue". It is very common to distinguish between upfront costs vs. marginal costs when judging the economic viability of a business.

                  You mention "debt servicing", but OpenAI has no debt. All the money they have raised is equity not debt.

              • By zippothrowaway 2026-02-24 21:51

                Nope. The only "all indications" are that they say so. They may be making a profit on API usage, but even that is very suspect - compare against how much it actually costs to rent a rack of B200s from Microsoft. But for the millions of people using Codex/Claude Code/Copilot, the $20/$30/$200 price points clearly don't cover the actual cost of inference.

      • By javascriptfan69 2026-02-24 20:54

        What was the feature and what was the note?

        • By tibbar 2026-02-24 21:08

          It was a modest update to a UX ... certainly nothing world-changing. (It's also had success with some backend performance refactors, but this particular change was all frontend.) The note was basically just a transcription of what I was asked to do, and did not provide any technical hints as to how to go about the work. The agent figured out what codebase, application, and file to modify and made the correct edit.

      • By tapoxi 2026-02-24 22:57

        Yeah but was Cursor using Claude? What's the moat that any of these companies have that prevents me from using another LLM?

  • By mnky9800n 2026-02-24 19:22

    I too have reset my spending expectations down from $1.4T.

    • By AvAn12 2026-02-24 19:59

      A wise move.

    • By kylehotchkiss 2026-02-24 20:13

      ugh lower the interest rates Jerome, I'll do anything to tank the economy until you finally do.

    • By rhelz 2026-02-24 20:30

      I have done my fair share of misunderestimating before, but I've never been off by that much.

    • By nova22033 2026-02-24 21:35

      and that's just my doordash order..

    • By johnwheeler 2026-02-24 19:58

      best comment

  • By Saig6 2026-02-24 19:42

    The $1.4T in commitments was over 8 years, not by 2030.

    https://x.com/sama/status/1986514377470845007

    • By blitzar 2026-02-25 09:38

      the tweet has capital letters and punctuation - I doubt the authenticity of this post

HackerNews