OpenAI raises $110B on $730B pre-money valuation

2026-02-27 14:56 · techcrunch.com

The new funding consists of a $50 billion investment from Amazon as well as $30 billion each from Nvidia and SoftBank, against a $730 billion pre-money valuation.

OpenAI has raised $110 billion in private funding, the company announced Friday morning, in one of the largest private funding rounds in history. The new funding consists of a $50 billion investment from Amazon as well as $30 billion each from Nvidia and SoftBank, against a $730 billion pre-money valuation.

Notably, the round remains open, and OpenAI expects more investors to join as it proceeds.

“We are entering a new phase where frontier AI moves from research into daily use at global scale,” OpenAI said. “Leadership will be defined by who can scale infrastructure fast enough to meet demand, and turn that capacity into products people rely on.”

As part of the investment, OpenAI is launching significant infrastructure partnerships with both Amazon and Nvidia. As in previous rounds, it is likely that a significant portion of the dollar amount comes in the form of services rather than cash, although the precise split was not disclosed.

The company’s previous round closed in March 2025, raising $40 billion against a $300 billion valuation. At the time, it was the largest private funding round on record.

As part of its Amazon partnership, OpenAI plans to develop a new “stateful runtime environment” where OpenAI models will run on Amazon’s Bedrock platform. The company will also expand its previously announced AWS partnership, which committed $38 billion in compute services, by $100 billion. OpenAI has committed to consuming at least 2GW of AWS Trainium compute as part of the deal, and also plans to build custom models to support Amazon consumer products.

“We have lots of developers and companies eager to run services powered by OpenAI models on AWS,” said Amazon CEO Andy Jassy in a statement, “and our unique collaboration with OpenAI to provide stateful runtime environments will change what’s possible for customers building AI apps and agents.”


The Information had previously reported that $35 billion of Amazon’s investment could be contingent on the company either achieving AGI or making its IPO by the end of the year. OpenAI’s announcement confirms the funding split, but says only that the additional $35 billion will arrive “in the coming months when certain conditions are met.”

OpenAI gave fewer details on the Nvidia partnership, but said it had committed to using “3GW of dedicated inference capacity and 2GW of training on Vera Rubin systems” as part of the deal.

Nvidia’s participation in the round has been the subject of intense speculation, particularly as reports of a $100 billion investment in September gave way to reports of a smaller investment in the months that followed.

In January, Nvidia CEO Jensen Huang dismissed the idea that Nvidia was backing away from OpenAI, saying, “we will invest a great deal of money. I believe in OpenAI. The work that they do is incredible.”



Comments

  • By _fat_santa 2026-02-27 15:26 | 8 replies

    IMO this looks largely like another circular investment. Amazon's investment is tied to OpenAI using AWS for their Frontier product, and I assume Nvidia's condition is that OpenAI continues buying hardware from them. Then there's SoftBank, though given that they're the same guys who invested heavily in WeWork, I assume this is just very brash bullishness on their part.

    From my perspective, I hope that OpenAI survives and can pull off their IPO, but I just have that nagging feeling in my gut that their IPO will be rejected in much the same way the WeWork IPO was.

    On the one hand you can look at these companies investing and take it as a signal that there is something there (in OpenAI) that's worth investing in. On the other hand all these companies that are investing are basically getting that investment back through spending commitments and such and are just using OpenAI as a proxy for what is essentially buying more revenue for themselves.

    When their IPO hits later this year I hope that it's the former case and there's actually some good underlying fundamentals to invest in. But based on everything I've read, my gut is telling me they will eventually implode under the weight of their business model and spending commitments.

    • By mizzao 2026-02-27 21:13

      This piece that was on HN yesterday corroborates your gut: https://www.ben-evans.com/benedictevans/2026/2/19/how-will-o...

    • By max51 2026-02-27 20:41 | 3 replies

      The "circular investment" is mostly start up companies using their stocks instead of cash to pay for server hardware and cloud computing. There is a few extra steps in between that make things look weird and convoluted, but the end results is really just big companies giving hardware and getting shares of ai companies in exchange for it.

      • By dangus 2026-02-27 21:12 | 3 replies

        I think you’re just describing how it’s circular.

        It’s like Toys R Us not having enough money to pay Mattel for Barbie dolls and telling Mattel they can have partial ownership of the company if they just supply them with some more toys.

        But the problem is that Toys R Us is spending $15, $20, or maybe even $50 (who knows?) to sell a $10 toy.

        Toys R Us continues selling toys faster and faster despite a lack of profit, making Mattel even more dependent on Toys R Us as a customer. It blows up the bubble where a more natural course of action would be for Toys R Us to go bankrupt or scale back ambitions earlier.

        Because it’s circular like this, it tends toward bigger crashing and burning. If OpenAI fails, all these investors that are deeply integrated into its supply chain lose both their investment and a customer.

        • By est31 2026-02-28 2:29 | 6 replies

          > But the problem is that Toys R Us is spending $15, $20, or maybe even $50 (who knows?) to sell a $10 toy.

          It's like how Uber and Airbnb in the early days were burning loads of cash to build market share. People went to these services because they were cheaper. Then they would increase prices once they had a comfortable position.

          OpenAI is also in a rapidly transforming field where there are a lot of cost reductions happening, efficiency gains etc. Compare that to, say, Uber, which didn't provide a lot of efficiency gains.

          • By chemmail 2026-02-28 8:08 | 1 reply

            A little bit, but the scale is an order of magnitude higher. I just saw a chart yesterday that showed Uber burning $18B, Tesla burning $9B, and Netflix burning $11B before reaching profitability. OpenAI has spent $218 billion so far.

            • By fauigerzigerk 2026-02-28 11:33 | 1 reply

              The opportunity is disproportionately greater as well though.

              Unfortunately that doesn't change the fact that even a small miscalculation could have an enormous impact. We are approaching levels of risk comparable in size to the subprime crisis of 2008.

              • By SR2Z 2026-03-02 16:31

                Is it? AI isn't going to be a winner-take-all market. Competition between American AI labs, and even Chinese ones, has seen to that.

                The winners for AI will be the product companies, because soon enough the top-tier models are all going to have good enough performance that companies can just pick the cheapest. It'll be a race to the bottom for inference and OpenAI is very poorly placed to compete in that kind of thing.

          • By chrisandchris 2026-02-28 7:16 | 1 reply

            > It's like how Uber and Airbnb [...]

            I disagree. It's like Uber and Airbnb in how they try to gain market share. Big difference: for Uber (and when it got big, basically everybody I know used it once in a while) and Airbnb, you paid for each transaction. With OpenAI, most people are on the free tier. And if there is something incredibly hard, it's converting free users to paid users. That will, IMHO, be the thing that blows up (many of) the AI companies. They won't ever reach break-even.

            • By RyanOD 2026-02-28 17:43

              I agree with this. For the casual user, I feel AI is only a "nice to have".

          • By oblio 2026-02-28 4:51

            > OpenAI is also in a rapidly transforming field where there are a lot of cost reductions happening, efficiency gains etc.

            But also ever increasing quality requirements. So we can't possibly know at this point if this is a market with high margins or not.

          • By dangus 2026-02-28 10:03 | 1 reply

            And unlike Uber and Airbnb, OpenAI has no way to maintain marketshare. It’s a domain name with no moat.

            Google has to pay Apple billions of dollars to make Google.com the default search engine. I just looked it up: over 15% of search revenue goes to paying to be the default search engine.

            Every Android device defaults to Gemini.

            Every Microsoft device defaults to Copilot.

            I’d love to see where these cost reductions are. If costs are going to decrease rapidly why does OpenAI’s spending plan look so insane?

            • By abustamam 2026-02-28 13:58

              > Every Android device defaults to Gemini.

              > Every Microsoft device defaults to Copilot.

              I don't think it's right to say that these devices "default" to their vendors' AI software when it's impossible to replace it with something else. Yes I can install Claude as a standalone app but I don't have the OS-wide integration that Gemini does for Android for example.

          • By tovej 2026-02-28 9:19

            Where are the cost reductions exactly, other than using AI hype as an excuse for layoffs? Can you show a reference? Genuinely interested.

          • By tarsinge 2026-02-28 7:12 | 1 reply

            Uber and Airbnb have network effects. You can't increase prices when there is no cost to switching.

            • By cylemons 2026-02-28 11:51 | 2 replies

              I don't see how network effects apply to Uber/Airbnb, because nothing stops drivers/hosts from listing their property on multiple such apps.

              • By matusp 2026-02-28 13:34 | 1 reply

                People continue using Airbnb because that's where the properties are listed. And owners keep listing properties because that's where the users are.

                • By cylemons 2026-02-28 14:19

                  My point was that nothing stops hosts from listing their properties in AirBnb as well as a competitor. Unless AirBnb penalizes delisting or enforces price parity I guess?

              • By enos_feedler 2026-03-01 9:11

                Do you understand network effects? It’s not hand cuffs. I can also sell my rare baseball cards outside of ebay. But…

        • By Aditya_Garg 2026-02-28 5:43 | 4 replies

          This is a common misconception

          OpenAI and others are already profitable on inference (inference is really really cheap)

          They are just heavily investing into the latest frontier

          The biggest risk is whether they can stay cutting edge, or if open source or others will catch up quickly.

          • By lelanthran 2026-02-28 7:06 | 1 reply

            > OpenAI and others are already profitable on inference (inference is really really cheap)

            If it's that cheap I'll soon be doing it self-hosted, or switching to a local provider.

            It's a race to the bottom for token providers.

            • By jychang 2026-02-28 8:32 | 1 reply

              It is that cheap. Look at Deepseek or GLM pricing.

              • By lelanthran 2026-02-28 9:35 | 1 reply

                > It is that cheap. Look at Deepseek or GLM pricing.

                Then it's a race to the bottom.

                • By dangus 2026-02-28 10:07

                  Yep.

                  And unlike competitors, OpenAI has no ecosystem. Just a website and a domain name. Even a VSCode fork like Cursor is an improvement over that state.

                  Google pays over 15% of search revenue to be the default search engine on various browsers.

          • By vasco 2026-02-28 6:50

            If you need to do the latter to be able to make money on the former, then you're not making money, because if the latter requirement disappeared, inference margins would also drop.

          • By parineum 2026-02-28 6:35 | 1 reply

            At the end of the day, they're still burning cash. Even if inference is cheap, it's also not hard to compete on. They aren't going to be a trillion dollar inference company.

            Eventually there will be a race to the bottom on inference price to the customer by companies that aren't trying to subsidize their GPU investments.

            OpenAI is spending money because they think they need to for their business to survive. They're hoping that the next big breakthrough just requires more compute and, somehow, that'll build them a moat.

            • By zaphar 2026-02-28 13:28

              OpenAI and quite honestly the others think they are in a race to AGI not the bottom. That's why they aren't concerning themselves with moats or cost. This is quite simply a massive bet that we've already cracked AGI and the rest is just funding the engineering to make it happen.

              I personally think we haven't cracked AGI yet but it doesn't change their calculus.

          • By rasz 2026-02-28 6:04

            >inference is really really cheap

            cough Sora cough

        • By SV_BubbleTime 2026-02-27 21:48 | 3 replies

          OK, so in absolutely good faith here: what is the end game?

          Obviously, there’s a scenario of superpowered AI, and then it’s a matter of staying the course. Electricity and silicon.

          What if you are right, and the scaling doesn’t work? If it takes too much power, time, and hardware to improve… does OpenAI fold?

          Do they just actually use the models they have?

          Does everyone just decide that AI didn’t work and go back 5 years like it didn’t happen?

          Does the price change so that they have to be profitable, making AI services expensive and rare instead of everywhere and pointless like today?

          Or does this insane valuation only make sense with information you don’t have like insider scaling or efficiency news?

          Does China’s strategy of undercutting US value of models pay off bigly?

          • By Flatterer3544 2026-02-27 22:02

            Why so extreme? Most likely just an AI winter for a while; then, when tech and society have caught up, the advancements begin again.

            It is not like we threw away the dotcom advances, they were just put on hold for a while.

          • By underlipton 2026-02-27 23:26 | 2 replies

            The people running these companies have a perverse incentive to keep the ball rolling as long as possible so that they can extricate as much personal wealth and influence as possible. Maybe AGI makes all the problems go away. But, failing that, they get out relatively scot-free when it all collapses. And they don't owe anything to the public. And no one is going to bring them up on fraud charges or any other kind of criminal charges. So, while the world is burning around them (including their former companies), they have the money and connections to acquire property and businesses that are actually productive. It's the Russian oligarch playbook. They're the kings of a struggling society on the brink of failure, but they heard "kings" and said, "Let's go."

            • By Muromec 2026-02-28 2:20 | 1 reply

              I generally agree with the sentiment, but it's not the Russian oligarch playbook. The playbook is some variation of buying a productive asset in a legacy industry under its market price (because everything is on fire already), then using political or monopoly power to funnel (tax) money through it and into your pockets (the asset has to function, but doesn't have to provide a good quality of service, since proper maintenance is never allocated). The sovereign AI fund and Microsoft are very close to that setup. If the NYC subway were sold to a certain Elon and he then jacked up the prices, still had city hall subsidize it, but kept the quality of service the same, that would be more or less it.

              The other variation goes in reverse -- using the legacy asset and its captive labor force to output some kind of commodity that is sold below market price to a controlled company in a different jurisdiction, where it's resold at a small discount to the market price. The company still has to function here too.

              Bonus points for not even owning the asset in question, but having effective control over it through corrupt management; this way the government still pays the bills to keep it running at a loss.

              What you are describing is actually a very Western thing, because it assumes you can exchange the asset for cash directly and then buy something with that liquidity, which assumes solid property rights. I'm not even talking about OpenAI being an actual tech company that just wasn't there before. That's not how oligarchy works in those places.

              Since the US is slowly moving in the direction of oligarchy, I think the actual reference will be helpful.

              • By underlipton 2026-02-28 16:04 | 1 reply

                Please read Sarah Kendzior. What's happening under Trump is different from what's happened under other admins precisely because he's drawing from the Russian quasi-state/mob playbook, and not from the normal "socially-caustic Capitalism" one. The difference is that one seeks to maintain a state, and one seeks to dismantle it and replace it with a quasi-state, which exists mainly to interface with the other entities that are still playing in the nation-state system, but which internally functions almost completely as a projection of the power of the elites.

                You're conflating the assets the elites own before the state collapse with the ones they seek to acquire afterwards. They don't care if the ones from before function, because their only purpose is to be maximally extractive. Afterwards, there's no need to funnel tax money through the functional businesses they acquire; they are the company and state and the company is the service or product, so anyone interfacing with the product or service within the state is handing them their money. No laundering games necessary.

                • By Muromec 2026-02-28 17:29

                  >replace it with a quasi-state, which exists mainly to interface with the other entities

                  I don't exactly disagree with that assessment, and I think you should indeed stay vigilant for that. What I'm saying is that selling a hot potato to get cash is the opposite of what oligarchs are known to do. It could be that it's just a step toward buying something else with oligarchic intentions in mind, but alternatively it could be normal Western money-handling behavior.

                  >they are the company and state and the company is the service or product, so anyone interfacing with the product or service within the state is handing them their money.

                  That doesn't contradict what I wrote, or at least what I meant. The asset in question is not the means of laundering, but a pretext for extracting money from everyone unfortunate enough to live in the forsaken place.

                  The laundering part usually comes when the oligarch wants to safeguard their own money from political risks, which they do by keeping the funds in a place that is outside of their (and their potential rivals') political influence. Otherwise, once the political balance shifts, the money is just gone, because no laws exist to guard it anymore. I'm not sure what this "outside" place could be for Americans, but I could guess (with no confidence in the answer at all) that it's either Swiss or Gulf banks. Maybe the UK or whatnot. Some structure with a combination of impartiality to their disputes, strong enough property and privacy regimes, and zero to no ethical constraints about walking away from it.

            • By RyanOD 2026-02-28 17:56

              "so that they can extricate as much personal wealth and influence as possible"

              I've always thought this. If you're running something like OpenAI, it really doesn't matter to you if the company fails because you're already comfortably wealthy. But it sure would be nice to be worth another ten billion - though I'm not totally sure why.

              So these individuals perceive a large upside and no downside. It's more of a hobby than a job. Like learning to play piano. It would be amazing to be a badass pianist...but not a big deal if that never happens.

          • By arthurcolle 2026-02-28 3:19

            Growth decoupled from labor costs

      • By lysace 2026-02-27 23:00 | 1 reply

        Cisco did this in 1999. That's how my smallish apartment building in Sweden ended up with a kick-ass Cisco 10 Gbps switch in its basement a year later - when these cost real money.

        I think the HOA still only pays like $10/month/apartment for an entry level that's now defined as 250/250 Mbit/s. Someone must have been unusually savvy with the contracts.

        https://newsroom.cisco.com/c/r/newsroom/en/us/a/y1999/m11/ci...

        Cisco survived, but it took until late last year for its stock to recover its 1999 value (that's 26 years).

      • By dfp33 2026-02-27 21:17 | 2 replies

        Nope wrong framing.

        Nvidia is investing assets into OAI - it has to. Because OAI needs to become successful for Nvidia's story in the long-term to play out, to justify its current stock price.

        • By malfist 2026-02-27 22:04

          You say calling it circular is the wrong framing and then immediately proceed to describe a circle.

        • By trvz 2026-02-27 23:18 | 2 replies

          Nvidia just needs the winner to be an Nvidia customer. OpenAI is replaceable.

          • By moregrist 2026-02-27 23:57 | 2 replies

            If OpenAI folded, you’d have the one LLM company that consumers know suddenly gone. Which seems like the opposite of an AI success story.

            People will start looking at valuations more carefully. Investors will get jittery. Spending on GPUs will drop, as will NVidia’s stock price.

            I’m not sure that NVidia views OpenAI as replaceable.

            • By rfrey 2026-02-28 2:32 | 1 reply

              If OAI folded, there would also be a sudden tsunami of recent Nvidia hardware on the used market.

              • By nehal3m 2026-02-28 3:41 | 1 reply

                Specifically built for training and inference and not much else, and also they age like milk. I don’t see how that helps anyone.

                • By rwyinuse 2026-02-28 10:58

                  It would be a fun day for hobbyists who want to run big open source models locally, if nothing else.

            • By dfp33 2026-02-28 0:51

              [flagged]

          • By 7speter 2026-02-27 23:57 | 1 reply

            Customers comparable to OpenAI are trending toward designing and/or using their own silicon, though.

    • By bandrami 2026-02-27 20:34 | 2 replies

      It's not "continue" buying as much as this is NVIDIA fronting the money for (most of) the hardware OpenAI has already ordered from them. It's like borrowing rent money from your drug dealer.

      • By nelsonic 2026-02-27 20:51 | 4 replies

        Great analogy. ;-)

        Doubt Jensen sees himself as a “dealer”, but considering the vendor lock-in and margins, he pretty much is the Tony Montana of AI chips.

        It’s nuts that this type of financing is legal.

        • By coliveira 2026-02-27 21:05

          It's like credit cards loaning money to people who are unemployed and will default on payments. It's a risky business that is legal and can be very profitable, but may also be disastrous in the future.

        • By kace91 2026-02-27 22:43

          >It’s nuts that this type of financing is legal.

          You need people to burn in house fires for regulation to require extinguishers.

          We're going to be the next generation’s cautionary tale.

        • By nradov 2026-02-27 21:37 | 1 reply

          I don't see the problem as long as materially significant transactions by publicly traded companies are properly disclosed to investors. If someone loses money by buying NVDA then they have only themselves to blame.

          • By bandrami 2026-02-28 1:00 | 1 reply

            This is Jeremy Irons' argument in "Margin Call" too. But most people were unhappy with the secular result.

            • By nradov 2026-02-28 2:09

              Tuld wasn't wrong. There will always be financial bubbles and misallocation of capital. It can't be prevented, and even trying to prevent it would involve intrusive government overreach that would make most people even more unhappy. Investors who want safety are free to buy Treasuries.

        • By Aperocky 2026-02-27 21:00 | 1 reply

          It is legal because Jensen isn't selling drugs, payday loans are legal too!

          • By lotsofpulp 2026-02-27 21:10 | 1 reply

            It’s legal because both sides have armies of lawyers and are voluntarily entering into contracts where each party gets consideration.

            How someone can compare the above situation to a person getting a payday loan to put a roof over their head or food on their plate is beyond me.

            The “it’s like <insert wild and inappropriate analogy to stoke emotion>” is a tired trope.

            • By bandrami 2026-02-28 1:01 | 1 reply

              Come on, calling a round of vendor financing (which is what the NVIDIA money is) "funding" is egregiously misleading. The only new money entering the sector from this is SoftBank's stake.

              • By lotsofpulp 2026-02-28 3:01

                They might have dressed up the wording, but the details are all there for anyone who wants to look at the deal objectively. It is two executives making a non-coerced deal and disclosing the required information to investors.

                Might be a stupid gamble, but it's not akin to a loan shark shaking down a hungry, cold person for life's essentials.

      • By kristjansson 2026-02-27 20:53

        Conversely it’s equity for an in-kind investment. Dave Choe taking the Facebook shares writ large.

    • By ChadNauseam 2026-02-27 20:29 | 11 replies

      > On the one hand you can look at these companies investing and take it as a signal that there is something there (in OpenAI) that's worth investing in. On the other hand all these companies that are investing are basically getting that investment back through spending commitments and such and are just using OpenAI as a proxy for what is essentially buying more revenue for themselves.

      I don't understand how this is some kind of cheat code. Let's say I give you $100 on the condition that you buy $100 worth of product from me. And let's say that product cost me $80 to produce. Isn't that basically the same as me giving you $80? I don't see at all how that's me "basically getting that investment back".

      • By bradfa 2026-02-27 20:38 | 4 replies

        I give you $100 cash and you give me $100 worth of stock in return. Now you give me $100 cash to buy something from me that cost me $80 to produce. I end up with $100 worth of stock in your company which cost me only $80. No?

        NVIDIA gross margins lately are like 75%, so it's more like you give me $100 to buy something from me that cost me $25 to produce, hence I end up with $100 worth of stock in your company and it only cost me $25.
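        The round-trip arithmetic above can be sketched as a toy calculation. This is purely illustrative: the $100 deal size and the ~75% gross margin come from the comments, and none of it reflects actual deal terms.

        ```python
        # Toy model of a vendor-financed ("circular") deal, using the
        # illustrative numbers from the comment above (not real figures).

        investment = 100       # cash the vendor puts in, received back as stock
        stock_received = 100   # stock booked at the deal's valuation
        sale = 100             # the investee spends the cash on the vendor's product
        cost_of_goods = 25     # vendor's production cost at a 75% gross margin

        # Net cash for the vendor: -$100 invested, +$100 sale, -$25 cost.
        net_cash = -investment + sale - cost_of_goods

        print(net_cash)        # -25: the $100 of stock cost only $25 in cash
        print(sale)            # 100 of reported revenue, despite no outside money
        ```

        The point the thread is circling around: the vendor books a full $100 of revenue and a $100 equity stake while only $25 of cash actually leaves its hands.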

        • By ethbr1 2026-02-27 21:17 | 1 reply

          > hence I end up with $100 worth of stock in your company and it only cost me $25.

          You also lost out on $75 worth of cash revenue (opportunity cost from selling the same thing to a different customer), so really you just took stock in lieu of cash.

          It'd be different if Nvidia (TSMC) had excess production capacity, but afaik they're capped out.

          So it's really just whether they'd be selling them to OpenAI and getting equity in return or selling to customers and getting cash in return.

          If OpenAI thinks their own stock is valued above fundamentals, it's a no brainer to try and buy Nvidia hardware with stock.

          • By nsvd2 2026-02-28 17:19

            Sure, but OpenAI doesn't have cash. It does have stock.

            Even if Nvidia has capped production for now, increased demand still allows them to sell chips at a greater margin. Or, to put it another way, presumably Nvidia is charging OpenAI a premium for the privilege of paying with stock.

        • By Rury 2026-02-28 0:32

          In that case, you spent $80 to produce an item and exchanged it for $100 worth of their stock.

          Now if you check, these companies selling their stock like this tend to have large amounts of debt. If their stock becomes worthless, you just wasted $80 producing an item that their creditors have first dibs on. And liquidating your shares immediately to lock in your gain would weigh on the stock's value, potentially to the point where your stake would be worth only $80, and you wouldn't be gaining anything anymore. Your earnings would then tank alongside theirs.

        • By ben_w 2026-02-27 20:47 | 1 reply

          > I give you $100 cash and you give me $100 worth of stock in return. Now you give me $100 cash to buy something from me that cost me $80 to produce. I end up with $100 worth of stock in your company which cost me only $80. No?

          Sure, but how's that a cheat code? If you normally sell something for $100 that costs $80 to make, and then use that $100 revenue to buy $100 of stock, this is an identical outcome for you.

          • By danielheath 2026-02-27 20:56 | 1 reply

            They wouldn’t have bought $100 worth of product if the deal weren’t offered, because they didn’t have $100 to spend.

            • By ben_w 2026-02-27 21:03 | 1 reply

              If they couldn't borrow $100, or get $100 from any other investor, that just puts you in the position of being an investor, and even then the difference between bradfa's version and mine is simply when you became an investor, not whether you became one.

              Again, this is not a cheat code: if you sell $80 of cost for $100 of stock, the stock you now own can go up or down, and if you overvalued it then down is the more likely direction.

              • By ethbr1 2026-02-27 21:19

                The primary cheat code here would actually seem to be (a) getting preferential access to Nvidia's production through these deals and (b) creating a paper story of increasing OpenAI private valuation.

        • By Sleaker 2026-02-27 20:42 | 1 reply

          Aaaannd I get to claim the $100 as revenue to show investors that the company is performing better than if I had not made the deal, which also means that demand for the product stays inflated, which means I can keep my margins higher by not needing to discount my product.

          • By rvnx 2026-02-27 21:22

            Urgently need an IPO so the losers can chip in. If the sandcastle collapses before then, funds and other AI companies lose a lot, so better to bet again and again, even if this is nonsensical.

      • By overfeed 2026-02-27 21:10

        > Isn't that basically the same as me giving you $80?

        In your accounting, you can claim that you have an investment worth $100 and book $100 worth of revenue. You're juicing your sales numbers to impress shareholders - presumably, without your $100, the investee wouldn't have bought $100 worth of your product. The last thing your shareholders want to see are your sales numbers stop growing, or heaven forbid, start shrinking.

        Nvidia is not the first company to "buy" sales of its own product via simple or convoluted incentive schemes. The scheme will work for a while until it doesn't.

      • By tsimionescu 2026-02-27 21:21

        The problem is here:

        > Let's say I give you $100 on the condition that you buy $100 worth of product from me. And let's say that product cost me $80 to produce. Isn't that basically the same as me giving you $80?

        Why limit myself to $100 for a product that costs $80? I could just as well give you $1 000 000 to buy this same product from me. That way, I have a $1 000 000 share of your company, and I have $1 000 000 in revenue, and it only cost me $80.

        This distorts the market for the product we're trading, and distorts the share price for both my company and yours.

      • By hirako2000 2026-02-27 20:37

        That's like giving them* $20.

        And inflating your revenue by $80.

        Competition laws make this kind of arrangement illegal, so you would have to exert influence and have the invested-in company pretend you happened to be picked from among competitors.

        In any case the SEC will be focused on whether the filings are made up to defraud investors, so they could reject the IPO of the invested-in company. Your own entity is also at risk.

        We all know MS gets away with it; they have good legal goons who find ways to make all of it appear fair with regard to the law.

        • By GolfPopper 2026-02-28 6:47

          > they have good legal goons who find ways to make all of it appear fair with regard to the law

          I thought it was more that the legal goons delay the final judgement until Microsoft can eventually find someone they can (technically legally) bribe to drop the case?

          • By hirako2000 2026-02-28 13:43

            Swallowing a fine of a few million dollars will do. The DOJ needs to fund the whole department. By then MS will have moved on to other things; rinse and repeat.

      • By ibeckermayer 2026-02-28 1:40

        > Isn't that basically the same as me giving you $80? I don't see at all how that's me "basically getting that investment back".

        It's a good question, what I think you're missing is that if the market is valuing me (NVIDIA) at 25x revenue then it's more like I traded you (OpenAI) a GPU it cost me $80 to make for $100 worth of OpenAI stock, and I got a bonus $2500 in market cap of my own stock (which existing shareholders like).

        IOW for every incremental "$100" in revenue (circular or otherwise), existing shareholders get paid "$2500" in equity (NVIDIA appreciation + OpenAI shares).

        This "works" for NVIDIA and its shareholders as long as they/the market keeps thinking $100 of OpenAI stock is a good price for a GPU. If OpenAI tangibly fails to deliver on this valuation then NVIDIA may wind up in the red on these deals.

        Caveat: it's a bit more complicated than that, as OpenAI doesn't typically buy/operate GPUs directly afaict; rather, they team up with the big cloud providers like AMZN (also part of the deal). But it's a useful way to wrap your head around the economics, I think (open to correction, not a domain of professional expertise).

        I don't see anything _inherently_ unethical about this as some comments seem to imply. It's definitely riskier than accepting cash, in which case you're free not to play, but it's a calculated risk based on future expectations of growth by OpenAI. Granted there are some sketchy incentives qua existing shareholders that could materialize in pump and dump dynamics.
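The arithmetic in the comment above (a GPU costing $80 traded for $100 of stock, with the market valuing the vendor at a flat 25x revenue multiple) can be sketched with toy numbers; everything here is illustrative, not real figures:

```python
# Toy model of the vendor-financing arithmetic described in the comment above.
# All numbers are illustrative; the flat 25x revenue multiple is an assumption.
def vendor_deal(cost, price, revenue_multiple):
    """Vendor ships a product costing `cost` in exchange for `price` of buyer stock."""
    equity_received = price                              # stock taken as payment
    booked_revenue = price                               # recognized as revenue
    market_cap_bump = revenue_multiple * booked_revenue  # if the market prices the vendor at 25x revenue
    net_cash_cost = cost                                 # cash actually spent making the product
    return equity_received, market_cap_bump, net_cash_cost

equity, bump, cash_out = vendor_deal(cost=80, price=100, revenue_multiple=25)
print(equity, bump, cash_out)  # 100 2500 80
```

The asymmetry the thread is arguing about is visible directly: $80 of cash out buys $100 of (possibly overvalued) equity plus a much larger paper market-cap gain, as long as the multiple holds.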

      • By skydhash 2026-02-27 20:41

        I'm not a finance expert, but it may be because investments and purchases are taxed differently (I don't know). You gave $100 away as an investment and got $100 back as revenue. Meanwhile you establish that your product is worth $100 (while costing $80), and you have $100 worth of shares. Without considering side effects, you gave away $80 worth of product for $100 (supposed) worth of shares. But shares are subject to side effects, and those side effects can be quite nice (making the news, establishing price, ...).

        The issue is that there's no organic force behind those changes and it makes everything hollow. You could create a market inside a deserted area and make it appear like a metropolis.

      • By SecretDreams 2026-02-27 20:34

        > I don't understand how this is some kind of cheat code. Let's say I give you $100 on the condition that you buy $100 worth of product from me. And let's say that product cost me $80 to produce. Isn't that basically the same as me giving you $80? I don't see at all how that's me "basically getting that investment back".

        What if the product only costs you $20 to produce?

        • By a_victorp 2026-02-28 1:44

          Or if the stock is not actually worth $100

      • By rafaelmn 2026-02-27 20:34

        In exchange for $100 of your stock AND making your revenue numbers look insane for the next cycle?

        Also Nvidia margins are waaay higher than 20%

      • By _fat_santa 2026-02-27 20:38

        How I see it is the companies want to jack their revenue and in turn jack the price of their stock and please shareholders. Those are the two main goals which this accomplishes, regardless of the underlying fundamentals.

      • By coliveira 2026-02-27 21:09

        The reason this doesn't make sense is that this is the math of monopoly creation! The government should be making sure companies don't go around throwing money at circular deals that will make them and their friends a fortune while cornering the market, but it seems that capitalism rules don't exist anymore in the US.

      • By Alex3917 2026-02-27 20:34

        For both Amazon and Nvidia, their marginal costs are probably much lower than their fixed costs.

    • By johnbarron 2026-02-27 21:09

      Nvidia sells the picks, AWS rents the mine, OpenAI digs, and the money just loops around the table...

    • By loeber 2026-02-27 21:15

      Comparing OpenAI and WeWork is a nonsensical perspective. OpenAI is shipping the most revolutionary product in a generation, with 800 million monthly active users. It's the fastest revenue ramp ever, at incredible scale -- $20B+ ARR. These are real fundamentals. They matter. And the cost of inference is coming down all the time.

      WeWork was a short-term/long-term lease arbitrage business. The two are nothing alike.

      • By rvnx 2026-02-27 21:29

        They had a first-mover advantage for sure.

        It used to be revolutionary, but now there is a huge difference: plenty of competition, and a growing number of high-quality models that can run offline (for free!) or cheaper (Gemini-Flash for example).

        They are in some way the Nokia of AI, "we have the distribution, product will sell", but this is not enough if innovation is weak.

        They are even lagging behind (GPT-5 is a weaker coder than Claude, Sora is a toy compared to Seedance 2.0, etc).

        Once Apple releases the AIPhone, running offline models with 32 GB of unified memory and optional cloud requests, it's going to be super tough for OpenAI.

        • By 7speter 2026-02-28 0:01

          Local AI is cool and all, but the models that run on typical consumer hardware don't really compare to the breadth of information available from the likes of ChatGPT, let's be real.

      • By DauntingPear7 2026-02-27 21:21

        How will they make money on their product exactly? To the tune of being worth nearly a trillion dollars? There is no guarantee that inference costs will go down; we've seen some improvement with cheap models, but they aren't what people want, and otherwise models stay expensive to run and use.

        • By babelfish 2026-02-27 21:24

          Inference is already profitable (training is not)

          • By nradov 2026-02-27 21:40

            So what. In a highly competitive industry they can't keep selling inference unless they continually train better models. It's like saying my airline is profitable if you don't count the cost of buying new airplanes.

            • By stingraycharles 2026-02-28 0:41

              This is a completely new market and players are currently burning money in order to capture market share. The money will stop flowing in at some point, but until then, you can’t compare it to an industry like aviation which is extremely mature and heavily optimized.

              • By nradov 2026-02-28 0:58

                Nah. The software industry never really becomes mature. Microsoft is still spending a fortune churning on new versions of Windows and Office. The moment that OpenAI cuts spending on training they'll start to slide into irrelevance. Training costs are no longer just for compute resources and engineers: now they need to pay for proprietary training data to differentiate from competitors.

          • By zippothrowaway 2026-02-27 22:10

            [citation needed]

            OpenAI have made this claim, and maybe it holds for API pay-per-use (there's also good evidence even that is not profitable if you dive into how much a rack of B200s costs to operate), but I'd be very sceptical that the free, $20, or $200 a month plans are profitable.

            Then the questions are if the market will bear the real cost and if so how competitive OpenAI are with Google when Google can do what Microsoft did to Netscape and subsidize inference for far longer than OpenAI can.

            • By Kubuxu 2026-02-28 10:56

              Just try using Claude via the API for an hour and you will see that the subscriptions are definitely not profitable (unless the percentage of "paying but dormant" users is very high).

      • By surajrmal 2026-02-28 4:04

        They aren't making money on the vast majority of those 800 million monthly actives. I wonder how many will stick around once they roll out ads. If they keep those users with ads, they definitely will be worth their valuation.

      • By engineer_22 2026-02-27 21:32

        Will they maintain an edge over other AI companies long term? With so many market participants will it become a race to the bottom?

        This valuation puts their P/E around 40.

        Anthropic $380B valuation on $13B ARR. P/E around 30.

        5 years ago Uber was in similar territory. Tesla... Well we won't mention Tesla.

        • By putlake 2026-02-28 1:56

          That's not P/E. That's Price to Sales. P/E is price to earnings ratio. Earnings is profit. Since neither of these companies is profitable, they don't have a P/E ratio today.
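Using the figures quoted upthread ($730B on $20B+ ARR for OpenAI, $380B on $13B ARR for Anthropic, as stated in the thread and not independently verified), the ratio being computed is price-to-sales:

```python
# Price-to-sales, not P/E: valuation divided by annual revenue (both in $B).
# Figures are the ones quoted in the thread, not independently verified.
def price_to_sales(valuation_b, arr_b):
    return valuation_b / arr_b

print(round(price_to_sales(730, 20), 1))  # OpenAI: 36.5
print(round(price_to_sales(380, 13), 1))  # Anthropic: 29.2
```

Those numbers roughly match the "around 40" and "around 30" cited upthread, which is why the correction to price-to-sales matters: without positive earnings, a P/E ratio is undefined.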

      • By dfp33 2026-02-27 21:25

        The only reason to draw this comparison is to show that SoftBank is not as competent as it would like to appear, so putting its name among OpenAI's investors does not strengthen the prospects we should see for OpenAI.

      • By mountainriver 2026-02-27 21:18

        It's one of the worst takes I've heard. OpenAI creates the fastest growing app ever, spawns a revolution bigger than the internet, and this guy's take is that they are like WeWork…

        • By yibg 2026-02-27 21:57

          Both can be true. Just because you've created a revolutionary product doesn't mean it's a viable business, let alone one worth $700+ billion. There is a lot of history of the first movers that created revolutionary products that eventually faded away into nothing, while others capitalized on the innovation.

          • By runlaszlorun 2026-02-28 0:01

            > There is a lot of history of the first movers that created revolutionary products that eventually faded away into nothing, while others capitalized on the innovation.

            I'd say most first movers fade away. Microsoft wasn't the first OS, Google wasn't the first search engine, Facebook wasn't the first social network... etc... etc... etc...

        • By hk__2 2026-02-27 21:40

          Being the first doesn’t mean you’ll win. They have no product, only a commodity that you can find at other companies or even for free (DeepSeek).

          • By rvnx 2026-02-27 21:47

            They have a product but it’s a commodity now.

            They are in the business of selling compute / datacenter rack spaces. A server where you pay per GBs transferred in/out.

            If it’s Gemini or GPT behind, for most use cases users wouldn’t care.

    • By Nevermark 2026-02-27 22:41

      Circular investing can be a smoke screen.

      But it can also simply be the financial framing for direct bartering. Which is even more direct than regular financial transactions.

      "I will provide these resources you need, in exchange for part ownership," and/or "a limited license to your tech," "the right to provide access to our customers on these terms," etc.

      Amazon doesn't need any frothy fake revenue. But they do want to offer their customers the most in demand models, with the best financial terms for Amazon.

      Nvidia wants customers, but not at the expense of throwing money away. Their market cap may be volatile, but their books are beyond solid.

      I would be a lot more concerned if OpenAI was getting "funding" from a quantum computer startup, and vice versa.

    • By system2 2026-02-27 20:23

      I am expecting OpenAI stock to be the most volatile in history. The first 3-6 months will be fun.

      • By leonflexo 2026-02-27 20:37

        How far the volatility ripples out will give us a real look into just how self-reinforced the financials truly are.

  • By 7777777phil 2026-02-27 21:17

    $730B pre-money for a company where each model is roughly 2x profitable on its own, but each next model costs 10x the last. The whole thing only works if scaling keeps delivering. Research (Sara Hooker et al.) is not encouraging on that front, compact models already outperform massive predecessors on downstream tasks while scaling laws only predict pre-training loss reliably.

    Wrote about both the per-model math and the scaling question:

    (1) https://philippdubach.com/posts/ai-models-as-standalone-pls/

    (2) https://philippdubach.com/posts/the-most-expensive-assumptio...

    EDIT: Removed the dot after et; bc apparently it's an entire word (the more you know..)

    • By onlyrealcuzzo 2026-02-27 21:51

      > each model is roughly 2x profitable on its own, but each next model costs 10x the last. The whole thing only works if scaling keeps delivering.

      This is a decent argument, but it's not the death knell you think.

      Models are getting 99% more efficient every 3 years - to get the same amount of output, combined with hardware and (mostly) software upgrades - you can use 99% less power.

      The number of applications where AI is already "good enough" keeps growing every day. If the cost goes down 99% every three years, it doesn't take long until you can make a ton of money on those applications.

      If AI stopped progressing today, it would take probably a decade or longer for us to take full advantage of it. So there is tons of forward looking revenue that isn't counted yet.

      For the foreseeable future, there are MANY MANY uses of models where a company would not want to host its own models and would be GLAD to pay a 4-5x cost for someone else to host the model and hardware for them.

      I'm as bullish on OpenAI being "worth" $730B as I was on Snap being worth what it IPO'd for - which it's still down about 80% (AFTER inflation, or about ~95% adjusting for gold inflation).

      But guess what - these are MINIMUM valuations based on 50-80% margins - i.e. they're really getting about ~$30B - the rest is market value of hardware and hosting. OpenAI could be worth 80% less, and they could still make a metric fuck-ton of money selling at IPO with a $1T+ market cap to speculative morons easily...

      Realistically, very rich people with high risk tolerance are saying that they think OpenAI has a MINIMUM value of ~$100B. That seems very reasonable given the risk tolerance and wealth.

      • By janalsncm 2026-02-27 23:38

        When models get cheaper to run for OpenAI, they also get cheaper for everyone else. It gets commoditized. AI might be able to do more, but most people aren’t going to pay for a thing they could get for free. See the many models on Huggingface as examples of that.

        And as the number of things AI is “good enough” at increases, the list of things on the frontier that people will want to pay OpenAI for shrinks. Even if OpenAI can consistently churn out PhD level math, most companies don’t care about that.

        So a necessary (but not sufficient) condition for the math to work out is that frontier tasks still exist and are profitable. This is why CEOs keep hyping up AGI. But what they really want is for developers to keep paying to get AI to center a div.

        • By intrasight 2026-02-28 0:38

          > get cheaper to run

          Irrelevant. The model is the moat

          > most companies don’t care about that.

          Wrong. They will use the model that gives them an edge. If they are using a PhD but their competitors are using Einstein, they will lose.

          > center a div

          For sure a common use case, but it is not what the CEO is concerned about with AI.

          • By vidarh 2026-02-28 0:58

            > Wrong. They will use the model that gives them an edge. If they are using a PhD but their competitors are using Einstein, they will lose.

            For some tasks that matters. But for a lot of tasks, "good enough but cheaper" will win out.

            I'm sure there will be a market for whichever company has the best model, but just like most companies don't hire many PhDs, most companies won't feel a need for the highest-end models either, above a certain level.

            E.g. with the release of Sonnet 4.6, I switched a lot of my processes from Opus to Sonnet, because Sonnet 4.6 is good enough, and it means I can do more for less.

            But I'm also experimenting with Kimi, Qwen, Deepseek, and others for a number of tasks, including fine-grained switching and interleaving. E.g. have a cheap but dumb model filter data or take over when a sub-task is simple enough, in order to have the smart model do less, for example.

              • By intrasight 2026-02-28 11:16

              Models will get smarter and cheaper. For those that are burned directly into silicon, there will be a market for old models - as the alternative is to dump that silicon in a landfill.

              For models that run on general-purpose AI hardware, I don't know why the vendors would waste that resource on old models.

              • By vidarh 2026-02-28 12:47

                Who says anything about old models? What we're seeing is that as the frontier models get better, we get cheaper, better small models that leverage the advances but cost a fraction. At the same time, hardware provides more, cheaper options. Sometimes far faster options too (e.g. Cerebras).

                In terms of price, I can get 1m output tokens from Deepseek for 40 cents vs. 25 dollars for Opus, and a number of models near the 1-2 dollar mark that are increasingly viable for a larger set of applications.

                Providers will keep running those cheaper models as long as there's demand.

              • By generallyjosh 2026-03-01 14:10

                Larger models need more hardware resources to run

                And, depending on effort settings, they do more 'thinking', i.e., use more rounds of inference to generate longer internal chains of thought

                Both very good reasons to prefer a smaller model, if the small model is good enough for the task

          • By janalsncm 2026-02-28 9:25

            > The model is the moat

            What model? GPT-4o certainly isn't a moat for OpenAI. They need to keep training better and better models because Qwen3, Kimi K2.5, etc. are constantly nipping at their heels.

            > Wrong. They will use the model that gives them an edge. If they are using a PhD but their competitors are using Einstein, they will lose.

            It depends on the business. As much as I’d love to engage a PhD or an Einstein in my Verizon customer support call, it isn’t going to net the call center any value to pay for that extra compute.

              • By intrasight 2026-02-28 11:10

              It's a moat. Yes, they must keep refilling it, but it's all they have.

              My PhD vs Einstein analogy was bad. What I mean is stupid vs smart. Nobody is going to pay for a stupid model when they can pay a bit more for smart.

              • By xyzsparetimexyz 2026-02-28 13:04

                But what if all the models are smart enough for the task? Then it's about price, no?

          • By robotpepi 2026-02-28 11:13

            > If they are using a PhD but their competitors are using

            god what are these assumptions

            • By intrasight 2026-02-28 11:35

              More analogy than assumption. And admittedly a poor analogy.

      • By christoff12 2026-02-27 22:33

        > Models are getting 99% more efficient every 3 years - to get the same amount of output, combined with hardware and (mostly) software upgrades - you can use 99% less power.

        Even if true, this still doesn't bend the curve when paying for the next model.

        > If AI stopped progressing today, it would take probably a decade or longer for us to take full advantage of it. So there is tons of forward looking revenue that isn't counted yet.

        If this is true, it's true for the technology overall, and not necessarily OpenAI since inference would get commoditized quickly at that point. OpenAI could continue to have a capital advantage as a public stock, but I don't think it would if the music stopped.

        • By XenophileJKO 2026-02-27 22:47

          I would actually like to see the real math currently.

          The market adoption has increased a lot. The cost to serve has come down a lot per token.

          Model sizes have not increased exponentially recently (The high point being the aborted GPT-4.5), most refinement recently seems to be extending training on relatively smaller models.

          When you take this into account together, the relative training to inference income/cost ratio likely has actually changed dramatically.

      • By 7thpower 2026-02-27 23:41

        I love that you are already confident fitting a curve. I want some of that swagger in my life.

      • By dfp33 2026-02-27 21:57

        "If AI stopped progressing today, it would take probably a decade or longer for us to take full advantage of it."

        AI stopped progressing, or LLMs? I really dislike people throwing the term AI around.

        • By grosswait 2026-02-27 22:11

          For the purposes of their argument, I don’t think the distinction matters.

      • By kortilla 2026-02-27 21:56

        > Models are getting 99% more efficient every 3 years

        The LLM industry has only be around for like 4 years. Extrapolating trends from that is pretty naive.

      • By blmarket 2026-02-27 21:59

        > 99% more efficient every 3 years

        It's more like 2x efficiency. I'd take 50% less power over the ridiculous claim of 99% less power.

        • By sigmoid10 2026-02-27 22:44

          GPT-4 came out 3 years ago and you can run comparable models for 1% of the cost nowadays. That is not 2x efficiency. That's two orders of magnitude in end-to-end compute efficiency.
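Taking the "1% of the cost over 3 years" figure above at face value, the implied constant per-year decline can be checked quickly (a sketch; the input number is the thread's claim, not verified data):

```python
# If cost falls to 1% of its starting level over 3 years, the implied
# constant annual cost multiplier is 0.01 ** (1/3).
annual_multiplier = 0.01 ** (1 / 3)
annual_drop_pct = (1 - annual_multiplier) * 100
print(round(annual_drop_pct, 1))  # 78.5, i.e. ~78.5% cheaper each year
```

Whether that rate can be extrapolated forward is exactly what the rest of this subthread disputes.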

          • By danielparsons 2026-02-27 22:56

            you're looking at nearly the entire curve of the tech's development. that's like saying lightbulbs became 99% more energy efficient and therefore will become another 99% more energy efficient. but most techs follow an S curve.

            • By MengerSponge 2026-02-27 23:12

              But S curves are boring and don't moon

            • By sigmoid10 2026-02-28 17:42

              >you're looking at nearly the entire curve of the tech's development

              That's a pretty strong statement that would need some data or at least a mathematical argument to back it up. Otherwise it's like saying in the 1980s that PCs with 640kB RAM have reached their pinnacle in terms of what users can expect in real life benefits and there's no reason to keep pushing the tech.

              • By danielparsons 2026-02-28 20:55

                *entire curve to-date (I should have clarified). Yes it will get better for a long time, but where we are on the curve is harder to say. Lots of metrics to choose from, like "well it's incorrect 90% less often than a year ago, so that's a 10x improvement!". But the real metric that matters is how useful it is to people, and based on user data it looks like the only area it's getting exponentially more useful YoY is for programming. Lot of coders using it 10x more than before to code 10x faster. Not sure any other profession uses it for more than a juiced-up search engine / proofreader.

                • By sigmoid10 2026-03-01 15:04

                  Tbf that sounds like a strong bias from someone who works exclusively in software development and simply hasn't found other uses. But I have worked with integrating LLMs across quite a few applications and departments by now and I can comfortably say that programming is not the only thing where we see extreme benefits. I wouldn't even say it's the area that has seen the most benefit so far. There used to be a lot of mundane work outside of software development that was easy prey even for early models. And with the current cutting edge models I'm pretty sure that you could replace >75% white collar jobs if you just get the context engineering right. That's the hard part right now, not the raw intelligence necessary for arbitrary data processing. But frameworks are getting there fast.

            • By robotresearcher 2026-02-28 1:42

              > most techs follow an S curve.

              All techs, eventually.

          • By swingboy 2026-02-27 22:52

            How do we know how much it costs? Or is this just based off the token pricing?

            • By Xunjin 2026-02-28 2:47

              That's the crux of the question... The entire argument rests on token pricing, which can be subsidized.

      • By moron4hire 2026-02-27 22:08

        We said all the same shit about VR, dude. Even had a global pandemic show up to boost everyone's interest in the key market of telepresence. Turns out the merry go round can stop abruptly.

        • By christophilus 2026-02-28 0:04

          No. Like many of us, I never saw much value in VR. LLMs have undeniable value that is general and broad. Now, does that mean OpenAI has a moat? No, it does not.

        • By solumunus 2026-02-27 22:43

          Did we?! You and Mark Zuckerberg maybe.

          • By mixdup 2026-02-27 23:50

            "Am I nothing to you?" --Tim Cook

      • By robotpepi 2026-02-27 22:49

        OK, but everything depends on your numbers being correct. 99% improved efficiency seems like a way too optimistic prediction.

      • By shruubi 2026-02-28 11:43

        > Models are getting 99% more efficient every 3 years - to get the same amount of output, combined with hardware and (mostly) software upgrades - you can use 99% less power.

        This is such a poor argument for a number of reasons.

        1. Three years ago is basically when the "AI race" really kicked off amongst the frontier companies. You're effectively comparing a car from the 1920s/30s to a modern car.

        2. Past performance is not an indicator of future performance. You can't just say that LLM's will grow and improve at a fixed rate for all time, that isn't how they or anything else works in the real world.

        3. Since it's an open secret that companies like Anthropic and OpenAI are running their models at a loss, a static 99% cheaper every three years arc still puts these companies at a net negative position unless compute, energy and water all somehow start getting 99% cheaper every three years.

      • By paulcole 2026-02-28 0:00

        > Models are getting 99% more efficient every 3 years

        How many years total are you basing this on?

      • By lelanthran 2026-02-28 7:35

        > Models are getting 99% more efficient every 3 years

        Ugh. Someone has to do this: https://xkcd.com/605/

    • By credit_guy 2026-02-28 1:36

      > each next model costs 10x the last

      Yes, but there's a chance that actually training is done more or less for free by companies like OpenAI. The reason being that they do a gigantic amount of inference for end users (for which they get paid), but their servers can't be constantly utilized at 100% by inference. So, if they know how to schedule things correctly (and they probably do), they can do the training of their new model on the unutilized compute capacity. If you or I were to pay for that training, it would be billions of dollars, but for them it is just using compute that otherwise would be idle.

    • By nazgul17 2026-02-27 22:29

      What makes you think this trend will continue? In a situation with finite resources (eg the number of parameters), the default is to assume things will plateau.

    • By rishabhaiover 2026-02-28 1:00

      I was reading a paper on dark silicon and how it broke the beautiful scaling laws of the past (Moore's law/Dennard Scaling). We hit a wall, innovated and at the moment, the hardware industry is thriving. To me, that means scaling the industry and riding that momentum wasn't wrong. In fact, it allowed us to be where we are today.

      Why are we so opposed, in principle, to the current pre-training scaling laws? Perhaps we'll require new innovations at some point, but the momentum allows us to reach new heights that we've never climbed before.

    • By simonebrunozzi 2026-02-28 21:33

      > EDIT: Removed the dot after et; bc apparently it's an entire word (the more you know..)

      From Latin "et alia", abbreviated as "et al." - it's not a single word but an expression.

    • By ohyoutravel 2026-02-27 22:28

      Et is an entire word and doesn’t need a period at the end.

  • By pier25 2026-02-27 15:13

    > Amazon will start with an initial $15 billion investment, followed by another $35 billion in the coming months when certain conditions are met.

    Those conditions are an IPO or reaching AGI [1].

    Nvidia and SoftBank will pay in installments.

    Also very interesting that Microsoft decided to not invest in this round. A PR statement was made though [2].

    [1] https://americanbazaaronline.com/2026/02/26/amazon-to-invest...

    [2] https://openai.com/index/continuing-microsoft-partnership/

    • By Netcob 2026-02-27 16:30

      Once they "reach AGI", will they have a big party on a carrier with a "Mission Accomplished" banner?

      • By echelon 2026-02-27 20:39

        [flagged]

        • By toraway 2026-02-28 0:53

          A classic hype merchant sales pitch: believe me, I was a doubter just like you, but I saw the light thanks to [insert latest model]!

          (Which for anyone familiar with your long comment history as a regular HN poster, is comically absurd to imply. You've been reliably adamant that AI will demolish this or that entire industry overnight for years at this point).

          • By laserlight 2026-02-28 8:28

            GP commenter got my attention during the last few days. Judging by their claims of productivity, they should have been a billionaire already. I'm curious to know their motivation behind making such outrageous claims.

            • By sph 2026-02-28 21:40

              I’ve seen their outrageous comments so often I wonder if it’s Sam Altman’s alt account. Probably the biggest AI snake oil merchant on the forum these days, with a sadistic pleasure at seeing people losing their job to AI.

          • By echelon 2026-02-288:301 reply

            > You've been reliably adamant that AI will demolish this or that entire industry overnight for years at this point

            We'll see who's right. I never said "overnight". Let's check in at the decade's end.

            Y'all dunked on me in 2019 when I said AI was coming for Hollywood. Have you seen Seedance 2.0?

            It's coming for us too. I've written five nines, active-active systems that handle billions of dollars of money movement daily. These systems can work in those contexts. I didn't think we'd be here this soon, and I actually thought LLMs were a dead end. I was wrong.

            I'm not trying to sell Claude Code. I hate the concept of hyperscaler companies. I want there to be viable open source coding models - there just aren't. I'm merely reporting on my findings.

            I sit at my machine for hours now in a prompt, review, test cycle. It's addictive. I'm getting more done at a faster rate than any time in my professional career. I'm excited, and I'm also worried. I don't know what happens after this.

            If you've seen how much I praise AI, then you've also seen how much I rail against monopolies. I am worried these giant companies are going to take the means of production from us. I don't think enough people are freaking out about this. It's a very real possibility.

            I'm just going to keep building. But you should pay close attention to what's happening.

            • By CamperBob2 2026-02-2818:56

              You're not wrong in principle, but you've made some specific extraordinary claims. If you're really that productive, generating useful work product at a rate of 20 kloc/day by yourself, people would pay just to learn how you're doing it!

              Y'all dunked on me in 2019 when I said AI was coming for Hollywood. Have you seen Seedance 2.0?

              Being right at the wrong time is often worse than just being outright wrong, I've found.

        • By Aperocky 2026-02-2721:021 reply

          > write 20kloc of Rust code every day (and I review all of it)

          Only one of these can be true. It's no shame to say you don't bother reviewing it; in the future that may well be the norm.

          • By echelon 2026-02-288:242 reply

            This isn't agentic coding. I haven't even tried that yet.

            I'm prompting every change set, reviewing the outputs, then reviewing the total changes.

            I'm sitting at my PC all day doing this - I used to be productive in short bursts, now I'm productive all the time. It's addictive.

            • By ThrowawayR2 2026-03-013:21

              You are reviewing 20kLOC per day, which is about 0.7 LOC per second across an 8-hour working day, assuming you do literally nothing else but review? Pardon me, but I don't find that very believable.
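              The arithmetic behind that figure, as a quick sketch (assuming an 8-hour day with no breaks, which is generous):

              ```python
              # Sanity check of the claimed review rate: 20,000 lines of code
              # reviewed per day, over an uninterrupted 8-hour working day.
              lines_per_day = 20_000
              seconds_per_day = 8 * 60 * 60  # 28,800 seconds

              rate = lines_per_day / seconds_per_day
              print(f"{rate:.2f} LOC/second")  # ~0.69 lines reviewed every second
              ```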

            • By Tesl 2026-02-2815:23

              I don't completely disagree with what you are saying -- but there's no way you produce and review 20k lines a day. That part is clearly false. Even if Claude generated it in 10 seconds and it needed no changes you probably still couldn't review that much sensibly.

              I used Claude a lot on a recent project where it probably wrote 15-20k lines in a month, and it was overall excellent.

        • By hn_acc1 2026-02-2721:011 reply

          So you now have 400Kloc of Rust code? Doing what? How much of that is "new"?

          I can't get Augment / Opus 4.5 to edit a few C++ files from within VSCode without it going off on a wild goose chase or getting stuck in an infinite loop even after I tell it what it should be doing: "oh, you're right, I need to do X", "To do X, I must understand how to do Y", "I see now that to do Y, I should look at Z", "Let me look at Z", followed by: "oh, you're right, I need to do X"..

          • By drivebyhooting 2026-02-2723:431 reply

            To be fair, humans editing C++ also go on wild goose chases. Have you seen the insanity the C++ committee has ratified?

            • By harry8 2026-02-286:061 reply

              Not seeing anyone at all suggesting the work of the C++ committee is going to radically alter the efficiency of labour.

              • By drivebyhooting 2026-02-287:501 reply

                My point is that C++ is a tarpit, famous for nerd-sniping and leading otherwise intelligent people into wild goose chases.

                Have you read Alexandrescu's "Modern C++ Design"? It's like a piece of modern art that's completely lacking in self-awareness. There's just something about C++ that lures intellectuals in, like ice-age mammoths to a tar pit.

                Small wonder LLMs also fall victim to C++'s deranged ways.

                • By harry8 2026-03-0110:10

                  I don’t love c++ but can get things done in it. The tarpit is Haskell ;-)

        • By bigfishrunning 2026-02-2721:401 reply

          > 20kloc of Rust code every day (and I review all of it).

          Reviewing 1k lines of code an hour is a breakneck pace, are you spending 20 hours a day reviewing code?

          • By xienze 2026-02-2721:50

            It’s clearly code so flawless you can tell at a glance that it’s correct.

        • By throwaway173738 2026-02-2721:56

          To do what, exactly, and are people paying you for your output or are you just making things for yourself?

          Building things at a mature company with a market is a lot different than hacking together your own tools. There are a lot more people you can let down at scale.

        • By techpression 2026-02-2721:14

          By that definition of reviewing code, watching the movie is the same as reading the book it's based on. No human has ever reviewed 600k lines of code in a month, ever. It's hard to find someone who can even read and understand that amount in that time.

        • By lelanthran 2026-02-287:37

          > In the last month, I've been using Claude Code to write 20kloc of Rust code every day (and I review all of it).

          If you're generating 20kLoC per day, you definitely aren't reviewing it!

        • By Cipater 2026-02-2721:491 reply

          What does all this code do? What software are you writing?

        • By unfunco 2026-02-286:10

          There's no way you're reviewing 20kloc a day unless all you're doing is the Sandi Metz squint test.

        • By candiddevmike 2026-02-2720:52

          > They just need to put all of the engineers on HN out of work.

          I think you've crossed the line from being an AI maxi to just rage baiting. This comment is a pointless anecdote at best, please take your ridiculous FOMO takes elsewhere.

        • By lifeformed 2026-02-283:451 reply

          I buy my software by weight! Give me the biggest one you have, please. I'm talking hundreds of klocs.

          • By http-teapot 2026-02-285:15

            I let it work all night, wake up and eat klocs for breakfast.

        • By xienze 2026-02-2721:491 reply

          I’m convinced these “guys you gotta believe me I’m a seasoned veteran and this shit is the real deal” posts that show up in every AI thread are either coming from Sam Altman or a bot.

          • By echelon 2026-02-2723:222 reply

            Just try it.

            • By CamperBob2 2026-02-2723:541 reply

              I have. It's great. Not 20 kloc/day great, though, and nobody believes that you are giving >1 kloc/hour anything more than the most casual glance.

              • By ex-aws-dude 2026-02-280:27

                I had to hack my scrollbar speed to 500% to get through all the code reviews in a day

            • By xienze 2026-02-282:201 reply

              Yeah, I do use agentic coding a good bit. It’s impressive, but not so much that I’ve convinced myself that my days are numbered.

              • By echelon 2026-02-288:22

                I'm not doing agentic coding yet! I'm asking it change by change. I'm just sitting at the PC all day doing this.

    • By oersted 2026-02-2716:35

      I'd be interested in seeing how exactly the lawyers figured out how to define AGI. It must be a fairly mundane set of KPIs that they just arbitrarily call AGI; the term will probably devalue significantly in the coming years.

      The actual quote is this though:

      > hitting an AGI milestone or pursuing an IPO

      So it seems softer than actually achieving AGI or finalising an IPO.

    • By bpp 2026-02-2716:14

      I'd assume the real trigger here is "reaching AGI", which would help OpenAI shrug off some of their Microsoft commitments, thus making OpenAI models available on Amazon Bedrock. Which is what Amazon is really after.

    • By paxys 2026-02-2716:513 reply

      Very convenient to put "AGI" in all these agreements because the term is fundamentally undefinable. So throw out whatever numbers you want and fight about it and backtrack later.

      • By bwfan123 2026-02-2717:30

        > fundamentally undefinable

        Incredible, how an entire religion has sprung up around AGI.

      • By ben_w 2026-02-2721:22

        The problem with AGI is not that it's undefinable, but that everyone has a different one. Kinda like consciousness in that regard.

        Fortunately, OpenAI already wrote theirs down. Well, Microsoft[0] says they did, anyway. Some people claimed it was a secret only a few years ago, and since then LLMs have made it much harder to tell the difference between genuine leaks and hallucinated news, but there's at least a claim of a leak[1].

        [0] https://blogs.microsoft.com/blog/2026/02/27/microsoft-and-op...

        [1] It talks about it, but links to a paywalled site, so I still don't know what it is: https://techcrunch.com/2024/12/26/microsoft-and-openai-have-...

      • By copx 2026-02-2721:364 reply

        The definition used to be "passes the Turing test" .. until LLMs passed it.

        • By davemp 2026-02-2723:27

          Extremely debatable. Especially because there is no "The Turing Test" [0], only an imitation game and a few variants described by Turing. I recommend reading the original paper before making bold claims about it. The bar for the interrogator has certainly been raised, but considering:

          - the prevalence of "How many |r|'s are in the word 'strawberry'?"-esque questions that cause(d) LLMs to stumble

          - context window issues

          It would be naive to claim that there does not exist, or even that it would be difficult to construct/train, an interrogator that could reliably distinguish between an LLM and human chat instance.

          [0]: https://archive.computerhistory.org/projects/chess/related_m...

        • By hunterpayne 2026-02-2723:33

          Sure, when the expected monetary value was 0. Then they started claiming that investing $1,000,000,000,000.00 (that's $1T) into a 4-year-old startup was a good idea. Change the valuation, change the goal. Then the goal was to be better than human employees (or at least more efficient, or even just to improve efficiency), because without that the value of the LLM is far lower than what it is being sold as. All the research so far says that LLMs fall far short of that goal. And if this were someone else's money, fine. But this is basically everyone's retirement savings. Again, higher valuation, higher goal. Finally, when you start losing people's retirement savings, criminal penalties start getting attached to things.

        • By casey2 2026-02-2813:51

          It hasn't even passed the original Turing test, depending on the question. There are an unlimited number of questions that cause LLMs to give inhuman-looking answers.

          As for writing in general, slop score is still higher than the human baseline for all models[1], so all a human tester has to do is grade it and have the human write a bunch; the interrogator is allowed to submit an arbitrarily long list of questions.

          [1] https://eqbench.com/slop-score.html

        • By eqvinox 2026-02-286:20

          I mean… just ask about something "naughty" and they'll fail? At the very least you'd need to use setups without safeguards to pass any Turing test…

          The Turing test could also be considered equivalent to "can humans come up with questions that break the AI?" and the answer to that is still yes I'd say.

    • By eikenberry 2026-02-2720:583 reply

      Has OpenAI laid out the specific definition of what an AGI is for this case? The one from their mission is quite vague and the general community has nothing close to a universal common definition... which means they will most likely just define it as what they already have when the timing is right.

      • By NekkoDroid 2026-02-2819:46

        > Has OpenAI laid out the specific definition of what an AGI is for this case?

        Yes and it's actually hilarious: a system that can perform most economically valuable work better than humans, or specifically when the AI generates $100 billion in profits.

      • By CSMastermind 2026-02-2721:35

        At least in their Microsoft contract it means $100 billion in profit, though they don't need to have actually made that money, they just need to show they're on track to do so.

      • By rvz 2026-02-283:58

        > Has OpenAI laid out the specific definition of what an AGI is for this case?

        AGI is an IPO.

    • By asadotzler 2026-02-2720:28

      All the major investments in these big rounds have come in tranches, right?

    • By konschubert 2026-02-2715:172 reply

      So they’re getting in on the IPO.

      Are they going to get stock for it or is it a PIPE?

      Personally, I don’t think I want to get in on this at retail prices.

      It can both be true at the same time that AI is going to disrupt our world and that being an AI lab is a terrible business.

      • By rvnx 2026-02-2722:221 reply

        But will you have a choice once they enter the indexes? People are automatically going to invest into that (circular) pyramid scheme.

        • By konschubert 2026-02-287:09

          Yea, I'm considering divesting from MSCI World into single stocks

      • By mattfrommars 2026-02-281:481 reply

        Like the other comment says, OpenAI can force itself into massive indexes like VOO/FXAIX etc. to make retail folks provide a liquidity exit for OpenAI investors.

        • By konschubert 2026-02-287:10

          There is always the option to sell MSCI World and buy single stocks.

HackerNews