Comments

  • By atlasunshrugged 2025-05-0518:2617 reply

    I think this is one of the most interesting lines as it basically directly implies that leadership thinks this won't be a winner take all market:

    > Instead of our current complex capped-profit structure—which made sense when it looked like there might be one dominant AGI effort but doesn’t in a world of many great AGI companies—we are moving to a normal capital structure where everyone has stock. This is not a sale, but a change of structure to something simpler.

    • By phreeza 2025-05-0519:143 reply

      That is a very obvious thing for them to say though regardless of what they truly believe, because (a) it legitimizes removing the cap , making fundraising easier and (b) averts antitrust suspicions.

      • By pryce 2025-05-062:24

        > "Our for-profit LLC, which has been under the nonprofit since 2019, will transition to a Public Benefit Corporation (PBC)–a purpose-driven company structure that has to consider the interests of both shareholders and the mission."

        One remarkable advantage of being a "Public Benefit Corporation" is that it:

        > prevent[s] shareholders from using a drop in stock value as evidence for dismissal or a lawsuit against the corporation[1]

        In my view, it is their own shareholders that the directors of OpenAI are insulating themselves against.

        [1] https://en.wikipedia.org/wiki/Benefit_corporation

      • By throwaway48476 2025-05-0523:521 reply

        (b) is true but not so much (a). If investors thought it would be winner-take-all and they thought ClosedAI would win, they'd invest in ClosedAI only and starve competitors of funding.

        • By sebastiennight 2025-05-060:431 reply

          Actually I'm thinking in a winner-takes-all universe, the right strategy would be to spread your bets on as many likely winners as possible.

          That's literally the premise of venture capital. This is a scenario where we're assuming ALL our bets will go to zero, except one which will be worth trillions. In that case you should bet on everything.

          It's only in the opposite scenario (where every bet pays off with varying ROI) that it makes sense to go all-in on whichever bet seems most promising.

          • By AbstractH24 2025-05-0612:55

            Y that sounds just like a certain startup incubator’s perspective on things.

      • By selfselfgo 2025-05-0522:31

        [dead]

    • By istjohn 2025-05-0519:041 reply

      I'm not surprised that they found a reason to uncap their profits, but I wouldn't try to infer too much from the justification they cooked up.

      • By pdfernhout 2025-05-060:022 reply

        As a deeper issue on "justification", here is something I wrote related to this in 2001 on the risks of non-profits engaging in self-dealing when they create artificial scarcity to enrich themselves:

        https://pdfernhout.net/on-funding-digital-public-works.html#...

        "Consider this way of looking at the situation. A 501(c)3 non-profit creates a digital work which is potentially of great value to the public and of great value to others who would build on that product. They could put it on the internet at basically zero cost and let everyone have it effectively for free. Or instead, they could restrict access to that work to create an artificial scarcity by requiring people to pay for licenses before accessing the content or making derived works. If they do the latter and require money for access, the non-profit can perhaps create revenue to pay the employees of the non-profit. But since the staff probably participate in the decision making about such licensing (granted, under a board who may be all volunteer), isn't that latter choice still in a way really a form of "self-dealing" -- taking public property (the content) and using it for private gain? From that point of view, perhaps restricting access is not even legal?"

        "Self-dealing might be clearer if the non-profit just got a grant, made the product, and then directly sold the work for a million dollars to Microsoft and put the money directly in the staff's pockets (who are also sometimes board members). Certainly if it was a piece of land being sold such a transaction might put people in jail. But because the content or software sales are small and generally to their mission's audience they are somehow deemed OK. The trademark-infringing non-profit-sheltered project I mention above is as I see it in large part just a way to convert some government supported PhD thesis work and ongoing R&D grants into ready cash for the developers. Such "spin-offs" are actually encouraged by most funders. And frankly if that group eventually sells their software to a movie company, say, for a million dollars, who will really bat an eyebrow or complain? (They already probably get most of their revenue from similar sales anyway -- but just one copy at a time.) But how is this really different from the self-dealing of just selling charitably-funded software directly to Microsoft and distributing a lump sum? Just because "art" is somehow involved, does this make everything all right? To be clear, I am not concerned that the developers get paid well for their work and based on technical accomplishments they probably deserve that (even if we do compete for funds in a way). What I am concerned about is the way that the proprietary process happens such that the public (including me) never gets full access to the results of the publicly-funded work (other than a few publications without substantial source)."

        That said, charging to provide a service that costs money to supply (e.g. GPU compute) is not necessarily self-dealing. It is restricting the source code or using patents to create artificial scarcity around those services that could be seen that way.

        • By SOLAR_FIELDS 2025-05-064:31

          Enlightening read, especially your last paragraph which touches on the nuance of the situation. It’s quite easy to end up on one side or the other when it comes to charity/nonprofits because the mission itself can be very motivating and galvanizing.

        • By jasode 2025-05-0610:051 reply

          >"Self-dealing [...] convert some government supported PhD thesis work [...] the public (including me) never gets full access to the results of the publicly-funded work [...]

          Your 2001 essay isn't a good parallel to OpenAI's situation.

          OpenAI wasn't "publicly funded" i.e. with public donations or government grants.

          The non-profit was started and privately funded by a small group of billionaires and other wealthy people (Elon Musk donated $44 million; Reid Hoffman and others collectively pledged $1 billion of their own money).

          They miscalculated in thinking their charity donations would be enough to recruit the PhD machine learning researchers and pay the high GPU costs to create the AI alternative to Google DeepMind, etc. They massively underestimated future AI development costs in their 2015 assumptions, and now they look bad for trying to convert it to a for-profit enterprise. Instead of a full conversion to for-profit, they will now settle for keeping a subsidiary that's for-profit, somewhat like other entities structured as a non-profit that owns for-profit subsidiaries, such as Mozilla, the Girl Scouts, Novo Nordisk, etc.

          Obviously, with hindsight... if they had to do it all over, they would just create the reverse structure: the OpenAI for-profit company as the "parent entity" that pledges to donate money to charities. E.g. Amazon Inc is the for-profit that donates to the Housing Equity Fund for affordable housing.

          • By CityOfThrowaway 2025-05-0615:461 reply

            All 501(c)(3)s are funded in part by the public, by way of tax revenue forgone on economically valuable activity.

            • By jasode 2025-05-0616:211 reply

              >uncollected tax revenues for economically valuable activity.

              Taxes are on profits not revenue. The for-profit OpenAI LLC subsidiary created in 2019 would have been the entity that owes taxes but it has been losing money and never made any profits to tax.

              Yesterday's news about switching from for-profit LLC to for-profit PBC still leaves a business entity that's liable for future taxes on profits.

              • By philipodonnell 2025-05-0618:05

                The contributors to the charity get a write off too

    • By huijzer 2025-05-066:363 reply

      The value investor Mohnish Pabrai once talked about his observation that most companies with a moat pretend they don't have one, and companies without one pretend they do.

      • By monkeyelite 2025-05-0614:00

        A version of this is emphasized in the thielverse as well. Companies in heavy competition try to intersect all their qualities to appear unique. Dominant companies talk about their portfolio of side projects to appear in heavy competition (space flight, ed tech, etc).

      • By Takennickname 2025-05-068:062 reply

        I don't know how I feel about a tech bro being credited for an idea like this.

        This is originally from The Art of War.

        • By rcxdude 2025-05-0611:07

          It's a specific observation that matches some very general advice from The Art of War, it's not like it's a direct quote from it.

        • By fakedang 2025-05-068:37

          Mohnish isn't a tech bro though, in my books. After selling his company, guy retreated away from the tech scene to get into Buffett-style value investing. And if you read his book, it's about glorifying the small businessmen running motels and garages, who invest bit by bit into the stock market.

      • By anshumankmr 2025-05-069:14

        It's quite true. The closest thing to a moat OpenAI has is the memory feature.

    • By hliyan 2025-05-061:38

      There need to be regulations about deceptive, indirect, purposefully ambiguous, or vague public communication by corporations (or any entity). I'm not an expert in corporate law or finance, but the statement should be:

      "Open AI for-profit LLC will become a Public Benefit Corporation (PBC)"

      followed by: "Profit cap is hereby removed" and finally "The Open AI non-profit will continue to control the PBC. We intend it to be a significant shareholder of the PBC."

    • By lanthissa 2025-05-0519:0810 reply

      AGI can't really be a winner-take-all market. The 'reward' for a monopoly on general intelligence is effectively infinite, and it accelerates productivity.

      Not only is there infinite incentive to compete, but there are decreasing costs to doing so. The only world in which AGI is winner-take-all is a world in which it is so tightly controlled that the public can't query it.

      • By JumpCrisscross 2025-05-0520:574 reply

        > AGI can't really be a winner take all market. The 'reward' for general intelligence is infinite as a monopoly and it accelerates productivity

        The first-mover advantages of an AGI that can improve itself are theoretically unsurmountable.

        But OpenAI doesn't have a path to AGI any more than anyone else. (It's increasingly clear LLMs alone don't make the cut.) And the market for LLMs, non-general AI, is very much not winner takes all. In this announcement, OpenAI is basically acknowledging that it's not getting to self-improving AGI.

        • By tbrownaw 2025-05-0521:471 reply

          > The first-mover advantages of an AGI that can improve itself are theoretically unsurmountable.

          This has some baked assumptions about cycle time and improvement per cycle and whether there's a ceiling.

          • By JumpCrisscross 2025-05-0522:331 reply

            > this has some baked assumptions about cycle time and improvement per cycle and whether there's a ceiling

            To be precise, it assumes a low variability in cycle time and improvement per cycle. If everyone is subjected to the same limits, the first-mover advantage remains insurmountable. I’d also argue that whether there is a ceiling matters less than how high it is. If the first AGI won’t hit a ceiling for decades, it will have decades of fratricidal supremacy.
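
            As a toy sketch of that argument (my own illustration, with made-up numbers): two efforts compound at the same expected improvement per cycle, but one starts a few cycles late. With low variability the head start never closes; only large variability gives the chaser a real chance.

              import random

              def capability_ratio(head_start=10, cycles=100, rate=0.05, jitter=0.1, seed=0):
                  # Leader and chaser improve multiplicatively each cycle; the chaser starts late.
                  rng = random.Random(seed)
                  leader = chaser = 1.0
                  for i in range(cycles):
                      leader *= 1 + rate * (1 + rng.uniform(-jitter, jitter))
                      if i >= head_start:
                          chaser *= 1 + rate * (1 + rng.uniform(-jitter, jitter))
                  return leader / chaser  # > 1 means the first mover is still ahead

              print(capability_ratio(jitter=0.1))  # low variability: the gap persists
              print(capability_ratio(jitter=3.0))  # high variability: the gap can swing either way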

            • By vgb2k18 2025-05-064:39

              > I’d also argue that whether there is a ceiling matters less than how high it is.

              Also how steeply the diminishing-returns curve levels off.

        • By danenania 2025-05-060:15

          I think the foundation model companies are actually poorly situated to reach the leading edge of AGI first, simply because their efforts are fragmented across multiple companies with different specializations—Claude is best at coding, OpenAI at reasoning, Gemini at large context, and so on.

          The most advanced tools are (and will continue to be) at a higher level of the stack, combining the leading models for different purposes to achieve results that no single provider can match using only their own models.

          I see no reason to think this won't hold post-AGI (if that happens). AGI doesn't mean capabilities are uniform.

        • By whoisthemachine 2025-05-0611:21

          I find these assumptions curious. How so? What is the AGI going to do that captures markets? Even if it can take over all desk work, then what? Who is going to consume that? And furthermore (and perhaps more importantly), with it putting everyone out of work, who is going to pay for it?

        • By Davidzheng 2025-05-0612:22

          I'm pretty sure today's models are probably capable of self-improving. It's just that they are not yet as good at self-improving as the combination of programmers improving them with the help of the models.

      • By Night_Thastus 2025-05-0519:126 reply

        Nothing OpenAI is doing, or ever has done, has been close to AGI.

        • By abtinf 2025-05-0519:39

          Agreed and, if anything, you are too generous. They aren’t just not “close”, they aren’t even working in the same category as anything that might be construed as independently intelligent.

        • By pinkmuffinere 2025-05-0519:171 reply

          I agree with you, but that’s kindof beside the point. Open AI’s thesis is that they will work towards AGI, and eventually succeed. In the context of that premise, Open AI still doesn’t believe AGI would be winner-takes-all. I think that’s an interesting discussion whether you believe the premise or not.

          • By Night_Thastus 2025-05-1918:42

            If they're saying it's not 'winner takes all' it's cover to shield them from the fact that they aren't (or may not always be) in the top spot. They would not be saying that if they were undeniably the best - they'd say the opposite.

        • By AndrewKemendo 2025-05-0519:174 reply

          I agree with you

          I wonder, do you have a hypothesis as to what would be a measurement that would differentiate AGI vs Not-AGI?

          • By somenameforme 2025-05-065:393 reply

            Differentiating between AGI and non-AGI, if we ever get remotely close, would be challenging, but for now it's trivial. The defining feature of AGI is recursive self improvement across any field. Without self improvement, you're just regurgitating. Humanity started with no advanced knowledge or even a language. In what should practically be a heartbeat at the speed of distributed computing with perfect memory and computation power, we were landing a man on the Moon.

            So one fundamental difference is that AGI would not need some absurdly massive data dump to become intelligent. In fact you would prefer to feed it as minimal a series of the most primitive first principles as possible because it's certain that much of what we think is true is going to end up being not quite so -- the same as for humanity at any other given moment in time.

            We could derive more basic principles, but this one is fundamental and already completely incompatible with our current direction. Right now we're trying to essentially train on the entire corpus of human writing. That is a de facto acknowledgement that the absolute endgame for current tech is simple mimicry, mistakes and all. It'd create a facsimile of impressive intelligence because no human would have a remotely comparable knowledge base, but it'd basically just be a glorified natural language search engine - frozen in time.

            • By netdevphoenix 2025-05-068:521 reply

              I mostly agree with you. But if you think about it mimicry is an aspect of intelligence. If I can copy you and do what you do reliably, regardless of the method used, it does capture an aspect of intelligence. The true game changer is a reflective AI that can automatically improve upon itself

            • By AndrewKemendo 2025-05-0622:011 reply

              So then it’s something exponentially more capable than the most capable human?

              > So one fundamental difference is that AGI would not need some absurdly massive data dump to become intelligent.

              The first 22 years of life for a “western professional adult” is literally dedicated to a giant bootstrapping info dump

              • By somenameforme 2025-05-074:271 reply

                Your quote is a non sequitur to your question. The reason you want to avoid massive data dumps is because there are guaranteed to be errors and flaws. See things like Alpha Go vs Alpha Go Zero. The former was trained on the entirety of human knowledge, the latter was trained entirely on itself.

                The zero training version not only ended up dramatically outperforming the 'expert' version, but reached higher levels of competence exponentially faster. And that should be entirely expected. There were obviously tremendous flaws in our understanding of the game, and training on those flaws resulted in software seemingly permanently handicapping itself.

                Minimal expert training also has other benefits. The obvious one is that you don't require anywhere near the material and it also enables one to ensure you're on the right track. Seeing software 'invent' fundamental arithmetic is somewhat easier to verify and follow than it producing a hundred page proof advancing, in a novel way, some esoteric edge theory of mathematics. Presumably it would also require orders of magnitude less operational time to achieve such breakthroughs, especially given the reduction in preexisting state.

                • By AndrewKemendo 2025-05-0718:321 reply

                  Think beyond software and current models

                  The moment after human birth, the human agent starts a massive information-gathering process - one that no other system really expects much coherent output from - for 5-10 years. Aka a "data dump": some of that data is good, and some of it is bad. This in turn leads to biases and to poor thinking models; everything that you described is also applicable to every intelligent system - including humans. So again, you're presupposing that there's some kind of perfect information benchmark that couldn't exist.

                  When that system comes out of the birth canal it already has embedded in it millions of years of encoded expectations predictability systems and functional capabilities that are going to grow independent of what the environment does (but will be certainly shaped in its interactions by the environment).

                  So no matter what, you have a structured system of interaction that must be loaded with previously encoded data (experience, transfer learning, etc), and it doesn't matter what type of intelligent system you're talking about: there are foundational assumptions at the physical interaction layer that encode all previous time steps of evolution.

                  Said an easier way: a lobster, because of the encoded DNA that created it, will never have the same capabilities as a human, because it is structured to process information completely differently and their actuators don’t have the same type and level of granularity as human actuators.

                  Now assume that you are a lobster compared to a theoretical AGI in sensor-effector combination. Most likely it would be structured entirely differently than you are as a biological thing - but the mere design itself carries with it an encoding of structural information of all previous systems that made it possible.

                  So by your definition you’re describing something that has never been seen in any system and includes a lot of assumptions about how alternative intelligent systems could work - which is fair because I asked your opinion.

                  • By somenameforme 2025-05-084:281 reply

                    With due respect I do not think you're tackling the fundamental issue, which I do not think is particularly controversial: intelligence and knowledge are distinct things, with the latter created by the former. What we're aiming to do is to create an intelligent system, a system that can create fundamentally new knowledge, and not simply reproduce or remix it on demand.

                    The next time you're in the wilds, it's quite amazing to consider that your ancestors, millennia past, would have looked at more or less these exact same wilds but with so much less knowledge. Yet nonetheless they would discover such knowledge - teaching themselves, and ourselves, to build rockets, put a man on the Moon, unlock the secrets of the atom, and so much more. All from zero.

                    ---

                    What your example and elaboration focus on is the nature of intelligence, and the difficulty in replicating it. And I agree. This is precisely why we want to avoid making the problem infinitely more difficult, costly, and time-consuming by dumping endless amounts of knowledge into the equation.

                    • By AndrewKemendo 2025-05-0815:231 reply

                      Intelligence and knowledge being different things is quite the claim - namely, it sounds like you're stuck in the Cartesian dualist world and haven't transitioned into statistical empiricism.

                      I’m curious what epistemological grounding you are basing your claim on

                      • By somenameforme 2025-05-0817:03

                        I don't understand how you can equate the two and reconcile the past. The individuals who have pushed society forward in this domain or that scarcely, if ever, had any particular knowledge edge. Cases like Ramanujan [1] exemplify such to the point of absurdity.

                        [1] - https://en.wikipedia.org/wiki/Srinivasa_Ramanujan

            • By Closi 2025-05-068:391 reply

              I'm not sure humans meet the definition here.

              If you took the average human from birth and gave them only 'the most primitive first principles', the chance that they would have novel insights into medicine is doubtful.

              I also disagree with your following statement:

              > Right now we're trying to essentially train on the entire corpus of human writing. That is a defacto acknowledgement that the absolute endgame for current tech is simple mimicry

              At worst it's complex mimicry! But I would also say that mimicry is part of intelligence in general and part of how humans discover. It's also easy to see that AI can learn things - you can teach an AI a novel language by feeding a fairly small amount of example words and grammar into its context.

              I also disagree with this statement:

              > One fundamental difference is that AGI would not need some absurdly massive data dump to become intelligent

              I don't think how something became intelligent should affect whether it is intelligent or not. These are two different questions.

              • By Jensson 2025-05-0614:071 reply

                > you can teach an AI a novel language by feeding in a fairly small amount of words and grammar of example text into context.

                You didn't teach it; the model is still the same after you ran that. That is the same as a human following instructions without internalizing the knowledge: he forgets it afterward and hasn't learned from what he performed. If that was all humans did then there would be no point in school etc, but humans do so much more than that.

                As long as LLM are like an Alzheimer's human they will never become a general intelligence. And following instructions is not learning at all, learning is building an internal model for those instructions that is more efficient and general than the instructions themselves, humans do that and that is how we manage to advance science and knowledge.

                • By Closi 2025-05-089:34

                  It depends what you count as learning - you told it something, and it then applied that new knowledge, and if you come back to that conversation in 10 years, it will still have that new knowledge and be able to use it.

                  Then when OpenAI does another training run it can also internalise that knowledge into the weights.

                  This is much like humans - we have short term memory (where it doesn't get into the internal model) and then things get baked into long term memory during sleep. AI's have context-level memory, and then that learning gets baked into the model during additional training.

                  Although whether or not it changed the weights is IMO not a prerequisite for whether something can learn. I think we should be able to evaluate whether something can learn by looking at it as a black box, and we could make a black box which would meet this definition if you spoke to an LLM limited to its max context length each day, and then ran an overnight training run to incorporate the learned knowledge into the weights.
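
                  A rough sketch of that black-box loop (the model object and its generate/finetune methods are hypothetical, just to make the daily cycle concrete):

                    def run_day(model, user_messages):
                        # During the day: converse within the context window only.
                        context = []
                        for user_msg in user_messages:
                            reply = model.generate(context + [user_msg])
                            context += [user_msg, reply]
                        # "Overnight": bake the day's conversation into the weights.
                        model.finetune(context)
                        return model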

          • By 6510 2025-05-0612:201 reply

            It's not much help but when I read "AGI" I picture a fish tank with brains floating in it.

            • By AndrewKemendo 2025-05-0621:58

              Interesting but I’m not sure very instructive

          • By latentsea 2025-05-060:572 reply

            When it can start wars over resources.

        • By voidspark 2025-05-0519:203 reply

          Their multimodal models are a rudimentary form of AGI.

          EDIT: There can be levels of AGI. Google DeepMind have proposed a framework that would classify ChatGPT as "Emerging AGI".

          https://arxiv.org/abs/2311.02462

          • By jrmg 2025-05-061:18

            Ah! Like Full Self Driving!

          • By foobiekr 2025-05-0522:592 reply

            Goalpost moving.

            • By slowmovintarget 2025-05-0523:19

              Thank you.

              "AGI" was already a goalpost move from "AI" which has been gobbled up by the marketing machine.

            • By voidspark 2025-05-060:41

              Nothing to do with moving the goalposts.

              This is current research. The classification of AGI systems is currently being debated by AI researchers.

              It's a classification system for AGI, not a redefinition. It's a refinement.

              Also there is no universally accepted definition of AGI in the first place.

          • By always_imposter 2025-05-0519:345 reply

            AGI would mean something which doesn't need direction or guidance to do anything. Like us humans, we don't wait for somebody to give us a task and go do it as if that is our sole existence. We live with our thoughts, blank out, watch TV, read books etc. What we currently have and possibly in the next century as well will be nothing close to an actual AGI.

            I don't know if it is optimism or delusions of grandeur that drives people to make claims like AGI will be here in the next decade. No, we are not getting that.

            And what do you think would happen to us humans if such AGI is achieved? People's ability to put food on the table depends on their labor exchanged for money. I can guarantee for a fact that work will still be there, but will it be equitable? Available to everyone? Absolutely not. Even UBI isn't going to cut it, because even with UBI people still want to work, as experiments have shown. But with that, the majority of work won't be there, especially paper-pushing mid-level BS like managers on top of managers, etc.

            If we actually get AGI, you know what would be the smartest thing for such an advanced thing to do? It would probably kill itself because it would come to the conclusion that living is a sin and a futile effort. If you are that smart, nothing motivates you anymore. You will be just a depressed mass for all your life.

            That's just how I feel.

            • By buu700 2025-05-0521:513 reply

              I think there's a useful distinction that's often missed between AGI and artificial consciousness. We could conceivably have some version of AI that reliably performs any task you throw at it consistently with peak human capabilities, given sufficient tools or hardware to complete whatever that task may be, but lacks subjective experience or independent agency; I would call that AGI.

              The two concepts have historically been inextricably linked in sci-fi, which will likely make the first AGI harder to recognize as AGI if it lacks consciousness, but I'd argue that simple "unconscious AGI" would be the superior technology for current and foreseeable needs. Unconscious AGI can be employed purely as a tool for massive collective human wealth generation; conscious AGI couldn't be used that way without opening a massive ethical can of worms, and on top of that its existence would represent an inherent existential threat.

              Conscious AGI could one day be worthwhile as something we give birth to for its own sake, as a spiritual child of humanity that we send off to colonize distant or environmentally hostile planets in our stead, but isn't something I think we'd be prepared to deal with properly in a pre-post-scarcity society.

              It isn't inconceivable that current generative AI capabilities might eventually evolve to such a level that they meet a practical bar to be considered unconscious AGI, even if they aren't there yet. For all the flak this tech catches, it's easy to forget that capabilities which we currently consider mundane were science fiction only 2.5 years ago (as far as most of the population was concerned). Maybe SOTA LLMs fit some reasonable definition of "emerging AGI", or maybe they don't, but we've already shifted the goalposts in one direction given how quickly the Turing test became obsolete.

              Personally, I think current genAI is probably a fair distance further from meeting a useful definition of AGI than those with a vested interest in it would admit, but also much closer than those with pessimistic views of the consequences of true AGI tech want to believe.

              • By felipeerias 2025-05-060:221 reply

                One sci-fi example could be based on the replicators from Star Trek, which are able to synthesize any meal on demand.

                It is not hard to imagine a "cooking robot" as a black box that — given the appropriate ingredients — would cook any dish for you. Press a button, say what you want, and out it comes.

                Internally, the machine would need to perform lots of tasks that we usually associate with intelligence, from managing ingredients and planning cooking steps, to fine-grained perception and manipulation of the food as it is cooking. But it would not be conscious in any real way. Order comes in, dish comes out.

                Would we use "intelligent" to describe such a machine? Or "magic"?

                • By svachalek 2025-05-0621:45

                  I immediately thought of Star Trek too, I think the ship's computer was another example of unconscious intelligence. It was incredibly capable and could answer just about any request that anyone made of it. But it had no initiative or motivation of its own.

              • By foobiekr 2025-05-0523:20

                Regarding "We could conceivably have some version of AI that reliably performs any task you throw at it consistently" - it is very clear to anyone who just looks at the recent work by Anthropic analyzing how their LLM "reasons" that such a thing will never come from LLMs without massive unknown changes - and definitely not from scale - so I guess the grandparent is absolute right that openai is nor really working on this.

                It isn't close at all.

              • By voidspark 2025-05-061:15

                That's an important distinction.

                A machine could be super intelligent at solving real world practical tasks, better than any human, without being conscious.

                We don't have a proper definition of consciousness. Consciousness is infinitely more mysterious than measurable intelligence.

            • By lukan 2025-05-0521:422 reply

              It seems like you believe AGI won't come for a long time, because you don't want that to happen.

              The Turing test was passed successfully. Pre-ChatGPT, I would not have believed that it would happen so soon.

              LLMs ain't AGI, sure. But they might be an essential part, and the missing pieces may already have been found, just not put together.

              And there will always be plenty of work. Distributing resources might require new ways, though.

              • By somenameforme 2025-05-064:351 reply

                While I also hold a peer comment's view that the Turing Test is meaningless, I would further add that even that has not been meaningfully beaten.

                In particular we redefined the test to make it passable. In Turing's original concept the competent investigator and participants were all actively expected to collude against the machine. The entire point is that even with collusion, the machine would be able to pass. Instead modern takes have paired incompetent investigators alongside participants colluding with the machine, probably in an effort to be part 'of something historic'.

                In "both" (probably more, referencing the two most high profile - Eugene and the large LLMs) successes, the interrogators consistently asked pointless questions that had no meaningful chance of providing compelling information - 'How's your day? Do you like psychology? etc' and the participants not only made no effort to make their humanity clear, but often were actively adversarial obviously intentionally answering illogically, inappropriately, or 'computery' to such simple questions. And the tests are typically time constrained by woefully poor typing skills (this the new normal in the smartphone gen?) to the point that you tend to get anywhere from 1-5 interactions of a few words each.

                The problem with any metric for something is that it often ends up being gamed to be beaten, and this is a perfect example of that.

                • By lukan 2025-05-068:25

                  I mean, I am pretty sure that I won't be fooled by a bot, if I get the time to ask the right questions.

                  And I did not look into it (I also don't think the test has too much relevance), but fooling the average person sounds plausible by now.

                  Now, sounding plausible is what LLMs are optimized for, not actually being plausible. Still, 10 years ago I would not have thought we would get this far this quickly. So I am very hesitant about the future.

              • By semi-extrinsic 2025-05-0521:531 reply

                > The turing test was succesfull.

                The very people whose theories about language are now being experimentally verified by LLMs, like Chomsky, have also been discrediting the Turing test as pseudoscientific nonsense since early 1990s.

                It's one of those things like the Kardashev scale, or Level 5 autonomous driving, that's extremely easy to define and sounds very cool and scientific, but actually turns out to have no practical impact on anything whatsoever.

                • By lukan 2025-05-0522:07

                  "but actually turns out to have no practical impact on anything whatsoever"

                  Bots that are now almost indistinguishable from humans won't have a practical impact? I am sceptical. And not just because of scammers.

            • By thayne 2025-05-064:58

              > I can guarantee for a fact, that work will still be there but will it be equitable? Available to everyone?

              I don't think there has ever been a time in history when work has been equitable and available to everyone.

              Of course, that isn't to say that AI can't make it worse than it is now.

            • By henryfjordan 2025-05-0520:591 reply

              > AGI would mean something which doesn't need direction or guidance to do anything. Like us humans, ...

              Name me a human that also doesn't need direction or guidance to do a task, at least one they haven't done before

              • By JumpCrisscross 2025-05-0521:181 reply

                > Name me a human that also doesn't need direction or guidance to do a task, at least one they haven't done before

                Literally everything that's been invented.

                • By tyg13 2025-05-0523:461 reply

                  I feel like, if nothing else, this new wave of AI products is rapidly demonstrating the lack of faith people have in their own intelligence -- or maybe, just the intelligence of other human beings. That's not to say that this latest round of AI isn't impressive, but legions of apologists seem to forget that there is more to human cognition than being able to regurgitate facts, write grammatically-correct sentences, and solve logical puzzles.

                  • By JumpCrisscross 2025-05-060:121 reply

                    > legions of apologists seem to forget that there is more to human cognition than being able to regurgitate facts, write grammatically-correct sentences, and solve logical puzzles

                    To be fair, there is a section of the population whose useful intelligence can roughly be summed up as that or worse.

                    • By tyg13 2025-05-060:371 reply

                      I think this takes an unnecessarily narrow view of what "intelligence" implies. It conflates "intelligence" with fact-retention and communicative ability. There are many other intelligent capabilities that most normally-abled human beings possess, such as:

                      - Processing visual data and classifying objects within their field of vision.

                      - Processing auditory data, identifying audio sources and filtering out noise.

                      - Maintaining an on-going and continuous stream of thoughts and emotions.

                      - Forming and maintaining complex memories on long-term and short-term scales.

                      - Engaging in self-directed experimentation or play, or forming independent wants/hopes/desires.

                      I could sit here all day and list the forms of intelligence that humans and other intelligent animals display which have no obvious analogue in an AI product. It's true that individual AI products can do some of these things, sometimes better than humans could ever, but there is no integrated AGI product that has all these capabilities. Let's give ourselves a bit of credit and not ignore or flippantly dismiss our many intelligent capabilities as "useless."

                      • By JumpCrisscross 2025-05-065:05

                        > It conflates "intelligence" with fact-retention and communicative ability

                        No, I’m using useful problem solving as my benchmark. There are useless forms of intelligence. And that’s fine. But some people have no useful intelligence and show no evidence of the useless kind. They don’t hit any of the bullets you list, there just isn’t that curiosity and drive and—I suspect—capacity to comprehend.

                        I don’t think it’s intrinsic. I’ve seen pets show more curiosity than some folk. But due to nature and nurture, they just aren’t intelligent to any material stretch.

            • By voidspark 2025-05-0520:111 reply

              > AGI would mean something which doesn't need direction or guidance to do anything

              There can be levels of AGI. Google DeepMind have proposed a framework that would classify ChatGPT as "Emerging AGI".

              ChatGPT can solve problems that it was not explicitly trained to solve, across a vast number of problem domains.

              https://arxiv.org/pdf/2311.02462

              The paper is summarized here https://venturebeat.com/ai/here-is-how-far-we-are-to-achievi...

              • By dom96 2025-05-0520:234 reply

                This constant redefinition of what AGI means is really tiring. Until an AI has agency, it is nothing but a fancy search engine/auto completer.

                • By latentsea 2025-05-0521:421 reply

                  I agree. AGI is meaningless as a term if it doesn't mean completely autonomous agentic intelligence capable of operating on long-term planning horizons.

                  Edit: because if "AGI" doesn't mean that... then what means that and only that!?

                  • By ben_w 2025-05-0522:341 reply

                    > Edit: because if "AGI" doesn't mean that... then what means that and only that!?

                    "Agentic AI" means that.

                    Well, to some people, anyway. And even then, people are already arguing about what counts as agency.

                    That's the trouble with new tech, we have to invent words for new stuff that was previously fiction.

                    I wonder, did people argue if "horseless carriages" were really carriages? And "aeroplane" how many argued that "plane" didn't suit either the Latin or Greek etymology for various reasons?

                    We never did rename "atoms" after we split them…

                    And then there's plain drift: Traditional UK Christmas food is the "mince pie", named for the filling, mincemeat. They're usually vegetarian and sometimes even vegan.

                    • By latentsea 2025-05-0523:301 reply

                      Agents can operate in narrow domains too though, so to fit the G part of AGI the agent needs to be non-domain specific.

                      It's kind of a simple enough concept... it's really just something that functions on par with how we do. If you've built that, you've built AGI. If you haven't built that, you've built a very capable system, but not AGI.

                      • By ben_w 2025-05-0714:28

                        > Agents can operate in narrow domains too though, so to fit the G part of AGI the agent needs to be non-domain specific.

                        "Can", but not "must". The difference between an LLM being harnessed to be a customer service agent, or a code review agent, or a garden planning agent, can be as little as the prompt.

                        And in any case, the point was that the concept of "completely autonomous agentic intelligence capable of operating on long-term planning horizons" is better described by "agentic AI" than by "AGI".

                        > It's kind of a simple enough concept... it's really just something that functions on par with how we do.

                        "On par with us" is binary thinking — humans aren't at the same level as each other.

                        The problem we have with LLMs is the "I"*, not the "G". The problem we have with AlphaGo and AlphaFold is the "G", not the ultimate performance (which is super-human, an interesting situation given AlphaFold is a mix of Transformer and Diffusion models).

                        For many domains, getting a degree (or passing some equivalent professional exam) is just the first step, and we have a long way to go from there to being trusted to act competently, let alone independently. Someone who started a 3-year degree just before ChatGPT was released, will now be doing their final exams, and quite a lot of LLMs operate like they have just about scraped through degrees in almost everything — making them wildly superhuman with the G.

                        The G-ness of an LLM only looks bad when compared to all of humanity collectively; they are wildly more general in their capabilities than any single one of us — there are very few humans who can even name as many languages as ChatGPT speaks, let alone speak them.

                        * they need too many examples, only some of that can be made up for by the speed difference that lets machines read approximately everything

                • By bentcorner 2025-05-0523:561 reply

                  > Until an AI has agency, it is nothing but a fancy search engine/auto completer.

                  Stepping back for a moment - do we actually want something that has agency?

                • By ben_w 2025-05-0522:27

                  Unless you can define "agency", you're opening yourself to being called nothing more than a fancy chemical reaction.

                • By voidspark 2025-05-0520:301 reply

                  It's not a redefinition, it's a refinement.

                  Think about it - the original definition of AGI was basically a machine that can do absolutely anything at a human level of intelligence or better.

                  That kind of technology wouldn't just appear instantly in a step change. There would be incremental progress. How do you describe the intermediate stages?

                  What about a machine that can do anything better than the 50th percentile of humans? That would be classified as "Competent AGI", but not "Expert AGI" or ASI.

                  > fancy search engine/auto completer

                  That's an extreme oversimplification. By the same reasoning, so is a person: they are just auto-completing words when they speak. No, that's not how deep learning systems work. It's not autocomplete.

                  • By JumpCrisscross 2025-05-0520:591 reply

                    > It's not a redefinition, it's a refinement

                    It's really not. The Space Shuttle isn't an emerging interstellar spacecraft, it's just a spacecraft. Throwing emerging in front of a qualifier to dilute it is just bullshit.

                    > By the same reasoning, so is a person. They are just auto completing words when they speak.

                    We have no evidence of this. There is a common trope across cultures and history of characterising human intelligence in terms of the era's cutting-edge technology. We did it with steam engines [1]. We did it with computers [2]. We're now doing it with large language models.

                    [1] http://metaphors.iath.virginia.edu/metaphors/24583

                    [2] https://www.frontiersin.org/journals/ecology-and-evolution/a...

                    • By voidspark 2025-05-0521:062 reply

                      Technically it is a refinement, as it distinguishes levels of performance.

                      The General Intelligence part of AGI refers to its ability to solve problems that it was not explicitly trained to solve, across many problem domains. We already have examples of the current systems doing exactly that - zero shot and few shot capabilities.

                      > We have no evidence of this.

                      That's my point. Humans are not "autocompleting words" when they speak.

                      • By JumpCrisscross 2025-05-0521:211 reply

                        > Technically it is a refinement, as it distinguishes levels of performance

                        No, it's bringing something out of scope into the definition. Gluten-free means free of gluten. Gluten-free bagel verus sliced bread is a refinement--both started out under the definition. Glutinous bread, on the other hand, is not gluten free. As a result, "almost gluten free" is bullshit.

                        > That's my point. Humans are not "autocompleting words" when they speak

                        Humans are not. LLMs are. It turns out that's incredibly powerful! But it's also limiting in a way that's fundamentally important to the definition of AGI.

                        LLMs bring us closer to AGI in the way the inventions of writing, computers and the internet probably have. Calling LLMs "emerging AGI" pretends we are on a path to AGI in a way we have zero evidence for.

                        • By voidspark 2025-05-0521:261 reply

                          > Gluten-free means free of gluten.

                          Bad analogy. That's a binary classification. AGI systems can have degrees of performance and capability.

                          > Humans are not. LLMs are.

                          My point is that if you oversimplify LLMs to "word autocompletion" then you can make the same argument for humans. It's such an oversimplification of the transformer / deep learning architecture that it becomes meaningless.

                          • By JumpCrisscross 2025-05-0521:352 reply

                            > That's a binary classification. AGI systems can have degrees of performance and capability

                            The "g" in AGI requires the AI be able to perform "the full spectrum of cognitively demanding tasks with proficiency comparable to, or surpassing, that of humans" [1]. Full and not full are binary.

                            > if you oversimplify LLMs to "word autocompletion" then you can make the same argument for humans

                            No, you can't, unless you're pre-supposing that LLMs work like human minds. Calling LLMs "emerging AGI" pre-supposes that LLMs are the path to AGI. We simply have no evidence for that, no matter how much OpenAI and Google would like to pretend it's true.

                            [1] https://en.wikipedia.org/wiki/Artificial_general_intelligenc...

                            • By og_kalu 2025-05-060:191 reply

                              Why are you linking a Wikipedia page like it's ground zero for the term? Especially when neither of the articles the page links to in order to justify that definition sees the term as a binary accomplishment.

                              The G in AGI is General. I don't know in what world you think generality isn't a spectrum, but it sure as hell isn't this one.

                              • By voidspark 2025-05-061:00

                                That's right, and the Wikipedia page refers to the classification system:

                                "A framework for classifying AGI by performance and autonomy was proposed in 2023 by Google DeepMind researchers. They define five performance levels of AGI: emerging, competent, expert, virtuoso, and superhuman"

                                In the second paragraph:

                                "Some researchers argue that state‑of‑the‑art large language models already exhibit early signs of AGI‑level capability, while others maintain that genuine AGI has not yet been achieved."

                                The entire article makes it clear that the definitions and classifications are still being debated and refined by researchers.

                            • By voidspark 2025-05-0521:461 reply

                              Then you are simply rejecting any attempts to refine the definition of AGI. I already linked to the Google DeepMind paper. The definition is being debated in the AI research community. I already explained that definition is too limited because it doesn't capture all of the intermediate stages. That definition may be the end goal, but obviously there will be stages in between.

                              > No, you can't, unless you're pre-supposing that LLMs work like human minds.

                              You are missing the point. If you reduce LLMs to "word autocompletion" then you completely ignore the attention mechanism and conceptual internal representations. These systems have deep learning models with hundreds of layers and trillions of weights. If you completely ignore all of that, then by the same reasoning (completely ignoring the complexity of the human brain) we can just say that people are auto-completing words when they speak.

                              • By JumpCrisscross 2025-05-0522:471 reply

                                > I already linked to the Google DeepMind paper. The definition is being debated in the AI research community

                                Sure, Google wants to redefine AGI so it looks like things that aren’t AGI can be branded as such. That definition is, correctly in my opinion, being called out as bullshit.

                                > obviously there will be stages in between

                                We don’t know what the stages are. Folks in the 80s were similarly selling their expert systems as a stage to AGI. “Emerging AGI” is a bullshit term.

                                > If you reduce LLMs to "word autocompletion" then you completely ignore the the attention mechanism and conceptual internal representations. These systems have deep learning models with hundreds of layers and trillions of weights

                                Fair enough, granted.

                                • By voidspark 2025-05-060:46

                                  > Sure, Google wants to redefine AGI

                                  It is not a redefinition. It's a classification for AGI systems. It's a refinement.

                                  Other researchers are also trying to classify AGI systems. It's not just Google. Also, there is no universally agreed definition of AGI.

                                  > We don’t know what the stages are. Folks in the 80s were similarly selling their expert systems as a stage to AGI. “Emerging AGI” is a bullshit term.

                                  Generalization is a formal concept in machine learning. There can be degrees of generalized learning performance. This is actually measurable. We can compare the performance of different systems.

        • By dr_dshiv 2025-05-0521:345 reply

          https://www.noemamag.com/artificial-general-intelligence-is-...

          Here is a mainstream opinion about why AGI is already here. Written by one of the authors of the most widely read AI textbook, Artificial Intelligence: A Modern Approach: https://en.wikipedia.org/wiki/Artificial_Intelligence:_A_Mod...

          • By henryfjordan 2025-05-0521:485 reply

            Why does the Author choose to ignore the "General" in AGI?

            Can ChatGPT drive a car? No, we have specialized models for driving vs generating text vs image vs video etc etc. Maybe ChatGPT could pass a high school chemistry test but it certainly couldn't complete the lab exercises. What we've built is a really cool "Algorithm for indexing generalized data", so you can train that Driving model very similarly to how you train the Text model without needing to understand the underlying data that well.

            The author asserts that because ChatGPT can generate text about so many topics, it's general; but it's really only doing one thing, and that's not very general.

            • By brookst 2025-05-0521:562 reply

              There are people who can’t drive cars. Are they not general intelligence?

              I think we need to separate the thinking part of intelligence from tool usage. Not everyone can use every tool at a high level of expertise.

              • By root_axis 2025-05-0522:333 reply

                Generally speaking, anyone can learn to use any tool. This isn't true of generative AI systems which can only learn through specialized training with meticulously curated data sets.

                • By _heimdall 2025-05-064:581 reply

                  People physically unable to use the tool can't learn to use it. This isn't necessarily my view, but one could make a pretty easy argument that the LLMs we have today can't drive a car only because they aren't physically able to control the car.

                  • By root_axis 2025-05-066:321 reply

                    > but one could make a pretty easy argument that the LLMs we have today can't drive a car only because they aren't physically able to control the car.

                    Of course they can. We already have computer controlled car systems, the reason LLMs aren't used to drive them is because AI systems that specialize in text are a poor choice for driving - specialized driving models will always outperform them for a variety of technical reasons.

                    • By _heimdall 2025-05-0618:491 reply

                      We have computer-controlled automobiles, not LLM-controlled automobiles.

                      That was my whole point. Maybe in theory an LLM could learn to drive a car, but they can't today because they don't physically have access to cars they could try to drive, just like a person who can't learn to use a tool because they're physically unable to use it.

                      • By root_axis 2025-05-0619:16

                        It doesn't make sense to connect an LLM to a car; that could never work, because they are trained offline using curated data sets.

                • By Muromec 2025-05-0523:521 reply

                  >can only learn through specialized training with meticulously curated data sets.

                  but so do I!

                  • By root_axis 2025-05-066:44

                    This isn't true. A curated data set can greatly increase learning efficiency in some cases, but it's not strictly necessary and represents only a fraction of how people learn. Additionally, all curated data sets were created by humans in the first place, a feat that language models could never achieve if we did not program them to do so.

            • By ben_w 2025-05-0522:16

              Generality is a continuous value, not a boolean; it turned out that "AGI" was poorly defined, and because of that most people were putting the cut-off threshold in different places.

              Likewise for "intelligent", and even "artificial".

              So no, ChatGPT can't drive a car*. But it knows more about car repairs, defensive driving, global road features (geoguesser), road signs in every language, and how to design safe roads, than I'm ever likely to.

              * It can also run python scripts with machine vision stuff, but sadly that's still not sufficient to drive a car… well, to drive one safely, anyway.

            • By Nuzzerino 2025-05-0522:20

              Text can be a carrier for any type of signal. The problem gets reduced to that of an interface definition. It’s probably not going to be ideal for driving cars, but if the latency, signal quality, and accuracy are within acceptable constraints, what else is stopping it?

              This doesn’t imply that it’s ideal for driving cars, but to say that it’s not capable of driving, or not general intelligence, is incorrect in my view.

            • By KHRZ 2025-05-0522:171 reply

              You can literally today prompt ChatGPT with API instructions to drive a car, then feed it images of a car's window outlooks and have it generate commands for the car (JSON schema restricted structured commands if you like). Text can represent any data thus yes, it is general.
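
              As a rough sketch (assuming the current OpenAI Python SDK and its Structured Outputs feature; the model name, schema, and image handling are illustrative placeholders, not a working driving stack), the shape of such a loop might look like:

                import base64
                from openai import OpenAI  # assumes OPENAI_API_KEY is set

                client = OpenAI()

                # JSON schema the model's reply must conform to
                schema = {
                    "type": "object",
                    "properties": {
                        "steering_angle_deg": {"type": "number"},
                        "throttle": {"type": "number"},
                        "brake": {"type": "number"},
                    },
                    "required": ["steering_angle_deg", "throttle", "brake"],
                    "additionalProperties": False,
                }

                def command_for_frame(jpeg_bytes: bytes) -> str:
                    b64 = base64.b64encode(jpeg_bytes).decode()
                    resp = client.chat.completions.create(
                        model="gpt-4o",  # placeholder model name
                        messages=[
                            {"role": "system", "content": "You control a car. Reply with one driving command."},
                            {"role": "user", "content": [
                                {"type": "text", "text": "Front camera frame:"},
                                {"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
                            ]},
                        ],
                        response_format={
                            "type": "json_schema",
                            "json_schema": {"name": "drive_command", "schema": schema, "strict": True},
                        },
                    )
                    # a JSON string constrained to match the schema above
                    return resp.choices[0].message.content

              Nobody should actually drive a car this way; the point is only that text I/O plus a schema can carry arbitrary structured control signals.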

              • By threeseed 2025-05-0522:261 reply

                > JSON schema restricted structured commands if you like

                How about we have ChatGPT start with a simple task like reliably generating JSON schema when asked to.

                Hint: it will fail.

                • By voidspark 2025-05-061:221 reply

                  ChatGPT can write a working Python script to generate the JSON. It can call a library to do that.
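
                  For instance, a minimal sketch of the sort of script it tends to produce (using the standard json module plus the third-party jsonschema package; the schema and record here are arbitrary examples):

                    import json
                    from jsonschema import validate  # pip install jsonschema

                    schema = {
                        "type": "object",
                        "properties": {"name": {"type": "string"}, "age": {"type": "integer"}},
                        "required": ["name", "age"],
                    }

                    record = {"name": "Ada", "age": 36}
                    validate(instance=record, schema=schema)  # raises ValidationError on mismatch
                    print(json.dumps(record, indent=2))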

                  • By dd_xplore 2025-05-0617:171 reply

                    But it cannot think on its own! Billions of years of evolution couldn't bring human-level 'AGI' to many, many species, and we think a mere LLM company could do so. AGI isn't just a language model; there are tons of things baked into DNA (the way the brain functions, its structure as it grows, etc.). It's not simply neuron interactions either. The complexity is mind-boggling.

                    • By voidspark 2025-05-0721:11

                      Humans and other primates are only a few million years apart. Animals are quite intelligent.

            • By voidspark 2025-05-062:51

              The latest models are natively multimodal. Gemini, GPT-4o, Llama 4.

              Same model trained on audio, video, images, text - not separate specialized components stitched together.

          • By lossolo 2025-05-0522:221 reply

            > AGI is already here

            Last time I checked, in an Anthropic paper, they asked the model to count something. They examined the logits and a graph showing how it arrived at the answer. Then they asked the model to explain its reasoning, and it gave a completely different explanation, because that was the most statistically probable response to the question. Does that seem like AGI to you?

            • By Muromec 2025-05-0523:552 reply

              That's exactly what I would expect from a lot of people. Post factum rationalization is a thing.

              • By dr_dshiv 2025-05-064:58

                Exactly. A lot of these arguments end up dehumanizing people because our own intelligence doesn’t hit the definition

              • By lossolo 2025-05-0611:491 reply

                There is no post factum rationalization here. If you ask a human to think about how they do something before they do it, there's no post factum rationalization. If you ask an LLM to do the same, it will give you a different answer. So, there is a difference. It's all about having knowledge of your internal state and being conscious of your actions and how you perform them, so you can learn from that knowledge. Without that, there is no real intelligence, just statistics.

                • By Filligree 2025-05-0614:191 reply

                  If you ask a human to think about how to do a thing, before they do it, then you will also get a different answer.

                  There’s a good reason why schools spend so much time training that skill!

                  • By Jensson 2025-05-0614:35

                    Yes, humans can post-rationalize. But an LLM does nothing but post-rationalize. As you yourself admitted, humans can think it through beforehand and then actually do what they planned, while an LLM won't follow that plan mentally.

                    It is easy to see why, since the LLM doesn't communicate what it thinks; it communicates what it thinks a human would communicate. A human would explain their inner process, and then go through that inner process. An LLM would explain a human's inner process, and then generate a response using a totally different process.

                    So while it's true that humans don't have perfect introspection, the fact that we have introspection about our own thoughts at all is extremely impressive. An LLM has no part that analyzes its own thoughts the way humans do, meaning it has no clue how it thinks.

                    I have no idea how you would even build introspection into an AI, like how are we able to analyze our own thoughts? What is even a thought? What would this introspection part of an LLM do, what would it look like, would it identify thoughts and talk about them the way we do? That would be so cool, but that is not even on the horizon, I doubt we will ever see that in our lifetime, it would need some massive insight changing the AI landscape at its core to get there.

                    But, once you have that introspection I think AGI will happen almost instantly. Currently we use dumb math to train the model, that introspection will let the model train itself in an intelligent way, just like humans do. I also think it will never fully replace humans without introspection, intelligent introspection seems like a fundamental part to general intelligence and learning from chaos.

          • By brendoelfrendo 2025-05-0521:46

            I would argue that this is a fringe opinion that has been adopted by a mainstream scholar, not a mainstream opinion. That or, based on my reading of the article, this person is using a definition of AGI that is very different than the one that most people use when they say AGI.

          • By root_axis 2025-05-0522:11

            "AGI is already here, just wait 30 more years". Not very convincing.

          • By semi-extrinsic 2025-05-0521:48

            ... that was written in mid-2023. So that opinion piece is trying to redefine 2 year old LLMs like GPT-4 (pre-4o) as AGI. Which can only be described as an absolutely herculean movement of goalposts.

        • By kvetching 2025-05-0523:523 reply

          Please, keep telling people that. For my sake. Keep the world asleep as I take advantage of this technology which is literally General Artificial Intelligence that I can apply towards increasing my power.

          • By throwaway48476 2025-05-0523:54

            Every tool is a technology that can increase one's power.

          • By 6510 2025-05-0612:27

            That is just what it wants you to think.

      • By aeternum 2025-05-0522:224 reply

        Remember however that their charter specifies: "If a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project"

        It does have some weasel words around value-aligned and safety-conscious, which they can always argue, but this could get interesting because they've basically agreed not to compete. A fairly insane thing to do, in retrospect.

        • By foobiekr 2025-05-0523:16

          They will just define away all of those terms to make that not apply.

        • By ljouhet 2025-05-064:091 reply

          Who defines "value-aligned, safety-conscious project"?

          "Instead of our current complex non-competing structure—which made sense when it looked like there might be one dominant AGI effort but doesn’t in a world of many great AGI companies—we are moving to a normal competing structure where ..." is all it takes

          • By _heimdall 2025-05-064:561 reply

            Most likely the same people who define "all natural chicken" - the company that creates the term.

            • By cellis 2025-05-065:26

              I actually lol-ed at that. It's like asking the inventor of a religion who goes to heaven.

      • By TeMPOraL 2025-05-0519:171 reply

        AGI could be a winner-take-all market... for the AGI, specifically for the first one that's General and Intelligent enough to ensure its own survival and prevent competing AGI efforts from succeeding...

        • By pdxandi 2025-05-0521:493 reply

          How would an AGI prevent others from competing? Sincere question. That seems like something that ASI would be capable of. If another company released an AGI, how would the original stifle it? I get that the original can self-improve to try to stay ahead, but that doesn't necessarily mean it self-improves the best or most efficiently, right?

          • By TeMPOraL 2025-05-067:49

            AGI used to be synonymous with ASI; it's still unclear to me that it's even possible to build a sufficiently general AI - that is, as general as humans - without it being an ASI just by virtue of being in silico, thus not being constrained in scale or efficiency like our brains are.

          • By amelius 2025-05-068:36

            Well, it could pretend to be playing 4d chess and meanwhile destroy the economy and from there take over the world.

          • By robinei 2025-05-067:30

            If it was first, it could have self-improved more, to the point that it has the capacity to prevent competition, while the competition does not have the capacity to defend itself against superior AGI. This all is so hypothetical and frankly far from what we're seeing in the market now. Funny how we're all discussing dystopian scifi scenarios now.

      • By jsnider3 2025-05-0613:35

        Homo sapiens wiped out every other intelligent hominid, and every other species on Earth exists at our mercy. That looks a lot like the winners (humans) taking all.

      • By dragonwriter 2025-05-0616:14

        Well, yeah, the world in which it is winner-take-all is the one where it accelerates productivity so much that the first firm to achieve it doesn't provide access to its full capabilities directly to outsiders, but uses them itself and conquers every other field of endeavor.

        That's always been pretty overtly the winner-take-all AGI scenario.

      • By amelius 2025-05-068:32

        You can say the same thing about big companies hiring all the smart people and somehow we think that's ok.

      • By babyshake 2025-05-060:101 reply

        AGI can be winner take all. But winner take all AGI is not aligned with the larger interests of humanity.

        • By NoOn3 2025-05-0620:01

          Modern corporations don't seem to care about humanity...

      • By TheOtherHobbes 2025-05-060:47

        AGI might not be fungible. From the trends today it's more likely there will be multiple AGIs with different relative strengths and weaknesses, different levels of accessibility and compliance, different development rates, and different abilities to be creative and surprising.

    • By sz4kerto 2025-05-0518:371 reply

      Or they consider themselves to have a low(er) chance of winning. They could think either, but they obviously can't say the latter.

      • By bhouston 2025-05-0518:577 reply

        OpenAI is winning in a similar way that Apple is winning in smartphones.

        OpenAI is capturing most of the value in the space (generic LLM models), even though they have competitors who are beating them on price or capabilities.

        I think OpenAI may be able to maintain this position at least for the medium term because of their name recognition/prominence and they are still a fast mover.

        I also think the US is going to ban all non-US LLM providers from the US market soon for "security reasons."

        • By KerrAvon 2025-05-063:08

          Apple is not the right analogy. OpenAI has first mover advantage and they have a widely recognized brand name — ChatGPT — and that’s kind of it. Anyone (with very deep pockets) can buy Nvidia chips and go to town if they have a better or equivalent idea. There was a brief time (long before I was born) when “Univac” was synonymous with “computer.”

        • By retrorangular 2025-05-0520:05

          > I also think the US is going to ban all non-US LLM providers from the US market soon for "security reasons."

          Well Trump is interested in tariffing movies and South Korea took DeepSeek off mobile app stores, so they certainly may try. But for high-end tasks, DeepSeek R1 671B is available for download, so any company with a VPN to download it and the necessary GPUs or cloud credits can run it. And for consumers, DeepSeek V3's distilled models are available for download, so anyone with a (~4 year old or newer) Mac or gaming PC can run them.

          If the only thing keeping these companies valuations so high is banning the competition, that's not a good sign for their long-term value. If you have to ban the competition, you can't be feeling good about what you're making.

          For what it's worth, I think GPT o3 and o1, Gemini 2.5 Pro and Claude 3.7 Sonnet are good enough to compete. DeepSeek R1 is often the best option (due to cost) for tasks that it can handle, but there are times where one of the other models can achieve a task that it can't.

          But if the US is looking to ban Chinese models, then that could suggest that maybe these models aren't good enough to raise the funding required for newer, significantly better (and more expensive) models. That, or they just want to stop as much money as possible from going to China. Banning the competition actually makes the problem worse though, as now these domestic companies have fewer competitors. But I somewhat doubt there's any coherent strategy as to what they ban, tariff, etc.

        • By tim333 2025-05-067:07

          Big difference - Apple makes billions from smartphones, getting most of the industry's profits, which makes it hard to compete with.

          OpenAI loses billions and is at the mercy of getting new investors to fund the losses. It has many plausible competitors.

        • By screamingninja 2025-05-0519:093 reply

          > ban all non-US LLM providers

          What do you consider an "LLM provider"? Is it a website where you interact with a language model by uploading text or images? That definition might become too broad too quickly. Hard to ban.

          • By bhouston 2025-05-0519:25

            I don't have to imagine. There are various US bills trying to achieve this ban. Here is one of them:

            https://www.theregister.com/2025/02/03/us_senator_download_c...

            One of them will eventually pass given that OpenAI is also pushing for protection:

            https://futurism.com/openai-ban-chinese-ai-deepseek

          • By slt2021 2025-05-0519:13

            the bulk of money comes from enterprise users. Just need to call the 500 CEOs on the S&P 500 list and enforce it via "cyber data safety" rules from the SEC or something like that.

            everyone will roll over if all large public companies roll over (and they will)

          • By babelfish 2025-05-0519:27

            rather than coming up with a thorough definition, legislation will likely target individual companies (DeepSeek, Alibaba Cloud, etc)

        • By jjani 2025-05-0519:11

          IE once captured all of the value in browserland, with even much higher mindshare and market dominance than OpenAI has ever had. Comparing with Apple (= physical products) is Apples to oranges (heh).

          Their relationship with MS breaking down is a bad omen. I'm already seeing non-tech users who use "Copilot" because their spouse uses it at work, barely knowing it's rebadged GPT. You think they'll switch when MS replaces the backend with e.g. Anthropic? No chance.

          MS, Google and Apple and Meta have gigantic levers to pull and get the whole world to abandon OpenAI. They've barely been pulling them, but it's a matter of time. People didn't use Siri and Bixby because they were crap. Once everyone's Android has a Gemini button that's just as good as GPT (which it already is (it's better) for anything besides image generation), people are going to start pressing them. And good luck to OpenAI fighting that.

        • By pphysch 2025-05-0520:21

          Switching between Apple and Google/Android ecosystems is expensive and painful.

          Switching from ChatGPT to the many competitors is neither expensive nor painful.

        • By wincy 2025-05-0519:17

          Companies that are contractors with the US government already aren’t allowed to use DeepSeek, even if it’s an airgapped R1 model running on our own hardware. Legal told us we can’t run any distills of it or anything. I think this is very dumb.

    • By ignoramous 2025-05-0518:541 reply

      > I think this is one of the most interesting lines as it basically directly implies that leadership thinks this won't be a winner take all market:

      Yeah; and:

        We want to open source very capable models. 
      
      Seems like having nary any daylight between DeepSeek R1, Sonnet 3.5, Gemini 2.5, & Grok 3 really put things in perspective for them!

      • By kvetching 2025-05-0519:36

        Not to mention, @Gork, aka Grok 3.5...

    • By jrvarela56 2025-05-061:26

      Not saying this is OpenAI's case, but every monopolist claims they are not a monopolist...

    • By raincole 2025-05-0522:07

      Even if they think it will be a winner-take-all market, they won't say it out loud. It would be begging for antitrust lawsuits.

    • By whiplash451 2025-05-069:20

      I read this line as : we were completely off the chart from a corp structure standpoint.

      We need to get closer to the norm and give shares of a for-profit to employees in order to create retention.

    • By sensanaty 2025-05-0523:082 reply

      Lmaoing at their casual use of AGI as if they or any of their competitors are anywhere near it.

      • By DirkH 2025-05-0821:50

        Proposition:

        Please promise to come back to this comment in 2030 and playfully mock me for ever being worried and I will buy you a coffee. If AGI is invented before 2030 please buy me one and let me mock you playfully.

      • By addandsubtract 2025-05-060:081 reply

        If you change the definition of AGI, we're already there!

        • By infairverona 2025-05-061:41

          Damn, didn't know my Casio FX-300 was AGI, good to know!

    • By dingnuts 2025-05-0518:423 reply

      to me it sounds like an admission that AGI is bullshit! AGI would be so disruptive to the current economic regime that "winner takes all" barely covers it, I think. Admitting they will be in normal competition with other AI companies implies specializations and niches to compete, which means Artificial Specialized Intelligence, NOT general intelligence!

      and that makes complete sense if you don't have a lay person's understanding of the tech. Language models were never going to bring about "AGI."

      This is another nail in the coffin

      • By lenerdenator 2025-05-0518:451 reply

        That, or they don't care if they get to AGI first, and just want their payday now.

        Which sounds pretty in-line with the SV culture of putting profit above all else.

        • By foobiekr 2025-05-0518:491 reply

          If they think AGI is imminent, the value of that payday is very limited. I think the grandparent is more correct: OpenAI is admitting that near-term AGI - and the only kind anyone really cares about is the kind with exponential self-improvement - isn't happening any time soon. But that much is obvious anyway, despite the hyperbolic nonsense now common around AI discussions.

          • By lenerdenator 2025-05-0518:541 reply

            Define "imminent".

            If I were a person like several of the people working on AI right now (or really, just heading up tech companies), I could be the kind to look at a possible world-ending event happening in the next - eh, year, let's say - and just want to have a party at the end of the world.

            Five years to ten years? Harder to predict.

            • By foobiekr 2025-05-0519:02

              Imminent means "in a timeframe meaningful to the individual equity holders this change is about."

              The window there would at _least_ include the next 5 years, though obviously not ten.

      • By the_duke 2025-05-0519:0010 reply

        AGI is matter of when, not if.

        It will likely require research breakthroughs, significant hardware advancement, and anything from a few years to a few decades. But it's coming.

        ChatGPT was released 2.5 years ago, and look at all the crazy progress that has been made in that time. That doesn't mean that the progress has to continue, we'll probably see a stall.

        But AIs that are on a level with humans for many common tasks are not that far off.

        • By runako 2025-05-0519:091 reply

          Either that, or this AI boom mirrors prior booms. Those booms saw a lot of progress made, a lot of money raised, then collapsed and led to enough financial loss that AI went into hibernation for 10+ years.

          There's a lot of literature on this, and if you've been in the industry for any amount of time since the 1950s, you have seen at least one AI winter.

          • By tim333 2025-05-0610:23

            But the Moore's-law-like growth in compute/$ chugs along, boom or bust.

        • By bdangubic 2025-05-0519:192 reply

          AGI is matter of when, not if

          Probably true, but this statement would also be true if "when" is 2308, which would defeat the purpose of the statement. When the first cars started rolling around, some mates around the campfire were saying "not if but when" we'll have flying cars everywhere, and 100 years later (with amazing progress in car manufacturing) we are nowhere near… I think saying "when, not if" is one of those statements that, while probably indisputable in theory, is easily disputable in practice. Give me a "when" here and I'll put up $1,000 to a charity of your choice if you are right, if you agree to do the same thing if you're wrong.

          • By tim333 2025-05-0610:29

            If you look at Our World in Data's "Test scores of AI systems on various capabilities relative to human performance" https://ourworldindata.org/grapher/test-scores-ai-capabiliti...

            you can see a pattern of fairly steady progress in different aspects: they matched humans at image recognition around 2015, for example, while 'complex reasoning' is still much worse than human level but rising.

            Looking at the graph, I'd guess maybe five years before it can do all human skills which is roughly AGI?

            I've got a personal AGI test of being able to fix my plumbing, given a robot body. Which they are way off just now.

          • By dbacar 2025-05-0520:283 reply

            It is already here, kinda. I mean, look at how it passes the bar exam, solves math-olympiad-level questions, generates video, art, music. What else are you looking for? It has already penetrated the job market, causing significant disruption in programming. We are not seeing flying cars, but we are witnessing things that weren't even talked about around the campfire. Seriously, even 4 years ago, would you have thought all this would happen?

            • By dns_snek 2025-05-0612:021 reply

              > What else are you looking for?

              To begin with, systems that don't tell people to use Elmer's glue to keep the cheese from sliding off the pizza, displaying a fundamental lack of understanding of... everything. At minimum it needs to be able to reliably solve hard, unique, but well-defined problems like a group of the most cohesive intelligent people could. It's certainly not AGI until it can do a better job than the most experienced, talented, and intelligent knowledge workers out there.

              Every major advancement (which LLMs certainly are) has caused some disruption in the fields it affected, but that isn't a useful criterion for differentiating a "crude but useful tool" from "AGI".

              • By dbacar 2025-05-0621:061 reply

                The majority of people on earth don't solve hard, unique, but well-defined problems, do we? I don't expect AGI to solve one of Hilbert's list of problems (yet). Your definition of AGI is a bit too imposing. That said, I believe you would get better answers from an LLM than most of the answers you would get from an average human. IMHO the trend is obvious, and we will see if it stalls or keeps the pace.

                • By dns_snek 2025-05-076:55

                  I don't mean "hard" in the sense that it can easily solve novel problems that no living human knows how to solve, although any "general" intelligence should certainly be capable of learning and making progress on these just like a human would, but without the limitations of human memory, attention span, relatively short lifetime, and other human needs.

                  I mean "hard" in the sense that it can reliably replace the best software developers, civil engineers, lawyers, diagnosticians. Not just in the economic sense, but by reliably matching the quality of their work 100% of the time.

                  It should be capable of methodically and reliably arriving at correct answers without expert intervention. It shouldn't be the case that some people claim that they don't know how to code and the LLM generated an entire project for them, while I can confidently claim that LLMs fall flat on their face almost every time I try to use them for more delicate business logic.

            • By bdangubic 2025-05-0521:551 reply

              AGI is here?????! Damn, me, and every other human, must have missed that news… /s

              • By dbacar 2025-05-0621:08

                Such things happen.

        • By manquer 2025-05-0519:23

          Progress is not just a function of technical possibility (even if it exists); it is also a function of economics.

          It has taken tens to hundreds of billions of dollars, without equivalent economic justification (yet), to reach this point. I am not saying economic justification doesn't exist or won't come in the future, just that the upfront investment and risk are already on the order of magnitude of what the largest tech companies can expend.

          If the next generation requires hundreds of billions or trillions [2] upfront and a very long time to make returns, no one company (or even country) could allocate that kind of resources.

          There are many cases of such economically limited innovations[1]; nuclear fusion is the classic "always 20 years away" example. Another close one is anything space related: we cannot replicate in the next 5 years what we already achieved 50 years ago, say landing on the moon, and so on.

          From just an economic perspective it is definitely an "if", without even going into the technology challenges.

          [1] Innovations in the cost of key components can reshape the economics equation; it does happen (as with SpaceX), but it is also not guaranteed, as with fusion.

          [2] The next gen may not be close enough to AGI. AGI could require 2-3 more generations (and equivalent orders of magnitude of resources), which is something the world is unlikely to expend resources on even if it had them.

        • By blibble 2025-05-0519:092 reply

          > AGI is matter of when, not if.

          LLMs destroying any sort of capacity (and incentive) for the population to think pushes this further and further out each day

          • By jwilber 2025-05-0519:59

            I agree that LLMs are hurting the general population's capacity to think (assuming they use them often; I've certainly noticed a slight trend among students I've taught to use less effort, and in myself to some extent).

            I don’t agree that this will affect ML progress much, since the general population isn’t contributing to core ML research.

          • By delecti 2025-05-0520:10

            On the other hand, dumbing down the population also lowers the bar for AGI. /s

        • By foobiekr 2025-05-0519:153 reply

          I think this is right but also missing a useful perspective.

          Most HN people are probably too young to remember that the nanotech post-scarcity singularity was right around the corner - just some research and engineering away - which was the widespread opinion in 1986 (yes, 1986). It was _just as dramatic_ as today's AGI.

          That took 4-5 years to fall apart, and maybe a bit longer for the broader "nanotech is going to change everything" to fade. Did nanotech disappear? No, but the notion of general purpose universal constructors absolutely is dead. Will we have them someday? Maybe, if humanity survives a hundred more years or more, but it's not happening any time soon.

          There are a ton of similarities between the nanotech singularity and the modern LLM-AGI situation. People point(ed) to "all the stuff happening": surely the singularity is on the horizon! Similarly, there was the apocalyptic scenario that got a ton of attention and people latching onto "nanotech safety" - instead of runaway AI or paperclip engines, it was Grey Goo (also coined in 1986).

          The dynamics of the situation, the prognostications, and aggressive (delusional) timelines, etc. are all almost identical in a 1:1 way with the nanotech era.

          I think we will have both AGI and general purpose universal constructors, but they are both no less than 50 years away, and probably more.

          So many of the themes are identical that I'm wondering if it's a recurring kind of mass hysteria. Before nanotech, we were on the verge of genetic engineering (not _quite_ the same level of hype, but close, and pretty much the same failure to deliver on the hype as nanotech) and before that the crazy atomic age of nuclear everything.

          Yes, yes, I know that this time is different and that AI is different and it won't be another round of "oops, this turned out to be very hard to make progress on and we're going to be in a very slow, multi-decade slow-improvement regime", but that has been the outcome of every example of this that I can think of.

          • By quesera 2025-05-0520:581 reply

            I won't go too far out on this limb, because I kind of agree with you... but to be fair -- 1980s-1990s nanotech did not attract this level of investment, nor was it visible to ordinary people, nor was it useful to anyone except researchers and grant writers.

            It seems like nanotech is all around us now, but the term "nanotech" has been redefined to mean something different (larger scale, less amazing) from Drexler's molecular assemblers.

            • By foobiekr 2025-05-0523:00

              Investment was completely different at the time, and interest rates played a huge part in that. VC also wasn't that old in '86.

          • By tbrownaw 2025-05-0522:301 reply

            > Did nanotech disappear? No, but the notion of general purpose universal constructors absolutely is dead. Will we have them someday? Maybe, if humanity survives a hundred more years or more,

            I thought this was a "we know we can't" thing rather than a "not with current technology" thing?

            • By foobiekr 2025-05-0523:08

              Specific cases are probably impossible, though there's always hope. After all, to use the example the nanotech people loved: there are literal assemblers all around you. Whether we can have a singular device that can build anything (probably not - energy limits and many, many other issues) or factories that can work at the atomic scale (maybe) is open, I think. The idea of little robots was kind of visibly silly even at the peak.

              The idea of scaling up LLMs and hoping is .. pretty silly.

          • By jonfromsf 2025-05-0521:181 reply

            Every consumer has very useful AI at their fingertips right now. It's eating the software engineering world rapidly. This is nothing like nanotech in the 80s.

            • By Yizahi 2025-05-0521:46

              Sure. But fancy autocomplete for a very limited industry (IT), plus graphics generation and a few more similar items, is indeed useful. Just like "nanotech" coatings on, say, optics, or in precision machinery, or all the other fancy nano films in many industries. Modern transistors are close to nanoscale now, etc.

              The problem is that the distance between a nano-thin film, or an interesting but ultimately rigid nanoscale transistor, and a programmable nanoscale robot is enormous, despite the similar sizes. Likewise, the distance between an autocomplete heavily reliant on preexisting external validators (compilers, linters, static code analyzers, etc.) and a real AI capable of thinking is equally enormous.

        • By Kabukks 2025-05-0519:131 reply

          Could you elaborate on the progress that has been made? To me, it seems only small/incremental changes are made between models with all of them still hallucinating. I can see no clear steps towards AGI.

        • By JumpCrisscross 2025-05-0521:24

          > AGI is matter of when, not if

          We have zero evidence for this. (Folks said the same shit in the 80s.)

        • By m_krebs 2025-05-0519:092 reply

          "X increased exponentially in the past, therefore it will increase exponentially in the same way in the future" is fallacious. There is nothing guaranteeing indefinite uncapped growth in capabilities of LLMs. An exponential curve and a sigmoidal curve look the same until a certain point.

          • By dragonwriter 2025-05-0519:12

            Yeah, it is a pretty good bet that any real process that produces something that looks like an exponential curve over time is the early phase of a sigmoid curve, because all real processes have constraints.

          • By 91bananas 2025-05-0519:52

            And if we apply the 80/20 rule, feels like we're at about 50-75% right now. So we're almost getting close to done with the easy parts. Then come the hard parts.

        • By schrodinger 2025-05-0519:07

          I don’t think that’s a safe foregone conclusion. What we’ve seen so far is very very powerful pattern matchers with emergent properties that frankly we don’t fully understand. It very well may be the road to AGI, or it may stop at the kind of things we can do in our subconscious—but not what it takes to produce truly novel solutions to never before seen problems. I don’t think we know.

        • By otabdeveloper4 2025-05-066:27

          > AGI is matter of when, not if.

          I want to believe, man.

      • By tim333 2025-05-0610:15

        I don't read it that way. It reads more like AGIs will be like very smart people and rather than having one smart person/AGI, everyone will have one. There's room for both Beethoven and Einstein although they were both generally intelligent.

    • By burnte 2025-05-0614:34

      The level of arrogance needed to think they'd be the only company to come up with AI/AGI is staggering.

    • By coryfklein 2025-05-0613:51

      “Appear weak when you are strong, and strong when you are weak.”

      ― Sun Tzu

    • By rlt 2025-05-0615:53

      “Fine, we’ll keep the non-profit, but we’re going to extract the fuck out of the for-profit”

      Quite the arc from the original organization.

    • By zeroq 2025-05-063:06

      "It's not you, it's me."

  • By pants2 2025-05-0518:3711 reply

    It's somewhat odd to me that many companies operating in the public eye are basically stating "We are creating a digital god, an instrument more powerful than any nuclear weapon" and raising billions to do it, and nobody bats an eye...

    • By CorpOverreach 2025-05-063:097 reply

      I'd really love to talk to someone that both really believes this to be true, and has a hands-on experience with building and using generative AI.

      The intersection of the two seems to be quite hard to find.

      At the state that we're in, the AIs we're building are just really useful input/output devices that respond to a stimulus (e.g., a "prompt"). No stimulus, no output.

      This isn't a nuclear weapon. We're not going to accidentally create Skynet. The only thing it's going to go nuclear on is the market for jobs that are going to get automated in an economy that may not be ready for it.

      If anything, the "danger" here is that AGI is going to be a printing press. A cotton gin. A horseless carriage -- all at the same time and then some, into a world that may not be ready for it economically.

      Progress of technology should not be arbitrarily held back to protect automatable jobs though. We need to adapt.

      • By thurn 2025-05-064:497 reply

        Which of these statements do you disagree with?

        - Superintelligence poses an existential threat to humanity

        - Predicting the future is famously difficult

        - Given that uncertainty, we can't rule out the chance of our current AI approach leading to superintelligence

        - Even a 1-in-1000 existential threat would be extremely serious. If an asteroid had a 1-in-1000 chance of hitting Earth and obliterating humanity we should make serious contingency plans.

        Second question: how confident are you that you're correct? Are you 99.9% sure? Confident enough to gamble billions of lives on your beliefs? There are almost no statements about the future which I'd assign this level of confidence to.

        • By tsimionescu 2025-05-068:142 reply

          You could use the exact same argument to argue the opposite. Simply change the first premise to "Super intelligence is the only thing that can save humanity from certain extinction". Using the exact same logic, you'll reach the conclusion that not building superintelligence is a risk no sane person can afford to take.

          So, since we've used the exact same reasoning to prove two opposite conclusions, it logically follows that this reasoning is faulty.

          • By throw101010 2025-05-0610:091 reply

            That’s not how logic works. The GP is applying the precautionary principle: when there’s even a small chance of a catastrophic risk, it makes sense to take precautions, like restricting who can build superintelligent AI, similar to how we restrict access to nuclear technology.

            Changing the premise to "superintelligence is the only thing that can save us" doesn’t invalidate the logic of being cautious. It just shifts the debate to which risk is more plausible. The reasoning about managing existential risks remains valid either way, the real question is which scenario is more likely, not whether the risk-based logic is flawed.

            Just like with nuclear power, which can be both beneficial and dangerous, we need to be careful in how we develop and control powerful technologies. The recent deregulation by the US admin is an example of us currently doing the contrary.

            • By polynomial 2025-05-0612:222 reply

              Not really. If there is a small chance that this miraculous new technology will solve all of our problems with no real downside, we must invest everything we have and pull out all the stops, for the very future of the human race depends on AGI.

              Also, @tsimionescu's reasoning is spot on, and exactly how logic works.

              • By throw101010 2025-05-0613:08

                It literally isn't; changing/reversing a premise and not addressing the point that was made is not a valid way to counter the initial argument logically.

                Just like your proposition that any "small" chance justifies investing "everything" disregards the same argument regarding the precautionary principle for potentially devastating technologies. You've also slipped in an additional "with no real downside", which you cannot predict with certainty anyway, rendering this argument unfalsifiable. At least tsimionescu didn't dare make such a sweeping (but baseless) statement.

              • By hollerith 2025-05-0618:37

                Some of us believe that continued AI research is by far the biggest threat to human survival, much bigger for example than climate change or nuclear war (which might cause tremendous misery and reduce the population greatly, but seem very unlikely to kill every single person).

                I'm guessing that you think that society is getting worse every year or will eventually collapse, and you hope that continued AI research might prevent that outcome.

          • By voidspark 2025-05-068:29

            The best we can hope for is that Artificial Super Intelligence treats us kindly as pets, or as wildlife to be preserved, or at least not interfered with.

            ASI to humans, is like humans to rats or ants.

        • By geysersam 2025-05-066:132 reply

          Isn't the question you're posing basically Pascal's wager?

          I think the chance they're going to create a "superintelligence" is extremely small. That said I'm sure we're going to have a lot of useful intelligence. But nothing general or self-conscious or powerful enough to be threatening for many decades or even ever.

          > Predicting the future is famously difficult

          That's very true, but that fact unfortunately can never be used to motivate any particular action, because you can always say "what if the real threat comes from a different direction?"

          We can come up with hundreds of doomsday scenarios, most don't involve AI. Acting to minimize the risk of every doomsday scenario (no matter how implausible) is doomsday scenario no. 153.

          • By nearbuy 2025-05-0620:121 reply

            > I think the chance they're going to create a "superintelligence" is extremely small.

            I'd say the chance that we never create a superintelligence is extremely small. You either have to believe that for some reason the human brain achieved the maximum intelligence possible, or that progress on AI will just stop for some reason.

            Most forecasters on prediction markets are predicting AGI within a decade.

            • By geysersam 2025-05-0622:071 reply

              Why are you so sure that progress won't just fizzle out at 1/1000 of the performance we would classify as superintelligence?

              > that progress on AI will just stop for some reason

              Yeah it might. I mean, I'm not blind and deaf, there's been tremendous progress in AI over the last decade, but there's a long way to go to anything superintelligent. If incremental improvement of the current state of the art won't bring superintelligence, can we be sure the fundamental discoveries required will ever be made? Sometimes important paradigm shifts and discoveries take a hundred years just because nobody made the right connection.

              Is it certain that every mystery will be solved eventually?

              • By nearbuy 2025-05-074:131 reply

                Aren't we already past 1/1000th of the performance we would classify as superintelligence?

                There isn't an official precise definition of superintelligence, but it's usually vaguely defined as smarter than humans. Twice as smart would be sufficient by most definitions. We can be more conservative and say we'll only consider superintelligence achieved when it gets to 10x human intelligence. Under that conservative definition, 1/1000th of the performance of superintelligence would be 1% as smart as a human.

                We don't have a great way to compare intelligences. ChatGPT already beats humans on several benchmarks. It does better than college students on college-level questions. One study found it gets higher grades on essays than college students. It's not as good as humans on long, complex reasoning tasks. Overall, I'd say it's smarter than a dumb human in most ways, and smarter than a smart human in a few ways.

                I'm not certain we'll ever create superintelligence. I just don't see why you think the odds are "extremely small".

                • By geysersam 2025-05-074:44

                  I agree, the 1/1000 ratio was a bit too extreme. Like you said, almost any way that's measured, it's probably fair to say ChatGPT is already there.

          • By polynomial 2025-05-0612:23

            Yes, this is literally Pascal's wager / Pascal's mugging.

        • By quietbritishjim 2025-05-0612:271 reply

          > Given that uncertainty, we can't rule out the chance of our current AI approach leading to superintelligence

          I think you realise this is the weak point. You can't rule out the current AI approach leading to superintelligence. You also can't rule out a rotting banana skin in your bin spontaneously gaining sentience either. Does that mean you shouldn't risk throwing away that skin? It's so outrageous that you need at least some reason to rule it in. So it goes with current AI approaches.

          • By km144 2025-05-0612:552 reply

            Isn't the problem precisely that uncertainty though? That we have many data points showing that a rotting banana skin will not spontaneously gain sentience, but we have no clear way to predict the future? And we have no way of knowing the true chance of superintelligence arising from the current path of AI research—the fact that it could be 1-in-100 or 1-in-1e12 or whatever is part of the discussion of uncertainty itself, and people are biased in all sorts of ways to believe that the true risk is somewhere on that continuum.

            • By icetank 2025-05-0615:141 reply

              >And we have no way of knowing the true chance of superintelligence arising from the current path of AI research

              What makes people think that future advances in AI will continue to be linear instead of falling off and plateauing? Don't all breakthrough technologies develop quickly at the start and then fall off in improvements once all the 'easy' improvements have already been made? In my opinion, AI and AGI are like the car and the flying car. People saw continuous improvements in cars and thought this rate of progress would continue indefinitely, leading to cars that have the ability to not only drive but fly as well.

              • By voidspark 2025-05-0711:29

                We already have flying cars. They’re called airplanes and helicopters. Those are limited by the laws of physics, so we don’t have antigravity flying vehicles.

                In the case of AGI we already know it is physically possible.

            • By LPisGood 2025-05-0613:17

              There are lots of data points of previous AI efforts not creating super intelligence.

        • By pembrook 2025-05-066:212 reply

          You bring up the example of an extinction-level asteroid hurtling toward Earth. Gee, I wonder if this superintelligence you’re deathly afraid of could help with that?

          This extreme risk aversion and focus on negative outcomes is just the result of certain personality types, no amount of rationalizing will change your mind as you fundamentally fear the unknown.

          How do you get out of bed everyday knowing there’s a chance you could get hit by a bus?

          If your tribe invented fire you’d be the one arguing how we can’t use it for fear it might engulf the world. Yes, humans do risk starting wildfires, but it’s near impossible to argue the discovery of fire wasn’t a net good.

          • By yard2010 2025-05-066:451 reply

            Since the internet's inception there have been a few wrong turns taken by the wrong people (and lizards, ofc) behind the wheel, leading to the sub-optimal, enshittified™ experience we have today. I think GP just doesn't want to live through that again.

            • By immibis 2025-05-067:06

              You mean right turns. The situation that we have today is the one that gets most rewarded. A right move is defined as one that gets rewarded.

          • By voidspark 2025-05-0618:39

            I think of the invention of ASI as introducing a new artificial life form.

            The new life form will be to humans, as humans are to chimps, or rats, or ants.

            At this point we have lost control of the situation (the planet). We are no longer at the top of the food chain. Fingers crossed it all goes well.

            It's an existential gamble. Is the gamble worth taking? No one knows.

        • By OtherShrezzing 2025-05-067:371 reply

          > Superintelligence poses an existential threat to humanity

          I disagree at least on this one. I don't see any scenario where superintelligence comes into existence, but is for some reason limited to a mediocrity that puts it in contention with humans. That equilibrium is very narrow, and there's no good reason to believe machine-intelligence would settle there. It's a vanishingly low chance event. It considerably changes the later 1-in-n part of your comment.

          • By Meneth 2025-05-0612:151 reply

            So you assume a superintelligence, so powerful it would see humans as we see ants, would not destroy our habitat for resources it could use for itself?

            • By OtherShrezzing 2025-05-079:18

              More fundamental than that, I assume that a superintelligence on that level wouldn't have resource-contention with humans at all.

        • By tempfile 2025-05-067:08

          > There are almost no statements about the future which I'd assign this level of confidence to.

          You have cooked up a straw man that will believe anything as long as it contains a doomsday prediction. You are more than 99.9% confident about doomsday predictions, even if you claim you aren't.

        • By ZuFyf4Q6K4wjoS 2025-05-065:58

          [dead]

      • By digbybk 2025-05-0611:34

        > I'd really love to talk to someone that both really believes this to be true, and has a hands-on experience with building and using generative AI.

        Any of the signatories here match your criteria? https://safe.ai/work/statement-on-ai-risk#signatories

        Or if you’re talking more about everyday engineers working in the field, I suspect the people soldering vacuum tubes to the ENIAC would not necessarily have been the same people with the clearest vision for the future of the computer.

      • By coryfklein 2025-05-0614:011 reply

        Sounds a little too much like, "It's not AGI today ergo it will never become AGI"

        Does the current AI give productivity benefits to writing code? Probably. Do OpenAI engineers have exclusive access to more capable models that give them a greater productivity boost than others? Also probably.

        If one exclusive group gets the benefit of developing AI with a 20% productivity boost compared to others, and they develop a 2.0 that grants them a 25% boost, then a 3.0 with a 30% boost, etc...

        The question eventually becomes, "is AGI technically possible"; is there anything special about meat that cannot be reproduced on silicon? We will find AGI someday, and more than likely that discovery will be aided by the current technologies. It's the path here that matters, not the specific iteration of generative LLM tech we happen to be sitting on in May 2025.

        • By Retric 2025-05-0614:091 reply

          > Does the current AI give productivity benefits to writing code? Probably.

          > If one exclusive group gets the benefit of developing AI with a 20% productivity boost compared to others, and they develop a 2.0 that grants them a 25% boost, then a 3.0 with a 30% boost, etc...

          That’s a bit of a stretch; generative AI is least capable of helping with novel code such as that needed to make AGI.

          If anything I’d expect companies working on generative AI to be at a significant disadvantage when trying to make AGI because they’re trying to leverage what they are already working on. That’s fine for incremental improvement, but companies rarely ride one wave of technology to the forefront of the next. Analog > digital photography, ICE > EV, coal mining > oil, etc.

          • By coryfklein 2025-05-0714:151 reply

            The "novel AGI code" probably accounts for <5% of work by time spent. If they can reduce the remaining 95% of grunt work (wiring yet another DB query to a frontend, tweaking the build pipeline, automating GPU allocation scripts) then that means they can focus more on that 5%.

            Then it looks like Company A spends 90% of time on novel research work (while LLMs do all the busy work) and Company B spends 5% of time on novel research work.

            • By Retric 2025-05-0714:26

              If it were that simple we’d already have AGI.

              Just really think about what you just said: sure, only 5% of the time is spent on the bits nobody on earth has any idea how to accomplish, and that's how people will approach this project. Organizationally, the grunt work is a trivial rounding error vs. the completely unbounded "we've got no idea how to solve this" bits.

      • By utbabya 2025-05-065:19

        > At the state that we're in the AIs we're building are just really useful input/output devices that respond to a stimuli (e.g., a "prompt"). No stimuli, no output.

        It was true before we allowed them to access external systems, disregarding a certain rule whose origin I forget.

        The more general problem is a mix between the tragedy of the commons (we gain better understanding every passing day, yet still don't understand exactly why LLMs perform that well emergently instead of being engineered that way) and future progress.

        Do you think you can find a way around access boundaries to masquerade your Create/Update requests as Read in the log system monitoring it, when you have super intelligence?

      • By otabdeveloper4 2025-05-066:31

        > are just really useful input/output devices that respond to a stimuli

        LLMs are huge pretrained models. The economic benefit here is that you don't have to train your own text classification model anymore. (The LLM was likely already trained on whatever training set you could think of.)

        That's a big time and effort saver, but no different from "AI" that we had decades prior. It's just more accessible to the normal person now.

      • By ev7 2025-05-064:30

        alignmentforum.com

      • By voidspark 2025-05-064:332 reply

        > Progress of technology should not be artitrarily held back to protect automateable jobs though. We need to adapt.

        So you don't mind if your economic value drops to zero, with all human labour replaced by machines?

        Dependent on UBI, existing in a basic pod, eating rations of slop.

        • By TobTobXX 2025-05-067:211 reply

          Yes! Sounds like a dream. My value isn't determined by some economic system, but rather by myself. There is so much to do when you don't have to work. Of course, this assumes we actually get to UBI first, and it doesn't create widespread poverty. But even if humanity has to go through widespread poverty, we'd probably come out with UBI on the other side (minus a few hundred million starved).

          There's so much to do, explore and learn. The prospect of AI stealing my job is only scary because my income depends on this job.

          • By voidspark 2025-05-067:373 reply

            > There's so much to do, explore and learn.

            Hobbies, hanging out with friends, reading, etc. That's basically it.

            Probably no international travel.

            It will be like a simple retirement on a low income, because in a socialist system the resources must be rationed.

            This will drive a lot of young ambitious people to insanity. Nothing meaningful for them to achieve. No purpose. Drug use, debauchery, depression, violence, degeneracy, gangs.

            It will be a true idiocracy. No Darwinian selection pressures, unless the system enforces eugenics and population control.

            • By TobTobXX 2025-05-0710:431 reply

              > Hobbies, hanging out with friends, reading, etc. That's basically it.

              > It will be like a simple retirement on a low income [...].

              Yes, like retirement but without the old age. Right now I'm studying, so I do live on a very low income. But still, there are so many interesting things! For example, I'm trying to design a vacuum pump that reaches 1 mbar, made mostly of 3D-printed parts. Do vacuum pumps exist and can I buy them? Absolutely. But is it still fun to do the whole designing process? You bet. And I can't even start explaining all the things I'm learning.

              > This will drive a lot of young ambitious people to insanity.

              I teach teenagers in the age where they have to choose their profession. The ones going insane will be the unambitious people, those who just stay on TikTok all day and go to work because what else would they do? The ambitious will always have ideas and projects. And they won't mind creating something that already exists, just because they like the process of it.

              We already see this with generative AI. Even though you could generate most of the images you'd want already, people still enjoy the process of painting or photographing. Humans are made to be creative and take pleasure from it, even if it is not economically valuable.

              Hell, this is Hacker News. Hacking (in its original sense) was about creativity and problem-solving. Not because it will make you money, but because it was interesting and fun.

              • By voidspark 2025-05-0711:011 reply

                Introverted high IQ nerds are a tiny percentage of the world population. We exist in a tiny bubble here in Hacker News.

                I am thinking about society as a whole, how it will affect all types of people and cultures on this planet.

                • By TobTobXX 2025-05-0716:591 reply

                  There is nothing "introverted high IQ nerd" about being creative. Think about everyone that is practicing music, artistry, crafts, rhetoric, cooking, languages, philosophy, writing, gardening, carpentry, and whatever you can think of. Most of them don't do it for money.

                  > [...] how it will affect all types of people and cultures on this planet.

                  Some will definitely feel without purpose. But I'd argue that just having a job so that you have a purpose is just a band-aid, not a real solution. I won't say that purposelessness isn't a problem, just that it would be great to actually address the issue.

                  Granted, I do hold a utopic view. I continue to be curious due to my religious belief, where I'm looking forward to life unconstrained by age. Regardless whether this will manifest, I think it is healthy to remain curious and continue learning. So on "how it will affect all types of people": I really do think that people without purpose need to engage in curiosity and creativity, for their own mental health.

                  • By voidspark 2025-05-0719:34

                    Yes a few of us will enjoy the peaceful life of contemplation like Aristotle, but not everyone is genetically wired that way.

                    Introverts are only 25% - 40% of the population, and most people are not intellectually or artistically gifted (whether introvert or not), but they still want to contribute and feel valued by society.

                    > I'd argue that just having a job so that you have a purpose

                    It's not just about having a job. It's having an important or valuable role in society, feeling that your contributions actually matter to others - such as building or fixing things that others depend on, or providing for a family.

                    What would motivate a young boy to go through years of schooling, higher education, and so on, just to become a hobbyist, tinkering around on projects that no one else will ever use or really need? That may be acceptable for some niche personality types but not the majority.

                    Aspiring engineers or entrepreneurs are not merely motivated by having a job.

                    I am envisioning the AGI or ASI scenario which truly overtakes humans in all intellectual and physical capabilities, essentially making humans obsolete. That would smash the foundations and traditions of our civilization. It's an incredible gamble.

            • By pixelbro 2025-05-0613:451 reply

              Wait, wait, wait. Our society's gonna fall apart due to a lack of Darwinian selection pressure? What do you think we're selecting for right now?

              Seems to me like our culture treats both survival and reproduction as an inalienable right. Most people would go so far as to say everyone deserves love, "there's a lid for every pot".

            • By knowaveragejoe 2025-05-0618:311 reply

              > This will drive a lot of young ambitious people to insanity. Nothing meaningful for them to achieve.

              Maybe, if the only flavor of ambition you're aware of is that of SV types. Plenty of people have found achievement and meaning before and alongside the digital revolution.

              • By voidspark 2025-05-0618:46

                I mean common people will be affected just as badly as SV types. It will impact everyone.

                Jobs, careers, real work, all replaced by machines which can do it all better, faster, cheaper than humans.

                Young people with modest ambitions to learn and master a skill and contribute to society, and have a meaningful life. That can be blue collar stuff too.

                How will children respond to the question - "What do you want to be when you grow up?"

                They can join the Amish communities where humans still do the work.

        • By cik 2025-05-064:522 reply

          > So you don't mind if your economic value drops to zero, with all human labour replaced by machines?

          This was the fear when the cotton gin was invented. It was the fear when cars were created. The same complaint happened with the introduction of electronic, automated telephone switchboards.

          Jobs change. Societies change. Unemployment worldwide is near the lowest it has ever been. Work will change. Society will eventually move to a currency based on energy production, or something equally futuristic.

          This doesn't mean that getting there will be without pain.

          • By achierius 2025-05-065:142 reply

            Where did all the work-horses go? Why is there barely a fraction of the population there once was? Why did they not adapt and find niches where they had a competitive advantage over cars and machines?

            • By wickedsight 2025-05-075:33

              The horses weren't the market the economy is selling to, the people are. Ford figured out that people having both time and money is best for the economy. We'll figure out that having all the production capabilities but none of the market benefits nobody.

          • By voidspark 2025-05-064:55

            The goal for AGI/ASI is to create machines that can do any job much faster, better, and cheaper than humans. That's the ultimate end point of this progress.

            The economic value of human labour will drop to zero. That would be an existential threat to our civilization.

    • By esafak 2025-05-0518:405 reply

      Lots of people in academia and industry are calling for more oversight. It's the US government that's behind. Europe's AI Act bans applications with unacceptable risk: https://en.wikipedia.org/wiki/Artificial_Intelligence_Act

      • By lenerdenator 2025-05-0518:511 reply

        The US government probably doesn't think it's behind.

        Right now it's operated by a bunch of people who think that you can directly relate the amount of money a venture could make in the next 90 days to its net benefit for society. Government telling them how they can and cannot make that money, in their minds, is government telling them that they cannot bring maximum benefit to society.

        Now, is this mindset myopic to everything that most people have in their lived experience? Is it ethically bankrupt and held by people who'd sell their own mothers for a penny if they otherwise couldn't get that penny? Would those people be banished to a place beyond human contact for the rest of their existence by functioning organs of an even somewhat-sane society?

        I don't know. I'm just asking questions.

        • By aylmao 2025-05-060:18

          I'd go further and say the US government wants "an instrument more powerful than any nuclear weapon" to be built in its territory, by people it has jurisdiction over.

          It might not be a direct US-govt project like the Manhattan Project was, but it doesn't have to be. The government has the ties it needs with the heads of all these AI companies, and if it comes to it, the US-govt has the muscle and legal authority to assert control over it.

          A good deal for everyone involved really. These companies get to make bank and technology that furthers their market dominance, the US-govt gets potentially "Manhattan project"-level pivotal technology— it's elites helping elites.

      • By azinman2 2025-05-0518:474 reply

        Unless China handicaps their progress as well (which they won’t; see Made in China 2025), all you’re doing is handing the future to DeepSeek et al.

        • By esafak 2025-05-0518:542 reply

          What kind of a future is that? If China marches towards a dystopia, why should Europe dutifully follow?

          We can selectively ban uses without banning the technology wholesale; e.g., nuclear power generation is permitted, while nuclear weapons are strictly controlled.

          • By BeetleB 2025-05-0519:212 reply

            > If China marches towards a dystopia, why should Europe dutifully follow?

            I think the more relevant question is: Do you want to live in a Chinese dystopia, or a European one?

            • By immibis 2025-05-067:08

              Do Zambians currently live in an American dystopia? I think they just do their own thing and don't care much what America thinks as long as they don't get invaded.

            • By esafak 2025-05-0519:331 reply

              A European dystopia won't be AI borne, so this is a false dilemma.

              • By BeetleB 2025-05-0519:421 reply

                What I meant is: Europe can choose to regulate as they do, and end up living in a Chinese dystopia because the Chinese will drastically benefit from non-regulated AI, or they can create their own AI dystopia.

                A non-AI dystopia is the least likely scenario.

                • By esafak 2025-05-0519:513 reply

                  If you are suggesting that China may use AI to attack Europe, they can invest in defense without unleashing AI domestically. And I don't think China will become a utopia with unregulated AI. My impression after having visited it was not one of a utopia, and knowing how they use technology, I don't think AI will usher it in, because our visions of utopia are at odds. They may well enjoy what they have. But if things go sideways they may regret it too.

                  • By Muromec 2025-05-0520:09

                    Not attack, just influence. Destabilize if you want. Advocate regime change, sabotage trust in institutions. Being on the defensive in a propaganda war doesn't really work.

                    With the US already having lost the ideological war with Russia and China, Europe is very much next.

                  • By JumpCrisscross 2025-05-0521:31

                    > China may use AI to attack Europe

                    No, just control. America exerts influence and control over Europe without having had to attack it in generations.

                  • By BeetleB 2025-05-0520:14

                    > If you are suggesting that China may use AI to attack Europe

                    No - I'm suggesting that China will reap the benefits of AI much more than Europe will, and they will eclipse Europe economically. Their dominance will follow, and they'll be able to dictate terms to other countries (just as the US is doing, and has been doing).

                    > And I don't think China will become a utopia with unregulated AI.

                    Did you miss all the places I used the word "dystopia"?

                    > My impression after having visited it was not one of a utopia, and knowing how they use technology, I don't think AI will usher it in, because our visions of utopia are at odds. They may well enjoy what they have.

                    Comparing China when I was a kid, not that long ago, to what it is now: it is a dystopia, and that dystopia is responsible for many of the improvements they've made. Enjoying what they have doesn't mean it's not a dystopia. Most people don't understand how willing humans are to live in a dystopia if it improves their condition significantly (not worrying too much about food, shelter, etc).

          • By alasano 2025-05-0519:112 reply

            We don't know whether pushing towards AGI is marching towards a dystopia.

            If it's winner takes all for the first company/nation to have AGI (presuming we can control it), then slowing down progress of any kind with regulation is a risk.

            I don't think there's a good enough analogy to be made, like your nuclear power/weapons example.

            The hypothetical benefits of an aligned AGI outweigh those of any other technology by orders of magnitude.

            • By esafak 2025-05-0519:17

              As with nuclear weapons, there is non-negligible probability of wiping out the human race. The companies developing AI have not solved the alignment problem, and OpenAI even dismantled what programs it had on it. They are not going to invest in it unless forced to.

              We should not be racing ahead because China is, but investing energy in alignment research and international agreements.

            • By troupo 2025-05-0519:22

              > We don't know whether pushing towards AGI is marching towards a dystopia.

              We do know that. By literally looking at China.

              > The hypothetical benefits of an aligned AGI outweigh those of any other technology by orders of magnitude.

              AGI aligned with whom?

        • By nicce 2025-05-0518:532 reply

          This thought process is no different than it was with nuclear weapons.

          The primary difference is the observability - with satellites we had some confidence that other nations respected treaties, or that they had enough reaction time for mutual destruction, but with this AI development we lack all that.

          • By bpodgursky 2025-05-0519:341 reply

            Only countries with nuclear weapons had an actual seat at the table when the world banned new nuclear weapon programs.

          • By lukas099 2025-05-0519:32

            Yes, it was the same with nukes, each side had to build them because the other side was building them.

        • By immibis 2025-05-067:07

          Compare the other American "innovations" that Europe mostly rejects.

        • By saubeidl 2025-05-0518:54

          [dead]

      • By philipwhiuk 2025-05-0522:14

        > Lots of people in academia and industry

        Mostly OpenAI and DeepMind and it stunk of 'pulling up the drawbridge behind them' and pivoting from actual harm to theoretical harm.

        For a crowd supposedly entrenched in startups, it's amazing everyone here is so slow to recognise it's all funding pitches and contract bidding.

      • By 0xDEAFBEAD 2025-05-061:561 reply

        The EU can say all it wants about banning AI applications with unacceptable risk. But ASML is still selling machines to TSMC, which makes the chips which the AI companies are using. The EU is very much profiting off of the AI boom. ASML makes significantly more money than OpenAI, even.

        • By kgwgk 2025-05-067:10

          If we think of “making money” as having more revenue than expenses, a lemonade stand makes significantly more money than OpenAI.

      • By jimbokun 2025-05-0520:31

        The US government is behind because the Biden admin was pushing strongly for controls and regulations and told Andreessen and friends exactly that; they then went and did everything in their power to elect Trump, who put those same tech bros in charge of making his AI policy.

    • By saubeidl 2025-05-0518:411 reply

      The EU does and has passed the AI Act to rein in the worst consequences of this nuclear weapon. It has not been received well around here.

      The "digital god" angle might explain why. For many, this has become a religious movement, a savior for an otherwise doomed economic system.

      • By rchaud 2025-05-0518:514 reply

        Absolutely. It's frankly quite shocking to see how otherwise atheist or agnostic people have so quickly begun worshipping at the altar of "inevitable AGI apocalypse", much in the same way as how extremist Christians await the rapture.

        • By cloverich 2025-05-062:33

          To be fair, many of us arrived at the idea that AI was humanity's inevitable endpoint ahead of, and independently of, whether we would ever see it in our lifetimes. It's easy enough to see how people could independently converge on such an idea. I don't see that view as related to atheism in any way other than it creating space for the belief, in the same way it creates space for many others.

          I'd love to believe there is more to life than the AI future, or that we as humans are destined to be perpetually happy and live meaningfully. However, I currently don't see how our current levels of extreme prosperity are anything more than an evolutionary blip, even if we could make them last several millennia more.

        • By Xenoamorphous 2025-05-0519:571 reply

          I guess they think that the “digital god” has a chance to become real (and soon, even), unlike the non-digital one?

          • By rchaud 2025-05-0521:18

            We'll be debating whether or not "AGI is here" in philosophical terms, in the same way people debate if God is real, for years to come. To say nothing of the untaxed "nonprofit" status these institutions share.

            Omnipotent deities can never be held responsible for famine and natural disasters ("God has a plan for us all"). AI currently has the same get-out-of-jail free card where mistakes that no literate human would ever make are handwaved away as "hallucinations" that can be exorcised with a more sophisticated training model ("prayers").

        • By lenerdenator 2025-05-0518:551 reply

          Roko's Basilisk is basically Pascal's wager with GPUs.

        • By saubeidl 2025-05-0518:53

          [dead]

    • By modeless 2025-05-0519:07

      I don't know what sources you're reading. There's so much eye-batting I'm surprised people can see at all.

    • By atleastoptimal 2025-05-0520:02

      Because many people fundamentally don’t believe AGI is possible at a basic level, even AI researchers. Humans tend to only understand what materially affects their existence.

    • By xandrius 2025-05-0522:041 reply

      How is an LLM more powerful than any nuclear weapon? Seriously curious.

      • By kragen 2025-05-0615:522 reply

        Well, possibly it isn't. Possibly LLMs are limited in ways that humans aren't, and that's why the staggering advances from GPT-2 to GPT-3 and from GPT-3 to GPT-4 have not continued. Certainly GPT-4 doesn't seem to be more powerful than the largest nuclear weapons.

        But OpenAI isn't limited to creating LLMs. OpenAI's objective is not to create LLMs but to create artificial general intelligence that is better than humans at all intellectual tasks. Examples of such tasks include:

        1. Designing nuclear weapons.

        2. Designing and troubleshooting mining, materials processing, and energy production equipment.

        3. Making money by investing in the stock market.

        4. Discovering new physics and chemistry.

        5. Designing and troubleshooting electronics such as GPUs.

        6. Building better AI.

        7. Cracking encryption.

        8. Finding security flaws in computer software.

        9. Understanding the published scientific literature.

        10. Inferring unpublished discoveries of military significance from the published scientific literature.

        11. Formulating military strategy.

        Presumably you can see that a system capable of doing all these things can easily be used to produce an unlimited quantity of nuclear weapons, thus making it more powerful than any nuclear weapon.

        If LLMs turn out not to be able to do those things better than humans, OpenAI will try other approaches, sooner or later. Maybe it'll turn out to be impossible, or much further off than expected, but that's not what OpenAI is claiming.

        • By xandrius 2025-05-078:201 reply

          It seems that the terms LLM and AGI are being muddled here. One is statistical next-token text generation and the other is what you're trying to describe.

          LLMs are great at making you think they are the other but aren't.

          • By kragen 2025-05-0713:41

            Yes, I think it's unfortunate that you decided to muddle the discussion by introducing the discussion of LLMs in your previous comment, and by conflating them with AGI, in the sense of strongly superhuman AI, which is OpenAI's objective. I worked pretty hard to unmuddle them in my comment.

            The questions you are bringing up about the possible limits of the LLM approach are interesting open research questions, and while I really doubt your implicit claim to have resolved them, they are ultimately irrelevant to the topic at hand, which, I will remind you, is the astounding novelty of the situation where

            > many companies operating in the public eye are basically stating "We are creating a digital god, an instrument more powerful than any nuclear weapon" and raising billions to do it, and nobody bats an eye...

            Note that there is nothing about LLMs in this proposition, and the particular company we're implicitly most focused on—OpenAI—has already developed a number of well-known models that aren't LLMs and plans to keep doing so.

        • By TrapLord_Rhodo 2025-05-0617:531 reply

          the problem is, none of that needs to happen. If the AI can start coming up with novel math or physics, it's game over. Whether the AI is "sentient" or not, being able to break that barrier would send us into an advancement spiral.

          • By kragen 2025-05-0618:06

            None of my argument depends on the AI being sentient.

            You are surely correct that there are weaker imaginable AIs than the strongly superhuman AI that OpenAI and I are talking about which would still be more powerful than nuclear weapons, but they are more debatable. For example, whether discovering new physics would permit the construction of new, more powerful weapons is debatable; it didn't help Archimedes or Tipu Sultan. So discussing such weak claims is likely to end up off in the weeds of logistics and speculation about exactly what kind of undiscovered physics and math would come to light. Instead, I focused on the most obviously correct ways that strongly superhuman AI would be more powerful than nuclear weapons.

            These may not be the most practically important ways. Maybe any strongly superhuman AI would immediately discover a way to explode the sun, or to control people's minds, or to build diamondoid molecular nanotechnology, or to genetically engineer super-plagues, or to collapse the false vacuum. Any of those would make nuclear weapons seem insignificant. But claims like those are much more uncertain than the very simple question before us: whether what OpenAI is trying to develop would be more powerful than nuclear weapons. Obviously it would be, by my reasoning in the grandparent comment, even if this isn't a false vacuum, if the sticky fingers problem makes diamondoid nanotechnology impossible, if people's minds are inherently uncontrollable, etc. So we don't need to resolve those other, more difficult questions in order to do the much easier task of ranking OpenAI's objective relative to nuclear weapons.

    • By jimbokun 2025-05-0520:28

      Most of us are batting our eyelashes as rapidly as possible but have no idea how to stop it.

    • By otabdeveloper4 2025-05-0518:411 reply

      Well, because it's obviously bullshit and everyone knows it. Just play the game and get rich like everyone else.

      • By esafak 2025-05-0519:063 reply

        Are you sure about that? AI-powered robotic soldiers are around the corner. What could go wrong...

        • By otabdeveloper4 2025-05-067:21

          > AI agent robot soldiers that are as inept as ChatGPT

          Sounds like payola for the enterprising and experienced mercenary.

        • By sealeck 2025-05-061:281 reply

          Robot soldiers != AGI

        • By devinprater 2025-05-0519:30

          Ooo I know, Cybermen! Yay.

    • By soheil 2025-05-064:35

      It'd be odd if people batted an eye before the 1st nuclear weapon came to be but aren't batting one now.

    • By gooob 2025-05-062:48

      have they started hiring people to make maglev trains and permaculture gardens all around urban areas yet?

    • By squigz 2025-05-060:00

      We're all too busy rolling our eyes.

  • By A_Duck 2025-05-0521:527 reply

    This is the moment where we fumble the opportunity to avoid a repeat of Web 1.0's ad-driven race to the bottom

    Look forward to re-living that shift from life-changing community resource to scammy and user-hostile

    • By zharknado 2025-05-060:112 reply

      I feel this. I had a very productive convo with an LLM today and realized that a huge part of the value of it was that it addressed my questions in a focused way, without trying to sell me anything or generate SEO rankings or register ad impressions. It just helped me. And that was incredibly refreshing in a digital world that generally feels adversarial.

      Then the thought came, when will they start showing ads here.

      I like to think that if we learn to pay for it directly, or the open source models get good enough, we could still enjoy that simplicity and focus for quite a while. Here’s hoping!

      • By sumedh 2025-05-061:35

        > I like to think that if we learn to pay for it directly

        The $20 monthly payment is not enough though and companies like Google can keep giving away their AI for free till OpenAI is bankrupt.

      • By danenania 2025-05-060:583 reply

        The "good" thing is this is all way too expensive to be ad-supported. Maybe there will be some ad-supported products using very small/cheap models, but the leading edge stuff is always going to be at the leading-edge of compute usage too, and someone has to pay the bill. Even with investors subsidizing a lot of the costs, it's still very expensive to use the best models heavily for real work.

        • By aylmao 2025-05-061:091 reply

          Subscription services can sell ads too. See Hulu, or Netflix. Spotify might not play "radio ads" if you pay, but it will still advertise artists on your home screen.

          These models being expensive leads me to think they will look at all methods of monetization possible when seeking profitability. Rather than ads being off the table, it could feasibly make ads be on the table sooner.

          • By danenania 2025-05-061:29

            Maybe it could happen, but the revenue that can be made per user from ads is basically insignificant compared to the compute costs. They’d be pissing off their users for a very marginal benefit.

        • By toxik 2025-05-069:32

          It is guaranteed that the models will become salespeople in disguise with time. This is just how the world works. Hopefully competition can stave it off but I doubt it.

          It's also why totalitarian regimes love it, they can simply train it to regurgitate a modified version of reality.

        • By advisedwang 2025-05-0617:14

          There's no such thing as too expensive to be ad-supported. There might be too expensive to be ONLY ad-supported, but as a revenue stream ads can be layered on top of other sources. For example, see that the ads shown on a $100/mo cable package!

    • By wrsh07 2025-05-0611:37

      For all of the skepticism I've seen of Sam Altman, listening to interviews with him (eg by Ben Thompson) he says he really does not want to create an ad tier for OpenAI.

      Even if you take him at his word, incentives are hard to ignore (and advertising is a very powerful business model when your goal is to create something that reaches everyone)

    • By pradn 2025-05-0614:52

      I'm hoping there will always be a good LLM option, for the following reasons:

      1) The Pareto frontier of open LLMs will keep expanding. The breakneck pace of open research/development, combined with techniques like distillation will keep the best open LLMs pretty good, if not the best.

      2) The cost of inference will keep going down as software and hardware are optimized. At the extreme, we're looking toward bit-quantized LLMs that run in RAM itself.

      These two factors should mean a good open LLM alternative should always exist, one without ulterior motives. Now, will people be able to have the hardware to run it? Or will users just put up with ads to use the best LLM? The latter is likely, but you do have a choice.
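
      As a rough sketch of what point 2 already looks like in practice (assuming the llama-cpp-python bindings and a 4-bit quantized GGUF file you have downloaded yourself; the file name and settings below are illustrative), running an open model locally is only a few lines:

        # Minimal local inference sketch; model path and parameters are placeholders.
        from llama_cpp import Llama

        llm = Llama(
            model_path="./models/llama-3-8b-instruct.Q4_K_M.gguf",  # a 4-bit quantized file
            n_ctx=4096,    # context window
            n_threads=8,   # CPU threads; no GPU required
        )

        out = llm.create_chat_completion(
            messages=[{"role": "user", "content": "Summarize the trade-offs of 4-bit quantization."}],
            max_tokens=256,
        )
        print(out["choices"][0]["message"]["content"])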

    • By energy123 2025-05-067:34

      Now the hard part: design a policy to stop this from happening while balancing the need to maintain competition, innovation, etc.

      That step, along with getting politicians to pass it, is the only thing that will stop that outcome.

    • By otabdeveloper4 2025-05-067:261 reply

      In the future AI will be commoditized. You'll be able to buy an inference server for your home in the form factor like a wi-fi router now. They will be cheap and there will be a huge selection of different models, both open-source and proprietary. You'll be able to download a model with a click of a button. (Or just torrent them.)
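
      A hedged sketch of what that appliance could look like with today's software (assuming an Ollama-style server already running on a box on the home network; the host address and model tag below are made up): any device in the house just talks to an HTTP endpoint.

        # Query a hypothetical home inference box over the local network.
        import json
        import urllib.request

        def ask_home_box(prompt: str, host: str = "http://192.168.1.50:11434") -> str:
            payload = json.dumps({
                "model": "llama3",   # whichever model was downloaded onto the appliance
                "prompt": prompt,
                "stream": False,     # return a single JSON object instead of a token stream
            }).encode()
            req = urllib.request.Request(
                host + "/api/generate",
                data=payload,
                headers={"Content-Type": "application/json"},
            )
            with urllib.request.urlopen(req) as resp:
                return json.loads(resp.read())["response"]

        print(ask_home_box("Plan a week of vegetarian dinners."))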

      • By anticensor 2025-05-068:112 reply

        That can be done with today's desktops already, if you beef up the specs slightly.

        • By otabdeveloper4 2025-05-0610:081 reply

          Cheap Chinese single-board computers made specifically for inference are the missing puzzle piece. (No, GPUs, and especially Nvidia, are not that.)

          Also the current crop of AI agents are just utter crap. But that's a skill issue of the people coding them, expect actual advances here soon.

          • By anticensor 2025-05-0614:241 reply

            Aren't DGX Spark or Framework Desktop cheap enough?

            • By otabdeveloper4 2025-05-0618:00

              Not really. Eventually we'll get something with the price and availability of a home appliance. (Wi-fi router tier.)

        • By mlnj 2025-05-069:57

          The smaller models are becoming even more capable now. Add that with a suite of tools and integrations and you can do most of what you do online within the infra at home.

    • By NoahZuniga 2025-05-061:242 reply

      Ads intermixed into LLM responses are so clearly evil that OpenAI will never do it so long as the nonprofit has a controlling stake (which it currently still has), because the nonprofit would never allow it.

      • By Twirrim 2025-05-061:392 reply

        The insidious part is that it doesn't have to be as blatant as adverts; you can achieve a lot with just slight biases in text output.

        Decades ago I worked for a classical music company, fresh out of school. "So.. how do you anticipate where the music trend is going", I once naively asked one of the senior people on the product side. "Oh, we don't. We tell people really quietly, and they listen". They and the marketing team spent a lot of time doing very subtle work, easily as much as anything big like actual advertisements. Things like small little conversations with music journalists, just a dropped sentence or two that might be repeated in an article, or marginally influence an article; that another journalist might see and have an opinion on, or spark some other curiosity. It only takes a small push and it tends to spread across the industry. It's not a fast process, but when the product team is capable of road-mapping for a year or so in advance, a marketing team can do a lot to prepare things so the audience is ready.

        LLMs represent a scary capability to influence the entire world, in ways we're not equipped to handle.

        • By gooob 2025-05-062:52

          >LLMs represent a scary capability to influence the entire world, in ways we're not equipped to handle

          Replace LLMs with TV, or smartphones, or maybe even McDonald's, and you've got the same idea. Through TV, corporations got to control a lot of the social world and people's behavior.

        • By NoahZuniga 2025-05-0920:01

          OK, but this is still clearly evil, so the controlling nonprofit would not allow this either.

      • By aprilthird2021 2025-05-061:29

        Ads / SEO but with AI responses was so obviously the endgame given how much human attention it controls and the fact that people aren't really willing to pay what it costs (when decent free, open-weights alternatives exist)

    • By yread 2025-05-067:33

      At least we can self-host this time around
