Comments

  • By MrScruff 2025-05-238:084 reply

    It's always a treat to watch a Carmack lecture or read anything he writes, and his notes here are no exception. He writes as an engineer, for engineers, and documents all his thought processes and missteps in the exact detailed yet concise way you'd want from a colleague who was handing off some work.

    One question I would have about the research direction is the emphasis on realtime. If I understand correctly he's doing online learning in realtime. Obviously it makes for a cool demo and pulls on his optimisation background, and no doubt some great innovations will be required to make this work. But I guess the bitter lesson and recent history also tell us that some solutions may only emerge at compute levels beyond what is currently possible for realtime inference, let alone learning. And the only example we have of entities solving Atari games is the human brain, of which we don't have a clear understanding of the compute capacity. In which case, why wouldn't it be better to focus purely on learning efficiency and relax the realtime requirement for now?

    That's a genuine question by the way, definitely not an expert here and I'm sure there's a bunch of value to working within these constraints. I mean, jumping spiders solve reasonably complex problems with 100k neurons, so who knows.

    • By kilpikaarna 2025-05-2314:202 reply

      I'm sure there were offline rendering and 3D graphics workstation people saying the same about the comparatively crude work he was doing in the early 90s...

      Obviously both Carmack and the rest of the world have changed since then, but it seems to me his main strength has always been in doing more with less (early id/Oculus, AA). When he's working in bigger orgs and/or with more established tech his output seems to suffer, at least in my view (possibly in his as well, since he quit both Bethesda-id and Meta).

      I don't know Carmack and can't claim to be anywhere close to his level, but as someone also mainly interested in realtime stuff I can imagine he also feels a slight disdain for the throw-more-compute-at-it approach of the current AI boom. I'm certainly glad he's not running around asking for investor money to train an LLM.

      Best case scenario he teams up with some people who complement his skillset (akin to the game designers and artists at id back in the day) and comes up with a way to help bring some of the cutting edge to the masses, like with 3D graphics.

      • By LarsDu88 2025-05-2316:356 reply

        The thing about Carmack in the 90s... There was a lot of research going on around 3d graphics. Companies like SGI and Pixar were building specialized workstations for doing vector operations for 3d rendering. 3d was a thing. Game consoles with specialized 3d hardware would launch in 1994 with the Sega Saturn and the Sony PlayStation (Japan-only for their first year).

        What Carmack did was basically get a 3d game running on existing COMMODITY hardware. The 386 chip that most people used for their excel spreadsheets did not do floating point operations well, so Carmack figured out how to do everything using integers.
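
        The trick, for anyone who hasn't seen it: pick a power-of-two scale factor and keep everything in integer registers. A rough sketch of the common 16.16 layout, just to illustrate the idea, not actual id code (though if I remember right the released Doom source uses the same layout for its fixed_t type):

          #include <stdint.h>
          #include <stdio.h>

          /* 16.16 fixed point: high 16 bits are the integer part, low 16 the fraction. */
          typedef int32_t fixed_t;
          #define FRACBITS 16
          #define FRACUNIT (1 << FRACBITS)

          static fixed_t to_fixed(double x)   { return (fixed_t)(x * FRACUNIT); }
          static double  to_double(fixed_t x) { return (double)x / FRACUNIT; }

          /* Multiply in 64 bits so the intermediate doesn't overflow, then shift back down. */
          static fixed_t fixed_mul(fixed_t a, fixed_t b) {
              return (fixed_t)(((int64_t)a * b) >> FRACBITS);
          }

          int main(void) {
              fixed_t a = to_fixed(1.5), b = to_fixed(2.25);
              printf("%f\n", to_double(fixed_mul(a, b))); /* prints 3.375; no FPU needed for the math itself */
              return 0;
          }

        Adds and subtracts are just ordinary integer ops; only multiply and divide need the extra shift.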

        May 1992 -> Wolfenstein 3D releases
        December 1993 -> Doom releases
        December 1994 -> Sony PlayStation launches in Japan
        June 1996 -> Quake releases

        So Wolfenstein and Doom were actually not really 3d games, but rather 2.5D games (you can't have rooms below other rooms). The first 3d game here is actually Quake, which eventually also got hardware acceleration support.

        Carmack was the master of doing the seemingly impossible on super constrained hardware on virtually impossible timelines. If DOOM released in 1994 or 1995, would we still remember it in the same way?

        • By hx8 2025-05-2317:463 reply

          > If DOOM released in 1994 or 1995, would we still remember it in the same way?

          Maybe. One aspect of Wolfenstein and Doom's popularity is that they were years ahead of everyone else technically on PC hardware. The other aspect is that they were genre-defining titles that set the standards for gameplay design. I think Doom Deathmatch would have caught on in 1995, as there really were very few (just Command and Conquer?) standout PC network multiplayer games released between 1993 and 1995.

          • By LarsDu88 2025-05-2317:524 reply

            I guess the thing about rapid change is... it's hard to imagine what kind of games would exist in a DOOMless world in an alternate 1995.

            The first 3d console games started to come out that year, like Rayman. Star Wars Dark Forces with its own custom 3d engine also came out. Dark Forces was, of course, an overt clone of DOOM.

            It's a bit ironic, but I think the gameplay innovation of DOOM tends to hold up more than the actual technical innovation. Things like BSP for level partitioning have slowly been phased out of game engines now that we have ample floating point compute power and hardware acceleration, but even developers of the more recent DOOM games have started to realize that they should return to the original formula of "blast zombies in the face at high speed, and keep plot as window dressing".

            • By xh-dude 2025-05-2320:21

              Sort of in the middle, id games always felt tight. The engines were immersive not only because of the graphics, but also because basic I/O was excellent.

            • By HKH2 2025-05-241:111 reply

              > but even developers of the more recent DOOM games have started to realize that they should return to the original formula of "blast zombies in the face at high speed, and keep plot as window dressing"

              There's still a lot of chatter breaking the continuity. In the original, the plot was entirely made up of what you were experiencing directly.

              • By LarsDu88 2025-05-245:01

                In the ending of the original game, you kill the demon spider brain robot thing and the demons kill your bunny rabbit. That's the plot

            • By nextaccountic 2025-05-254:102 reply

              > Things like BSP for level partitioning have slowly been phased out of game engines

              Hey, can you say more / do you have a link about this? I mean, for what reason are BSP trees phased out, and what are they replaced with? (quad/oct tree? AABB trees? or something entirely different?)

              • By crq-yml 2025-05-2510:341 reply

                The pipeline bottlenecks all changed in favor of bruteforcing the things that BSP had been solving with an elegant precomputed data structure - what BSP was extremely good at was eliminating overdraw and getting to where the scene could render exactly the number of pixels that were needed and no more. It's optimized around small, low-detail scenes that carefully manage occlusion.

                More memory, bandwidth and cache means that more of your solutions are per-pixel instead of per-vertex and you can tolerate overdraw if it means you get to have higher polycount models. Likewise, the environment collision that was leveraged by the BSP process reduced the number of tests against walls, but introduced edge cases and hindered general-purpose physics features. Scaling physics leads in the direction of keeping the detailed collision tests at their original, per-poly detail, but doing things with sorting or tree structures to get a broadphase that filters the majority of tests against AABB or sphere bounds.
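
                To make the broadphase point concrete, a toy sketch (generic, not any particular engine's API): test cheap axis-aligned boxes first and only run the expensive per-polygon narrowphase on the pairs that survive.

                  #include <stdbool.h>
                  #include <stdio.h>

                  typedef struct { float min[3], max[3]; } AABB;

                  /* Two boxes overlap only if they overlap on every axis. */
                  static bool aabb_overlap(const AABB *a, const AABB *b) {
                      for (int i = 0; i < 3; i++)
                          if (a->max[i] < b->min[i] || b->max[i] < a->min[i])
                              return false;
                      return true;
                  }

                  /* Stand-in for the expensive per-polygon narrowphase test. */
                  static void narrowphase(int i, int j) { printf("narrow test %d vs %d\n", i, j); }

                  /* Naive O(n^2) pair loop just to show where the filter sits; real engines
                     replace this with sweep-and-prune or a tree over the boxes. */
                  static void collide_all(const AABB *boxes, int n) {
                      for (int i = 0; i < n; i++)
                          for (int j = i + 1; j < n; j++)
                              if (aabb_overlap(&boxes[i], &boxes[j]))
                                  narrowphase(i, j);
                  }

                  int main(void) {
                      AABB boxes[2] = {
                          {{0, 0, 0}, {1, 1, 1}},
                          {{0.5f, 0.5f, 0.5f}, {2, 2, 2}},
                      };
                      collide_all(boxes, 2);
                      return 0;
                  }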

                On a Wii (original Wii) 3D action game I helped ship, we just rendered the whole level at once, using only the most basic frustum culling technique; the hardware did the lifting, mostly through the z-buffer.

                • By LarsDu88 2025-05-2516:36

                  Adding to this, the nice thing about the BSP partitioning was you could also leverage it to make off-screen monsters go to sleep or reduce their tick rate. It was helpful for optimizing AI as well as rendering. DOOM not only had some of the first pseudo-3D but also huge numbers of enemies... something that a lot of other games still cut down on.

              • By momocowcow 2025-05-258:05

                Read the quake pvs article linked to in this thread.

                Off the top of my head, as I remember it... one of the reasons for Quake's use of a BSP was to allow back-to-front rendering of the world geometry without the use of a z-buffer. This was required to get decent performance with the software rasterizer.
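
                For anyone who hasn't seen it, the textbook version of that back-to-front idea looks roughly like this (a generic painter's-algorithm BSP walk, not Quake's actual renderer): recurse into the far side of each splitting plane first, then the near side, so closer geometry simply paints over whatever is behind it.

                  #include <stdio.h>

                  typedef struct { float nx, ny, nz, dist; } Plane;

                  typedef struct BspNode {
                      Plane split;
                      struct BspNode *front, *back;  /* both NULL at a leaf */
                      const char *leaf_name;         /* stand-in for the leaf's polygons */
                  } BspNode;

                  static float side_of(const Plane *p, const float eye[3]) {
                      return p->nx * eye[0] + p->ny * eye[1] + p->nz * eye[2] - p->dist;
                  }

                  static void draw_leaf(const BspNode *n) { printf("draw %s\n", n->leaf_name); }

                  /* Visit the far child first, near child last: later draws overwrite
                     earlier ones, so no depth test is needed for the world itself. */
                  static void render(const BspNode *n, const float eye[3]) {
                      if (!n) return;
                      if (!n->front && !n->back) { draw_leaf(n); return; }
                      if (side_of(&n->split, eye) >= 0) {
                          render(n->back, eye);   /* far side */
                          render(n->front, eye);  /* near side */
                      } else {
                          render(n->front, eye);
                          render(n->back, eye);
                      }
                  }

                  int main(void) {
                      BspNode a = {{0}, NULL, NULL, "room A"}, b = {{0}, NULL, NULL, "room B"};
                      BspNode root = {{1, 0, 0, 5}, &a, &b, NULL};  /* split plane at x = 5 */
                      float eye[3] = {2, 0, 0};                     /* eye on the back side (x < 5) */
                      render(&root, eye);                           /* draws room A (far), then room B (near) */
                      return 0;
                  }

                The precomputed tree is what makes the "which side is the eye on" test cheap enough to do every frame.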

                I'm not 100% sure what's most commonly used these days, but for a large open world requiring data streaming, I could see the use for something like an octree and even portals.

            • By Narishma 2025-05-2322:042 reply

              > The first 3d console games started to come out that year, like Rayman.

              Rayman was a 2D game.

              • By LarsDu88 2025-05-245:05

                I'm misremembering Rayman for the Sega Dreamcast. Looking at wikipedia, I now see that there weren't many games in 1995 even on the new consoles that really leveraged the 3d hardware. The PC actually had more such games despite many PCs lacking hardware acceleration for 3d rendering or even significant floating point capabilities. There's Sega Rally Racing for the Saturn, Virtua Fighter, Tekken...

                Perhaps 1996 is the real turning point (with Mario 64 coming out), which makes DOOM about three years ahead of its time. And of course id shipped Quake that year....

              • By ddingus 2025-05-243:50

                It is. I've seen others reference it as an early 3D title too.

                Great work by that team! Seriously. The graphics are great, and for the time period, excellent enough to be lumped in with the real 3d games.

        • By andrepd 2025-05-2320:41

          > So Wolfenstein and Doom were actually not really 3d games, but rather 2.5D games (you can't have rooms below other rooms). The first 3d game here is actually Quake

          Ultima Underworld is a true 3D game from 1992. An incredibly impressive game, in more ways than one.

        • By muziq 2025-05-2321:372 reply

          The world seems to have rewritten history, and forgotten Ultima Underworld, which shipped prior to Doom..

          • By zeroq 2025-05-2322:11

            A couple of "3D" games shipped before Doom. Battlezone comes to mind.

            The difference is that id owned the natural progression (from Wolf3D through Doom to Quake) and laid the foundation for what we today call the FPS genre.

          • By Narishma 2025-05-2322:061 reply

            I think that's because it had such high system requirements that very few people could run it, unlike Wolfenstein 3D and Doom.

            • By foldr 2025-05-249:472 reply

              It was just about playable on a 25MHz 386. We used to put up with frame rates that would make your eyes bleed back in the early 90s.

              https://www.youtube.com/watch?v=3VdRXgWoShM

              • By momocowcow 2025-05-2410:29

                Exactly, I played through Underworld on a 16MHz 386. Being an RPG, the lower frame rate was much more tolerable than in Doom, which was in fact unplayable.

              • By Narishma 2025-05-2421:36

                I'm sorry but that is not very playable even when it only renders to a quarter of the screen, especially compared to Wolfenstein 3D on similar hardware. It was also quite clunky in terms of interface. The guy in the video spends like 3 minutes trying to pick up a sack on the floor before just giving up.

        • By gjadi 2025-05-2319:00

          Hardware changes a lot in the time it takes to develop a game. When I read his plan files and interviews, I realized he seemed to spend a lot of time before developing the game thinking about what the next-gen hardware was going to bring. Then they'd design the best game they could think of while targeting this not-yet-available hardware.

        • By CamperBob2 2025-05-2318:10

          > If DOOM released in 1994 or 1995, would we still remember it in the same way?

          I think so, because the thing about DOOM is, it was an insanely good game. Yes, it pioneered fullscreen real-time perspective rendering on commodity hardware, instantly realigning the direction of much of the game industry, yadda yadda yadda, but at the end of the day it was a good-enough game for people to remember and respect even without considering the tech.

          Minecraft would be a similar example. Minecraft looked like total ass, and games with similar rendering technology could have been (and were) made years earlier, but Minecraft was also good. And that was enough.

        • By leoc 2025-05-2321:002 reply

          But also, he didn't do the technically hardest and most impressive part, Quake, on his own. IIUC he basically relied on Michael Abrash's help to get Quake done (in any reasonable amount of time).

          • By sturob 2025-05-2323:211 reply

            Realizing that he needed Abrash (and aggressively recruiting him) could easily be seen as the most impressive thing he did to make Quake happen

            • By CyberDildonics 2025-05-241:52

              I would say his multiple technical feats and phenomenal output are more impressive.

      • By Buttons840 2025-05-2318:462 reply

        > his main strength has always been in doing more with less

        Carmack builds his kingdom and then runs it well.

        It makes me wonder how he would fare as an unknown Jr. developer with managers telling him "that's a neat idea, but for now we just need you to implement these Figma designs".

        • By mrandish 2025-05-2320:00

          A key aspect of the Carmack approach (or similar 'smart hacker' unconventional career approach) is avoiding that situation in the first place. However, this also carries substantial career, financial and lifestyle risks & trade-offs - especially if you're not both talented enough and lucky enough to hit a sufficiently fertile opportunity in the right time window on the first few tries.

          Assuming one is willing to accept the risks and has the requisite high talent plus strong work drive, the Carmack-like career pattern is to devote great care to evaluating and selecting opportunities near the edges of newly emerging 'interesting things' which also: coincide with your interests/talents, are still at a point where a small team can plausibly generate meaningful traction, and have plausible potential to grow quickly and get big.

          Carmack was fortunate that his strong interest in graphics and games overlapped a time period when Moore's Law was enabling quite capable CPU, RAM and GFX hardware to hit consumer prices. But we shouldn't dismiss Carmack's success as "luck". That kind of luck is an ever-present uncontrolled variable which must be factored into your approach - not ignored. Since Carmack has since shown he can get very interested in a variety of things, I assume he filtered his strong interests to pick the one with the most near-term growth potential which also matched his skills. I suspect the most fortunate "luck" Carmack had wasn't picking game graphics in the early 90s; it was that (for whatever reasons) he wasn't already employed in a more typical "well-paying job with a big, stable company, great benefits and career growth potential" so he was free to find the opportunity in the first place.

          I had a similarly unconventional career path which, fortunately, turned out very well for me (although not quite at Carmack's scale :-)). The best luck I had actually looked like 'bad luck' to me and everyone else. Due to my inability to succeed in a traditional educational context (and other personal shortcomings), I didn't have a college degree or resume sufficient to get a "good job", so I had little choice but to take the high-risk road and figure out the unconventional approach as best I could - which involved teaching myself, then hiring myself (because no one else would) and then repeatedly failing my way through learning startup entrepreneurship until I got good at it. I think the reality is that few who succeed on the 'unconventional approach' consciously chose that path at the beginning over lower risk, more comfortable alternatives - we simply never had those alternatives to 'bravely' reject in pursuit of our dreams :-).

        • By zeroq 2025-05-2322:06

            > "makes me wonder how he would fare as an unknown Jr. developer with managers telling him (...)"
          
          he would probably write an open letter and leave Meta. /s

    • By johnb231 2025-05-238:414 reply

      From the notes:

      "A reality check for people that think full embodied AGI is right around the corner is to ask your dancing humanoid robot to pick up a joystick and learn how to play an obscure video game."

      • By ferguess_k 2025-05-2312:445 reply

        We don't really need AGI. We need better specialized AIs. Throw in a few specialized AIs and they will make an impact on society. That might not be that far away.

        • By nightski 2025-05-2315:003 reply

          Saying we don't "need" AGI is like saying we don't need electricity. Sure, life existed before we had that capability, but it would be very transformative. Of course we can make specialized tools in the meantime.

          • By hoosieree 2025-05-2317:211 reply

            The error in this argument is that electricity is real.

            • By mrandish 2025-05-2318:231 reply

              Indeed, and I'd go even further. In addition to existing, electricity is also usefully defined - which helps greatly in establishing its existence. Neither unicorns nor AGI currently exist but at least unicorns are well enough defined to establish whether an equine animal is or isn't one.

              • By treebeard901 2025-05-2516:01

                LLMs are being irresponsibly marketed as AGI to the general public and many do not seem to understand the difference. Clearly there are limits to how far current methods can be taken. They can only train on so much data and add more compute capacity before running into diminishing returns.

                AGI is really something else entirely. If this was clarified, especially for Wall Street, we would see some changes in valuations.

                At a certain point, it depends on how you define human reasoning capabilities. Maybe a collection of refined and specialized AI can get close. One counter argument could be that we are not far off from highly advanced robotics that have increasingly powerful reasoning abilities even without AGI.

                After the automobile was created, global horse population fell dramatically. Who knows if the wealthy classes will need all these people to do the work for them in the near future.

          • By esafak 2025-05-2322:44

            Furthermore, we will be faced with it whether we want to or not, because others are making it happen.

          • By charcircuit 2025-05-2315:192 reply

            Can you give an example how it would be transformative compared to specialized AI?

            • By Jensson 2025-05-2315:32

              AGI is transformative in that it lets us replace knowledge workers completely, specialized AI requires knowledge workers to train them for new tasks while AGI doesn't.

            • By fennecfoxy 2025-05-2316:169 reply

              Because it could very well exceed our capabilities beyond our wildest imaginations.

              Because we evolved to get where we are, humans have all sorts of messy behaviours that aren't really compatible with a utopian society. Theft, violence, crime, greed - it's all completely unnecessary and yet most of us can't bring ourselves to solve these problems. And plenty are happy to live apathetically while billionaires become trillionaires...for what exactly? There's a whole industry of hyper-luxury goods now, because they make so much money even regular luxury is too cheap.

              If we can produce AGI that exceeds the capabilities of our species, then my hope is that rather than the typical outcome of "they kill us all", that they will simply keep us in line. They will babysit us. They will force us all to get along, to ensure that we treat each other fairly.

              As a parent teaches children to share by forcing them to break the cookie in half, perhaps AI will do the same for us.

              • By hackinthebochs 2025-05-2320:271 reply

                Why on earth would you want an AI that takes away our autonomy? It's wild to see someone actually advocate for this outcome.

              • By davidivadavid 2025-05-2316:47

                Oh great, can't wait for our AI overlords to control us more! That's definitely compatible with a "utopian society"*.

                Funnily enough, I still think some of the most interesting semi-recent writing on utopia was done ~15 years ago by... Eliezer Yudkowsky. You might be interested in the article on "Amputation of Destiny."

                Link: https://www.lesswrong.com/posts/K4aGvLnHvYgX9pZHS/the-fun-th...

              • By fennecfoxy 2025-05-2910:44

                He he, too many comments to reply to. I see reactions in the same vein as those I get when I tell people it should be illegal to own a second/"holiday" home given the terrible housing markets preventing people from getting just one.

                Well, I guess we shall see what the future brings, won't we.

              • By andrekandre 2025-05-241:02

                  > They will force us all to get along, to ensure that we treat each other fairly.
                
                so, a dictatorship?

              • By latentsea 2025-05-240:25

                > humans have all sorts of messy behaviours that aren't really compatible with a utopian society. Theft, violence, crime, greed - it's all completely unnecessary and yet most of us can't bring ourselves to solve these problems.

                Why would you think or claim those behaviors are unnecessary?

              • By tirant 2025-05-2320:262 reply

                I still don’t see an issue of billionaires becoming trillionaires and being able to buy hyper luxury goods. Good for them and good for the people selling and manufacturing those goods. Meanwhile poverty is in all time lows and there’s a growing middle class at global level. Our middle class life conditions nowadays have a level of comfort that would get Kings from some centuries ago jealous.

                • By ddingus 2025-05-245:46

                  I do have a problem with that.

                  Basically, if we are going to allow single humans to have so much buying power, that needs to come with some clear expectations.

                  Risks must be taken. Pick something and advance society. Or, empower others in a reasonable, non discriminatory way, with no hooks.

                  There are a ton of things that need doing.

                  Why not require a real effort?

                  We do not need more dynasties, and other enduring signs of wealth.

                  We do need those people to have earned it. (That is the no dynasty part again)

                  I know it sounds harsh, but the fact is anyone can live very well for the rest of their lives on just a fraction of that money.

                  Might as well put it to work.

                  And if it were me personally? I got a list. Ready?

                • By atq2119 2025-05-240:00

                  If buying hyper luxury goods was the only thing that this wealth was good for, I'd agree.

                  But it's not: many of those billionaires are using their wealth to subvert democracy. They are quite literally enemies of the state.

                  The only lasting defense is to prevent this level of individual wealth accumulation.

              • By rurp 2025-05-2319:35

                Who on earth has the resources to create true AGI and is interested in using it to create this sort of utopia for the masses?

                If AGI is created it is most likely to be guided by someone like Altman or Musk, people whose interests couldn't be farther from what you describe. They want to make themselves gods and couldn't care less about random plebs.

                If AGI is setting its own principles then I fail to see why it would care about us at all. Maybe we'll be amusing as pets but I expect a superhuman intelligence will treat us like we treat ants.

              • By brulard 2025-05-2321:08

                Is this meant seriously? Do we really want something more intelligent than us to just force on us its rules, logic and ways of living (or dying), which we may be too stupid to understand?

        • By alickz 2025-05-2315:321 reply

          What if AGI is just a bunch of specialized AIs put together?

          It would seem our own generalized intelligence is an emergent property of many, _many_ specialized processes

          I wonder if AI is the same

          • By Jensson 2025-05-2315:592 reply

            > It would seem our own generalized intelligence is an emergent property of many, _many_ specialized processes

            You can say that about other animals, but for humans it is not so sure. No animal can be taught as general a set of skills as a human can; they might have some better specialized skills, but clearly there is something special that makes humans so much more versatile.

            So it seems there was this simple little thing humans got that makes them general, while for example our very close relatives the monkeys are not.

            • By fennecfoxy 2025-05-2316:091 reply

              Humans are the ceiling at the moment yes, but that doesn't mean the ceiling isn't higher.

              Science is full of theories that are correct per our current knowledge and then subsequently disproven when research/methods/etc improves.

              Humans aren't special, we are made from blood & bone, not magic. We will eventually build AGI if we keep at it. However unlike VCs with no real skills except having a lot of money™, I couldn't say whether this is gonna happen in 2 years or 2000.

              • By Jensson 2025-05-2316:271 reply

                The question was whether cobbling together enough special intelligence creates general intelligence. Monkeys have a lot of special intelligence that our current AI models can't come close to, but they still aren't seen as having general intelligence like humans, so there is some little bit humans have that isn't just another special intelligence.

                • By fennecfoxy 2025-05-2910:47

                  I guess the answer to that is that we don't know yet. It will be interesting to see if human intelligence/the way our brains work (which we certainly don't understand at all yet) is the only way to do intelligence or if models will eventually end up doing something novel.

            • By mike_ivanov 2025-05-2316:14

              It may be a property of (not only of?) humans that we can generate specialized inner processes. The hardcoded ones stay, the emergent ones come and go. Intelligence itself might be the ability to breed new specialized mental processes on demand.

        • By bluGill 2025-05-2312:494 reply

          Specialized AIs have been making an impact on society since at least the 1960s. AI has long suffered from the pattern that every time the field comes up with something new, it gets renamed and becomes important (where it makes sense) without AI getting the credit.

          From what I can tell most in AI are currently hoping LLMs reach that point quick just because the hype is not helping AI at all.

          • By Workaccount2 2025-05-2313:555 reply

            Yesterday my dad, in his late 70's, used Gemini with a video stream to program the thermostat. He then called me to tell me this, rather than calling me to come stop by and program the thermostat.

            You can call this hype, maybe it is all hype until LLMs can work on 10M LOC codebases, but recognize that LLMs are a shift that is totally incomparable to any previous AI advancement.

            • By lexandstuff 2025-05-2321:00

              That is amazing. But I had a similar experience when I first taught my mum how to Google for computer problems. She called me up with delight to tell me how she fixed the printer problem herself, thanks to a Google search. In a way, LLMs are a refinement on search technology we already had.

            • By orochimaaru 2025-05-2314:20

              That is what OpenAI's non-profit economic research arm has claimed. LLMs will fundamentally change how we interact with the world, like the Internet did. It will take time, like the Internet did, and a couple of hype-cycle pops, but it will change the way we do things.

              It will help a single human do more in a white collar world.

              https://arxiv.org/abs/2303.10130

            • By bluefirebrand 2025-05-2315:293 reply

              > He then called me to tell me this, rather than calling me to come stop by and program the thermostat.

              Sounds like AI robbed you of an opportunity to spend some time with your Dad, to me

              • By Workaccount2 2025-05-2321:11

                I'm there like twice a week, don't worry. He knows about Gemini because I was showing it to him two days before, hah

              • By TheGRS 2025-05-2319:03

                For some of us that's a plus!

              • By jabits 2025-05-2316:211 reply

                Or maybe instead of spending time with your dad on a bs menial task, you could spend time fishing with him…

                • By bluefirebrand 2025-05-2316:28

                  It's nice to think that but life and relationships are also composed of the little moments, which sometimes happen when someone asks you over to help with a "bs menial task"

                  It takes five minutes to program the thermostat, then you can have a beer on the patio if that's your speed and catch up for a bit

                  Life is little moments, not always the big commitments like taking a day to go fishing

                  That's the point of automating all of ourselves out of work, right? So we have more time to enjoy spending time with the people we love?

                  So isn't it kind of sad if we wind up automating those moments out of our lives instead?

            • By bluGill 2025-05-2315:16

              There are clearly a lot of useful things about LLMs. However there is a lot of hype as well. It will take time to separate the two.

            • By ferguess_k 2025-05-2316:05

              Yeah. As a mediocre programmer I'm really scared about this. I don't think we are very far from AI replacing the mediocre programmers. Maybe a decade, at most.

              I'd definitely like to improve my skills, but to be realistic, most of the programmers are not top-notch.

          • By BolexNOLA 2025-05-2313:01

            Yeah “AI” tools (such a loose term but largely applicable) have been involved in audio production for a very long time. They have actually made huge strides with noise removal/voice isolation, auto transcription/captioning, and “enhancement” in the last five years in particular.

            I hate Adobe, I don’t like to give them credit for anything. But their audio enhance tool is actual sorcery. Every competitor isn’t even close. You can take garbage zoom audio and make it sound like it was borderline recorded in a treated room/studio. I’ve been in production for almost 15 years and it would take me half a day or more of tweaking a voice track with multiple tools that cost me hundreds of dollars to get it 50% as good as what they accomplish in a minute with the click of a button.

          • By danielbln 2025-05-2313:352 reply

            Bitter lesson applies here as well though. Generalized models will beat specialized models given enough time and compute. How much bespoke NLP is there anymore? Generalized foundational models will subsume all of it eventually.

            • By johnecheck 2025-05-2313:591 reply

              You misunderstand the bitter lesson.

              It's not about specialized vs generalized models - it's about how models are trained. The chess engine that beat Kasparov is a specialized model (it only plays chess), yet it's the bitter lesson's example for the smarter way to do AI.

              Chess engines are better at chess than LLMs. It's not close. Perhaps eventually a superintelligence will surpass the engines, but that's far from assured.

              Specialized AI are hardly obsolete and may never be. This hypothetical superintelligence may even decide not to waste resources trying to surpass the chess AI and instead use it as a tool.

              • By CrimsonCape 2025-05-2322:48

                I think your point that AI would refuse to play chess is interesting. To humans, chess is a strategic game. To a mathematician, chess is an exceedingly hard game (pretty sure it is EXPTIME-complete, but I'm not fully familiar with NP/EXPTIME completeness). To an AI, it seems like the AI will side with the mathematicians. AI is like "bro you can't even figure out if P=NP so how am I going to, you want me to waste power to solve an unsolvable problem?"

                From Wikipedia, Garry Kasparov said it was a pleasure to watch AlphaZero play, especially since "its style was open and dynamic like his own".

                People can't define AI because they don't want to consider AI as a subset of exponentially difficult algorithms, but they do want to consider AI as a generator of stylistic responses.

            • By ses1984 2025-05-2313:44

              Generalized models might be better but they are rarely more efficient.

          • By ferguess_k 2025-05-2312:52

            Yeah, I agree with that. There is a lot of hype, but there is some potential there.

        • By Karrot_Kream 2025-05-2319:59

          I think to many AI enthusiasts, we're already at the "specialized AIs" phase. The question is whether those will jump to AGI. I'm personally unconvinced but I'm not an ML researcher so my opinion is colored by what I use and what I read, not active research. I do think though that many specialized AIs is already enough to experience massive economic disruption.

        • By babyent 2025-05-2316:121 reply

          Why not just hire like 100 of the smartest people across domains and give them SOTA AI, to keep the AI as accurate as possible?

          Each of those 100 can hire teams or colleagues to make their domain better, so there’s always human expertise keeping the model updated.

          • By trial3 2025-05-2316:171 reply

            "just"

            • By babyent 2025-05-2316:23

              They’re spending 10s of billions. Yes, just.

              200 million to have dedicated top experts on hand is reasonable.

      • By AndrewKemendo 2025-05-2314:004 reply

        This debate is exhausting because there's no coherent definition of AGI that people agree on.

        I made a google form question for collecting AGI definitions cause I don't see anyone else doing it and I find it infinitely frustrating the range of definitions for this concept:

        https://docs.google.com/forms/d/e/1FAIpQLScDF5_CMSjHZDDexHkc...

        My concern is that people never get focused enough to care to define it - seems like the most likely case.

        • By johnb231 2025-05-2318:231 reply

          The Wikipedia article on AGI explains it well enough.

          Researchers at Google have proposed a classification scheme with multiple levels of AGI. There are different opinions in the research community.

          https://arxiv.org/abs/2311.02462

          • By AndrewKemendo 2025-05-241:05

            My whole point was to demonstrate that despite years of definition suggestions the term has never found broad agreement

            Now that the term is in the general lexicon (which is crazy to me as an old guy doing this a long time) it’s morphing into something new

            Like any good scientist I want to sample the population

        • By latentsea 2025-05-240:01

          In a way it sort of doesn't matter. If all we ever wind up with is unbelievably fancy chatbots, marketing teams will eventually draw their line in the sand and call it AGI.

          There comes a theoretical point at which a definition is no longer relevant because it's obvious to everyone on an intuitive level. An easy lower bound for where this threshold might sit would be "when it can start and win wars unassisted under its own volition", since at that point no one on earth would have a need to debate it. It would simply be respected and understood for what it is without needing to define it.

          Until such an obvious threshold is crossed, it will be whatever executives, product managers, and marketing teams say it is.

        • By mvkel 2025-05-2314:102 reply

          It doesn't really seem like there's much utility in defining it. It's like defining "heaven."

          It's an ideal that some people believe in, and we're perpetually marching towards it

          • By theptip 2025-05-2314:371 reply

            No, it’s never going to be precise but it’s important to have a good rough definition.

            Can we just use Morris et al and move on with our lives?

            Position: Levels of AGI for Operationalizing Progress on the Path to AGI: https://arxiv.org/html/2311.02462v4

            There are generational policy and societal shifts that need to be addressed somewhere around true Competent AGI (50% of knowledge work tasks automatable). Just like climate change, we need a shared lexicon to refer to this continuum. You can argue for different values of X but the crucial point is if X% of knowledge work is automated within a decade, then there are obvious risks we need to think about.

            So much of the discourse is stuck at "we will never get to X=99" when we could agree to disagree on that and move on to considering the X=25 case. Or predict our timelines for X and then actually be held accountable for our falsifiable predictions, instead of the current vibe-based discussions.

            • By mvkel 2025-05-2414:581 reply

              This is a great reply, thank you.

              For me, I just zoom out a little further and say: at the rate AGI is approaching, what is the utility in trying to regulate it ahead of time?

              Seems like advancement is slow enough that society can/will naturally regulate it based on what feels comfortable.

              And it's a global phenomenon that can't have rules applied at the protocol level like the internet, because it's so culturally subjective.

              Precedents need to be set first, and I think we'll only be able to call them when we see them.

              • By theptip 2025-05-2420:061 reply

                It’s a good point. For epistemic hygiene I think it’s critical to actually have models of the growth rate and what is implied. Eg we are seeing exponential growth on many capability metrics (some with doubling-times of 7 months), but haven’t joined this up to economic growth numbers. In models where the growth continues you could imagine stuff getting crazy quickly, eg one year AI contributes 0.5% GDP only measurable in retrospect, next year 2%, year after 8%.

                Personally I don’t think politicians are capable of adapting fast enough to this extreme scenario. So they need to start thinking about it (and building and debating legislation) long before it’s truly needed.

                Of course if it turns out that we are living in one of the possible worlds where true economically meaningful capabilities are growing more slowly, or bottlenecks just happen to appear at this critical phase in the growth curve, then this line of preparation isn’t needed, but I’m more concerned about downside tail risk than the real but bounded costs of delaying progress by a couple years. (Though of course, we must ensure we don’t do to AI what we did to nuclear).

                Finally I’ll note in agreement with your point, that there are a whole class of solutions that are mostly incomprehensible or inconceivable to most people at this time (ie currently fully outside the Overton Window). Eg radical abundance -> UBI might just solve the potential inequities of the tech, and therefore make premature job protection legislation vastly harmful on net. I mostly say “just full send it” when it comes to these mundane harms, it’s the existential ones (including non-death “loss of control” scenarios) that I feel warrant some careful thought. For that reason while I see where you are coming from, I somewhat disagree on your conclusion; I think we can meaningfully start acting on this as a society now.

                • By mvkel 2025-05-257:02

                  This is great food for thought.

                  I like your idea of developing a new economic model as a proxy for possible futures; that at least can serve as a thinking platform.

                  Your comment inspired me to look at historical examples of this happening. Two trends emerged:

                  1. Rapid change always precedes policy. Couldn't find any examples of the reverse. That doesn't discount what you're saying at all, it reiterates that we probably need to be as vigilant and proactive as possible.

                  and related:

                  2. Things that seem impossible become normative. Electricity. The Industrial Revolution. Massive change turns into furniture. We adapt quickly as individuals even if societies collectively struggle to keep up. There will be many people that get caught in the margins, though.

                  Consider me fully convinced!

        • By bigyabai 2025-05-2314:163 reply

          It is a marketing term. That's it. Trying to exhaustively define what AGI is or could be is like trying to explain what a Happy Meal is. At its core, the Happy Meal was not invented to revolutionize food eating. It puts an attractive label on some mediocre food, a title that exists for the purpose of advertisement.

          There is no point collecting definitions for AGI; it was not conceived as a description of something novel or provably existent. It is "Happy Meal marketing" but aimed at adults.

          • By AndrewKemendo 2025-05-2317:25

            That’s historically inaccurate

            My masters thesis advisor Ben Goertzel popularized the term and has been hosting the AGI conference since 2008:

            https://agi-conference.org/

            https://goertzel.org/agiri06/%5B1%5D%20Introduction_Nov15_PW...

            I had lunch with Yoshua Bengio at AGI 2014 and it was most of the conversation that day

          • By HarHarVeryFunny 2025-05-2317:351 reply

            The name AGI (i.e. generalist AI) was originally intended to contrast with narrow AI which is only capable of one, or a few, specific narrow skills. A narrow AI might be able to play chess, or distinguish 20 breeds of dog, but wouldn't be able to play tic tac toe because it wasn't built for that. AGI would be able to learn to do anything, within reason.

            The term AGI is obviously used very loosely with little agreement on its precise definition, but I think a lot of people take it to mean not only generality, but specifically human-level generality, and human-level ability to learn from experience and solve problems.

            A large part of the problem with AGI being poorly defined is that intelligence itself is poorly defined. Even if we choose to define AGI as meaning human-level intelligence, what does THAT mean? I think there is a simple reductionist definition of intelligence (as the word is used to refer to human/animal intelligence), but ultimately the meaning of words are derived from their usage, and the word "intelligence" is used in 100 different ways ...

            • By mrandish 2025-05-2322:141 reply

              > intended to contrast with narrow AI

              I've thought for a while that the middle letter in AGI ('General' vs 'Specific') would be more useful and helpful if it were changed to Wide vs Narrow. All AIs can be evaluated on a scale of narrow to wide in terms of their abilities and I don't think that will change anytime soon.

              Everyone understands that something is only wide or narrow in comparison to something else. While that's also true of the terms 'general' and 'specific', those are less used that way in daily conversation these days. In science and tech we make distinctions about generalized vs specific, but 'general' isn't a conversational term like it was 50 or 100 years ago. When I was a kid my grandparents would call the local supermarket the 'general store', which I thought was an unusual usage even then.

              • By HarHarVeryFunny 2025-05-2322:39

                I guess "general store" made more sense back then though. I grew up in the UK in the 60's and food shops were "narrow" - fishmonger, butcher, greengrocer (fruit & veg), bakery, etc. From that perspective a "general store" would have been noteworthy!

          • By johnb231 2025-05-2318:56

            Generalization is a formal concept in machine learning and is measurable.

      • By vonneumannstan 2025-05-2314:341 reply

        Is this supposed to be a gotcha? We know these systems are typically trained using RL and they are exceedingly good at learning games...

        • By johnb231 2025-05-2318:18

          No it is not a “gotcha” and I don’t understand how you got that impression.

          Carmack believes AGI systems should be able to learn new tasks in realtime alongside humans in the real world.

      • By throw_nbvc1234 2025-05-2310:365 reply

        This sounds like a problem that could be solved around the corner with a caveat.

        Games generally are solvable for AI because they have feedback loops and clear success or failure criteria. If the "picking up a joystick" part is the limiting factor, sure. But why would we want robots to use an interface (especially a modern controller) heavily optimized for human hands? That seems like the definition of a horseless carriage.

        I'm sure if you compared a monkey's and a dolphin's performance using a joystick you'd get results that aren't really correlated with their intelligence. I would guess that if you gave robots an R2D2-like port to jack into and play a game, that problem could be solved relatively quickly.

        • By xnickb 2025-05-2310:522 reply

          Just like OpenAI early on promised us an AGI and showed us how it "solved" Dota 2.

          They also claimed it "learned" to play by only playing against itself; however, it was clear that most of the advanced techniques were borrowed from existing AI and from observing humans.

          No surprise they gave up on that project completely and I doubt they'll ever engage in anything like that again.

          Money better spent on different marketing platforms.

          • By jsheard 2025-05-2311:343 reply

            It also wasn't even remotely close to learning Dota 2 proper. They ran a massively simplified version of the game where the AI and humans alternated between playing one of two pre-defined team compositions, meaning >90% of the games characters and >99.999999% of the possible compositions and matchups weren't even on the table, plus other standard mechanics were also changed or disabled altogether for the sake of the AI team.

            Saying you've solved Dota after stripping out nearly all of its complexity is like saying you've solved Chess, but on a version where the back row is all Bishops.

            • By xnickb 2025-05-2312:003 reply

              Exactly. What I find surprising in this story though is not OpenAI. It's investors not seeing through these blatant... let's call them exaggerations of reality, and still trusting the company with their money. I know I wouldn't have. But then again, maybe that's why I'm poor.

              • By ryandrake 2025-05-2312:36

                In their hearts, startup investors are like Agent Mulder: they Want To Believe. Especially after they’ve already invested a little. They are willing to overlook obvious exaggerations up to and including fraud, because the alternative is admitting their judgment is not sound.

                Look at how long Theranos went on! Miraculous product. Attractive young founder with all the right pedigree, credentials, and contacts, dressed in black turtlenecks. Hell, she even talked like Steve Jobs! Investors never had a chance.

              • By jdross 2025-05-2312:482 reply

                They already have 400 million daily users and a billion people using the product, with billions of consumer subscription revenue, faster than any company ever. They are also aggregating R&D talent at a density never before seen in Silicon Valley

                That is what investors see. You seem to treat this as a purity contest where you define purity

                • By zaphar 2025-05-2313:191 reply

                  Also apparently still not making a profit.

                • By xnickb 2025-05-2312:55

                  I'm speaking about past events. Perhaps I didn't make it clear enough

            • By rowanG077 2025-05-2314:042 reply

              I agree that restricting the hero pool is a huge simplification. But they did play full 5v5 standard dota with just a restricted hero pool of 17 heroes and no illusions/control units according to theverge (https://www.theverge.com/2019/4/13/18309459/openai-five-dota...). It destroyed the professionals.

              As an ex dota player, I don't think this is that far off from having full-on, all-heroes dota. Certainly not as far off as you are making it sound.

              And dota is one of the most complex games, I expect for example that an AI would instantly solve CS since aim is such a large part of the game.

              • By mistercheph 2025-05-2314:231 reply

                Another issue with the approach is that the model had direct access to game data, which is simply an unfair competitive advantage in dota, and it is obvious why that advantage would be unfair in CS.

                It is certainly possible, but i won't be impressed by anything "playing CS" that isn't running a vision model on a display and moving a mouse, because that is the game. The game is not abstractly reacting to enemy positions and relocating the cursor, it's looking at a screen, seeing where the baddy is and then using this interface (the mouse) to get the cursor there as quickly as possible.

                It would be like letting an AI plot its position on the field and what action its taking during a football match and then saying "Look, The AI would have scored dozens of times in this simulation, it is the greatest soccer player in the world!" No, sorry, the game actually requires you to locomote, abstractly describing your position may be fun but it's not the game

                • By rowanG077 2025-05-2315:162 reply

                  Did you read the paper? It had access to the Dota 2 bot API, which is some of the game state but very far from all of it. It also had an artificially limited reaction time of something like 220ms, worse than professional gamers.

                  But then again, that is precisely the point. A chess bot also has access to gigabytes of perfect working memory. I don't see people complaining about that. It's perfectly valid to judge the best an AI can do vs the best a human can do. It's not really fair to take away exactly what a computer is good at from an AI and then say: "Look but the AI is now worse". Else you would also have to do it the other way around. How well could a human play dota if it only had access to the bot API. I don't think they would do well at all.

                  • By mistercheph 2025-05-2518:02

                    It's fine if the computer has access to gigabytes of working memory, it can use all of the "natural" advantages that it has to play the game, that's perfectly fair, but there is no comparison to make when you give models bespoke machine interfaces to play games whose core mechanics revolve around perception and physical coordination, it may be impressive, but they are playing a different game, something akin to HvH Counter Strike.

                    And you can try to play some game where you create disadvantages to try to balance out all of the advantages of the machine interface, but again, hard to reason about the edge cases, and easy to create a misleading headline like "artificially limited reaction time worse than professional gamers" while in practice being able to react to information much more quickly than a human player because of its exclusive interface to game state. All of that is fair and well, and doesn't take anything away from the very cool achievements of google et al., but when you change the core mechanics of the game to accommodate a uniquely challenged player, you're playing a different game! Chess is ~mostly not about physically moving the pieces on the board, but Counter Strike is about little more than that! (And dota is somewhere in between.)

                  • By lukeschlather 2025-05-2320:021 reply

                    > But then again, that is precisely the point. A chess bot also has access to gigabytes of perfect working memory. I don't see people complaining about that.

                    There are ~86 billion neurons in the human brain. If we assume each neuron stores a single bit a human also has access to gigabytes of working memory. If we assume each synapse is a bit that's terabytes. Petabytes is not unreasonable assuming 1kb of storage per synapse. (And more than 1kb is also not unreasonable.)
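
                    Spelling out that back-of-the-envelope math (the neuron/synapse counts are just the commonly cited ballpark figures):

                      #include <stdio.h>

                      int main(void) {
                          double neurons  = 86e9;  /* ~86 billion neurons */
                          double synapses = 1e14;  /* ~100 trillion synapses, order of magnitude */

                          printf("1 bit per neuron : %5.1f GB\n", neurons  / 8 / 1e9);    /* ~10.8 GB */
                          printf("1 bit per synapse: %5.1f TB\n", synapses / 8 / 1e12);   /* ~12.5 TB */
                          printf("1 KB per synapse : %5.0f PB\n", synapses * 1e3 / 1e15); /* ~100 PB  */
                          return 0;
                      }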

                    The whole point of the exercise is figuring out how much memory compares to a human brain.

                    • By rowanG077 2025-05-241:482 reply

                      No human can, or would, flush their entire brain to use every single neuron as working memory for chess. By doing that you would even forget the rules of chess. At best a tiny subset of neurons could be used for that.

                      I wouldn't have expected anyone to even attempt to argue that a human can beat, or even approach, a computer on working memory. Wikipedia is just 24.05 GB. You are somehow claiming here that a human can hold that in working memory. That is, they read it once and have perfect recall. Not even the most extreme savants have shown such feats.

                      • By lukeschlather 2025-05-2522:20

                        > No human can, or would, flush their entire brain to use every single neuron as working memory for chess.

                        We don't know what this means. Each neuron connects to thousands of synapses. I would assume that there is some quantity of information encoded in each pairwise connection of synapse-paths through a neuron. I would assume this is more than a bit, and also that it has something that might be described as a lossy fractal compression with each pathway adding or subtracting from whatever structures store information so that each path can use the same physical things - though not with perfect fidelity.

                        But the nuts and bolts are somewhat beside the point. The point is that if you look at Leela zero it only needs like 3GB of RAM to run and we have no evidence this is more memory than human grandmasters use to play chess. Yes, humans have imperfect recall but that's not relevant because neural net based chess engines do not work based on perfect recall.

                      • By mistercheph 2025-05-2518:05

                        It depends on what you mean by "memory". Pure data recall? Sure, a computer has humans beat, but that's not really the purpose of human memory. I can freely reason about mathematical theorems I learned decades ago, and there are many mistakes that I've only made once, and will never make again.

              • By Jensson 2025-05-2315:121 reply

                > It destroyed the professionals.

                Only the first time; later, when it played better players, it always lost. Players learned the faults of the AI after some time in game, and the AI had a very bad late game, so they always won later.

                • By rowanG077 2025-05-2315:14

                  Not on the last iteration.

            • By scotty79 2025-05-2312:012 reply

              It was 6 years ago. I'm sure there'd be no contest now if OpenAI dedicated resources to it, which it won't because it's busy with solving the entirety of human language before others eat their lunch.

              • By spektral23 2025-05-2312:40

                Funnily enough, even dota2 has grown much more complex than it was 6 years ago, so it's a harder problem to solve today than it was back then

              • By xnickb 2025-05-2312:051 reply

                What do you base your certainty on? Have there been any significant breakthroughs toward AGI?

                • By scotty79 2025-05-2312:181 reply

                  ARC-AGI, while imagined as super hard for AI, was beaten enough that they had to come up with ARC-AGI-2.

                  • By hbsbsbsndk 2025-05-2312:451 reply

                    "AI tend to be brittle and optimized for specific tasks, so we made a new specific task and then someone optimized for it" isn't some kind of gotcha. Once ARC puzzles became a benchmark they ceased to be meaningful WRT "AGI".

                    • By scotty79 2025-05-2317:42

                      So if DOTA became a benchmark the same way chess or Go did earlier, it would be promptly beaten. It just didn't stick before people moved on to more useful "games".

          • By fennecfoxy 2025-05-2316:23

            To be fair humans have had quite a few million years across a growing population to gather all of the knowledge that we have.

            As we're learning with LLMs, the dataset is what matters - and what's awesome is that you can see that in us, as well! I've read that our evolution is comparatively slow to the rate of knowledge accumulation in the information age - and that what this means is that you can essentially take a caveman, raise them in our modern environment and they'll be just as intelligent as the average human today.

            But the core of our intelligence is logic/problem solving. We just have to solve higher-order problems today, like figuring out how to make that chart in Excel do the thing you want, whereas in days past it was figuring out how to keep the fire lit when it's raining. When you look at it, we've possessed the very core of that problem-solving ability for quite a while now. I think that is the key to why we are human, while our close relatives, the monkeys, are...still just monkeys.

            It's that problem solving ability that we need to figure out how to produce within ML models, then we'll be cooking with gas!

        • By mellosouls 2025-05-2310:43

          The point isn't about learning video games, it's about learning tasks unrelated to its specific competency generally.

        • By jappgar 2025-05-2311:37

          A human would learn it faster, and could immediately teach other humans.

          AI clearly isn't at human level and it's OK to admit it.

        • By jandrese 2025-05-2315:22

          > But why would we want robots to use an interface (especially a modern controller) heavily optimized for human hands; that seems like the definition of a horseless carriage.

          Elon's response to this is that if we want these androids to replace human jobs then the lowest-friction alternative is for the android to be able to do anything a human can do in a human amount of space. A specialized machine is faster and more efficient, but comes with engineering and integration costs that create a barrier to entry. Elon learned this lesson the hard way when he was building out the gigafactories and ended up having to hire a lot of people to do the work while they sorted out the issues with the robots. To someone like Elon a payroll is an ever-growing parasite on a company's bottom line; far better if the entire thing is automated.

        • By johnb231 2025-05-2310:50

          No, the joystick part is really not the limiting factor. They’ve already done this with a direct software interface. Physical interface is a new challenge. But overall you are missing the point.

    • By suddenlybananas 2025-05-238:263 reply

      It's because humans (and other animals) have enormous innate capacities and knowledge, which make learning new things much, much simpler than starting from scratch. It's not really because of humans' computational capacity.

      • By xnx 2025-05-2313:221 reply

        > enormous innate capacities and knowledge

        Hundreds of millions of years of trial-and-error biological pre-training where survival/propagation is the reward function

      • By MrScruff 2025-05-239:451 reply

        By innate do you mean evolved/instinctive? Surely even evolved behaviour must be expressed as brain function, and therefore would need a brain capable of handling that level of processing.

        I don't think it's clear how much of a human brain's function exists at birth though; I know it's theorised that even much of the sensory processing has to be learned.

        • By suddenlybananas 2025-05-239:551 reply

          I'm not arguing against computational theory of mind, I'm just saying that innate behaviours don't require the same level of scale as learnt ones.

          Existing at birth is not the same thing as innate. Puberty is innate but it is not present at birth.

          • By MrScruff 2025-05-2310:101 reply

            That's an interesting point. I can see that, as you say, puberty and hormones impact brain function and hence behaviour, and those are innate and not learned. But at least superficially those would appear to be primarily broad behavioural effects, similar to what might be induced by medication, rather than something that impacts pure abstract problem solving, which I guess is what the Atari games are supposed to represent?

            • By rafaelmn 2025-05-2310:24

              This is obviously wrong: genetic defects cause predictable developmental problems in specialized areas. Those are innate but not present at birth.

      • By Nopoint2 2025-05-2314:222 reply

        There is just no reason to believe that we are born with some insanely big library of knowledge, and it sounds completely impossible. How would it be stored, and how would we even evolve it?

        It just isn't needed. Just as you can find, say, kangaroos in the latent space of an image generator, we learn abstract concepts and principles of how things work as a byproduct of learning to process the senses.

        Maybe a way to AGI could be figuring out how to combine a video generator with an LLM or something similar, in a way that allows it to understand things intuitively, instead of just doing lots and lots of statistical bullshit.

        • By Jensson 2025-05-2315:182 reply

          > There is just no reason to believe that we are born with some insanely big library of knowledge, and it sounds completely impossible. How would it be stored, and how would we even evolve it?

          We do have that. Ever felt fear of heights? That isn't learned, we are born with it. Same with fear of small moving objects like spiders or snakes.

          Such things are learned/stored very differently from memories, but it's certainly there, and we can see that animals also have them. Like how cats get very scared of objects that are long and appear suddenly, like a cucumber, since their genetic instincts think it's a snake.

          • By throwup238 2025-05-2316:16

            > Like cats gets very scared of objects that are long and appear suddenly, like a cucumber, since their genetic instincts thinks its a snake.

            After having raised four dozen kittens that a couple of feral sisters gave birth to in my garage, I’m certain that is nonsense. It’s an internet meme that became urban legend.

            I don’t think they have ever even reacted to a cucumber, and I have run many experiments because my childhood cat loved cucumbers (we’d have to guard the basket of cucumbers after harvest, otherwise she’d bite every single one of them… just once).

          • By Nopoint2 2025-05-2315:352 reply

            Of course it is learned, and fear is triggered by anything unfamiliar that causes a high reconstruction error, because it means you don't understand it and it could be dangerous. We are just not used to encoding anything so deep below eye level, and it freaks us out.

            • By suddenlybananas 2025-05-248:00

              Yeah that's why people are terrified by white noise.

            • By Jensson 2025-05-2315:51

              Do you really think every single ant is learning all that on its own? And if ants can store that in their DNA, why don't you think other animals can? DNA works just fine as generic information storage, there are obviously a ton of behaviors and information encoded there from hundreds of millions of years of survival of the fittest.

        • By suddenlybananas 2025-05-248:02

          >How would it be stored, and how would we even evolve it?

          DNA and the same way anything else is evolved? The body is insanely complicated, I don't see why innate knowledge is so unbelievable.

    • By nlitened 2025-05-239:092 reply

      > the human brain, of which we don't have a clear understanding of the compute capacity

      Neurons have finite (very low) speed of signal transfer, so just by measuring cognitive reaction time we can deduce upper bounds on how many _consecutive_ neuron connections are involved in reception, cognitive processing, and resulting reaction via muscles, even for very complex cognitive processes. And the number is just around 100 consecutive neurons involved one after another. So “the algorithm” could not be _that_ complex in the end (100x matmul+tanh?)

      Granted, a lot of parallelism and feedback loops are involved, but overall it gives me (and many others) the impression that when the AGI algorithm is ever found, its “mini” version should be able to run on modest 2025 hardware in real time.
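
      A back-of-envelope version of that serial-depth argument (the reaction time and per-neuron delay below are rough assumed values in the spirit of the classic "100-step rule", not measurements):

        reaction_time_s = 0.5       # assume ~500 ms for a simple cognitive act
        per_neuron_delay_s = 0.005  # assume ~5 ms per neuron (integration + synaptic delay)

        serial_steps = reaction_time_s / per_neuron_delay_s
        print(serial_steps)         # ~100 consecutive neuron-to-neuron hops, at most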

      • By johnb231 2025-05-239:352 reply

        > (100x matmul+tanh?)

        Biological neurons are way more complex than that. A single neuron has dendritic trees with subunits doing their own local computations. There are temporal dynamics in the firing sequences. There is so much more complexity in the biological networks. It's not comparable.

        • By woolion 2025-05-2311:071 reply

          You could implement a Turing machine with humans physically acting as logic gates. Then, every human is just a boolean function.

          • By Jensson 2025-05-2315:19

            Neurons are stateful though, it is core to their function and how they learn.

        • By neffy 2025-05-2311:16

          This is exactly it. Biology is making massive use of hacked real time local network communication in ways we haven't begun to explore.

      • By scajanus 2025-05-239:55

        The "granted" is doing a lot of work there. In fact, if you imagine a computer being able to do tasks similar to what the human brain can in around 100 steps, it becomes clear that considering parallelism is absolutely critical.

  • By qoez 2025-05-2310:3810 reply

    Interesting reply from an openai insider: https://x.com/unixpickle/status/1925795730150527191

    • By epr 2025-05-2311:346 reply

      Actually no, it's not interesting at all. Vague dismissal of an outsider is a pretty standard response by insecure academic types. It could have been interesting and/or helpful to the conversation if they went into specifics or explained anything at all. Since none of that's provided, it's "OpenAI insider" vs John Carmack AND Richard Sutton. I know who I would bet on.

      • By handsclean 2025-05-2315:381 reply

        It seems that you’ve only read the first part of the message. X sometimes aggressively truncates content with no indication it’s done so. I’m not sure this is complete, but I’ve recovered this much:

        > I read through these slides and felt like I was transported back to 2018.

        > Having been in this spot years ago, thinking about what John & team are thinking about, I can't help but feel like they will learn the same lesson I did the hard way.

        > The lesson: on a fundamental level, solutions to these games are low-dimensional. No matter how hard you hit them with from-scratch training, tiny models will work about as well as big ones. Why? Because there's just not that many bits to learn.

        > If there's not that many bits to learn, then researcher input becomes non-negligible.

        > "I found a trick that makes score go up!" -- yeah, you just hard-coded 100+ bits of information; a winning solution is probably only like 1000 bits. You see progress, but it's not the AI's.

        > In this simplified RL setting, you don't see anything close to general intelligence. The neural networks aren't even that important.

        > You won't see _real_ learning until you absorb a ton of bits into the model. The only way I really know to do this is with generative modeling.

        > A classic example: why is frame stacking just as good as RNNs? John mentioned this in his slides. Shouldn't a better, more general architecture work better?

        > YES, it should! But it doesn't, because these environments don't heavily encourage real intelligence.
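
        (For context on that last point: "frame stacking" is the standard DQN-era trick of concatenating the last few frames into one observation so the network can infer motion, instead of giving it recurrent memory. A minimal, library-agnostic sketch of the idea:)

          import numpy as np
          from collections import deque

          class FrameStack:
              """Keep the last k frames and return them stacked as one observation."""
              def __init__(self, k: int = 4):
                  self.k = k
                  self.frames = deque(maxlen=k)

              def reset(self, first_frame: np.ndarray) -> np.ndarray:
                  self.frames.clear()
                  for _ in range(self.k):               # pad the stack with the first frame
                      self.frames.append(first_frame)
                  return np.stack(self.frames, axis=0)  # shape: (k, H, W)

              def step(self, frame: np.ndarray) -> np.ndarray:
                  self.frames.append(frame)             # oldest frame drops out automatically
                  return np.stack(self.frames, axis=0)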

        • By leoc 2025-05-2723:22

          I'm not sure what the moral is from this, but if Atari games are just too easy, at the same time the response of the machine-learning guys to the challenge of the NetHack Learning Environment seems to have mostly been to quietly give up. Why is generative modeling essential to finding harder challenges when NetHack is right there ...?

      • By lairv 2025-05-2313:58

        Alex Nichol worked on "Gotta Learn Fast" in 2018 which Carmack mentions in his talk, he also worked on foundational deep learning methods like CLIP, DDPM, GLIDE, etc. Reducing him to a "seething openai insider" seems a bit unfair

      • By ActivePattern 2025-05-2313:072 reply

        It's an OpenAI researcher who's worked on some of their most successful projects, and I think the criticism in his X thread is very clear.

        Systems that can learn to play Atari efficiently are exploiting the fact that the solutions to each game are simple to encode (compared to real world problems). Furthermore, you can nudge them towards those solutions using tricks that don't generalize to the real world.

        • By 6stringmerc 2025-05-2323:00

          Right, and the current state of the tech - from accounts I've read, though not first-hand experience - is that the “black box” methods of AI are absolutely questionable when it comes to delivering citations and a factual basis for their conclusions. As in, the most real-world challenge, in the basic sense, of getting facts right is still a bridge too far for OpenAI, ChatGPT, Grok, et al.

          See also: specious ethics regarding the training of LLMs on copyright protected artistic works, not paying anything to the creators, and pocketing investor money while trying to legislate their way around decency in engineering as a science.

          Carmack has a solid track record as an engineer, innovator, and above the board actor in the tech community. I cannot say the same for the AI cohort and I believe such a distinction is important when gauging the validity of critique or self-aggrandizement by the latter, especially at the expense of the former. I am an outlier in this community because of this perspective, but as a creator and knowledgeable enough about tech to see things through this lens, I am fine being in this position. 10 years from now will be a great time to look back on AI the way we’re looking back at Carmack’s game changing contributions 30 years ago.

        • By dgb23 2025-05-2315:21

          That sounds like an extremely useful insight that makes this kind of research even more valuable.

      • By kadushka 2025-05-2312:59

        He did go into specifics and explained his point. Or have you only read his first post?

      • By quadrature 2025-05-2314:491 reply

        Do you have an X account? If you're not logged in you'll only see the first post in the thread.

        • By threatripper 2025-05-2315:141 reply

          x.com/... -> xcancel.com/...

          • By ewoodrich 2025-05-2319:48

            I use a Chrome extension to auto replace the string in the URL, works very well.

      • By MattRix 2025-05-2314:29

        It’s not vague, did you only see the first tweet or the entire thread?

    • By jjulius 2025-05-2312:441 reply

      I appreciate how they don't tell us what lesson they learned.

      • By dcre 2025-05-2313:231 reply

        It is a thread. You may have only seen the first tweet because Twitter is a user-hostile trash fire.

        “The lesson: on a fundamental level, solutions to these games are low-dimensional. No matter how hard you hit them with from-scratch training, tiny models will work about as well as big ones. Why? Because there's just not that many bits to learn.”

        https://unrollnow.com/status/1925795730150527191

        • By jjulius 2025-05-2314:041 reply

          Thank you for clarifying. I don't have a Twitter account, and the linked tweet genuinely looks like a standalone object. Mea culpa.

          • By dcre 2025-05-2319:48

            Not your fault. They are the worst.

    • By lancekey 2025-05-2318:031 reply

      I think some replies here are reading the full twitter thread, while others (not logged in?) see only the first tweet. The first tweet alone does come off as a dismissal with no insight.

    • By alexey-salmin 2025-05-2315:27

      Each of these games is low-dimensional and requires not "intelligence" but something more like "reflexes"; I tend to agree.

      However, making a system that can beat an unknown game does require generalization. If not real intelligence (whatever that means), then at least at the level of, say, "a wolf".

      Whether it can arise from RL alone is not certain, but it's there somewhere.

    • By andy_ppp 2025-05-2310:424 reply

      My bet is on Carmack.

      • By WithinReason 2025-05-2313:594 reply

        "Graphics Carmack" is a genius but that doesn't mean that "AI Carmack" is too.

        • By MrLeap 2025-05-2314:083 reply

          I wouldn't bet against him. "The Bitter Lesson" may imply an advantage to someone who historically has been at the tip of the spear for squeezing the most juice out of GPU hosted parallel computation.

          Graphics rendering and AI live on the same pyramid of technology. A pyramid with a lot of bricks with the initials "JC" carved into them, as it turns out.

          • By mhh__ 2025-05-2318:27

            I would be long carmack in the sense that I think he will have good judgement and taste running a business but I really don't see anything in common between AI and graphics.

            Maybe someone better at aphorisms than me can say it better but I really don't see it. There are definitely mid-level low hanging fruits that would look like the kinds of things he did in graphics but the game just seems completely different.

          • By KerrAvon 2025-05-2320:37

            I think people would do well to read about Philo Farnsworth in this context.

          • By kadushka 2025-05-2315:09

            Only if computation is the bottleneck. GPT-4.5 shows it’s not.

        • By cheschire 2025-05-2316:29

          Carmack is always a genius, but like most people he requires luck, and like most people, the house always wins. Poor Armadillo Aerospace.

        • By mrguyorama 2025-05-2320:452 reply

          What has "Graphics Carmack" actually done since about 2001?

          So, his initial tech was "Adaptive tile refresh" in Commander Keen, used to give it console style pixel-level scrolling. Turns out, they actually hampered themselves in Commander Keen 1 by not understanding the actual tech, and implemented "The Jolt", a feature that was not necessary. The actual hardware implemented scrolling the same way that consoles like the NES did, and did not need "the jolt", nor the limitations it imposed.

          Then, Doom and Quake were mostly him writing really good optimizations of existing, known and documented algorithms and 3D techniques, usually by recognizing what assumptions they could make, what portions of the algorithm didn't need to be recalculated when, etc. Very talented at the time, but in the software development industry, making a good implementation of existing algorithms that exploits your specific requirements is called doing your job. This is still the height of his relative technical output IMO.

          Fast Inverse Square Root was not invented by him, but was floating around in industry for a while. He still gets kudos for knowing about it and using it.
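
          (For anyone who hasn't seen the trick: the original is a few lines of C operating on 32-bit floats; this is just an illustrative Python transcription of the well-known routine, not anything taken from Carmack's code.)

            import struct

            def fast_inv_sqrt(x: float) -> float:
                i = struct.unpack('<I', struct.pack('<f', x))[0]  # reinterpret float bits as int
                i = 0x5f3759df - (i >> 1)                         # the famous magic-constant bit hack
                y = struct.unpack('<f', struct.pack('<I', i))[0]  # back to float
                return y * (1.5 - 0.5 * x * y * y)                # one Newton-Raphson refinement step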

          "Carmack's reverse" is a technique for doing stencil shadows that was a minor (but extremely clever) modification to the "standard" documented way of doing shadow buffers. There is evidence of the actual technique from a decade before Carmack put it in Doom 3 and it was outright patented by two different people the year before. There is no evidence that Carmack "stole" or anything this technique, it was independent discovery, but was clearly also just a topic in the industry at the time.

          "Megatextures" from Rage didn't really go anywhere.

          Did Carmack actually contribute anything to VR rendering while at Oculus?

          People treat him like this programming god and I just don't understand. He was well read, had a good (maybe too good) work ethic, and was very talented at writing 386 era assembly code. These are all laudable, but doesn't in my mind imply that he's some sort of 10X programmer who could revolutionize random industries that he isn't familiar with. 3D graphics math isn't exactly difficult.

        • By dumdedum123 2025-05-2319:431 reply

          Exactly. I know him and like him. He is a genius programmer for sure BUT people forget that the last successful product that he released was Doom 3 over 20 years ago. Armadillo was a failure and Oculus went nowhere.

          He's also admitted he doesn't have much in the way of math chops, which you need if you want to make a dent in AI. (Although the same could have been said of 3D graphics when he did Wolfenstein and Doom, so perhaps he'll surprise us.)

          I wish him well TBH

          • By johnb231 2025-05-242:46

            Rage was released in 2011. His work at Meta produced highly optimized standalone VR. Whether you think it's successful or not, the tracking accuracy and latency is extremely competitive.

      • By ramesh31 2025-05-2313:392 reply

        What has he shipped in the last 20 years? Oculus is one thing, but that was firmly within his wheelhouse of graphics optimization. Abrash and co. handled the hardware side of things.

        Carmack is a genius no doubt. But genius is the result of intense focused practice above and beyond anyone else in a particular area. Trying to extend that to other domains has been the downfall of so many others like him.

        • By alexey-salmin 2025-05-2319:171 reply

          Ever since Romero departed, id Software has shipped *checks notes* Quake II, Quake III, Doom 3 and Quake 4.

          Funnily enough Romero himself didn't ship much either. IMO it's one of the most iconic "duo breakups". The whole is greater than the sum of the parts.

          • By johnb231 2025-05-2320:031 reply

            Rage was Carmack's last big game at id Software before leaving.

            Romero is credited on 27 games since he left id Software.

            https://en.wikipedia.org/wiki/John_Romero#Games

            • By alexey-salmin 2025-05-2410:401 reply

              None of them came close to the success of Quake, Doom or Commander Keen.

              If you examine the list, it includes games like "Gunman Taco Truck" by his 12-year-old son, SIGIL I/II (Doom mods) and a remake of Dangerous Dave. Most of the money he made post-id came from Facebook farming games.

              I'm not saying he's doing nothing. He's extremely talented and achieved more than most of us could ever dream of. I'm just pointing out that after he departed from id neither id nor him managed to replicate the earlier success. Who knows, maybe times had changed and it would be the same even if he stayed.

              • By johnb231 2025-05-2419:53

                Their success with Doom and Quake was a confluence of things that cannot be replicated today. Carmack's programming talent gave them at least a year head start versus the competition. They introduced a new genre with no competition. Romero wrote game development tools that made them productive and able to deliver quickly. The artists and game designers created something innovative and fun to play, that stood the test of time.

                Duke Nukem 3D was released in 1996, then Unreal was released in 1998, and that's when they lost their technical advantage. The market became saturated with FPS.

                Romero and Tom Hall founded Ion Storm which produced one successful game - Deus Ex. He gave up on AAA and went back to creating small games.

                Carmack's licensed code was the basis of many successful games beyond the 90s, including Half Life 1 and 2 and the latest Doom games. We wouldn't have Half Life without id Software. Maybe Valve Software wouldn't exist.

      • By cmpxchg8b 2025-05-2318:323 reply

        Appeal to authority is a logical fallacy. People often fall into the trap of thinking that because they are highly intelligent and an expert in one domain that this makes them an expert in one or more other domains. You see this all the time.

        • By mrandish 2025-05-2321:22

          > People often fall into the trap of thinking that because they are highly intelligent and an expert in one domain that this makes them an expert in one or more other domains.

          While this is certainly true, I'm not aware of any evidence that Carmack thinks this way about himself. I think he's been successful enough that he's personally 'post-economic' and is choosing to spend his time working on unsolved hard problems he thinks are extremely interesting and potentially tractable. In fact, he's actively sought out domain experts to work with him and accelerate his learning.

        • By rurp 2025-05-2319:40

          Bayesian reasoning isn't a fallacy. A known expert in one domain is often correct about things in a related one. The post didn't claim that Carmack is right, just that he's who they would bet on to be right, which seems perfectly reasonable to me.

        • By edanm 2025-05-2319:15

          Expecting an expert in one thing to also be pretty good at other domains, especially when they're relatively related, isn't a fallacy.

      • By speed_spread 2025-05-2311:47

        I suspect Carmack in the Dancehall with the BFG.

    • By zeroq 2025-05-2311:30

        >> "they will learn the same lesson I did"
      
      Which is what? Don't trust Altman? x)

    • By cmiles74 2025-05-2312:18

      From a marketing perspective, this strikes me as a very predictable response.

    • By roflcopter69 2025-05-2310:502 reply

      Funny, I was just commenting something similar here, see https://news.ycombinator.com/item?id=44071614

      And I say this while most certainly not being as knowledgeable as this OpenAI insider. So if even I can see this, then it's kinda bad, isn't it?
