
A Response to Yudkowsky and Soares from the Front Lines of AGI Development
Being: A reaction to Eliezer Yudkowsky and Nate Soares’s book “If Anyone Builds It, Everyone Dies,” which is getting a fair bit of media attention
I’ve known Eliezer Yudkowsky since the late 1990s. In 2000, he visited my AI company Webmind Inc. in New York City, where we were explicitly trying to build AGI alongside various commercial applications. His mission? To convince us to maybe stop. Or, failing that, to get us thinking really hard about AGI safety and how to make the AGI good for everyone.
While he was encouraging us to slow down and work on ethics instead of building AGI, he also showed me his new programming language called Flare – which he was developing specifically to create AGI. A few years later, in 2005, I published an edited book called “Artificial General Intelligence,” which put that term on the map, and Eliezer had a chapter in it explaining his understanding of how to engineer minds and make safe AGI.
This contradiction has persisted through the decades. Eliezer has oscillated between “AGI is the most important thing on the planet and only I can build safe AGI” and “anyone who builds AGI will kill everyone.”
For a period of time in the early aughts I ended up as the Head of Research at Eliezer’s organization, the Singularity Institute for AI (later renamed MIRI), which gave us a chance both to debate these matters more extensively and to try to reconcile our perspectives as best we could – which turned out to be not all that well, and my tenure at the organization ended with an aura of friendly, mutually respectful disagreement on many fundamentals. I later wrote a blog post expressing my views, called The Singularity Institute’s Scary Idea – And Why I Don’t Buy It, with largely the same upshot as this very blog post.
Anyway, for anyone who’s been following the US futurist and Singularitarian community for the last couple decades, Yudkowsky and Soares’s recent book will come as no surprise – it’s basically the same story Eliezer has been telling for 15-20-ish years now.
In fact if anyone wants a deeper look into his perspective, I’d encourage a romp through Eliezer’s book Rationality: from AI to Zombies, which is a wonderful rambling tome – and an utterly different book than one would expect from its name, as the author’s perspective on rationality is a fascinatingly eccentric one. Among other things one gets the clear sense that every single idea in Nick Bostrom’s influential anti-AGI work Superintelligence was aired earlier and more thoroughly in Eliezer’s writings, albeit in less formal and academic terms.
Anyway – having spent decades actually working on AGI systems while Eliezer has mostly been warning about them and scheming how to stop or prevent them, I feel compelled to respond to his latest articulation of doom. I do have a slight reservation that by doing so I might be drawing more attention to the book than it deserves … but ultimately I feel like, even though most of the book’s thoughts are misguided, the debate it provokes touches so many commonplace fears in our culture that it’s worth having.
Just as Eliezer’s arguments in his book mostly aren’t new, neither is the crux of my response. What is different is the heightened attention and importance attached to these issues now that everyone without blinders on can see we are palpably getting mighty close to actually creating human-level AGI. (No, of course scaled-up LLMs are not going to give us AGI, but the same deeper hardware and software and industry and science trends that have given us LLMs are very likely to keep spawning more and more amazing AI technologies, some combination of which will likely produce AGI on something roughly like the 2029 timeframe Kurzweil projected in his 2005 book The Singularity Is Near, possibly even a little sooner.)
Eliezer is right about one thing: we cannot know with certainty that AGI won’t lead to human extinction – what my old friend Hugo de Garis, also a long-time AGI researcher, likes to call “Gigadeath.” But the leap from uncertainty to “everybody dies” represents a tremendous failure of imagination about both the nature of intelligence and our capacity to shape its development.
This was really the main idea of my “Singularity Institute’s Scary Idea” post way back when – the way they segued so glibly from “AGI might potentially kill everyone and we can’t totally rule it out” to “AGI will almost certainly kill everyone.” For a bunch of self-styled rationalists, it seemed to me they made this leap with an egregious lack of careful rationality. Their arguments in favor of this scary segue were inevitably extremely full of holes. Stuff like “a mind randomly chosen from mindspace would have probability near zero of caring about humans.” OK sure, probably, but that would be a very bizarre and difficult thing to do, to choose a random mind from mindspace (according to any reasonably broad distribution) – obviously what people will actually do is quite different, more like creating our “mind children,” drawn from the very biased distribution of AGI minds initialized with some version of human value system and an initial purpose of serving at least some humans. So…
Specific arguments and counterarguments aside, I feel like the core philosophical flaw in Eliezer’s reasoning on these matters is treating intelligence as pure mathematical optimization, divorced from the experiential, embodied, and social aspects that shape actual minds. If we think about AGI systems as open-ended intelligences—a concept I explore in my 2024 book “The Consciousness Explosion” — we see them as living, self-organizing systems that seek both survival and self-transcendence, evolving in a fashion complexly coupled with their environments. The challenge is not to theoretically analyze their hypothetical behaviors in an isolated way, but rather to enter into the right sort of mutually beneficial interactions with them as they evolve, grow and learn.
An intelligence capable of recursive self-improvement and transcending from AGI to ASI would naturally tend toward complexity, nuance, and relational adaptability rather than monomaniacal optimization. Eliezer and Soares and their ilk are possessed by the fear that we’ll have a superintelligence that’s A) utterly narrow-minded and pigheaded in pursuing psychopathic or megalomaniacal goals, while at the same time being B) incredibly deep and broad in understanding the universe and how to get things done – yes, this could theoretically happen, but there is no rational reason to assume it’s likely!
AGI development isn’t happening in some abstract space where optimization arguments determine everything. The AGIs that appear on earth will not be random digital minds sampled haphazardly from some notional mind-space. Rather, we and our computer systems are part of a fantastically complex evolving global brain — and AGI systems are emerging as part of this human-digital dynamic. The actual factors that will shape AGI’s impact, within this global-brain context, are concrete and manipulable: the cognitive architecture we choose, who owns and controls it, and what applications shape its early learning experiences.
This is what drives my own technical AGI work – my team at SingularityNET and TrueAGI is creating the Hyperon AGI system very differently from LLMs and other modern neural networks, both because we believe LLMs lack the cognitive architecture needed for human-level AGI, and also because we want to create AGI systems more capable of intentional beneficial activity. We’re trying to create AGI systems designed for self-understanding, deep reflection, and moral agency. This doesn’t guarantee safety, but it makes beneficial outcomes more likely than architectures focused solely on pursuing narrow rewards for narrow goals (like predicting the next token in a series, maximizing the profit of a certain company, or maximizing the success of a certain military).
This also ties in with my reasons for developing decentralized platforms for AI deployment, through SingularityNET and the ASI Alliance. Eliezer’s doom scenarios typically assume AGI emerges from a single actor, but when thousands or millions of diverse stakeholders contribute to and govern AGI’s development, the system is far less likely to embody the narrow, potentially destructive goal functions that alignment pessimists fear. Democratic, transparent, decentralized AGI development isn’t just ethically preferable—it’s technically safer, for reasons similar to why open-source software systems are often safer than their closed-source analogues.
Perhaps the most annoying thing about the AGI doom narrative is how it distracts from real, addressable issues. I’m not just talking about immediate problems like AI bias or military applications – though these are real and serious enough. I’m even more concerned about the transition period between early-stage AGI and superintelligence.
When automation eliminates jobs faster than new opportunities emerge, when countries that can’t afford universal basic income face massive displacement, we risk global terrorism and fascist crackdowns—as we’re already seeing in various parts of the world. These nearer-term challenges could determine whether early-stage AGI grows up in a context that enables beneficial development or one that virtually guarantees poor outcomes.
The LLM revolution has already demonstrated that Eliezer’s model is too simplistic. These systems show that intelligence and values aren’t orthogonal, as Eliezer, Bostrom and their intellectual allies have often argued. In theory, yes, you could pair an arbitrarily intelligent mind with an arbitrarily stupid value system. But in practice, certain kinds of minds naturally develop certain kinds of value systems.
Mammals, which are more generally intelligent than reptiles or earthworms, also tend to have more compassion and warmth. Humans tend to have a broader scope of compassion than most other mammals, because our greater general intelligence lets us empathize more broadly with systems different from ourselves. There’s deep intertwining between intelligence and values—we even see it in LLMs already, to a limited extent. The fact that we can meaningfully influence their behavior through training hints that value learning is tractable, even for these fairly limited sub-AGI systems.
After decades of these debates, I’m convinced the real danger isn’t that someone will build AGI—that ship has sailed. The danger we should be most worried about is that fear-mongering will either drive development underground into the hands of the least scrupulous actors, or create regulatory capture that hands control to a small elite.
Fear-mongering that forces centralized AGI development by incompetent or narrow, self-centered actors is far more palpably dangerous than the vast unwashed masses of humanity creating open source AGI in a participatory, democratic way.
Since 2006, I’ve organized the annual AGI research conference, and last year I started a conference series on Beneficial General Intelligence. Next month at the BGI-25 event in Istanbul, we’re bringing together people from around the world to think through how to nudge AGI and ASI development in positive directions. It’s not easy—there are tremendous social and economic forces working against us—but the idea that this effort is doomed to fail is both incorrect and potentially dangerous.
Yudkowsky and Soares’s “everybody dies” narrative, while well-intentioned and deeply felt (I have no doubt Yudkowsky believes his message in his heart as well as his eccentrically rational mind), isn’t just wrong—it’s profoundly counterproductive. By treating beneficial AGI as impossible, it threatens to become a self-fulfilling prophecy that cedes the field to those who don’t care about safety at all. We need voices advocating for careful, beneficial development, not voices saying we’re all doomed.
The future remains unwritten, and we’re helping to write it. The question isn’t whether humanity will build AGI—it’s how to build it wisely. That means architectures emphasizing compassion and self-reflection, decentralized governance preventing monopolistic or oligopolistic control, and applications in education, healthcare, and science that shape AGI’s values through beneficial action.
This won’t guarantee good outcomes — nothing can do that — but it’s far more likely to succeed than either halting development (next to impossible by this point) or proceeding without considering these factors (genuinely dangerous). After all these years and decades, I remain convinced: the most important work isn’t stopping AGI—it’s making sure we raise our AGI mind children well enough.
I’m more optimistic about the possibility of beneficial AGI in general than most folks, I think, but something that caught me in the article was the recourse to mammalian sociality to (effectively) advocate for compassion as an emergent quality of intelligence.
A known phenomenon among sociologists is that, while people may be compassionate, when you collect them into a superorganism like a corporation, army, or nation, they will by and large behave and make decisions according to the moral and ideological landscape that superorganism finds itself in. Nobody rational would kill another person for no reason, but a soldier will bomb a village for the sake of their nation’s geostrategic position. Nobody would throw someone out of their home or deny another person lifesaving medicine, but as a bank officer or an insurance agent, they make a living doing these things and sleep untroubled at night. A CEO will lay off 30,000 people - an entire small city cast off into an uncaring market - with all the introspection of a Mongol chieftain subjugating a city (and probably less emotion). Humans may be compassionate, but employees, soldiers, and politicians are not, even though at a glance they’re made of the same stuff.
That’s all to say that to just wave generally in the direction of mammalian compassion and say “of course a superintelligence will be compassionate” is to abdicate our responsibility for raising our cognitive children in an environment that rewards the morals we want them to have, which is emphatically not what we’re currently doing for the collective intelligences we’ve already created.
> Nobody rational would kill another person for no reason, but a soldier will bomb a village for the sake of their nation’s geostrategic position.
I think you're forgetting to control for the fact that the former would be severely punished for doing so, and the latter would be severely punished for not doing so?
> Nobody would throw someone out of their home or deny another person lifesaving medicine, but as a bank officer or an insurance agent, they make a living doing these things and sleep untroubled at night.
Again, you're forgetting to control for other variables. What if you paid them equally to do the same things?
> I think you're forgetting to control for the fact that the former would be severely punished for doing so, and the latter would be severely punished for not doing so?
> What if you paid them equally to do the same things?
I think the larger point is that rewarding bombing, or paying bank officers to evict people from their homes, is how the superorganism functions. Your counterexamples are like saying 'what if fire was cold instead of hot'; well, then it wouldn't be fire anymore.
Why should you "control" for these variables? AIs will effectively be punished for doing various inscrutable things by their own internal preferences.
a CEO laying off 3% scales in absolute numbers as the company grows
should, therefore, large companies, even ones that succeed largely in a clean way by just being better at delivering what that business niche exists for, be made to never grow too big, in order to avoid impacting very many people? keep in mind that people engage in voluntary business transactions because they want to be impacted (positively—but not every impact can be positive, in any real world)
what if its less efficient substitutes collectively lay off 4%, but the greater layoffs are hidden (simply because it's not a single employer doing it which may be more obvious)?
to an extent, a larger population inevitably means that larger absolute numbers of people will be affected by...anything
> voluntary business transactions
The evil parts are hidden in property rights, which are not voluntary.
> made to never grow too big, in order to avoid impacting very many people
Consolidated property rights have more power against their counterparties, that's why businesses love merging so much.
Look at your tax return. Do you make more money from what you do or what you own? If you make money from what you do, you're a counterparty and you should probably want to tap the brakes on the party.
What are the evil parts, exactly? When property can't be privately owned with strong rights, then effectively the government owns everything. That inevitably leads to poverty, often followed by famine and/or genocide.
Why is it that property is taxed less than productive work? Someone sitting on their ass doing nothing but sucking resources through dividend payments has that income taxed less than the income of the workers who did the work that generated those dividends. Why isn't the reverse the case? Heavily tax passive income, lightly tax active income, incentivize productive activity, and penalize rent-seeking parasites.
Plenty of examples on both sides of that. Even in the US there are vast swaths of land that can't be privately owned; for example, try to buy a navigable river or land below the ordinary high water mark: https://en.wikipedia.org/wiki/Navigable_servitude Similarly, eminent domain severely limits the meaning of private land ownership in the US.
The most extreme capitalist societies free from government control of resources like say Kowloon Walled City are generally horrible places to live.
Places with predominantly private ownership can be and are prone to famine, and/or genocide, etc. as well.
Sure, but those are the exceptions that prove the rule. Centralized societies (Marxism and its descendants) tend to have those things happen the majority of the time. In decentralized capitalist societies, they happened once a long time ago and we took steps for them not to happen again. Seems like a flaw in those societies is that when these problems happen so infrequently, people forget, and then you get takes like this.
Centralised planning is not what Marxism is about, though; Marxism is about class struggle and the abolition of a capital-owning class, distributing the fruits of labour to the labourers.
By that definition it's even more decentralised than capitalism, which has inherent incentives for the accumulation of capital into monopolies, since those are the best profit-generating structures; only forces external to capitalism, like governments enforcing anti-trust/anti-competitive laws, can rein in that natural tendency toward monopolisation.
If the means of production were owned by labourers (not through the central government) it could be possible to see much more decentralisation than the current trend from the past 40 years of corporate consolidation.
The centralisation is already happening under capitalism.
I think it's reasonable that bigger companies are under more scrutiny and stricter constraints than smaller companies, yeah.
Keeps actors with more potential for damaging society in check, while not laying a huge burden on small companies, which have fewer resources to spend away from their core business.
Indeed, by what moral justification does one slow the wheels of commerce, no matter how many people they run over?
Beautifully expressed.
> Nobody would throw someone out of their home or deny another person lifesaving medicine
Individuals with rental properties and surgeons do this every day.
Quibble: surgeons are not the ones doing this. Surgeons' schedules are generally permanently full. They do not typically deny people lifesaving medicine; on the contrary, they spend all of their time providing lifesaving medicine.
The administrators who create the schedule for the surgeons are the ones denying lifesaving care to people.
If all of the surgeons' schedules are full, the administrators are as innocent as the surgeons.
Triage, whether by overworked nurses or by auction or by private death panel or by public death panel, is not necessarily a problem created by administrators. It can be created by having too few surgeons, in which case whatever caused that (in a time of peace, no less) is at fault. Last I heard it was the doctor's guild lobbying for a severe crimp on their training pipeline, in which case blame flows back to some combination of doctors and legislators.
You heard wrong. While at one point the AMA lobbied Congress to restrict residency slots, they reversed position some years back. However Congress has still refused to increase Medicare funding for residency programs. This is essentially a form of care rationing imposed through supply shortages.
There is no "doctor's guild". No one is required to join the AMA to practice medicine, nor are they involved in medical school accreditation.
I'm not even talking about triage. It's not a matter of who has the worst problem, it's about which patient the nurses deliver to the surgeon and anesthesiologist. Literally just who gets scheduled and when.
Surely they could volunteer to do some charity surgery in their own time. They aren't slaves.
Sure! They can volunteer:
- Their skills.
- Their time.
- The required materials to properly perform the surgery.
They can't volunteer:
- The support staff around them required to do surgery.
- The space to do the surgery.
Surgery isn't a one-man show.
What did you mean by "Surely they could volunteer to do some charity surgery in their own time. They aren't slaves?"
There are a lot of individuals who have the ability to provide those resources.
Even if that's a bad example, there are innumerable examples where individuals do choose not to help others in the same way that corporations don't.
Frankly, nearly every individual is doing that by not volunteering every single extra dollar and minute they don't need to survive.
Not really, because surgeons require operating rooms and support staff and equipment to do what they do, all of which are controlled by the aforementioned hospital administrators.
Yeah, it's the natural empathy myth. Somebody totally would kill somebody else for some reason. It's not inherent to being human that you're unable to be steely-hearted and carry out a range of actions we might classify as "mean" - and those mean actions can have reasons behind them.
So, OK, abdication of responsibility to a collective is a thing. Just following orders. So what? Not relevant to AGI.
Oh wait, this is about "superintelligence", whatever that is. All bets are off, then.
The superintelligence might decide based on things only it can understand that the existence of humans prevents some far future circumstance where even more "good" exists in the universe. When it orders you to toss the babies into the baby-stomping machine, perhaps you should consider doing so based on the faith in its superintelligence that we're supposed to have.
Human beings aren't even an intelligent species, not at the individual level. When you have a tribe of human beings numbering in the low hundreds, practically none of them need to be intelligent at all. They need to be social. Only one or two need to be intelligent. That one can invent microwave ovens and The Clapper™, and the rest though completely mentally retarded can still use those things. Intelligence is metabolically expensive, after all. And if you think I'm wrong, you're just not one of the 1-in-200 that are the intelligent individuals.
I've yet to read the writings of anyone who can actually speculate intelligently on artificial intelligence, let alone meet such a person. The only thing we have going for us as a species is that, to a large degree, none of you are intelligent enough to ever deduce the principles of intelligence. And god help us if the few exceptional people out there get a wild bug up their ass to do so. There will just be some morning where none of us wake up, and the few people in the time zone where they're already awake will experience several minutes of absolute confusion and terror.
And lenders and insurers.
I would argue that corporate actors (a state, an army or a corporation) are not true superorganisms but are semi-autonomous, field-embedded systems that can exhibit super-organism properties, with their autonomy being conditional, relational and bounded by the institutional logics and resource structures of their respective organisational fields. As the history of humanity has shown multiple times, such semi-autonomous systems with super-organism properties have a finite lifespan and are incapable of evolving their own – or on their own – qualitatively new or distinct form of intelligence.
The principal deficiency in our discourse surrounding AGI lies in the profoundly myopic lens through which we insist upon defining it – that of human cognition. Such anthropocentric conceit renders our conceptual framework not only narrow but perilously misleading. We have, at best, a rudimentary grasp of non-human intelligences – biological or otherwise. The cognitive architectures of dolphins, cephalopods, corvids, and eusocial insects remain only partially deciphered, their faculties alien yet tantalisingly proximate. If we falter even in parsing the intelligences that share our biosphere, then our posturing over extra-terrestrial or synthetic cognition becomes little more than speculative hubris.
Should we entertain the hypothesis that intelligence – in forms unshackled from terrestrial evolution – has emerged elsewhere in the cosmos, the most sober assertion we can offer is this: such intelligence would not be us. Any attempt to project shared moral axioms, epistemologies or even perceptual priors is little more than a comforting delusion. Indeed, hard core science fiction – that last refuge of disciplined imagination – has long explored the unnerving proposition of encountering a cognitive order so radically alien that mutual comprehension would be impossible, and moral compatibility laughable.
One must then ponder – if the only mirror we possess is a cracked one, what image of intelligence do we truly see reflected in the machine? A familiar ghost, or merely our ignorance, automated?
> I would argue that corporate actors (a state, an army or a corporation) are not true superorganisms but are semi-autonomous, field-embedded systems that can exhibit super-organism properties, with their autonomy being conditional, relational and bounded by the institutional logics and resource structures of their respective organisational fields.
Lotsa big words there.
Really, though, we're probably going to have AI-like things that run substantial parts of for-profit corporations. As soon as AI-like things are better at this than humans, capitalism will force them to be in charge. Companies that don't do this lose.
There's a school of thought, going back to Milton Friedman, that corporations have no responsibilities to society.[1] Their goal is to optimize for shareholder value. We can expect to see AI-like things which align with that value system.
And that's how AI will take over. Shareholder value!
[1] https://www.nytimes.com/1970/09/13/archives/a-friedman-doctr...
That assumes that consumers will just accept it. I would not do business with an AI company, just as I don’t listen to AI music, view AI pictures or video, or read AI writings. At least not knowingly.
People would absolutely buy AI farmed meat or vegetables if they were 10% cheaper. The number of people who pay a premium depending on production method is a small minority.
Costs will go down. But so will revenue, as fewer customers have an income because a different company also cut costs.
Record profits. Right up until the train goes off a cliff.
Also, sociopaths are more capable of doing those things while pretending to be empathetic and moral in order to gain positions of power or access to victims. We know a certain percentage of human mammals have sociopathic or narcissistic tendencies; the danger is not just misaligned groups of humans, but individuals who might take advantage of such groups by becoming a cult leader, warlord, or president.
> soldier will bomb a village for the sake of their nation’s geostrategic position.
A soldier does that to please the captain, to look manly and tough to peers, to feel powerful. Or to fulfill a duty - a moral mandate in itself. Or out of hate, because soldiers are often made to hate the enemy.
> Nobody would throw someone out of their home or deny another person lifesaving medicine
They totally would. Trump would do it for the pleasure of it. The Project 2025 authors would do it happily and see the rest of us as wusses. If you listen to right-wing rhetoric and look at the voters, many people would happily do just that.
I’ve known both Ben and Eliezer since the 1990s and enjoyed the arguments. Back then I was doing serious AI research along the same lines as Marcus Hutter and Shane Legg, which had a strong basis in algorithmic information theory.
While I have significant concerns about AGI, I largely reject both Eliezer’s and Ben’s models of where the risks are. It is important to avoid the one-dimensional “two faction” model that dominates the discourse because it really doesn’t apply to complex high-dimensionality domains like AGI risk.
IMO, the main argument against Eliezer’s perspective is that it relies pervasively on a “spherical cow on a frictionless plane” model of computational systems. It is fundamentally mathematical, it does not concern itself with the physical limitations of computational systems in our universe. If you apply a computational physics lens then many of the assumptions don’t hold up. There is a lot of “and then something impossible happens based on known physics” buried in the assumptions that have never been addressed.
That said, I think Eliezer’s notion that AGI fundamentally will be weakly wired to human moral norms is directionally correct.
Most of my criticism of Ben’s perspective is against the idea that some kind of emergent morality that we would recognize is a likely outcome based on biological experience. The patterns of all biology emerged in a single evolutionary context. There is no reason to expect those patterns to be hardwired into an AGI that developed along a completely independent path. AGI may be created by humans but their nature isn’t hardwired by human evolution.
My own hypothesis is that AGI, such as it is, will largely reflect the biases of the humans that built it but will not have the biological constraints on expression implied by such programming in humans. That is what the real arms race is about.
But that is just my opinion.
Can you give concrete examples of "something impossible happens based on known physics"? I have followed the AI debate for a long time but I can't think of what those might be.
Optimal learning is an interesting problem in computer science because it is fundamentally bound by geometric space complexity rather than computational complexity. You can bend the curve but the approximations degrade rapidly and still have a prohibitively expensive exponential space complexity. We have literature for this; a lot of the algorithmic information theory work in AI was about characterizing these limits.
The annoying property of prohibitively exponential (ignoring geometric) space complexity is that it places a severe bound on computational complexity per unit time. The exponentially increasing space implies an increase in latency for each sequentially dependent operation, bounded at the limit by the speed of light. Even if you can afford the insane space requirements, your computation can’t afford the aggregate latency for anything useful even for the most trivial problems. With highly parallel architectures this can be turned into a latency-hiding problem to some extent but this also has limits.
This was thoroughly studied by the US defense community decades ago.
The tl;dr is that efficient learning scales extremely poorly, more poorly than I think people intuit. All of the super-intelligence hard-takeoff scenarios? Not going to happen, you can’t make the physics work without positing magic that circumvents the reality of latencies when your state space is unfathomably large even with unimaginably efficient computers.
I harbor a suspicion that the cost of this scaling problem, and the limitations of wetware, has bounded intelligence in biological systems. We can probably do better in silicon than wetware in some important ways but there is not enough intrinsic parallelism in the computation to adequately hide the latency.
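To make the latency point a bit more concrete, here is a rough back-of-the-envelope sketch in Python (my own illustration, with an assumed and very optimistic bit density – not the parent commenter's numbers): it computes a light-speed lower bound on one round trip across a memory large enough to hold an exponentially growing state space.

```python
# Back-of-the-envelope sketch (illustrative assumptions, not the commenter's
# numbers): if a learner's state space has to be stored physically, the
# memory's radius grows with its size, and every sequentially dependent
# lookup pays at least a light-speed round trip across that radius.
C = 3.0e8            # speed of light, m/s
BIT_VOLUME = 1e-27   # assumed volume per stored bit, m^3 (hypothetical, optimistic)

def min_latency_seconds(num_bits: float) -> float:
    """Light-speed lower bound on one round trip to a random cell of a
    spherical memory holding num_bits bits at the assumed density."""
    volume = num_bits * BIT_VOLUME
    radius = (3.0 * volume / (4.0 * 3.14159265)) ** (1.0 / 3.0)
    return 2.0 * radius / C

# An exponentially growing state space: 2**k distinguishable states needs
# on the order of 2**k bits of storage in the worst case.
for k in (40, 80, 120, 160):
    print(f"2^{k} bits -> at least {min_latency_seconds(2.0 ** k):.2e} s per sequential access")

# Each +40 in the exponent multiplies the radius, and hence the latency
# floor, by about 2**(40/3) (~10,000x), so long chains of sequentially
# dependent operations over such a store become infeasible regardless of
# how fast the individual processors are.
```

This obviously ignores parallelism and clever approximations, which is the commenter's point about latency hiding only going so far.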
Personally, I find these “fundamental limits of computation” things to be extremely fascinating.
So I studied Machine Learning too. One of the main things I learned is that for any problem there is an ideally sized model that when trained will produce the lowest error rate. Now, when you do multi-class learning (training a model for multiple problems), that ideally sized model is larger, but there is still an optimum-sized model. Seems to me that for AGI there will also be an ideally sized model. I wouldn't be surprised if the complexity of that model was very similar to the size of the human brain. If that is the case, then some sort of super-intelligence isn't possible in any meaningful way. This would seem to track with what we are seeing in today's LLMs. When they build bigger models, they often don't perform as well as the previous one, which perhaps was at some maximum/ideal complexity. I suspect we will continue to run into this barrier over and over again.
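As a quick illustration of that "optimum-sized model" intuition (my own toy sketch, not this commenter's experiment), here is a minimal numpy example where held-out error on a fixed noisy dataset is lowest at an intermediate model capacity; the double-descent phenomenon mentioned in the reply below complicates this picture at much larger capacities.

```python
# Toy sketch of the classic "ideally sized model" claim: fit polynomial
# models of increasing capacity to a small noisy dataset and watch the
# held-out error dip at an intermediate degree, then rise again as the
# model overfits the noise.
import numpy as np

rng = np.random.default_rng(0)

def make_data(n):
    x = rng.uniform(-1.0, 1.0, n)
    y = np.sin(3.0 * x) + rng.normal(0.0, 0.3, n)   # true signal plus noise
    return x, y

x_train, y_train = make_data(20)
x_val, y_val = make_data(500)

for degree in (1, 2, 3, 5, 8, 15):
    coeffs = np.polyfit(x_train, y_train, degree)          # model "capacity" = degree
    val_mse = np.mean((np.polyval(coeffs, x_val) - y_val) ** 2)
    print(f"degree {degree:2d}: validation MSE = {val_mse:.3f}")

# The smallest validation MSE typically lands at an intermediate degree
# rather than at either extreme, matching the intuition that there is an
# optimum model size for a given problem and dataset.
```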
> for any problem there is an ideally sized model that when trained will produce the lowest error rate.
You studied ML before discovery of "double descent"?
Any reference material (papers/textbooks) on that topic? It does sound fun.
Not the person you are responding to, but many of the conclusions drawn by Bostrom (and most of EY’s ideas are credited to Bostrom) depend on infinities. The orthogonality thesis being derived from AIXI, for example.
EY’s assertions regarding a fast “FOOM” have been empirically discredited by the very fact that ChatGPT was created in 2022, it is now 2025, and we still exist. But goal posts are moved. Even ignoring that error, the logic is based on, essentially, “AI is a magic box that can solve any problem by thought alone.” If you can define a problem, the AI can solve it. This is part of the analysis done by AI x-risk people of the MIRI tradition. Which ignores entirely that there are very many problems (including AI recursive improvement itself) which are computationally infeasible to solve in this way, no matter how “smart” you are.
Many of us on HN are beneficiaries of the standing world order and American hegemony.
I see the developments in LLMs not as getting us close to AGI, but more as destabilizing the status quo and potentially handing control of the future to a handful of companies rather than securing it in the hands of people. It is an acceleration of the already incipient decay.
It is not decay. People are just more conscious than previous generations ever were about how the world works. And that leads to confusion and misunderstandings if they are only exposed to herd think.
The chicken doesn't understand it has to lay a certain number of eggs a day to be kept alive in the farm. It hits its metrics because it has been programmed to hit them.
But once it gets access to chatgpt and develops consciousness of how the farm works, the questions it asks slowly evolve with time.
Initially it's all fear-driven - how do we get a say in how many eggs we need to lay to be kept alive? How do we keep the farm running without relying on the farmer? etc etc
Once the farm animals begin to realize the absurdity of such questions, new questions emerge - how come the crow is not a farm animal? why is the shark not used as a circus animal? etc etc
And through that process, whose steps cannot be skipped, the farm animal begins to realize certain things about itself which no one, especially the farmer, has any incentive to encourage.
truly, nobody ever asked such questions until they had access to the world’s most sycophantic dumb answer generating machine
Ideology is a -10 modifier on Intelligence
Are you implying that there are people who don't have ideology or that they're somehow capable of reasoning and acting independently of their ideology?
I agree. You wouldn't see incredibly powerful and wealthy people frothing at the mouth to build this technology if that wasn't true, in my opinion.
No one needs Curtis Yarvin, or any other commentator of any political stripe, to tell them that they'd like more money and power, and that they'd like to get it before someone else locks it in.
We should be so lucky as to only have to worry about one particular commentator's audience.
Are you seeing a moat develop around LLMs, indicating that only a small number of companies will control it? I'm not. It seems that there's nearly no moat at all.
The moat is around capital. For thousands of years most people were slaves or peasants whose cheap fungible labor was exploited.
For a brief period intellectual and skilled work has (had?) been valued and compensated, giving rise to a somewhat wealthy and empowered middle class. I fear those days are numbered and we’re poised to return to feudalism.
What is more likely, that LLMs lead to the flourishing of entrepreneurship and self determination? Or burgeoning of precariat gig workers barely hanging on? If we’re speaking of extremes, I find the latter far more likely.
> The moat is around capital.
Not really. I can run some pretty good models on my high end gaming PC. Sure, I can't train them. But I don't need to. All that has to happen is at least one group releases a frontier model open source and the world is good to go, no feudalism needed.
> What is more likely, that LLMs lead to the flourishing of entrepreneurship and self determination
I'd say what's more likely is that whatever we are seeing now continues. And the current situation is a massive startup boom running on open-source models that are nearly as good as the private ones, while GPUs are being widely distributed.
I am also not seeing a moat on LLMs.
It seems like the equilibrium point for them a few years out will be that most people will be able to run good-enough LLMs on local hardware, given that models don't seem to be getting much better due to input-data exhaustion while various forms of optimization increasingly allow them to run on lesser hardware.
But I still have generalized lurking amorphous concerns about where this all ends up because a number of actors in the space are certainly spending as if they believe a moat will magically materialize or can be constructed.
LLMs as we know them have no real moat, but few people genuinely believe that LLMs are sufficient as a platform for AGI. Whatever it takes to add object permanence and long-term memory assimilation to LLMs may not be so easy to run on your 4090 at home.
> Whatever it takes to add object permanence and long-term memory assimilation to LLMs may not be so easy to run on your 4090 at home.
Today yes but extrapolate GPU/NPU/CPU improvement by a decade.
I'm pretty skeptical "the people" are smart enough to control their own destiny anymore. We've deprioritized education so heavily in the US that it may be better to have a ruling class of corporations and elites. At least you know where things stand and how they'll operate.
> it may be better to have a ruling class of corporations and elites.
Given that the outcome of that so far has been to deprioritize education so heavily in the US that one becomes skeptical that the people are smart enough to control their own destiny anymore while simultaneously shoving the planet towards environmental calamity, I’m not sure doubling down on the strategy is the best bet.
Or we could, you know, prioritize education.
The standing world order was already dead well before AI; it ended back in the 2010s, in terms of when the US had an opportunity to maybe resist change, and we're just watching the inevitable consequences play out. They no longer have the economic weight to maintain control over Asia, even assuming China is overstating its income by 2x. The Ukraine war has been a bloodier path than we needed to travel to make the point, but if they can't coerce Russia there is an open question of who they can; Russia isn't a particularly impressive power.
With that backdrop it is hard to see what impact AI is supposed to make to people who are reliant on US hegemony. They probably want to find something reliable to rely on already.