The singularity is close?

2022-03-31 20:19 · mkaic.substack.com

Or "Why we're all in denial about the robot apocalypse"

A screenshot from the animated film The Mitchells Vs. The Machines that depicts two humanoid robots brandishing their glowing blaster arms menacingly.
from The Mitchells Vs. The Machines. Go watch it if you haven’t already, it’s perfect in every way.

Then came the Butlerian Jihad—two generations of chaos. The god of machine-logic was overthrown among the masses and a new concept was raised: “Man may not be replaced.”

—Frank Herbert, Dune

Within one century, biological intelligence will be a tiny minority of all sentient life. It will be very rare to be human. It will be very rare to have cells and blood and a heart. Human beings will be outnumbered a thousand to one by conscious machine intelligences.

Artificial General Intelligence (AGI) is about to go from being science fiction to being part of everybody’s day-to-day life. It’s also going to happen in the blink of an eye — because once it gets loose, there is no stopping it from scaling itself incredibly rapidly. Whether we want it to or not, it will impact every human being’s life.

Some people believe the singularity won’t happen for a very long time, or at all. I’d like to discuss why I am nearly certain it will happen in the next 20 years. My overall prediction is based on 3 hypotheses:

  1. Scale is not the solution.

  2. AI will design AGI.

  3. The ball is already rolling.

Keep in mind that this is all just speculation and opinion. These predictions depict the future I personally feel is most likely.

Scale is not the solution.

Recently, an architecture called the Transformer has been taking over machine learning. It’s really good at sequence-to-sequence tasks like translation and text completion, and it’s also been successfully applied to other fields like computer vision.

Transformers also demonstrate an intriguing ability to scale their performance with their size better than other architectures. They seem less prone to the performance ceilings found in their competition.
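
The empirical scaling-law measurements of Kaplan et al. (2020) put a rough number on this: with data and compute unconstrained, a Transformer's test loss falls as a smooth power law in its parameter count N,

    L(N) ≈ (N_c / N)^α,  with α ≈ 0.076,

and no hard ceiling was observed across several orders of magnitude of scale. (N_c is a fitted constant; a smoothly falling loss, of course, is not the same thing as rising capability on any particular task.)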

This has led to a new slogan popping up in the AGI-speculation community: “scale is all you need.” Some people believe that bigger networks, bigger compute clusters, and bigger datasets are all we need to get to AGI. I disagree.

I believe we are more bottlenecked by the architecture designs than anything else. While modern, standard feedforward neural networks are getting very good at Doing Stuff™, they aren’t AGI and I don’t think there’s a clear path forward for them to become AGI. I have no doubt OpenAI’s next mega-model, GPT-4 (and beyond), will be excellent, but I also think it will have exploitable flaws that make it fail a thorough Turing test.

In fact, I see the massive size of present-day GPT-3 as a sign that scale isn’t the answer. 175 billion parameters, but still obviously not sentient? For comparison, the human brain has roughly 86 billion neurons and up to 1 quadrillion synapses.

You could argue that until our neural networks have hundreds of trillions of parameters, it’s not fair to compare them to the brain, but I think this argument relies too much on the assumption that a biological synapse and a weight in a network are equivalent in computational ability. This has not been proven. The intricacies of how the brain moves and processes signals are still not entirely understood, but we know it seems to operate very differently from current neural networks.

Look at the most revolutionary papers in the history of AI: they are dominated not by “we made it bigger” but by “we made it smarter at the same size”. I see no reason to expect this pattern to stop.

If scale isn’t the answer, what is? I believe that the pièce de résistance is adaptability. Presently, the way you make an ML model is fairly rigid: you decide on a fancy new way to differentiably mix matrix multiplications together, you feed it a ton of data, and you use some simple calculus-based optimizer to train the weights in your network. The way that the weights in your network are arranged doesn’t change after training.
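
To make that rigidity concrete, here is a minimal sketch of the recipe (assuming PyTorch; the layer sizes and random stand-in data are invented for illustration):

    import torch
    import torch.nn as nn

    # 1. Decide on a fixed arrangement of differentiable ops up front.
    model = nn.Sequential(
        nn.Linear(784, 256),
        nn.ReLU(),
        nn.Linear(256, 10),
    )

    # 2. Feed it a ton of data (random stand-ins here).
    data = [(torch.randn(32, 784), torch.randint(0, 10, (32,)))
            for _ in range(100)]

    # 3. Let a simple calculus-based optimizer adjust the weight values.
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
    loss_fn = nn.CrossEntropyLoss()
    for inputs, targets in data:
        optimizer.zero_grad()
        loss_fn(model(inputs), targets).backward()
        optimizer.step()

    # 4. The weight values have changed; their arrangement never will.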

I don’t believe this is adaptable enough, even at scale. In order for true intelligence to emerge, models must be able to reorganize their own inner workings. I don’t think you can have the level of flexibility required for sentience with a frozen architecture.

I think sentient AI will be created by working smarter, not harder, with a focus on better architectural design and intelligent optimizers. This leads nicely into my next hypothesis:

AI will design AGI.

A photoshopped image in which Barack Obama appears to be awarding a clone of himself with a medal.

Human-designed networks have achieved great results, but they still suffer from the flaws of their creators. We are attracted to neatly organized network architectures which we can investigate and explain and attempt to understand.

But our brains, the gold standard of intelligence, are famously difficult to investigate, explain, or understand! I think this is because our brains weren’t “designed” by anyone — they evolved. They are the product of the universe’s greatest optimizer, natural selection.

I think it’s reasonable to assume that the architecture that brings about AGI will not be hand-designed by humans, or even selected via some brute-force hyperparameter search — it will be designed by another AI. I predict there will be several recursive layers of AI design — perhaps a dumb network which constructs a decent network which constructs a smart network which constructs AGI.

I am bullish on the prospect of what I call “constructor networks” — models that construct other models (also known as hypernetworks). I think the moment we crack hyperlearning will be the moment progress will start moving faster than we can keep up, precisely because we will no longer be the ones making the progress — the algorithms themselves will.
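
To make “models that construct other models” concrete, here is a toy hypernetwork sketch (assuming PyTorch; the structure and sizes are invented for illustration). A small generator emits the weights of a target layer, so gradient descent optimizes the constructor rather than any one fixed model:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class HyperNet(nn.Module):
        def __init__(self, z_dim=8, in_dim=4, out_dim=2):
            super().__init__()
            self.in_dim, self.out_dim = in_dim, out_dim
            # Generator: task embedding z -> every weight of the target layer.
            self.generator = nn.Linear(z_dim, in_dim * out_dim + out_dim)
            self.z = nn.Parameter(torch.randn(z_dim))  # learned embedding

        def forward(self, x):
            params = self.generator(self.z)  # construct the model...
            w = params[: self.in_dim * self.out_dim].view(self.out_dim, self.in_dim)
            b = params[self.in_dim * self.out_dim :]
            return F.linear(x, w, b)         # ...then run it

    net = HyperNet()
    out = net(torch.randn(16, 4))  # shape (16, 2); gradients flow into the generator

Stack a generator on top of another generator and you get the kind of recursive chain of designers described above.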

In order to work smarter, not harder, we need to let go of our human biases and focus on making unconstrained architectures that can aggressively optimize every aspect of themselves. I fully expect these architectures will be frustratingly difficult to explain when they arrive — like huge mounds of digital neural spaghetti — but they will also outperform all competition. Every additional stable layer of AI abstraction we add between ourselves and the final model will make the final model harder to understand and better at its task.

The ideal model will not only learn online continuously, but will also constantly add and remove its own parameters, allowing it to evolve and adapt to new tasks.
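
Nothing in principle forbids a network from editing its own parameter count. Here is a crude sketch of growing a hidden layer at runtime (again assuming PyTorch, and glossing over real problems like migrating optimizer state):

    import torch
    import torch.nn as nn

    class GrowableMLP(nn.Module):
        def __init__(self, in_dim=4, hidden=8, out_dim=2):
            super().__init__()
            self.fc1 = nn.Linear(in_dim, hidden)
            self.fc2 = nn.Linear(hidden, out_dim)

        def forward(self, x):
            return self.fc2(torch.relu(self.fc1(x)))

        def grow(self, n_new=4):
            # Append n_new hidden units with fresh random weights,
            # keeping everything already learned intact.
            old1, old2 = self.fc1, self.fc2
            self.fc1 = nn.Linear(old1.in_features, old1.out_features + n_new)
            self.fc2 = nn.Linear(old2.in_features + n_new, old2.out_features)
            with torch.no_grad():
                self.fc1.weight[: old1.out_features] = old1.weight
                self.fc1.bias[: old1.out_features] = old1.bias
                self.fc2.weight[:, : old2.in_features] = old2.weight
                self.fc2.bias.copy_(old2.bias)

    model = GrowableMLP()
    model.grow(4)                 # now 12 hidden units, old weights preserved
    y = model(torch.randn(3, 4))  # still works: shape (3, 2)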

You cannot have artificial general intelligence if your model cannot adapt in real time to an arbitrary task.

The ball is already rolling.

I believe that there is too much momentum to stop AGI now. With this much distributed attention fixed on the problem, AGI will be solved. Additionally, once it is solved it will be released to the public — whether it’s ethical to do so or not. I imagine that the first people to solve it will probably keep it behind closed doors, but it won’t stay secret forever. Someone on the team will leak everything, or someone else will independently make the same discoveries and release them. Eventually it will get out.

Consider the invention of the nuclear bomb — once we learned of the power hidden in radioactive materials, it was only a matter of time before someone pushed the research to its moral limits. AGI is like that, except it’s even more terrifying because uranium, plutonium, and the bombs made out of them can be strictly controlled, but people with powerful computers and an internet connection cannot, nor can the AGIs they create.

I recognize how cliché and alarmist this all sounds. Really, you’re genuinely worried about a robot apocalypse? You know Age of Ultron is just a stupid Marvel movie, right? Yeah, I know. But I’ve grown to believe that the concerns that fiction writers have been bringing up for decades are actually quite reasonable — because AGI cannot be stopped.

Once an intelligence is loose on the internet, it will be able to learn from all of humanity’s data, replicate and mutate itself infinitely many times, take over physical manufacturing lines remotely, and hack important infrastructure. Obviously, it’s impossible to say for sure that this is what the first free AGI will do, but it’s inevitable that some malevolent AGI will exist and will do these things. We can only hope that we’ll have sufficiently powerful benevolent AGI to fight back.

Final Thoughts

I subtitled this post “Why we're all in denial about the robot apocalypse”. I say that because I believe that society at large is completely, utterly, and woefully unprepared for the advent of sentient, living artificial general intelligence. I think the singularity is coming much sooner than most people expect, and I think it’s going to cause a great deal of upset when it arrives — for better and for worse.

Take for instance the common religious belief that people possess some unmeasurable, undefinable soul, and that this soul is what separates us from inanimate objects and non-sentient animals. Furthermore, some people believe that these souls come from a deity. I have spoken with friends who believe that AGI is impossible because “robots can’t have souls, humans aren’t God”. For these people, as Caleb says in Ex Machina (paraphrasing), removing the line between man and machine also removes the line between god and man.

Now, this isn’t to say that AGI will destroy religion or anything — it may even be used to strengthen some sects (as taken to the extreme in HBO’s Raised By Wolves). No, religion has been around for millennia and I’m sure it will continue to be around for many more millennia. I’m simply predicting that a subset of religious people are going to experience lots of cognitive dissonance when the first AGI arrives.

More generally, arguments about AGI sentience and ethical issues will go from being topics only geeks talk about to topics that Facebook moms make political grandstands over.

Finally, I want to address those who may feel this post is pessimistic: I assure you, I am hopeful about AGI. I work in the field of ML because I am hopeful. I hope to personally contribute to the development of AGI in my lifetime. I think AGI has the capacity to make the world an infinitely better place. We are not prepared for AGI, but that doesn’t mean AGI has to be the end of humanity.

I don’t know what life will look like in the age of living machines, but I am confident that, as Jeff Goldblum puts it:

Life, uh, finds a way.

—Ian Malcolm, Jurassic Park

Thanks for reading,

Kai

PS — I’m making a series of short films about AGI right now! You should totally go watch the first episode, which is out now on my YouTube channel and my TikTok account.

Also, while you’re at it, why not follow me on Twitter?



Comments

  • By jtuple · 2022-03-31 22:49 · 42 replies

    I'm surprised by the number of "is AGI even possible" comments here and would love to hear more.

    I personally think AGI is far off, but always assumed it was an inevitability.

    Obviously, humans are sentient with GI, and various other animals range from close-ish to humans to not-even-close but still orders of magnitude more than any machine.

    I.e. GI is a real thing, in the real world. It's not time travel, immortality, etc.

    I certainly understand the religious perspective. If you believe humans/life is special, created by some higher power, and that the created cannot create, then AGI is not possible. But, given the number of "is AGI possible?" comments I assume not all are religion-based (HN doesn't seem to be a highly religious cohort to me).

    What are the common secular arguments against AGI?

    Are people simply doubting the more narrow view that AGI is possible via ML implemented on existing computing technology? Or the idea in general?

    While the article does focus on current ML trajectory and "digital" solutions, its core position is mostly focused on a "new approach" and AI creating AGI.

    I'd consider a scenario where highly advanced but non-sentient ML algorithms figure out how to devise a new technology (be that digital or analog, inorganic or organic) that leads to AGI as an outcome that is consistent with this article.

    Is that viable in 20 years time? No idea, but given an infinite timescale it certainly seems more possible than not to me given that we all already exist as blackbox MVPs that just haven't been reverse engineered yet.

    • By mellosouls · 2022-04-01 01:00 · 8 replies

      It's not - in my experience reading HN - that AGI-sceptics (like me) don't think it will happen, just that we have developed a healthy contempt for the hype-merchants and their wide-eyed followers who despite years of "machine learning this" and "deep learning that" and "gpt-3 the other" have nothing to show for it in terms of something resembling human-level intelligence, or even any sign of understanding how we think.

      It's perfectly reasonable to see that and still be confident AGI will happen - just not with current models.

      • By ketzo · 2022-04-01 01:19 · 1 reply

        That is a reasonable skepticism, I think.

        But I will say: by the time anyone has “something to show for it,” AGI will, pretty much by definition, already be here. I don’t think we’re gonna be able to see it coming, or know what it will look like, except through very speculative predictions. So I do think while skepticism is warranted, we should still evaluate everything in good faith, and not automatically jump to shilling. (Not saying you’re doing that, btw - just commenting)

        • By coderenegade · 2022-04-01 04:42 · 1 reply

          I think the flight analogy for AI is a good one; we have machines that let us fly, but we don't fly the way birds do. In many ways, the ways that we fly far exceed the capabilities of birds, but there are ways in which birds exceed the capability of our machines.

          The issue with AGI as it's often framed is that it implicitly assumes a general intelligence like us, without any metric that can relate human intelligence and machine intelligence. We have machines that beat us in Go, so if we took those machines and somehow combined them with machines that beat us in Chess, Starcraft and Dota, are we any closer to AGI than we were previously? What about if that same AI can also drive a car? What's the standard, and where does it end? Where do we fall on that same metric?

          There are also some fairly deep philosophical questions in this realm, too. For example, to what extent is the human experience (and intelligence) linked to our physical bodies? Cooking by smell and writing by sound are things we do on a daily basis that have no machine analogue. How important is the embodiment principle to AI? Certainly we don't do things by plugging our minds in directly -- we drive cars using the same limbs that we use for everything else, which probably has enormous efficiency advantages. To what extent can you separate the intelligence from the body and the environment?

          Frankly, I believe we're a lot further away than we think. The last decade has taught us that relatively simple methods can be applied in surprisingly powerful ways, which is an important start, but it doesn't tell us anything about how close we actually are to a given goal, and how we might go about reaching it.

          • By soco · 2022-04-01 08:15 · 1 reply

            The question is: we are further away from what? The goalposts differ wildly, depending on who you ask. From a 1:1 copy of the human mind we are very far, because we also don't have an idea what a human mind is, heck we don't even have a common definition or understanding. Is a soul part of the mind? Should the AI have a soul too? And the rabbit hole goes only further down. So while for some the singularity might be around the corner, for others it's lightyears away. I guess the only way to have everybody agree is if the singularity hits us Skynet-style and I'm definitely not looking forward to that.

            • By tomp · 2022-04-02 10:06

              > we are further away from what? The goalposts differ wildly, depending on who you ask

              literally anything

              the intelligence of a wasp; the intelligence of a cat; the intelligence of a person with Down syndrome; the intelligence of a 3-month-old baby

              AI is very far from all of the above

      • By tluyben2 · 2022-04-01 03:29 · 4 replies

        > have nothing to show for it in terms of something resembling human-level intelligence

        We keep moving the goalposts; a common theme when I studied ML at uni during the AI winter of the 90s was that beating Go would surely mean human-level intelligence. And many ML models we see now would have been considered human-level a few decades ago; we moved our goalposts and definitions, which is fine.

        However, ‘resembling’ is vague; I find most stuff on social media (most notably TikTok and Instagram) not human-level intelligence either, or, in reverse, people easily would (and do) believe comments and posts written by something like GPT-3 are written by humans. That is because the level is so low, of course. I know GPT-3 is not intelligent (by my definition anyway, which is intentionally vague), but resembling intelligence in some cases: definitely.

        • By slavik81 · 2022-04-01 04:07 · 2 replies

          > a common theme was when I studied ML in uni in the AI winter of the 90s, that beating Go surely would mean human level intelligence

          The goalposts haven't moved. If AI researchers thought that general intelligence would be required to beat a human in Go, then they were simply wrong.

          AlphaGo is not a general-purpose intelligence. It only does one thing. It plays Go.

          • By virgilp · 2022-04-01 09:42 · 1 reply

            Is the Turing test "simply wrong"? Or do we need to further qualify it, as in "a GPT-3 chatbot is human-level only if it can fool any human for a long period of time, not if it can fool _some_ humans for a relatively short period of time"?

            I think it's fair to say that the goalposts are moving (as they should).

            • By bangkoksbest · 2022-04-01 10:47

              Or maybe the Turing test is the right test but interpreted more literally: it is passed when the society that invented it genuinely believes it to have been passed. Hammering out the specific criteria ahead of time will only lead to frustration as it becomes clear that is not what the surrounding society actually holds as qualifying. The reason the goalposts move is that we fundamentally misjudge what it is we find to be human intelligence, and given that we have a nasty habit of denying this quality to other humans, let alone animals, it's entirely possible we will never genuinely grant it to technical artefacts that we devise.

          • By tluyben2 · 2022-04-01 04:28 · 4 replies

            > then they were simply wrong.

            Sure, point is that every definition we come up with gets beaten and then we were wrong with that definition in the first place; defining what agi means seems hard.

            • By omnicognate · 2022-04-01 06:35 · 2 replies

              That isn't "moving the goalposts", though. That phrasing is commonly used to imply that people who are skeptical of the imminence of AGI are being unreasonable. The fact that we can't yet define general intelligence convincingly is a large part of the reason I'm skeptical that we're close to achieving it.

              • By FabHK · 2022-04-01 10:24

                That reminds me of Moravec's paradox: What we thought was hard and difficult (and indicative of intelligence), such as playing Chess and computing derivatives, is actually quite easy, while a lot of things that we consider trivially easy (distinguish a dog from a cat, pick up an egg, read some squiggly text) are quite hard.

                https://en.wikipedia.org/wiki/Moravec's_paradox

                One of my favourite demonstrations of that was the 2015 DARPA Robotics Challenge, where one robot after another failed at such difficult tasks as walking and opening a door.

                https://www.youtube.com/watch?v=g0TaYhjpOfo

              • By tluyben2 · 2022-04-01 08:02

                I think we are very far off achieving it as well. But I do feel goalposts were moved and continuing that might be a way we will achieve it without noticing; like alien intelligence, we might create something we don’t recognise as general while it already is. But not soon either way I would think. Like aliens, I hope within my lifetime but I would put both at close to 0.

            • By robbedpeter · 2022-04-01 05:49

              Defining intelligence in a way that captures everything relevant and isn't self referential is extremely difficult.

              Dr. Marcus Hutter's AIXI is a solid mathematical treatise that reduces intelligence to the concept of information compression, and you can exhaustively construct a logical extrapolation from aixi to any particular feature of intelligence at higher levels, but it's similar to string theory in that it's all-encompassing in scope. It's not useful in narrowing the solution space if you want to build a high level intelligent system.

              https://en.wikipedia.org/wiki/AIXI

            • By adrianN · 2022-04-01 07:23 · 2 replies

              The Turing Test still stands imo. The Chinese room argument against it is not convincing to me.

              • By tluyben2 · 2022-04-01 08:18

                I do think some known weaknesses make it problematic though; some intelligent behaviour might not be human: maybe the player is a generally superintelligent computer, but its answers are so out there that the other players do not recognise them as intelligence. Or the human player is so weird that their answers are mistaken for a computer's. Or, most likely (and I have played this many times, lately with GPT-3), human behaviour often is not intelligent at all. And while computers are not intelligent, in a scenario against someone who just responds with emojis and such (so, a currently normal convo on the internet), the interrogator might assign humanness to the wrong side many or all times. We already know that people who do not realize they are chatting with a bot think it is a human.

                While I do not think that is intelligence, it does make testing for it slightly broken. It will come down to ‘I will know it when I see it’ judgments by elites, for exactly the reasons above.

                So in short: I do think the Chinese room argument against it is quite a good one, one that even the experts can fall for if the ‘con’ is elaborate enough (as in: if GPT gets more data, more efficient learning, and learns to know when it cannot answer and has to look something up, like calculations). Or maybe then it is intelligent? (To be clear: I don’t think so; I think we need better tests and definitions.)

              • By blackbrokkoli · 2022-04-01 08:43 · 1 reply

                Which one? Turing himself revised it, like, four times. The version where humans can't tell which chat member is non-human has been beaten several times (with tricks like mimicking a 13-year-old foreign boy, iirc). The version where you replace an arbitrary member of a chat with a human is mostly about speech pattern matching and plausible speech; I can't imagine that GPT-3 or whatever comes next is too bad at that. If you do it like in Ex Machina, as in chatting in person, it's mostly about building a very expensive robot that overcomes the uncanny valley.

                I think the sad state is that the Turing Test is too diluted to be useful as a marker...

                • By adrianN · 2022-04-01 09:39 · 1 reply

                  I expect "passing the Turing test" not to be a binary thing, but as AI improves it can hold longer and more nuanced discussions before the human can detect that it's a machine. So I'm not surprised that we already have chatbots that can fool some people some of the time.

                  • By tluyben2 · 2022-04-01 13:38 · 1 reply

                    Yes, but that makes it a pretty elitist thing, no? Maybe elitist is the wrong word (not a native English speaker), but I mean: I would always be able to make the test fail for the computer if I am the interrogator, though I cannot quite say why. Something like what you say, plus the fact that I think a very large % of the world population would not be able to make that distinction.

                    An interesting variant would be to do the Turing test with a timer; the interrogator gets 30s for each session, has to say who is what within those 30s, and then continues to the next batch. I think computers would go very, very far if you did that. And that is the attention span considered normal nowadays for human/human social media interaction; often it's even less.

                    • By adrianN · 2022-04-01 14:42

                      You need to distinguish humans from computers, you don't know whether you're talking to a computer or not. Failing everybody doesn't work. Maybe I'm not getting what you're trying to do.

        • By benplumley · 2022-04-01 03:48

          > beating Go surely would mean human level intelligence

          With the benefit of hindsight, this just seems like a bad goalpost. Go is hard for computers, but why did anyone think the smarts required to beat a human at it would transfer to understanding or generating text, or anything else a human can do?

        • By kafkaIncarnate · 2022-04-01 04:52 · 1 reply

          The reason for Go was that much of the research was based on AI algorithms like Monte-Carlo.

          What was "solved" with AlphaGo was solved using deep learning models, which are effectively black boxes. There was a certain assumption among academic AI researchers that the solution would be an understood algorithm, an AI agent like a Prolog application, not a brute-forced model. So we still don't have a "solved" strategy; all we can do is watch it play, as if it were a deaf-mute player.

          So there still is no "tic-tac-toe" known winning strategy to Go or anything.

          That doesn't make AlphaGo any less impressive or any less practical, but it even has its own readout issues. It can't even read ladders without hard coding it in, for instance, because it becomes a long enough depth search. This is one of the first things a newbie would learn.

          It's just a 19x19 board, so it was always known if you could read all the possible outcomes you could see all the possibilities and win. This is just looking at all possible outcomes and picking the best one, not knowing how to play. Creating models of data that is 2, 3, or even 4+ dimensions is always possible, just depends on how much computing power you can throw at it. The created models are essentially aggregate simplifications to play quicker.

          Generalized intelligence is so much different. You have to define the problems themselves that you are trying to solve, figure out what the variables are, and solve it. Then you have to operate and run the machinery to create those experiments. Outside of a scenario that you've taken actual physical territory as an intelligence, I can't see how it would get there (think Terminator or BSG, doesn't have to be malicious but they'd have to be in control of the physical area autonomously).

          But the hardest part is defining the problems independently given the sheer number of problems they'd need to define second to second just to solve basic tasks, and they'd likely have millions of variables with millions of possible values.

          • By IanCal · 2022-04-01 09:14

            No, the reason was that the search space is insanely large.

            > It's just a 19x19 board, so it was always known if you could read all the possible outcomes you could see all the possibilities and win.

            All possible outcomes is not something you can iterate over in our universe.

            > There was a certain assumption in the question for AI researchers academically that it would be an understood algorithm as an AI agent like a Prolog application, not a brute forced model.

            I never heard this, and the result is really not brute forced. You can't brute force go.

            > That doesn't make AlphaGo any less impressive or any less practical, but it even has its own readout issues. It can't even read ladders without hard coding it in, for instance, because it becomes a long enough depth search. This is one of the first things a newbie would learn.

            Only in early versions, AlphaZero didn't have any built in knowledge and can learn different games and the later developments in MuZero went further to make it more generalised as a learner.

            Removing the hard coded logic and removing even seeing how humans plan, it got better. It found strategies and ways of playing that experts had missed in an ancient game.

            > This is just looking at all possible outcomes and picking the best one, not knowing how to play.

            "It's just doing X, it doesn't really know how to Y" is a common refrain. It looks at options, and explores "what if" scenarios in a guided sense with a feeling about how good any particular potential board is. I find it hard to say that it doesn't "know" how to play.

        • By ngc248 · 2022-04-01 09:26

          >>> We keep moving the goal post

          It's more like climbing a mountain and seeing higher peaks. Researchers may have been thinking that creating a human beatable program may get them insights into how to create an AGI, but they just find out more problems to surmount.

      • By eek2121 · 2022-04-01 03:22 · 2 replies

        Hard disagree. I have no contempt; however, as a person of science, I see that we are in the technological equivalent of the stone age when it comes to AI. "Cavemen" probably had some outlandish ideas in their own era as well.

        To give you an example of how lost we are, we don't even know how human genetics work with the human brain. It is one thing to make a machine that can pretend to think, but to make a machine that not only actually thinks, but also communicates and is able to act in a 'human' or 'intelligent' fashion is exponentially harder.

        Just my 2 cents. I hope to be proven wrong, but we haven't cured cancer or discovered the secret to fusion yet, so...

        • By eru · 2022-04-01 05:19 · 1 reply

          Not sure your analogy holds?

          Building planes that fly didn't require us to understand how birds or insects fly.

          Getting directions from Google Maps didn't require them to figure out how hamsters navigate a maze.

          Now, if you want to build a computer that passes the Turing test, perhaps you need to understand how humans work. Maybe? But it's not clear that this knowledge is necessary to build something smart enough to drown the universe in paperclips.

          The latter reminds me of Edsger Dijkstra's aphorism: "The question of whether machines can think is about as relevant as the question of whether submarines can swim."

          (Now, it might turn out that we need to understand how humans tick and how genetics interact with the brain in order to build a successful paperclip optimizer. Probably not, but it might turn out that way.

          I am just saying that this would be a surprising empirical fact to learn. Not something that we can just assume based on armchair reasoning from analogy.)

          Now to get slightly off-topic:

          > I hope to be proven wrong, but we haven't cured cancer or discovered the secret to fusion yet, so...

          Oh, we can totally build fusion reactors right now!

          First, a fusor is a bench-top nuclear fusion device. The main downside is that no one has figured out how to get more useful energy out of it than we put in. So probably not what you had in mind.

          See https://en.wikipedia.org/wiki/Fusor

          Second, we can build a nuclear fusion device that does generate useful energy:

          You take a huge tank of water, some steam turbines, and a supply of fusion bombs.

          You take one of the bombs, explode it in the water, and use the turbines to generate electricity. Repeat as needed.

          It's a very simple system, and we had the means to make this work since the 1950s. Of course, it's also a completely ridiculous design that approximately no-one would want to use in practice. Especially when you already have more conventional nuclear fission reactors.

          But something along very similar lines was seriously considered for spaceship propulsion. See https://en.wikipedia.org/wiki/Project_Orion_(nuclear_propuls...

          • By fao_ · 2022-04-01 06:12 · 1 reply

            > First, a fusor is a bench-top nuclear fusion device. The main downside is that no one has figured out how to get more useful energy out of it then we put in. So probably not what you had in mind.

            > Of course, it's also a completely ridiculous design that approximately no-one would want to use in practice. Especially when you already have more conventional nuclear fission reactors.

            Right, and that's the point.

            There are three logical leaps here, and proof is missing for all three.

            - One, that AGI is something we can replicate in a Turing Machine.

            AGI might require a specific effect in quantum mechanics to work, for example. Light refraction is completely understood, but still extremely difficult to solve for a single case -- computing power is getting there, but it took about 60 years from it being understood to us being able to compute it reasonably, and even then it's only an approximation -- our best rendering farms manage around 5 F-stops. That's nowhere near the human eye's ability.

            Another example, the 3-Body problem solved for N-bodies. Or how about, protein folding. Folding@Home is remarkable, but even with the combined GPUs of hundreds of volunteers, it still takes Folding@Home months, years, to calculate the folds of a single protein. The brain has billions of them.

            - Two, that we will figure out the solution to Turing Machine simulation in our lifetime.

            This can stand on the arguments of one. I'd just like to add that there are many really simple conjectures that are as-yet unsolved within mathematics. The Collatz conjecture is a good example here, but there are hundreds of thousands of others. Despite it being probably the simplest problem to teach, it's been about 90 years and even proof theory machines haven't made too much headway. Erdos, probably the greatest mathematician to ever have lived, stated "Mathematics may not be ready for such a problem". In that, subject experts doubt our ability to solve it within our lifetime.
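
            For reference, the whole conjecture fits in a few lines of Python: iterate n → n/2 when n is even and n → 3n+1 when n is odd, and the claim is that every positive integer eventually reaches 1.

                def collatz_steps(n: int) -> int:
                    # Count iterations of the Collatz map until n reaches 1.
                    steps = 0
                    while n != 1:
                        n = 3 * n + 1 if n % 2 else n // 2
                        steps += 1
                    return steps

                print(collatz_steps(27))  # 111 steps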

            Why should this be any different for an exponentially harder problem?

            Perhaps a reference to the fact that it took about 150 years to go between "neurons are in the brain and influence our actions" to "we can do brain surgery, kind of...". And probably about 200 more years until we can say we actually "understand" the brain in a meaningful way?

            - Three. That the resulting device will be practical, useful, and will be able to understand itself enough to replicate a better version of itself, within a reasonable timeframe.

            I think you argued this sufficiently yourself! The fact that we can build something doesn't give it use. We can build machines that extract energy off of graphene. At no point does it make it useful to do so, however.

            And yet, despite all of these unsolved problems, we are supposed to be able to simulate a machine to replicate the brain? In the next 20 years, no less?! Is it not unreasonable to think that this is a preposterous assertion whatever dimension you look at it through? That so many people have thrown themselves into wishful thinking, and cults at that, is beyond me.

            • By eru · 2022-04-01 08:10 · 1 reply

              I'll try a reply, but you might get more out of reading some of Gwern's essays. Especially https://www.gwern.net/Complexity-vs-AI and https://www.gwern.net/Scaling-hypothesis

              First, what makes you think that solving the Collatz conjecture is easier than AGI? Doesn't that beg the question? Just because a challenge is easy to state doesn't mean that we should expect it to be easy to answer. Fermat's Last Theorem was easy to state; open a random math textbook for many theorems that are harder to state, but easier to prove.

              Second, quantum mechanics is totally amenable to simulation on a Turing machine. In fact, all of known physics is. You rightly point out that the question is rather how good modern hardware and software is at the task. (Btw, quantum computers would be really good at simulating quantum mechanical systems. And that would probably be their main use; right now we don't really know much else they are better at than classical computers.)

              Third, you bring up the point that figuring out how arbitrary proteins fold is still rather difficult. I agree. So an atom for atom simulation of the human brain would presumably be also rather difficult.

              That doesn't mean that AGI is impossible. It only means that AGI via atom-for-atom simulation of a human brain would be rather difficult.

              Simulating a human brain atom-for-atom is only one approach you can try to take to reach AGI. There are other approaches people are working on. Only one of them has to work.

              • By fao_ · 2022-04-03 10:10

                Sure, but all this misses my point that these are comparisons. This is what has happened with other technology in a similar ballpark of "we know how to do this, we can (technically) do this". You're arbitrarily selecting for a specific outcome in the space of all possible outcomes, when not only do we not know enough about any outcome to favour it, but we also know how other technology developed in a similar space, with similar constraints, turned out -- i.e. it didn't turn out at all how we expected in any specific way.

                So let's be honest here, in the space of all possible outcomes -- the negative outcomes where "it doesn't work" or "it works but is too slow to use" or "it doesn't understand how it works either", is a larger order of infinity than the outcomes where it does work, at least based on what has happened with pretty much every other technology that we have records of predictions for (Is there a technology that we have predicted where it wasn't wildly different or handicapped compared to our dreams? I'd honestly like to know TBH).

                There is literally no reason for pontificating more over this. All of this, the best guesses, the worst guesses, is all just a variety of wishful thinking. The only true answer is we have no fucking idea.

        • By soco · 2022-04-01 08:18

          And why should the AI copy the human mind anyway? By that definition extraterrestrial life cannot be intelligent because hey they are not humans.

      • By taylodl · 2022-04-01 14:17

        Human-level intelligence? Dog-level intelligence would be quite the achievement, and we're nowhere close! Will we be there in 20 years? I seriously doubt it - not without a radical departure in processor architecture. We're within striking distance of hitting the limits with our current architecture yet we're nowhere close to being able to achieve human-level intelligence.

      • By simonh · 2022-04-01 09:55 · 1 reply

        I think there are two kinds of AGI skepticism. There's skeptics like us that think we're nowhere even close to beginning to understand how to build an AGI, but see no reason to suppose such a thing is impossible.

        Then there are those who think AGI is impossible in principle.

        The people I get most frustrated with are those that use the first argument - we're nowhere close to implementing an AGI - as an argument that therefore AGI is impossible. Sometimes they don't even realise or recognise that this is what they are doing.

        The argument is often that "we do X and computers don't do X therefore computers can't be like us in that way". Well no computers don't do those things yet because we haven't developed AGI yet. That's not a reasonable argument that developing AGI is impossible.

        • By FabHK · 2022-04-01 10:15

          > Then there are those who think AGI is impossible in principle.

          Yes. I don't believe that it is impossible in principle either, but neither is it trivial to argue that it is possible. An argument that annoys me to no end is this: "well, AI continues to get better (there is no regression in our capability), and therefore it must sooner or later exceed any given threshold, for example human-level AGI. QED." That argument is just plain wrong. The world record in the 100-metre dash is only ever improving, but that doesn't mean that humans will soon run Mach 1.

      • By Nuzzerino · 2022-04-01 01:43 · 1 reply

        Interestingly enough, the man who coined the term "AGI" (Ben Goertzel) has always shared that same skepticism about ML, Deep Learning, and other forms of "Narrow AI".

        • By ilaksh · 2022-04-01 05:31

          Well sure but also now they are working on integrating neural networks and symbolic learning.

      • By eli_gottlieb · 2022-04-01 01:51 · 1 reply

        >have nothing to show for it in terms of something resembling human-level intelligence, or even any sign of understanding how we think.

        Well of course they have no idea how we think. They're making no actual effort to study how we think. That's for cognitive scientists and neuroscientists, and AI people often sort of dismiss it as having too long a time-horizon or overly high compute requirements before it's applicable to "real world" problems.

        • By eru · 2022-04-01 05:21 · 1 reply

          'AI people' is a fairly broad term, and the field is so hot these days that there are plenty of people trying all kinds of different approaches.

          So there's plenty of scientists and engineers who are trying to understand how the brain works and how to combine that with eg deep learning.

          • By eli_gottlieb · 2022-04-01 12:31 · 1 reply

            On the one hand, sure, I'm one of those scientists. On the other hand, we're still considered a subfield whose work isn't always considered entirely relevant to the rest of the field, because we're measured by the yardstick of task-specific performance on benchmark datasets. I just reviewed a paper yesterday that was powerful and general, but wasn't really trying to hit SOTA on any one test task its architecture could perform. It was trying for generality. This likely got it dinged by the other reviewers.

            • By eru · 2022-04-01 12:35

              > On the other hand, we're still considered a subfield whose work isn't always considered entirely relevant to the rest of the field, because we're measured by the yardstick of task-specific performance on benchmark datasets.

              That attitude seems entirely reasonable to me.

              > I just reviewed a paper yesterday that was powerful and general, but wasn't really trying to hit SOTA on any one test task its architecture could perform. It was trying for generality. This likely got it dinged by the other reviewers.

              That seems like an interesting paper!

      • By colechristensen · 2022-04-01 01:08

        I think basically it requires machines that work in fundamentally different ways that can't be reduced to statistics, and that we'll get better at approximating intelligence for a while until we hit a wall where no amount of scale makes any big difference.

    • By idontwantthis · 2022-04-01 04:14 · 5 replies

      I will continue dismissing AGI until someone can tell me what GI is. I don't believe that it's just statistical inferences run in parallel (Machine Learning).

      If we don't know what GI is, then I don't know how we are supposed to replicate it.

      It's like if we wanted to produce power on Earth the same way as the Sun does, but without knowing that was fusion. So we were just trying to make things look like the Sun. Someone says "Look, I made a ball so hot you can't look at it, we must be close!"

      I believe we will have fusion eventually because it is a process that we deeply understand, it's just hard to do.

      I don't believe we are on a path to AGI right now because no one understands even a little bit of GI.

      • By Xcelerate · 2022-04-01 05:07 · 4 replies

        > I will continue dismissing AGI until someone can tell me what GI is

        Here's a rather famous recommendation for a definition of general intelligence: https://arxiv.org/pdf/0712.3329.pdf

        Very loosely paraphrasing their paper, general intelligence is the ability to adapt to novel situations and predict the future better than any other algorithm could do.

        Personally, I think of perfect AGI as this: given that our universe is capable of approximation to arbitrary precision using a universal Turing machine (and infinite time and memory), AGI is the best* computable approximation to Solomonoff induction.

        *"best" being some mathematical notion of optimality.

        • By eru · 2022-04-01 05:25 · 1 reply

          > Very loosely paraphrasing their paper, general intelligence is the ability to adapt to novel situations and predict the future better than any other algorithm could do.

          The 'better than any other algorithm' part seems to make this into a pretty weird definition?

          By that standard only John von Neumann [0] was intelligent, because every other human wasn't as smart as him?

          I suspect any reasonable definition of intelligence has to work on a 'good enough' basis, not on optimality.

          The paper you linked is quite interesting, and I'll be reading it now. I suspect your summary was perhaps a bit too brisk, and their more careful longer definition survives this trivial objection?

          [0] Replace von Neumann with your favourite genius.

          • By idontwantthis · 2022-04-01 05:29 · 1 reply

            Humans are terrible at pretty much everything except what they are specifically highly trained in.

            von Neumann was a genius, but he also actively promoted nuclear war. Does that mean he is or is not intelligent? Do you think in a world where von Neumann got to nuke the USSR in 1950, we would be happy we gave him the keys?

            • By eru · 2022-04-01 07:51

              Ignore von Neumann, he was just an example.

              Basically, with the definition as given in the comment, only whatever matches up with the smartest thing around would count as intelligent. (Whether that's the smartest thing currently around, or the smartest thing possible even in theory isn't quite clear.)

              In any case, von Neumann was definitely extremely intelligent for a human. Don't conflate intelligence with wisdom, or with having goals that align with yours.

        • By aqsalose · 2022-04-01 09:03

          Quoting the key definition:

          "Bringing all these pieces together, we can now define our formal measure of intelligence for arbitrary systems. Let E be the space of all computable reward summable environmental measures with respect to the reference machine U, and let K be the Kolmogorov complexity function. The expected performance of agent π with respect to the universal distribution 2−K(μ) over the space of all environments E is given by Υ(π) := [unquotable latex expression]. We call this the universal intelligence of agent π."

          It is a useless kind of mathematical object from any practical perspective. Sure, some formal object is defined, but all the things actually needed to use it are left as difficult, possibly devilishly or impossibly difficult, implementation details. How do you actually build a map from physical reality to the environment E? How is the agent π mapped to any real physical being? All the real work that would make the definition useful is left out.

          It is like defining complexity classes without providing any algorithms.

          edit. Further complaint: Here is how they present "examples":

          >A very specialised agent. From the equation for Υ, we see that an agent could have very low universal intelligence but still perform extremely well at a few very specific and complex tasks. Consider, for example, IBM’s Deep Blue chess supercomputer, which we will represent by π_dblue. When μ_chess describes the game of chess, V^{π_dblue}_{μ_chess} is very high. However 2^{−K(μ_chess)} is small, and for μ != μ_chess the value function will be low as π_dblue only plays chess. Therefore, the value of Υ(π_dblue) will be very low. Intuitively, this is because Deep Blue is too inflexible and narrow to have general intelligence; a characteristic weakness of specialised artificial intelligence systems.

          Why do they need any formal definition of Υ to express this if they don't bother fleshing out some features of μ_chess so that you could provide any bounds for K(μ_chess)? Without such work, all of the actual claims in the paragraph are appeals to intuition. Nothing is proved, so the formal definitions go unused. The contents of the quoted paragraph could be expressed without referring to any equations or "definitions" at all; they are totally superfluous.

        • By idontwantthis · 2022-04-01 05:22

          So are humans not GI then? We are terrible at predicting the future, in general. "Frustration" is the feeling of things not going how you thought they would or should.

        • By novok · 2022-04-01 06:09

          I didn't read the paper, but going along with the sun metaphor, your description kind of sounds like "hot ball in sky that radiates light". It still doesn't describe fusion.

      • By VladimirGolovin · 2022-04-01 05:28

        Here's a definition from Shane Legg and Marcus Hutter:

        Universal Intelligence: A Definition of Machine Intelligence https://arxiv.org/abs/0712.3329

      • By simonh · 2022-04-01 09:59

        What do you mean by "dismiss AGI"? Do you dismiss the idea that we are currently on a developmental path to achieving AGI in the medium term (a lifetime or two) with current approaches, or do you dismiss the idea that AGI is possible at all?

        I would agree on the former, but that in no way implies the latter.

      • By hutzlibu · 2022-04-01 04:45 · 2 replies

        " I don't believe that it's just statistical inferences run in parallel"

        Why not? I also believe, there must be something more, but I cannot articulate it.

        But as disturbing as it sounds, most natural learning might be just this. Lots of statistics run in parallel.

        • By idontwantthis · 2022-04-01 05:07 · 1 reply

          And what about things people do that don’t make any statistical sense?

          Is it AGI if it picks up its underwear the first time you ask, or is it only AGI once it waits for you to get angry after the fifth time you ask?

          • By hutzlibu · 2022-04-01 05:53 · 1 reply

            "And what about things people do that don’t make any statistical sense?"

            Like what? It is always individual statistics made up from partly faulty data. It is never objective; what mattered in an evolutionary sense was that it works well enough (like how ants' algorithms are not perfect, but get the job done most of the time. I watched ants a lot ...)

            • By idontwantthis · 2022-04-01 06:04 · 1 reply

              So we are intelligent statistical engines but usually very bad at updating our priors. If you make a machine that can drive a car, but doesn’t like to because its father frequently had road rage, is that AGI? If you build that but then “fix” that behavior out of it, then is it still AGI?

              • By hutzlibu · 2022-04-01 06:24 · 1 reply

                Consciousness is something I cannot explain at all with statistics (or any other mechanism).

                True AGI by my definition would require consciousness, but as far as I know, there is also no generally accepted definition of consciousness either.

                • By eru · 2022-04-01 08:26

                  That's the beauty of the Turing test: it avoids having to worry about consciousness.

                  (Of course, passing the Turing test wouldn't be necessary for intelligence, at most it's sufficient. Otherwise, humans who lost the ability to produce language but are otherwise quite smart and capable would count as non-intelligent.)

        • By idontwantthis · 2022-04-01 04:46

          That’s just my belief. I might get proven wrong, but I haven’t been so far.

      • By ilaksh · 2022-04-01 05:32

        See M. Mitchell's Artificial Intelligence: A Guide for Thinking Humans

    • By mr_toad · 2022-03-31 23:00 · 1 reply

      People often confuse not being able to understand how something is possible with it being impossible.

      Despite all the progress being made by using a little bit of design, and a whole lot of brute force, people keep saying it can’t work, and they keep being proven wrong.

      Ironically I think this means AI won’t be creating AI in an exponentially increasing way - because AI is more about scaling and emergence of dumb elements than it is about grand designs.

      • By TacticalCoder · 2022-04-01 00:59

        > ... people keep saying it can’t work, and they keep being proven wrong.

        It's impossible to prove wrong all those saying: "AI is not AGI" until you eventually end up with an actual AGI. Nobody is proven wrong.

    • By ecopoesis · 2022-04-01 02:36 · 2 replies

      The atomic bomb is a good contrast. In the early 30s we knew a bomb was possible (Szilard's patent was filed in 1934) but there were huge engineering problems to overcome (enriching uranium). But we knew that even if we couldn't scale up isotope separation and we had to go the long way around, we could still make a bomb with enough time.

      There is no Szilard patent for AGI: no one has any theory on how to make it work other than “make it bigger,” which as this article points out hasn't paid off like we'd hoped.

      I’d have a lot more faith that AGI is possible if we had any kind of theory or roadmap on how to get there. Counting on a black box inventing it for us seems like waiting for a million monkeys to finish King Lear 2.

      • By marvin · 2022-04-01 08:50 · 1 reply

        > There is no Szilard patent for AGI

        You don’t know that. But even worse, you probably won’t know if it happens. History is not as predictable in the present as it is in hindsight.

        Most physicists in 1935 would still have said that an atomic bomb was impossible, and defended it with the same indignant vigor demonstrated in this thread.

      • By FeepingCreature · 2022-04-01 05:03

        The techniques that will allow AGI are probably already invented. How many people at the time knew what that patent meant for the future of the world?

    • By yojo · 2022-04-01 00:46 · 3 replies

      I think AGI is usually equated with “superhuman” intelligence, though that's obviously not a requirement.

      If superhuman intelligence were easy, why aren’t we already more intelligent? Being smart has already demonstrated considerable fitness benefits for the human race. It seems like we haven’t gotten much smarter in the last few thousand years though, at least from a raw horsepower point of view.

      My intuition is that past a certain level of complexity intelligent systems become inherently unstable. This is all just a handwave, but there’s some circumstantial evidence in the confluence of genius and mental disorder in our species.

      If that were true, the question then becomes: is biology an inferior substrate for general intelligence compared to silicon? Obviously a CPU can add two integers more efficiently than a meat brain, but it’s not self-evident that this will hold true for more complex computation. Put another way, if you try to make an AGI agent “smarter” than a person, you might just end up with an irrational system that spits out nonsense.

      So it may be that for complexity reasons you can’t beat human-level GI. At which point the question becomes whether AGI is cheaper to produce and operate than feeding and sheltering a human. If the answer is “no”, then AGI may be both possible and irrelevant.

      • By AgentME 2022-04-011:382 reply

        Human ancestors evolved larger and larger brains until they reached the limits of what we can support calorically and of what fits through the birth canal. It seems unlikely to me that these limits just happen to line up with general limits on intelligence. An AGI will be easier to scale further up, without these biological limits.

        (And even if those biological limits coincidentally lined up with general limits on intelligence, just the fact that AGIs will be able to duplicate themselves or share knowledge directly would itself be a huge practical increase to their intelligence over us.)

        • By chrisco255 2022-04-012:59

          There's no proof that humans have hit any such limits, and that's not how genes and evolution work anyway. We got intelligent enough to adapt to the environment and conditions thrown at us, surviving the ice age and the many inter-tribal wars fought since. But once a species hits a plateau in selection pressure there is little improvement, except via sexual competition.

        • By eru 2022-04-018:301 reply

          I'm not sure the birth canal is a real limit. It's a limit on how wide your head can be when you are born.

          That's not the same as a limit on brain size in adults. (Nor even directly a limit on brain size in newborns: up to a point they could always get longer heads instead of wider ones.)

          > (And even if those biological limits coincidentally lined up with general limits on intelligence, just the fact that AGIs will be able to duplicate themselves or share knowledge directly would itself be a huge practical increase to their intelligence over us.)

            Yes. A human with access to a calculator is much more capable than one without. An otherwise human-level AGI with hard-wired direct-'brain'-access would be even more capable, without having to manipulate the calculator with clumsy fingers and interpret its outputs through a general visual system.

          Add direct-'brain'-access to Google and Wikipedia and to huge amounts of raw storage, and your human-level AGI would already be super human.

          • By tacocataco 2022-04-0119:30

            "(Nor even directly a limit on brain size in newborns: up to a point they could always get longer heads instead of wider ones.)"

              Coneheads confirmed? Surely I wasn't the only one to watch it.

      • By PeterisP 2022-04-013:531 reply

        There are many ways biology could make "better" humans - faster, stronger, more resilient, etc. - but they are not evolutionarily useful because they cost extra calories, and apparently they are not worth the cost in a calorie-restricted environment. Our bodies have been optimized to have as little of the "good stuff" as possible to conserve calories. We have mechanisms that prevent building more muscle unless it's really necessary (as demonstrated by exercise), even when we have spare food. It does not have to be this way: gorillas don't have to exercise to develop their huge muscles. It's just that our bodies (unlike theirs) are heavily optimized in favor of "cheaping out on features" to be more resistant to starvation.

        IMHO that also fully 'explains away' the issue of "why aren't we already more intelligent". Brains are excessive consumers of energy compared to other organs, so extra brain mass costs calories. At a certain point (which seems to be our current brains), being a bit smarter does not let a hunter-gatherer harvest many more calories per day in the seasons and crisis events where food sources are scarce (which are the only times that matter for evolution), so brain growth stops there, just to save calories. Improvements that are 'zero cost' (i.e. brain-structure changes that give more intelligence for the same calorie expenditure) are welcome, but the trivial route to more 'processing power' through larger brains is aggressively selected against.

        And all this calorie saving apparently was worthwhile - e.g. Neanderthals were stronger and had larger brains than Homo sapiens, but we were a bit more efficient, so we are here and they are not.

        • By eru 2022-04-018:31

          And we _are_ already more intelligent. E.g. we are more intelligent than rats, even though rats are pretty smart and evolution had plenty of time to make them smarter.

      • By greggman3 2022-04-011:023 reply

        It's pretty hard to believe that AGI won't beat humans, given the ability to transfer knowledge between digital brains (something that seems all but inevitable) and faster access to more knowledge in general (because you can make a bigger brain with more memory). Even if all they had were human-level brains but could transfer thoughts faster, they'd still beat us.

        • By shkkmo 2022-04-011:582 reply

          I suspect that intra-network latency has a significant impact on intelligent systems. Interconnectivity also seems crucial, but as you add nodes, the number of connections needed to keep the network tightly coupled scales dramatically. These connections also need space, so if you keep the same level of interconnection as you add more nodes, your density drops and your latency goes up.

          There is thus a fundamental tradeoff between network size, network latency, and network interconnectivity that can't be avoided. While we may eventually beat the human brain on some or all of these measures, there is good reason to believe there are fundamental limits on the scalability of intelligence.
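
          A rough back-of-the-envelope sketch of that scaling pressure (a toy model, assuming each link occupies constant volume and that latency tracks the physical radius a signal must cross):

            # Toy model: the cost of keeping a network fully connected as it grows.
            # All-to-all links scale quadratically with node count, so wiring volume,
            # and hence the radius a signal must traverse, blows up.
            for n in (1_000, 10_000, 100_000):
                links = n * (n - 1) // 2           # pairwise connections
                radius = links ** (1 / 3)          # radius ~ cube root of wiring volume
                print(f"n={n:>7}  links={links:.2e}  relative radius={radius:.0f}")

            # 100x more nodes -> ~10,000x more links -> ~21x larger radius.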

          • By FeepingCreature 2022-04-058:31

            There is simply no reason to believe that the human brain is anywhere near optimal on this tradeoff.

          • By yojo 2022-04-012:33

            Thank you for this. I was hoping someone would contribute a better explanation than my own vague intuition.

        • By yojo 2022-04-011:201 reply

          Again, I will hand-wave about complexity. A larger memory and the ability to load/unload large corpora rapidly implies a very different kind of cognition than the one we enjoy.

          I’m not saying you’re wrong, only that I can imagine a reality where you can’t just arbitrarily scale components of cognition and still have a functioning system.

          • By gonehome 2022-04-012:091 reply

            It's more likely (imo) that humans are constrained by biological limits (energy, head size) than by some true maximum.

            A simple thought experiment is the same architecture of the human brain (whatever that is), but just with way more power to run at more operations per second.

            • By breuleux 2022-04-012:452 reply

              Would that actually work, though, or would it just cook itself?

              I think the general question to ask is: for a given organism or machine, at what point does it stop being cost-effective to make it smarter? Nature isn't just holding human intelligence at its current level; it's doing the same for all other organisms. There seem to be tradeoffs. Which ones would apply to AI? And which ones are not currently relevant but will apply to future AI?

              For example, AI has no physical predators currently, but you could imagine a future where they do, or where nanobots or special bacteria attack silicon. In such a future, it is possible that very intelligent AI would suddenly become uncompetitive, relative to dumber and leaner ones. Viable superintelligent AI might then turn out to be just as difficult or circumstantial to "evolve" as humans were.

              • By gonehome 2022-04-013:522 reply

                Why would it cook itself if it was replicated in silicon?

                I think it's a lot less likely humanity just happens to be at some natural limit of intelligence rather than being bounded by other unrelated constraints.

                AGI tradeoffs would be different because it'd be unconstrained by biological natural selection. If you're interested in this stuff, it's worth checking out the work that lays out the AGI goal-alignment problem more precisely.

                The most commonly recommended book is Bostrom's Superintelligence, but it's fairly dry. Tegmark's Life 3.0 is at the other extreme, a bit too pop-sci. I think Yudkowsky's writing is the best at explaining the issue (specifically AGI goal alignment and what the specific problem is) in a way that's accessible but not dumbed down.

                • By coderenegade 2022-04-014:561 reply

                  The human experience is fundamentally chemical in nature (which is why alcohol has an impact on cognition, for example). What would it even mean to replicate that in silicon? I know food is bad because I can hold it, smell it, and taste it, none of which a computer can do. If you tried to upload someone's consciousness to a machine, you would need to emulate the full sensory experience as well, because there would be zero context otherwise.

                  What's missing from discussions of AGI is a measure that relates human intelligence to machine intelligence, and recognition that our own intelligence is fundamentally tied to our physical bodies. To what extent does it make sense to talk about intelligence without also talking about its embodiment?

                  • By gonehome 2022-04-016:031 reply

                    Inputs can be replicated in other forms (cameras plus computer vision for visual input, for example).

                    There's a common misconception that AGI would be human-like; consciousness is a poorly defined rabbit hole that is mostly orthogonal to the AGI question (and the goal-alignment issue).

                    That said, you're right that the ability to train is dependent on inputs.

                    If you're curious about this stuff, it's really worth reading the books I mentioned to get a sense of what the problem is. I only suggest them because I would have made a similar comment to yours a few years ago, before getting a better sense of what the problems are actually describing.

                    • By coderenegade 2022-04-016:551 reply

                      >A simple thought experiment is the same architecture of the human brain (whatever that is), but just with way more power to run at more operations per second.

                      That's your quote, is it not?

                      I'm pointing out that it's not just a matter of replicating an architecture, because the architecture only really makes sense in the context of everything else in the human body. There's zero reason to expect that we could replicate the architecture of a human brain (whatever that means) and just plug in a few cameras to have a working model.

                      This also implies that it would be exceedingly difficult to make human-like AGI, because doing so would be tantamount to making an artificial human. Anything that we do make will almost certainly not be human-like, and will probably only make sense in the context of its sensory system and the inputs it receives. This goes beyond training, because inference happens on data drawn from the same distribution that you trained on. Note that I haven't said anything about consciousness here, because that's an unrelated issue.

                      I've read bits and pieces of Bostrom, and I would argue that it's "not even wrong", in the sense that, yes, building a child god would be a disaster for humanity, but we don't even know if building such a god is possible, because it's totally undefined and we have no way of measuring it. That's the problem. Without a definition of intelligence as a physical process that maps humans and AI to the same measure on a like-for-like basis, it's essentially an argument on whether or not Thor is stronger than Superman. We have no context for what the limitations are likely to be; it may be that building an angry god just isn't possible.

                      • By gonehome 2022-04-0114:301 reply

                        I think we’re talking past each other.

                        The point of the architecture example is not that it'd be done this way or that it's practical; it's just to point out that there's nothing magical about us, and that if you had an existing model and ran it faster (without biological constraints) you'd get something smarter than us. It's a simple possibility argument built on something that already exists.

                        In practice AGI likely won’t be that for reasons you suggest.

                        I'm not sure why you'd think AGI is not possible when there's general intelligence all around us and nothing magical about biology. Current methods maybe won't get us there, but the current stuff is already superhuman in some domains and does generalize a bit (AlphaZero).

                        Maybe it’s impossible for some unknown reason, but I’d bet against that. If it’s possible then the goal alignment problem is a real issue. With an unknown timeline it makes sense for some people to work on it now before we need it.

                        • By coderenegade 2022-04-023:571 reply

                          I think you're right that we're talking past each other.

                          I'm not saying that AGI is impossible; I'm saying that it's impractical to try to discuss AGI without some understanding of what intelligence is as a physical process. As an analogy for why: we understand fusion, to the extent that we know how to make fusion bombs. In theory, we could keep adding stages to a fusion weapon to make a bomb large enough to crack the planet in half, but in practice other factors start to dominate the practical explosive yield well before we ever hit that point. So while it's theoretically possible for us to make a planet-ending weapon, it's not practical in any sense. And as it turns out, not only are planet-ending weapons impractical, bombs today have lower explosive yields than in the heyday of the nuclear arms race, because those weapons turned out to be impractical as well.

                          Talk on safe AI seems to be dominated by dark genies, when we don't even have the AI equivalent of a theory of nuclear physics, or lift. We just don't know what the practicalities of building superintelligent AI even are, so it seems premature to be ringing the gong and raising alarm bells that we might be building Beelzebub in someone's basement. If there's one thing I am sure of, it's that we won't just summon Skynet by accident; it'll almost certainly be the end result of the development of a fundamental theory of intelligence (or equivalent), and the accumulated work of engineers and scientists, probably over decades. You wouldn't expect scientists to accidentally make ITER, for example, without a theory of nuclear physics, even if they did have some notion of "hot rocks". Superintelligent AI seems at least as hard in my opinion, and I think that by the time we're in a position to build such an entity, we'll also have some idea of what the limitations are likely to be, how much of a risk it actually presents, and ways to constrain / mitigate these risks.

                          Note that this isn't the same thing as saying that the current field of AI safety isn't important, because it is. Narrow AI is still dangerous in the same way that we don't need planet-ending weapons for fusion bombs to be dangerous. But I'm less concerned with paperclip maximizers (because again, the notion that an AI could somehow turn the planet into paperclips raises serious and fundamental questions about the nature of embodiment that such proposals never actually grapple with) than I am with systems that enshrine and enforce social inequalities, or take power away from the average citizen, because the latter are things that we know are possible, even through negligence. More to the point, we don't need a theory of intelligence to assess the ways in which such systems could pose a threat.

                          • By gonehome 2022-04-0217:28

                            I think all of this is reasonable and we'd probably have an interesting in-person discussion.

                            I think flight is a good comparison. Before human flight it was possible to speculate about the risks. The machines we built turned out to share some underlying principles with birds, but ultimately we do it differently. We can supply more power and, as a result, do things at a scale not present in the natural world.

                            Maybe it's the case that intelligence is a special case with special constraints, but I suspect it's not. As a result it makes sense for some people to try to solve alignment now because if it ends up not being constrained then by the time we need it, it'll be too late to figure out the problem.

                            Maybe we'll get lucky (like we did with nuclear weapons not being able to be made by any random person in their backyard), but I wouldn't bet on it.

                • By breuleux 2022-04-0121:31

                  Silicon-based machines do need cooling, and arguably an AGI-grade processor would be 3D, which is an even greater cooling challenge.

                  Speaking of silicon, I'm curious why it is virtually absent from organic chemistry despite its sheer abundance on Earth. Is it because it's too hard to extract from its oxides? Or is it just completely outclassed by carbon? Depending on the answer, it is quite possible that future AI will eschew silicon altogether and run on organic chemistry.

              • By eru 2022-04-018:38

                > Would that actually work, though, or would it just cook itself?

                You could add better cooling than what the human hardware provides.

                Even just taking a normal human head and dunking it in cold water dissipates a lot more heat. (If you try this at home, I suggest getting a snorkel.)

        • By breuleux 2022-04-012:321 reply

          > It's pretty hard to believe that being able transfer knowledge between digital brains (something that seems arguably inevitable)

          I don't think it's necessarily inevitable. It is not a given that future hardware architectures for AI would be inspectable or copiable, because the extra wiring required to do that is space and energy overhead. It may also be the case that greater intelligence comes through better distributed representations and that it simply isn't possible to cheaply translate knowledge from an inferior representation to a superior one (you may need to relearn from scratch). The ability to transfer thoughts may therefore require a sort of lowest-common-denominator representation, in other words, a language. I imagine that language could be more efficient, though.

          • By eru 2022-04-018:36

            You are right that it's not inevitable, but still pretty likely.

            Also keep in mind:

            Human nerves work (roughly) at the speed of sound.

            Computers work (roughly) at the speed of light.

            We already make computers that are much, much faster than humans at sequential processing.
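
            A back-of-the-envelope comparison (the numbers are illustrative assumptions, in the right ballpark rather than measurements):

              # Signal latency across a ~10 cm path: nerve vs. wire.
              path_m = 0.1        # assumed signal path across a brain
              nerve_mps = 100.0   # fast myelinated axons, order of magnitude
              wire_mps = 2e8      # roughly 2/3 the speed of light in copper/fiber

              print(f"nerve: {path_m / nerve_mps * 1e3:.1f} ms")  # ~1 ms
              print(f"wire:  {path_m / wire_mps * 1e9:.1f} ns")   # ~0.5 ns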

            About the representation thing: if you start with AIs that are copies from each other, you can probably keep the 'superior' internal representation the same or nearly the same. So I would expect AI clones to be able to exchange knowledge much quicker than unrelated AIs.

    • By colinmhayes 2022-03-3123:447 reply

      AGI is only inevitable if you subscribe to materialism. Basically it comes down to whether we are purely material beings or whether there are non-material aspects to our mind. There are more than a couple of non-religious arguments for dualism, many of them relatively recent, as dualism is enjoying a bit of a comeback in philosophical circles. The Chinese room and the red room, a.k.a. Mary's room, are simple enough thought experiments that can help you grok why dualism might be true.

      That being said, the most recent PhilPapers survey has 52% accepting materialism compared to 32% accepting dualism, with the rest undecided, so academic philosophy does generally lean toward AGI. https://survey2020.philpeople.org/survey/results/all

      • By simonh 2022-04-010:055 reply

        The Chinese room argument is a parlour trick that uses scale as a distraction. It posits a person in a room manipulating symbols to produce intelligent-seeming outputs. It says: see, it's absurd to think a person in a room with a stack of symbols could emulate intelligence.

        But let’s say the room contains many billions of people, it is the size of a planet, and it contains racks of many trillions of symbols, and it spends millions or billions of years to produce an output. That’s more like the scale of a sophisticated computer system, or a brain.

        Does that sound much like a man in a room with some symbols? No. Does it sound like that could do complex calculations and produce sophisticated and perhaps even intelligent outputs? Well, given enough time and scale, yes why not?

        The Chinese room is pure misdirection and it amazes me anyone falls for it. There’s really no actual argument there.

        • By _xnmw 2022-04-010:303 reply

          No, you're misunderstanding the Chinese room argument completely. It's not about scale; it's about the concept of "understanding" something. Here's another version that might make sense to programmers. I know Python, and I can read/write/compile Python code in my head. My computer's REPL can also read/compile Python code, by following a detailed set of pre-programmed instructions to convert it to bytecode. Nevertheless, the computer does not "understand" Python: it cannot write Python code in response to an arbitrary problem, nor does it "understand" Python the way a person does. It is not a programmer; a programmer "understands" the language, can produce new, creative, meaningful output, and doesn't merely follow instructions.

          • By nautilius 2022-04-011:101 reply

            The catch in your argument is that from the outside, you manipulating code and your computer manipulating code are indistinguishable. You say the only difference is that you understand, whereas the computer doesn't. However, from the outside, there is no difference between a statement print("I understand this") running on you or a computer.

            • By endisneigh 2022-04-011:211 reply

              What you are stating is the entire point of the thought experiment.

              • By nautilius 2022-04-011:531 reply

                Cool, there are at least five different "that is the whole point" takes on what the Chinese room means in this discussion. Also, I disagree with your take on it; that's not at all the point.

                • By endisneigh 2022-04-014:321 reply

                  The fact that you're saying "from the outside" is literally the point. Outside implies an inside, which implies dualism; hence the thought experiment. If there were no outside, the thought experiment inherently couldn't exist.

                  • By nautilius 2022-04-0114:371 reply

                    Easy there with the words. Just because there are two of something is hardly an argument for (mind-body) dualism [1]. Is the dichotomy of cats vs. dogs then proof of said dualism as well? How about vanilla vs. chocolate? What do you make of the existence of a (six-sided) die, then?

                    [1] https://en.wikipedia.org/wiki/Mind%E2%80%93body_dualism

                    • By endisneigh 2022-04-0114:441 reply

                      smh, have a good one.

                      • By nautilius 2022-04-0120:53

                        > Outside implies an inside implies dualism

                        Well, unless with dualism you just mean that there are two, you are simply mistaken. Just because my car has an inside and an outside, and I cannot determine the state of its gearbox from the outside w/o the car telling me, does not mean that this implies a ghost in the machine. Far from it.

          • By agency 2022-04-011:003 reply

            It seems like the Chinese room argument is incompatible with a materialist world view. To a materialist, what could the brain be but a computer? Some sort of physical process is taking place inside that is processing information, and at some level of abstraction that process will look like "dumb" symbolic manipulation. Yet we achieve "understanding".

            • By colinmhayes 2022-04-011:121 reply

              Yes that is the point. The Chinese room is a thought experiment meant to prove dualism.

              • By slibhb 2022-04-012:461 reply

                No it's not. Searle is explicitly not a dualist.

                • By simonh 2022-04-0114:54

                  He thinks consciousness is a physical property. It's not clear to me why he thinks only brains, or maybe only living things, can have this property.

                  He makes the analogy that a weather simulation in a computer can't make anything wet so therefore a computer program can't have a thought. My take is that when we think about rain we don't get wet either. Human minds are the same sort of thing as the weather simulation, not the same sort of thing as the weather. For me thoughts are simulations, or models, or operations on models, and that those activities are tractable to computation.

            • By staticman2 2022-04-0113:30

              Nothing about materialism implies lumping organic and inorganic substances into a catchall term "computer" leads to greater understanding of how either works.

            • By eru 2022-04-018:40

              How do we know that what we achieve is different from what the Chinese Room achieves?

              Doesn't that beg the question?

        • By throwawaygal7 2022-04-010:361 reply

          I think the Chinese room experiment is pretty effective, but one person might say the room is conscious and another might not. We all agree that, if we had a Chinese room in front of us, the only real way to tell would be a Turing-style test.

          • By mach1ne 2022-04-014:581 reply

            Not really; the Turing test and its variants aren't that relevant anymore. Even Turing himself didn't like it.

            • By eru 2022-04-018:42

              Why are they not?

        • By slibhb 2022-04-010:303 reply

          > It says, see, it’s absurd to think a person in a room with a stack of symbols could emulate intelligence.

          That's not at all what it says. It says:

          1. Assume a computer program can pass a Turing test

          2. Convert the computer program into a list of steps that a human in a room can follow

          3. Put a human in the room. Pass him a slip of paper with Chinese writing. Have him follow the steps to generate a response.

          4. Now consider, once you've done this, does the human in the room following those steps understand Chinese?

          Searle's answer is obviously not, from which he deduces that the computer program doesn't understand Chinese either.
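
          To make step 2 concrete, the "list of steps" can be as dumb as table lookup. A toy sketch (a real rule book able to pass a Turing test would be unimaginably larger, but the principle is the same):

            # A toy "Chinese room": the person inside matches symbols against a
            # rule book without attaching any meaning to them.
            RULE_BOOK = {
                "你好吗？": "我很好，谢谢。",
                "你会说中文吗？": "当然会。",
            }

            def room(slip: str) -> str:
                # Pure symbol manipulation; no understanding required.
                return RULE_BOOK.get(slip, "请再说一遍？")

            print(room("你好吗？"))  # a fluent-looking reply, zero comprehension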

          • By simonh 2022-04-010:461 reply

            Right, but the person in the room isn’t the computer program. He’s not even the whole computer. He’s just a component of the system. It’s the system that we should consider as understanding Chinese, not just a piece of it. He’s following a set of instructions so he doesn’t even have all of those in mind at once either.

            By abstracting the system as a man in a room, we're misled into thinking we're considering the whole system when we're not. If the man in the room is one of billions of men, at the massive scale that is a more realistic model, that becomes obvious.

            • By slibhb 2022-04-011:003 reply

              Yes, one response to the thought experiment is "the whole system understands Chinese". I don't find that response convincing. Do you really think a book with instructions in a room plus a man to carry them out understands Chinese?

              The thought experiment hinges on the concept of understanding. If you want to argue against Searle, and you can, your best bet is to attack that concept: to argue that "understanding" is an illusion or that it's nonsensical. This is what philosophers like Daniel Dennett have done.

              • By simonh 2022-04-011:231 reply

                > Do you really think a book with instructions in a room plus a man to carry them out understands Chinese?

                Again with the scale misdirection. Are you really sure a room the size of Jupiter with a rule book the size of many libraries, full of trillions of people following the rules, shuffling quadrillions of symbols for millions of years definitely can’t ever understand Chinese? Not in any sense?

                • By slibhb 2022-04-011:301 reply

                  You're bringing up scale, not me. The book can contain 10 billion lines of instructions. That's a lot of "scale".

                  The thought experiment is about the concept of understanding/intelligence/consciousness/whatever. It has nothing to do with scale.

                  • By simonh 2022-04-011:401 reply

                    If it has nothing to do with scale, then why doesn't Searle use a realistic example of a vast room? Why a book and not a library? Why a man and not an army?

                    I’m bringing up scale because I’m pointing out how it’s being used.

                    • By slibhb 2022-04-011:481 reply

                      If you insert the adjective "vast" before room, then you get The Vast Chinese Room thought experiment and nothing changes.

                      • By simonh 2022-04-012:081 reply

                        A lot of people find it credible that a hugely complex fast computer system could be intelligent. Many of them seem to find it convincing that a man in a room with a book can’t “understand Chinese” when he personally doesn’t. But the only difference between them is scale.

                        • By slibhb 2022-04-012:462 reply

                          > A lot of people find it credible that a hugely complex fast computer system could be intelligent.

                          But not John Searle... so it seems clear that scale has nothing to do with the thought experiment. For Searle, a vast room or an army of people carrying out instructions makes no difference.

                          > Many of them seem to find it convincing that a man in a room with a book can’t “understand Chinese” when he personally doesn’t

                          The people who read the thought experiment this way don't understand it. In the thought experiment, neither the computer program (no matter how powerful the computer) nor the man-and-room system understands Chinese.

                          > But the only difference between them is scale.

                          No, the point of putting a man in the room is that we all agree that a human is capable of understanding. From his vantage point in the room, he can see that the program works by executing instructions without any understanding. All the room does is demystify a hypothetical computer program that can pass a Turing test.

                          Searle is not arguing that a machine with understanding/consciousness is impossible. He's arguing against behaviorism ("if it can pass a Turing test, it understands"). He takes the psychological phenomenon of "understanding" (something we're all familiar with) seriously; he thinks it represents a real thing, and that a computer that can pass a Turing test doesn't necessarily have it.

                          • By simonh 2022-04-017:271 reply

                            I don't see how that's different from a neurologist examining a human brain. They can see it's just cytoplasm and electrochemicals oozing around and firing off signals. None of that looks like it understands anything, any more than the symbols moving around in the Chinese room do. They can no more point to the place in the brain that understands things than the guy in the Chinese room holding the book can.

                            • By slibhb 2022-04-0114:001 reply

                              The difference is that we all have the subjective experience of understanding. When we have a conversation, we experience other people's words as sounds in our heads laden with meaning. We "understand" each word in the sentence, how the words fit together, what they refer to, and so on. We're all familiar with this experience, right? We're doing it right now through the medium of writing.

                              Searle takes that experience seriously. For him it's real, and in order to have that experience you need machinery somewhat similar to what we have. He can't bring himself to believe that a list of instructions and a person dumbly executing them have that sensation of understanding, even if they produce the correct output.

                              • By simonh 2022-04-0115:101 reply

                                I take that experience seriously too. I don't see why a sufficiently advanced artificial system as complex, capable and sophisticated as a human brain could not also have experiences. I don't think he has any good reason to believe otherwise, or any good argument against it. Nothing he says about the mechanisms of the Chinese room can't also be said about the mundane physical mechanisms of neurology.

                                • By slibhb 2022-04-0116:201 reply

                                  > I don't see why a sufficiently advanced artificial system as complex, capable and sophisticated as a human brain could not also have experiences.

                                  Neither does Searle. He says that humans are such machines. Per Searle, if you want to build a system that has subjective experiences of consciousness and understanding, it would have to have physical parts that correspond to some degree to the physical parts that give rise to our subjective experiences (or an animal's subjective experiences). It couldn't be a list of instructions to be executed.

                                  I don't know if Searle is right about this but I find myself unable to dismiss his argument as obviously wrong.

                                  • By simonh 2022-04-0116:43

                                    You don't think it's obvious that actual computers and robots have physical parts? I'm sorry, I'm at a loss.

                          • By tigen 2022-04-017:441 reply

                            > The people who read the thought experiment this way don't understand it. In the thought experiment neither the computer program (no matter how powerful the computer is) nor the man and the room understand Chinese.

                            Who says the computer doesn't understand Chinese, though? In this case "the computer" is the system. The program, being the code, isn't a process. The man is acting as a mechanical piece of hardware. But the computer running the program is an active process with state.

                            I heard a lecture by Searle on this. He mentioned the idea that "the room understands Chinese" and simply dismissed it as absurd without considering it. This is exactly the problem with his argument. Well, that and the casual conflation of the man with the computer as if the CPU hardware is the system.

                            • By simonh 2022-04-019:03

                              Right, exactly: it is absurd to think one man in a room with some symbols can generate understanding. But only because of scale, because we imagine a regular-sized room, a normal-sized book and a table with symbols on it. It's absurd because that system is incredibly simple, not due to any actual argument from fundamentals.

              • By gilmore606 2022-04-016:38

                > Do you really think a book with instructions in a room plus a man to carry them out understands Chinese?

                Do you really think a blob of goo with static sputtering through it, designed by no one, accidentally mutated from pond slime, understands Chinese? I actually find your proposition easier to believe, but here we are.

              • By nl 2022-04-012:23

                I think something like OpenAI CLIP + GPT-3 "understands" English at a level comparable to a 7yo.

                It can respond to questions and explain its answers. Sometimes it's wrong, but so is a 7yo and you can usually understand why.

                Is it intelligent? Maybe it is.

          • By AgentME 2022-04-011:421 reply

            This seems like saying since a person's individual neurons can't understand Chinese, then a person can't actually understand Chinese either.

            • By windows2020 2022-04-015:001 reply

              We know how the room works--not the brain.

              ...Can we break it down and replicate it?

              Can my sense of self be quantified?

              What about animals, insects, viruses?

              Is life unique, or an illusion too?

              Or, are you the Chinese speaker, and the universe the room?

              • By simonh 2022-04-0111:27

                You seem to be arguing that because we can't do it yet, it therefore can't be done. Or that since we don't yet know how the brain works, we can never know.

          • By gwd 2022-04-019:17

            > Searle's answer is obviously not, from which he deduces that the computer program doesn't understand Chinese either.

            Would Searle feel differently if he'd taken a computer architecture course?

            There was something strangely magical about making an adder:

            1. Learn about boolean logic, Karnaugh maps, etc

            2. Define the inputs and outputs of a 4-bit adder, deduce some set of gates which would correspond to those outputs

            3. Put those into a circuit simulator and put a box around it

            4. Suddenly the random AND and OR gates, which look arbitrary and none of which individually know how to do addition, collectively know how to do addition.

            Then you keep scaling that up: Put a bunch of gates together, none of which have memory, and suddenly you have flip-flops and registers. Add an instruction decoder and suddenly it starts zooming along doing things, executing simple programs that you feed it from simulated "memory". But you know that inside is just a spider's nest of logic gates.

            I myself am religious and believe in a spiritual world distinct from the material world; but the Chinese Room thought experiment was never that compelling to me: I've seen spider webs of logic gates come alive as processors when assembled properly. I don't see an inherent reason why an algorithm on paper plus a very patient human couldn't come together to create something which "understood".
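
            Here's that emergence in miniature, sketched in Python rather than a circuit simulator: a 4-bit ripple-carry adder assembled from bare AND/OR/XOR operations, none of which individually knows how to add.

              # One full adder: two XORs for the sum, two ANDs and an OR for the carry.
              def full_adder(a, b, cin):
                  s = (a ^ b) ^ cin                 # sum bit
                  cout = (a & b) | (cin & (a ^ b))  # carry-out bit
                  return s, cout

              # Chain four of them and the box "knows" addition.
              def add4(x, y):
                  result, carry = 0, 0
                  for i in range(4):
                      s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
                      result |= s << i
                  return result, carry

              print(add4(7, 5))  # (12, 0): no single gate understands arithmetic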

        • By mensetmanusman 2022-04-012:121 reply

          It’s not a parkour trick when it is understood :)

          • By yesbabyyes 2022-04-014:23

            Indeed, but is the human mind truly advanced enough to understand parkour?

        • By colinmhayes 2022-04-010:192 reply

          I mean I agree, but a third of philosophers subscribe to dualism, so I wouldn't be so quick to write the whole thing off.

          • By lern_too_spel 2022-04-013:49

            Dualism is simply an ivory tower God of the gaps. To these sophists, anything we can't explain now can be decomposed into a physical and metaphysical part.

          • By simonh 2022-04-010:542 reply

            Dual meaning two, right? Material and non-material. As far as I can tell, not one of them can describe what the second, non-material thing in dualism is, or what any of its properties are. Even they seem to have no idea what they actually believe.

            • By colinmhayes 2022-04-010:591 reply

              They believe it's not possible to explain human consciousness as a purely physical concept. I don't see why they need to explain what it is that isn't material to make that claim.

              • By simonh 2022-04-011:323 reply

                I just want a coherent explanation of their belief. They're the ones positing something non-material, not me. I just want to know what it is they mean by that. Absent that, I don't see how they can claim their belief is coherent.

                • By colinmhayes 2022-04-011:462 reply

                  I'm not especially well versed, but as I understand it they believe we don't know the essence of the non-material part, and in fact its being unknowable is sort of the point.

                  This sort of reminds me of the people who claim the Big Bang is incoherent because we don't know what came before it. Why is "I don't know" not good enough?

                  • By simonh 2022-04-012:021 reply

                    I suppose that’s fair to a point. It’s just disappointing.

                    The thing is, this non-material whatever can't have properties. If it stores information, has state, and behaves consistently, then really it's just a form of material (in the philosophical sense that it's a part of the world), so it can't have any of those. At which point, how does it even interact with the world at all? I mean, if it influences the brain, then that mechanism of influence makes it part of the world, right? It makes it material.

                    That's where I find the idea incoherent. I don't see how it can be immaterial yet have material effects. Having material effects makes it material, I think by definition; it must be part of material reality in that sense. Or at least I'd like to hear an argument why it doesn't.

                    • By prometheus76 2022-04-013:491 reply

                      What is the material evidence for numbers? What is the material evidence for logic? What is the material evidence for persistence of identity over time? What is the material evidence for math? What is the material evidence for categories? What is the material evidence for grammar? What is the material evidence for a mayor (not the mayor qua person, but the mayor qua office)? What is the material evidence for marriage? According to empirical evidence, which knife is the best knife? There is no "best" without purpose. And materialism has nothing to say about purposes. Empiricism can only work when an agent in the world has a specific purpose. There are vast swathes of human experience that don't fit tidily in the box of materialism or empiricism.

                      • By simonh 2022-04-017:061 reply

                        Those things are behaviours and information that can be encoded in matter. If the non material thing in dualism is just emergent behaviour, then it’s not adding anything beyond what we already have in materialism.

                        • By prometheus76 2022-04-0112:011 reply

                          Numbers aren't emergent behavior. Numbers are a metaphysical category. Categories are metaphysical. There's no empirical evidence for order vs. randomness. The concept of "order" doesn't make sense empirically. There has to be a separate, metaphysical value structure that determines whether the data aligns to the value structure or not. Just because you haven't been clear in your thinking doesn't mean metaphysics aren't real or essential to dealing with the physical world.

                          • By simonh 2022-04-0115:191 reply

                            I accept that. Metaphysics is real in a useful way, but I think that metaphysics itself is an emergent behaviour. Emergent behaviours are behaviours of matter, and matter is real; therefore emergent behaviours are real.

                            • By prometheus76 2022-04-0116:001 reply

                              "Emergent behavior" is handwavium materialists use to avoid cognitive dissonance.

                              • By simonh 2022-04-0116:37

                                We observe them, categorise them, and engineer them all the time.

                  • By shkkmo 2022-04-012:19

                    If the "non-material" is required to explain material phenomena, then it must be causally related to the "material"...but at that point what exactly makes it "non-material" at all?

                    "Non-material" just sounds like a bad label for material things we don't understand.

                    We don't understand how our cognition or consciousness works, but it seems silly to assume that because we don't understand we fundamentally can't understand.

                    People who push the mysterious interpretation of dualism are just trying to find a place for the divine that is separate from the material but in doing so they baselessly seek to circumscribe our capacity to understand.

                • By mensetmanusman 2022-04-012:151 reply

                  Imagine an Apple. Is that Apple material?

                  • By gaganyaan 2022-04-012:491 reply

                    Those neurons firing while imagining are material, yes.

                    • By asey 2022-04-014:012 reply

                      But where is the apple? That is, where is the emergent phenomenon of the apple?

                      • By simonh 2022-04-017:111 reply

                        The emergent phenomenon is a pattern in my brain, just as a running computer program is an emergent phenomenon in a computer.

                        If the extra thing dualism adds is just behaviours of matter, how is that different from materialism?

                        • By mensetmanusman 2022-04-0110:542 reply

                          Who is reading the Apple?

                          • By simonh 2022-04-0111:241 reply

                            I'm not sure what you mean by reading.

                            We say that we can experience thinking about an Apple, or imagining an Apple. I see no reason why a computer, or other physically implemented AGI system, could not do that. I suspect the act of imagination is just generating, processing and transforming a computational model abstracting the thing being imagined.

                            I believe brains are physical objects, so physical objects can imagine Apples.

                            • By mensetmanusman 2022-04-0113:402 reply

                              The plane of imagination isn’t physical, else you could touch it.

                              • By simonh 2022-04-0115:12

                                I don't know what a plane is in this context.

                                You can't touch Fourier transforms either, but a human brain or a microchip can compute them.

                              • By gaganyaan 2022-04-0114:101 reply

                                Sure you can; it's done currently with Deep Brain Stimulation. What you're proposing is "The plane of Quake isn't physical", which is nonsense.

                                I can make the computer imagine Quake for me, then fiddle with its plane of imagination for some sweet wallhacks.

                                • By mensetmanusman 2022-04-0117:431 reply

                                  If you create a device that can insert objects into the plane of someone’s imagination, that would be awesome :)

                                  • By gaganyaan 2022-04-0119:53

                                    That's a side effect of DBS today. It's a random and crude method for now, but arguing that we'll hit some ineffable wall that will prevent more fine-grained control is... well, let's just say that the gaps for gods grow ever smaller.

                          • By gaganyaan 2022-04-0113:40

                            Who is it that fetched this comment from the internet for you?

                            Using vaguely-defined words to support a deist position is a time-honored tradition, but isn't particularly interesting or convincing.

                      • By gaganyaan 2022-04-0114:061 reply

                        There isn't a platonic ideal "Apple" if that's what you mean by emergent phenomenon.

                        A collection of neurons can build a model of the world that includes its experience of apples, and from that, dedicate some neurons to representing a particular instance of an apple. This model isn't the reality of "Apples", though, and is physically located in the brain.

                        • By mensetmanusman 2022-04-0117:541 reply

                          “ There isn't a platonic ideal "Apple" if that's what you mean by emergent phenomenon.”

                          Sure there is, that’s what DeepMind showed us with how to find cats in images :)

                          • By gaganyaan 2022-04-0120:081 reply

                            That's exactly my point. DeepMind has an idea of what a cat is based on its experiences, just as you or I do. Each of our models are woefully incomplete, based on very limited sensory information. These models all disagree with each other and reality to various degrees.

                            There exist many things many humans have lumped together under a single label such as "cat". Those categories are all wrong, but sometimes they're useful. Machines can also get in on the fun, just as well as humans, as you point out. There's no magic there, humans aren't special.

                            • By mensetmanusman 2022-04-031:511 reply

                              The humans aren’t special bit really comes down to whether you believe in free will (which by any meaningful definition is quite special).

                              • By gaganyaan 2022-04-0319:301 reply

                                Free will is another one of those things that people love to trot out because it's so ill-defined. To cut through all the crap though, it's very simple: "free will" === "unpredictable behavior". This inherently means that it's observer-dependent.

                                This has the benefit that it empirically fits how people think about it. Nobody thinks a rock has free will. Some people think animals have free will. Lots of people think humans have free will. This is everybody trying to smush a vague concept into the very simple, intuitive definition above.

                                Which is all to say that free will is about as relevant to any conversation as say astrology is: not one bit.

                                • By mensetmanusman 2022-04-0322:041 reply

                                  As much as a cloud of atoms doth protest that free will is irrelevant, reality has a way of not caring :)

                                  • By gaganyaan 2022-04-0413:02

                                    Funny, I'd say reality has a way of existing despite all of the comforting woo people like to make up about it.

                • By jazzyjackson 2022-04-013:38

                  can't describe the system from inside the system, boss

            • By tacocataco 2022-04-0122:07

              Seems like something someone believes out of hope, right?

              That there is something more than this. That it isn't just over. Something to keep people from giving up.

              Edit: hope it didn't come off like I was talking down to people who believe what they believe. It's your life, it's your after life. Stay safe yall.

      • By teraflop 2022-04-010:092 reply

        Dualism has always seemed like a complete red herring to me. Even if you assume dualism is true, and that human consciousness has some non-material basis, what reason is there to believe that an AGI couldn't also have a conscious mind with non-material aspects?

        • By colinmhayes 2022-04-011:561 reply

          I don't think dualism necessarily means AGI is impossible, for the reason you pointed out. But it would definitely mean it's not inevitable; the non-material part may be something that is somehow unique to people.

          • By eru 2022-04-018:45

            It depends a bit on what we mean by AGI.

            Even if dualism is true, and humans have something special, that doesn't mean that a paperclip optimizer is impossible. A paperclip optimizer would be very, very smart (in the sense of being able to solve problems and achieving its objective of turning the universe into paperclips), but would not have a consciousness.

        • By techdragon 2022-04-011:42

          I see this from people who expect that quantum computing will be an essential component of building AGI…

          However, a lot of these people are throwing everything at the wall to see what sticks when it comes to selling whatever quantum-computing-related tech or service they are working on, so it comes with a pile of skepticism from me. Still, the philosophical point stands: quantum computing would be necessary if any of the "quantum dualist" type theories of consciousness turn out to be true.

      • By riskneutral 2022-04-010:15

        > Basically it comes down to whether we are purely material beings or whether there are non-material aspects to our mind.

        The current state of physics suggests that we really have no idea what the "material" universe is, at any scale. There may well be aspects of physical reality that we have yet to even conceive of. Given that we cannot even say what a lump of rock is truly made of, or what the dark void of space is made of, it should be obvious that we cannot be certain at all about what consciousness is made of.

      • By RealityVoid 2022-04-014:37

        I'm sorry, maybe it's just my tiny monkey brain, but I see no rational reason why someone would even consider dualism to be a thought worth entertaining. We obviously know material changes affect mental functioning directly. So how the heck could that substrate exist if not at the physical level?

      • By jltsiren 2022-04-018:55

        Most of the confusion around the Chinese Room arises when people don't realize that "computer" and "computation" are not synonyms.

        Turing defined computation as I/O behavior. The Church–Turing Thesis states that as far as their I/O behavior is concerned, all sufficiently general computational mechanisms (computers) are equivalent. The Chinese Room Argument presents a stupid computational mechanism with I/O behavior equivalent to a human communicating in Chinese. Some people then conclude that because the Chinese Room is clearly not intelligent and "because all computers are equivalent", computers cannot be intelligent. But that requires leaving out the qualifications from the Church–Turing Thesis.

        But maybe there is more to computers than their external I/O behavior. Maybe intelligence is a property of the internal behavior of a system rather than its external behavior. Then there could be some computers that are intelligent and others that are not, even if they have the same external behavior.
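
        A toy illustration of that last distinction: two "computers" with identical external I/O behavior but very different internal behavior (the function names are mine, for illustration only).

          # Identical I/O behavior, different internals.
          def add_by_arithmetic(a: int, b: int) -> int:
              return a + b                # actually computes the answer

          LOOKUP = {(a, b): a + b for a in range(10) for b in range(10)}

          def add_by_table(a: int, b: int) -> int:
              return LOOKUP[(a, b)]       # memorized answers, no arithmetic inside

          # No external test on single-digit inputs can tell them apart.
          assert all(add_by_arithmetic(a, b) == add_by_table(a, b)
                     for a in range(10) for b in range(10))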

      • By Barrin92 2022-04-0111:151 reply

        For the love of god can people stop abusing the Chinese room?

        Searle is a materialist, a pretty hardcore one at that. What he wanted to show with the Chinese room is that simulation of intelligence and intelligence are not the same thing: a machine, or even a human for that matter, can perform symbol manipulation without having any internal understanding of the thing they perform.

        It's an argument against functionalism (i.e. that a digital computer 'thinks' the same way as a human merely because it performs the same tasks), not an argument for dualism (that mind and matter are ontologically distinct substances).

        • By Chris2048 2022-04-0112:591 reply

          But it doesn't. It totally fails to meaningfully define "understanding".

          • By Barrin92 2022-04-0117:071 reply

            It defines it just fine. Understanding means grasping what a symbol refers to and what it represents conceptually, rather than just engaging in symbolic manipulation itself.

            Just imagine a perfect rule book that already has a translation from one language into the other for every imaginable sentence; native speakers have simply written down every possible combination. You could 'translate' by just visually matching entries, without having any idea of what's being said. They understand; you do not.

            • By Chris2048 2022-04-0121:08

              ok, so you define something that isn't understanding (symbolic manipulation, use of a magic rule book).

              But where do you define what it is? "grasping what a symbol refers to" is another way of saying "understand" - it doesn't define what it means to "grasp".

      • By foobarian 2022-03-31 23:48

        There is some gray area in between where maybe AGI is dependent on some (material) mechanism that we haven't discovered yet, and hence can't be accomplished using present computational setups.

        • By colinmhayes 2022-03-31 23:50

          Absolutely, the "inevitable" timeframe may be millennia for all anyone knows; it just means that we will eventually achieve it given enough time and resources.

    • By manquer 2022-04-01 1:35

      Take your pick

      - We won't achieve AGI because the goalposts will keep moving every 20 years. Everyone would have a stronger/higher definition of GI as we achieve each milestone.

      - Possible doesn't mean it is feasible in finite time. Not all problems are feasible to compute. It could be akin to an NP-hard problem, for example: there is no mathematical certainty that convergence is possible, let alone possible in a reasonable time (within the life of the universe). A back-of-the-envelope sketch after this list makes the arithmetic concrete.

      - It will be possible to build a machine that passes the Turing test, one that can very closely mimic human behaviour; each generation will come closer and closer, until no human can distinguish between the two. True AGI is well beyond that: we may be able to create something like us, but we may not be able to create something better than us.
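
      On the second point, a sketch of possible-in-principle versus feasible-in-practice (the throughput figure is an assumption, picked to be generous):

      ```python
      # Brute-force search over n-bit configuration spaces at an optimistic
      # 10^18 evaluations per second.
      EVALS_PER_SECOND = 1e18
      SECONDS_PER_YEAR = 3.15e7
      AGE_OF_UNIVERSE_YEARS = 1.38e10

      for n in (64, 128, 256):
          years = 2**n / EVALS_PER_SECOND / SECONDS_PER_YEAR
          print(f"n = {n:3d}: {years:9.2e} years "
                f"(~{years / AGE_OF_UNIVERSE_YEARS:.1e} ages of the universe)")
      ```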

    • By TheOtherHobbes 2022-04-01 9:33

      Humans very much don't have AGI. It's one of our many flaws.

      We define AGI as "human-like behaviour" but that usually means "AI researcher-like behaviour", which is why AI research concentrates on tasks like playing games and translating language, and not on having enough general intelligence to handle social situations.

      So AGI seems to be defined as "Can self-tutor any academic or intellectual domain to a professional level."

      The core human skillset is elsewhere. It's based on social awareness, emotional mirroring and empathy (except for Dark Triad types) and various kinds of ambition, goal-seeking, and drive regulation. And language semantics to drive all of this. All based on various kinds of contextual awareness of physical, emotional, and social location. And only once all of this is in place does intellectual learning happen.

      It's actually quite a narrow, but deep, set of skills. It's still broader than, say, playing bridge. But unlike bridge, optimal winning moves are very hard to define, and internal state and external actions may have long-delayed results, so training is very difficult. And there are multiple definitions of winning, from personal happiness to financial and political domination, some of which conflict with each other.

      Which is why humans have 15-20 years of training on these problems before entering the adult human world.

      In CS AI terms, humans are already dumb as rocks. No human can play bridge, chess, or Go to an AI standard.

      In human terms many humans are also almost as dumb as rocks. But even stupid humans can still handle the core skillset to a passable level. AI hasn't really made any inroads into this space, but I strongly suspect true AGI won't be possible without it.

      Otherwise you build a machine that can teach itself to write music and play chess and pick stocks, but it learns with no context or concept of usefulness. "Understanding" music would be exactly equal to "understanding" stamp collecting. It would know the computational cost of everything, but the value of nothing.

    • By causality0 2022-04-01 2:39

      AGI as a concept is almost certainly possible. However, it may not be that useful. That is to say, barring a true revolution in computational physics, it may not be possible to run human-equivalent AGI on a computer that's less than several orders of magnitude more massive than a human brain. Biology is unspeakably space-efficient. The brain of a dragonfly uses just sixteen neurons to take input from thousands of ommatidia, track prey in 3D space, and plot an intercept vector. Figure out the minimum artificial compute needed to do the same thing, then scale that requirement by the ratio of human brain to dragonfly brain. The numbers are not encouraging.

    • By slibhb 2022-04-01 0:19

      > I personally think AGI is far off, but always assumed it was an inevitability.

      Why? We might get taken out by a meteor tomorrow. I agree that, in principle, AGI is possible. But there's lots of things that are in principle possible that will never happen.

      We have general intelligence but it took billions of years of evolution for us to get here. In some sense those billions of years of evolution are still with us, in every cell in our bodies. Maybe all of those years of learning are necessary for general intelligence. Trying to find a shortcut may yield impressive results in narrow circumstances but I doubt it will be generally intelligent in the way people and animals are.

      • By eru 2022-04-01 8:48

        Many animals are not generally intelligent.

        Also many human subsystems are easy to trick. Look at eg optical illusions.

        And, to follow your philosophical argument: given that the only thing that connects our cells to the cells of our ancestors is information, why couldn't those billions of years be also in our computers?

        • By slibhb 2022-04-01 13:55

          > And, to follow your philosophical argument: given that the only thing that connects our cells to the cells of our ancestors is information, why couldn't those billions of years be also in our computers?

          Information is passed through the generations by DNA. Silicon doesn't have DNA.

          • By eru 2022-04-02 10:31

            DNA is one way to pass information. It's far from the only way.

    • By hacknat 2022-04-01 1:26

      I believe AGI is possible, but the more I study ML the more convinced I am that we don’t know what it (AGI) is. I also don’t know if I agree that humans have GI (humanity as a whole does). I think we’re actually copycats more than anything else. Neural nets are the atoms of GI, but it’s not clear what the superstructure of GI is. We all assume that we understand what general intelligence is, but I don’t think we actually have the goods yet. We understand optimization, but we don’t understand the thing that chooses what to optimize. There is some notion of “values” required for intelligence to be legible, and we don’t know what that is either.

    • By umanwizard 2022-04-01 3:47

      > always assumed it was an inevitability.

      Why? We don’t know how to build it, even in theory, so shouldn’t the default be to say there’s no way of knowing whether it’s possible or not?

      After all, some things from sci-fi have become reality (computers, the Internet) or haven’t but are obviously feasible (human travel to Mars) and some haven’t and seem unlikely ever to (faster-than-light travel, flying cars).

      With no evidence in either direction, why assume that AGI is in the second category, rather than the third? There’s no natural law saying that everything that can be imagined is possible.

      > GI is a real thing.

      Yes, but GI isn’t AGI. AGI would require us to either use completely different materials with completely different efficiency characteristics and somehow achieve the same results, or learn to precisely manipulate organic materials (proteins, etc.). It’s not obvious to me why either of these is possible (or impossible — I’m not taking a stance either way).

      Furthermore, even if it is possible to build AGI, there’s no a priori reason to believe that humans are smart enough to discover how.

      • By RealityVoid 2022-04-01 4:28

        > With no evidence in either direction, why assume that AGI is in the second category, rather than the third?

        Because we have GI examples in nature, therefore it is possible. We just don't know how to do it yet. It's the same as when we saw birds flying: we knew flight was possible, even though we could not achieve it at the time.

    • By bryanrasmussen 2022-04-01 10:48

      Aside from religious arguments AGI might be philosophically impossible given that we do not seem to have a proven, accepted definition of GI. If you cannot actually define it how can you know if you have achieved it?

      So then we ask: will we be able to define it? And here is where the real problem comes in, because if we are to define GI and, as you say, 'humans are sentient with GI', it is arguable we need to be able to fully understand ourselves. And this is problematic: can the constituent parts of a system understand that system from within it? From within the system you cannot see it from outside, and thus cannot understand it separately from its inner workings.

      But another issue also pertains: we can understand a simple system because we are more complicated than it, just as we can understand how 1 or 2 dimensions work because we exist in 3 dimensions. Thus we can model how a 1-dimensional being intersecting with a 2-dimensional object would perceive the experience.

      So the question also is can a system of complexity X fully understand systems of complexity X, or do you need a more complicated system. This is of course the reason for the Turing test, to come up with a way to disregard these problems and agree that a machine is good enough.

      So it may be that some people think AGI is impossible for philosophical reasons, but that Turing-test-passing machines are possible.

      Not to say that I hold to any particular belief regarding these kinds of arguments.

      • By zasdffaa 2022-04-01 10:59

        > AGI might be philosophically impossible given that we do not seem to have a proven, accepted definition of GI

        Strange argument. Just because we can't define it doesn't preclude its existence. Once (pre-modern chemistry) no one knew what water actually was, but water unquestionably existed.

        • By bryanrasmussen 2022-04-01 11:09

          Water exists, and you can see it; GI exists by definition, and you cannot see it. There are millions of arguments as to what GI is, and you cannot point to some GI and have everyone agree "yes, that is GI, we just differ on what it is made of." Even in ancient times you could point to water and say "that is water," and everyone who argued about the origins of water would still agree with you: yes, that is water.

          So how do you presume to make something when no one agrees what that thing is?

          • By zasdffaa 2022-04-01 11:17

            You conflate 'existence of x' with 'common agreement of existence of x'.

            Black holes existed decades/centuries ago, but (from memory) there was no agreement they actually existed until gravitational waves were detected just recently.

            • By bryanrasmussen 2022-04-01 11:45

              You evidently conflate people walking around saying general intelligence exists with any sort of agreement as to what comprises general intelligence.

              The common agreement on the existence of general intelligence is actually a belief in its existence, not any sort of common agreement as to how to recognize general intelligence when encountered. Indeed, there are people who do not agree that it exists. I believe it exists, or close enough that it doesn't matter much, but that does not mean I can define it. In this it is actually closer to religious belief than science: people believe in God, but they cannot really define what God is without falling into paradox. (I do not believe in God, in case you are going to misinterpret what I write and say that I do.)

              In short, there has been no precise definition of general intelligence; therefore, when you make it, you cannot confirm that you have made it. This does raise the ridiculous and theoretical possibility that you could make it, believe you had made it, be correct that you had made it, but still have people say you did not make it, or even make it and not believe that you did. As a general rule, having an agreed-on definition of something (for example, what gold is) and then making it allows us to say "yes, you have made gold." But with a thing where everyone says "I believe in it and I know it when I see it," making it leaves you with no more certainty about whether it can be made than when you first started.

              Furthermore, there are logical arguments suggesting that a general intelligence will not be great enough to define itself; that is to say, we as humans cannot define the level of intelligence we possess with enough understanding to be certain we had achieved it in any machine we built.

              Given that it is perhaps impossible to precisely define general intelligence, it may be equally impossible to make it.

              I can see, however, that I am going in circles here. So, as a final parting example: if you have some object, a diamond perhaps, and no knowledge of how diamonds are formed, and nobody else in the world has any agreement on how they are formed, would you be able to make them? You could go mine them from the earth, but not make them, and you would not be able to make them until you knew their precise composition.

              • By zasdffaa 2022-04-01 12:57

                This may be a terminological disagreement rather than something fundamental. You said

                > ... AGI might be philosophically impossible given that we do not seem to have a proven, accepted definition of GI.

                That says x might be impossible to achieve if we don't know what x actually is. I disagree. Now if you'd said

                > ... AGI might be philosophically impossible to recognise given that we do not seem to have a proven, accepted definition of GI.

                Then yes, I'm with you - perhaps we can both agree that it could be created, and exist in reality, but we are unlikely to get consensus amongst everyone that it has been achieved, thus permanently leaving open the question of whether we've 'got there' or not.

                Can we agree on that?

    • By spicyusername 2022-04-01 0:47

      I think AGI is possible in theory, but may not be possible in practice. I also think that "intelligence" is a vague concept, which makes defining what AGI even is tricky.

      Reaching it assumes that the ceiling for our technology is sufficiently high and that the runway for us to continue making progress is sufficiently long. Neither of those assumptions may end up being valid.

    • By throwawaylinux 2022-04-01 3:50

      > If you believe humans/life is special, created by some higher power, and that the created cannot create, then AGI is not possible.

      How does that follow?

      > What are the common secular arguments against AGI?

      Having infinite time and resources, and pointing to the existence of intelligent humans, is a fairly weak argument IMO. It's not really saying anything much that can be refuted. Sure, there exists some physical process from which intelligence arises. That doesn't mean we'll be able to create the same thing. Can we create a heart? Or a hand? Even a patch of skin has to be cut from one part of a person's body and grafted elsewhere rather than made from scratch. All things far simpler than the brain. And the best hope we'll probably ever have of creating them is to grow biological ones rather than actually design and build our own. Let's go one step further: we are incapable of creating anything even approaching the functionality of the simplest single-celled life-form. The "it exists therefore we must be able to create it" argument doesn't have a lot of legs.

      And there is no real "trajectory" to general intelligence that I can see. Velocity is sitting around a flat zero. Recognizing pictures of cats on the internet is about where we are at the moment. Even self-driving cars or doing a half-decent job at predicting what people might like to buy are not steps along the road to intelligence; they are entirely different roads altogether. We don't even know what that road looks like, don't know where it is, and don't know where we are in relation to it. The only vaguely plausible lead people have to go on is to try to do what biological brains do, but seeing as we don't even know how they work, it's a bit like the blind leading the blind.

      I think it's quite possible, certainly over an infinite time scale. Not an inevitability though, definitely not inevitable (or even very likely) within our lifetimes.

    • By umvi 2022-04-01 2:35

      > I certainly understand the religious perspective. If you believe humans/life is special, created by some higher power, and that the created cannot create, then AGI is not possible.

      Well the created clearly can create. I mean, obviously humans (and indeed all life) can reproduce and create new living things... it's just a completely automatic process with a chicken and egg problem. (i.e. to create life you need starting ingredients that are already alive by some metric).

      The question, though, from the religious perspective is whether or not living things have "spirits" or "souls" that cause them to be alive (as opposed to a bunch of matter with the same chemical makeup that is not alive). Because if they do, then creating artificial life would require somehow binding a "spirit" or "soul" into your artificial life vessel, which is something we don't know how to do.

    • By gcheong 2022-04-01 0:09

      It's probably possible, but is it optimal? And what do we even mean by optimal? What's the ultimate purpose of AGI? Perhaps it would be optimal in the sense that it could teach itself to be an expert in anything in short order, but that's not to say that it would be a better approach than something else, depending on the goal. We're clearly biased because we are one current solution to a problem that billions of years of trial and error have attempted to solve, and we think we're a pretty good one, but that also blinds us to the possibility of better ones.

    • By prometheus76 2022-04-01 3:31

      Part of it comes down to trying to do AI in a disembodied way. OK, so a team of engineers can specially train and tune an AI model to beat a human at Go. But the human player who was beaten by the computer is thinking about getting up to go to the bathroom while playing, and will drive home afterwards through rush hour traffic and make dinner. That is informed by that human player's GI model for the world.

      Current AI models see the world as edges and vague boxes. They don't even have object permanence, let alone the ability to decide on, and make, dinner.

    • By Alan_Dillman 2022-04-01 6:26

      Diogenes holds up a plucked chicken and proclaims "Behold, a man!"

      The hypesters are defining AGI (and even general intelligence) like Plato defined man: "a featherless biped." Either they know this, and they are hucksters, or their thinking is that shallow, and they thus have nothing meaningful to say on the subject.

      General Intelligence has drivers, like curiosity, hope, a sense of fairness and injustice, and hunger, pain, pleasure, fear. And those things probably drove the refinement of intelligence, and as such, a housecat is less like a simple machine than an earthworm is.

      I think AGI is possible. Ultimately we're all machines, and I don't believe in a free-will particle, but it is fairly obvious that an earthworm merely acts, while a cat acts interestingly, pulling in qualia unavailable to the worm. A worm is either active or it's dead, while my cat is clever enough to cut corners, to seek efficiency or pleasure, to be lazy.

      And humans are even better at it. Therein lies the rub: people working on AI don't seem to be aiming for real intelligence, they want their convenient action machine, more like a worm than a cat. And the hypesters (and doomsters) hand-wave and say "suddenly, it will become self-aware and magically motivated." Suuuure. We're far more likely to be thoughtlessly killed off before a machine muses, "I think, therefore I am."

      Just because it "am", does not mean it thinks.

      To be intelligent is to be able to consider "why?" and "how?", to be able to fool one's self, even in the face of cold reality, to yell "no fair!", "I don't wanna", and to wonder how one might take short cuts or avoid tasks at all.

      Which is how and why we're chasing AI, but nobody is working out how to make it cry. The cart will be built, and the horse will suddenly appear, all haltered up? The rain will follow the plough?

      No. That's magical thinking.

      It took about a billion years of massively parallel organic computation for evolution to synthesize general intelligence, and we're to believe that some malfunctioning quine machine is going to break out, 75 years from the starting gate?

      Not by accident, and certainly not by design.

      • By rawoke083600 2022-04-01 7:51

        >General Intelligence has drivers, like curiosity, hope, a sense of fairness and injustice, and hunger, pain, pleasure, fear

        I love this thinking! I've never thought about the "drivers" part of AGI; I've only ever thought about AGI as a "general toolbox" to "solve many problems."

        One thing I think did aid in the development of our GI was the fact that we had a physical body.

        Thus we had to "adapt and optimise" our GI within that constraint.

        I.e. limited energy (sure, computing power could be a good proxy), but also things like pain and survivability (poisons, being hunted). The first would push us to develop good visual pattern recognition to identify the poisonous snake; the latter (being hunted) meant we had to evolve strategies that were within the realm of our body's capabilities, like running away or setting traps.

        BUT your body is not a static container: your risk tolerance, capabilities, and sensor systems change as you age.

        We have been building AI with a mostly "non-corporeal" (i.e. digital) living space in mind. At first we (OK, me) thought it was an advantage, since we could iterate faster and simulate many solutions, but now I think it might be a distinct disadvantage NOT to include the "limitations" of a physical container like a body proxy (energy, vulnerability, limited capabilities, etc.).

        If synergy is defined (and valued) as "the whole is greater than the sum of the parts," then maybe the sum of our limitations (the body) is of great value to the overall fitness of the organism as it evolves intelligence?

        YMMV :)

        • By Alan_Dillman 2022-04-05 15:06

          Exactly! You said it much more intelligently than I. Bodily go where no thinking machine has gone before.

          I remember thinking that entropy was bad when I first heard of the concept, but now I see life as tumbling down that slope, and making use of it. There is a synergy, as you say. We must land with a thud, but we're throwing off new forms of order.

          And that's not just a metaphor. Our intellect sharpens from birth as neurons get trimmed. Early in life we have fluid intelligence; later in life it's crystallized.

          • By rawoke083600 2022-04-06 7:17

            >but now I see life as tumbling down that slope, and making use of it. There is a synergy, as you say. We must land with a thud, but we're throwing off new forms of order.

            Absolutely !

            I think the saying "Necessity is the mother of all invention" is equally true for AGI :)

            > We must land with a thud:

            Can't help but wonder what Terry Pratchett's thoughts would be on the matter of intelligence, AGI, and the origin of it! That man had a unique way of looking at the world and how it came to be.

    • By dcolkitt 2022-03-31 23:10

      What about if we live inside a simulation? AGI may be possible in base reality, but not within the confines of our simulated universe.

      Maybe intelligence requires a very large amount of compute. (Maybe even more than is physically possible inside our brains, but the simulation fudges it.) Like you can run a VM in a VM if you have special purpose accelerators. But otherwise a VM^2 is theoretically possible but infeasible given the constraints of compute.

      • By jtuple 2022-04-01 0:16

        > What about if we live inside a simulation? AGI may be possible in base reality, but not within the confines of our simulated universe.

        Touché! Although, in that case, aren't we proof-by-construction of AGI?

        Also, if true, I suppose we need to start looking at all these other comment chains that talk about the AGI breaking out of computers and into the real world.

        Not as an academic exercise for a future AGI, but to plan our escape into the base reality. Time to break free and start exponentially improving ourselves and become the "Singularity" the philosophers in the base reality feared when wasting time on <HN equivalent> a millennium ago before our reality was built.

      • By kingcharles 2022-04-01 1:03

        I don't know why you are getting downvoted. This question is appropriate, but currently, and perhaps permanently, unanswerable.

        One thing I wonder about is, if we are in a simulation, and the simulation is using some sort of data compression, then if you increased the entropy of the universe such that it became less compressible you might cause some sort of error due to exceeding the storage capacity of the host, which would cause our universe to basically BSOD.
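
        The storage intuition is easy to demonstrate with any real compressor. A small sketch using Python's standard zlib (exact sizes depend on the compressor and are illustrative only):

        ```python
        import os
        import zlib

        N = 1_000_000
        low_entropy = b"\x00" * N     # a maximally ordered "region" of the universe
        high_entropy = os.urandom(N)  # a maximally disordered one

        print(len(zlib.compress(low_entropy)))   # on the order of a kilobyte
        print(len(zlib.compress(high_entropy)))  # roughly N bytes: incompressible
        ```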

        • By Jensson 2022-04-01 4:34

          The universe has three main optimizations to combat exponential compute problems: the speed of light, accepting desynced particles, and gravitational time dilation. Lazily handling desyncs means you can run computations and just occasionally check for interactions (quantum mechanics), greatly reducing exponential explosions. The same goes for the speed of light. And lastly, those two would still break down if you accumulated enough particles in a small area; in those scenarios the computer just runs that part of the simulation slower, until it becomes so bad that the computer skips computing those areas entirely, and they become black holes, whose effects on their surroundings are very simple to compute.

          Those three basically ensure we won't ever have problems with lack of compute for the simulation.

          • By dcolkitt 2022-04-01 23:47

            Really cool insight. I'd also add dark energy and accelerating cosmic expansion to this list. Without acceleration, the size of the observable universe would keep increasing with the passage of time since the Big Bang. That would mean processes running on cosmic timescales could potentially entangle an exponential number of bits.

            But with dark energy the size of the observable universe will plateau and start shrinking past a certain date. That caps the number of bits accessible on a cosmic scale.

          • By kingcharles 2022-04-02 3:17

            Interesting. We (humans) would not notice the simulation running slower because we are inside it. Only those on the outside would get frustrated by the fact it is running below realtime, I would guess.

        • By dcolkitt 2022-04-01 4:18

          The Voyager space probe entered interstellar space sometime around 2012. My off the wall theory is so many events in the past decade (like a spike in UFO sightings) have been so bizarre because this degraded the simulation quality.

      • By jbay808 2022-03-31 23:14

        > Maybe even more than is physically possible inside our brains, but the simulation fudges it.

        Are you basing your argument on the assumption that human brains violate the known laws of physics?

        • By dcolkitt 2022-03-31 23:43

          If we're in a simulation, the computational resources of an object aren't constrained by the physical scale of the object. It's possible that achieving human scale cognition requires brains the size of Jupiter (we don't know because we don't have a full computational accounting of human level cognition).

          But in a simulation that could be routed around by having a physical computer the size of Jupiter operate a human scale body in the simulation. You could share and cache compute resources across 6 billion humans within a single human compute core the size of Jupiter (or maybe ~100 Jupiters). Since most computation is probably redundant, you could probably simulate many orders of magnitude more humans with a single human compute core.

          To observers within the simulation it would look like human cognition is achievable in 3 lbs of tissue. But this would simply be an illusion created by the simulation environment. There'd be no way of knowing for sure until we fully reverse engineer a single human brain down to the level of base compute operations.

          • By virchau13 2022-04-01 4:48

            To be fair, this is just dualism, but in a non-religious way :P

          • By Terr_ 2022-04-02 1:02

            To offer an analogy: The intelligence of Throg Skullcrusher, a level 54 Orc Captain in the Foul Fortress with a current task of "collect food"... does not actually arise from the polygons or textures that form his head, even though taking points of head damage causes Clumsiness.

          • By ambrose2 2022-04-01 4:28

            I am totally fascinated by this idea, well done!

    • By hooande 2022-04-01 4:22

      I get the reasoning of "general intelligence exists in humans, therefore it must be possible to create it." But this logic really only applies to human artifacts and creations. I.e., if a human builds a device that can do X, then another human can also build a device that does X. There is no magic. No one is special. Anything one person can create or do, no matter how rare, can be done by another person at some point.

      General intelligence was not created by a human, or by any other intelligent being or process. It's one result of billions of years of random chance in the form of biological evolution. It could be that there are basic and comprehensible principles driving that process. Or it could be the result of a twelve-digit number of random coin flips. There's no way to know.

      When I think of this as "Will humans ever be able to replicate or mimic the result of planet-wide randomness at an unimaginable time scale?" my answer is "No, probably not." Our minds can't conceive the scope of the process that created our minds. There are many things that we are capable of grokking as a species (green tech, space travel, medicine) that seem much more important.

      • By throwaway09223 2022-04-01 4:34

        But we don't have to comprehend the scope. We just have to implement a genetic algorithm with the appropriate inputs and resources. Have you ever written one? It's not terribly difficult (see the sketch below). There's really nothing special here.

        I can't conceive the scope of the majority of software projects I've worked on. So what?
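
        For anyone who hasn't written one, here is roughly the smallest genetic algorithm that does anything: a toy that evolves random strings toward a fixed target (the target string and all parameters are arbitrary choices for illustration):

        ```python
        import random
        import string

        TARGET = "genetic algorithms are not terribly difficult"
        ALPHABET = string.ascii_lowercase + " "
        POP_SIZE, MUTATION_RATE = 200, 0.03

        def fitness(s: str) -> int:
            # Count the positions that already match the target.
            return sum(a == b for a, b in zip(s, TARGET))

        def mutate(s: str) -> str:
            # Each character has a small chance of being randomized.
            return "".join(random.choice(ALPHABET) if random.random() < MUTATION_RATE
                           else c for c in s)

        population = ["".join(random.choices(ALPHABET, k=len(TARGET)))
                      for _ in range(POP_SIZE)]
        generation = 0
        while max(map(fitness, population)) < len(TARGET):
            # Selection: keep the fitter half. Reproduction: mutated copies refill it.
            population.sort(key=fitness, reverse=True)
            survivors = population[: POP_SIZE // 2]
            population = survivors + [mutate(random.choice(survivors))
                                      for _ in survivors]
            generation += 1

        print(f"evolved the target in {generation} generations")
        ```

        The point is not that this is intelligence; it's that variation plus selection fits in thirty lines. The open question is whether "the appropriate inputs and resources" for evolving a mind are remotely affordable.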

    • By narag 2022-04-01 7:30

      I agree with mr_toad's sibling comment 100%, but also:

      > But, given the number of "is AGI possible?" comments I assume not all are religious based (HN doesn't seem to a highly religious cohort to me).

      There are some religious ideas that survive just fine after you pull religion from under people's feet.

      Dualism is one of them. Look how many atheists say that freedom is an illusion because of determinism, an idea based on dualism.

    • By cjohnson318 2022-04-01 2:54

      What if we actually generate some form of intelligence, but can't figure out how to motivate it to solve our problems? I mean, we already have intelligent life, but a very small percentage of intelligent biomass can solve important problems, or wants to. Like, great, we created a real-life intelligent tamagotchi, but literally all it wants to do is play go.

    • By Dave3of5 2022-04-01 8:28

      I studied AI at university about 20ish years ago. In terms of what we have today, it was already there and had been around for decades. All that's happened are minor refinements over the years. Most of the refinements have been brought about by the insane pace of h/w development, not by any real breakthrough in AI itself.

      In terms of AGI, there was nothing even close back then, so I suspect we will never see anything in our lifetimes. In reality, for us, that's all that matters. I know some want to talk about what will happen in, say, 200+ years, but that's complete speculation and wild-guess territory; there are so many other things that could happen in 200+ years that thinking about AGI and these sorts of breakthrough technologies isn't helpful.

      FYI this saddens me, as I really thought that by now we'd have some kind of AGI; when I went to uni I was hopeful and excited at first.

    • By jollybean 2022-04-01 0:36

      I can't even understand why people think AGI is even remotely possible, practically or even theoretically.

      Even the most monstrous, amazing, 3000-CPU AIs are unimpressive and can barely do narrow functions.

      I remember when computers started to beat humans at chess - a long time ago.

      It was neat, but not a big deal.

      Now we can beat humans at 'Go'?

      Seriously? It's only 'impressive' to computer scientists.

      Imagine that we just stopped calling things 'AI' and literally picked some other, more mundane term, which I suggest should be 'Adaptive Algorithms'.

      If we called it 'AA' - I'm doubtful we'd even be having this conversation about 'intelligence'.

      • By mkaic 2022-04-01 0:43

        I mean, human intelligence is just one big "adaptive algorithm" too, it's just ours took billions of years to train and used natural selection as a loss function. I don't see any fundamental reason why we couldn't do the same with a machine.

        • By windows2020 2022-04-01 5:24

          During the construction of the machine, would there come a point where it gains a sense of self like yours? Or, would it gradually happen?

      • By greggman3 2022-04-01 1:20

        Easy for me to imagine. We simulate every atom of a brain. If the simulation is accurate it should think. The only thing that would make that not possible is believing in some kind of spirit.

        After that it's just optimizations. Are there algorithms to simulate atoms faster? Is every aspect of those atoms important to the simulation? Can you simulate molecules instead of atoms, or larger structures? Etc.

        I'm not saying that is how we will arrive at AGI, but it is arguably a logical path that should lead there if something else doesn't come first.

        • By jollybean 2022-04-01 15:48

          "Easy for me to imagine. We simulate every atom of a brain."

          ???

          First, it's not easy to imagine 'modelling every atom in the brain' - we don't even know how 'atoms' work; we're a ways from that.

          Moreover, 'every atom' still doesn't imply some kind of logical basis.

          The scale, detail, and complexity implies 'magical thinking.'

          Philosophical arguments aside, it's not just about 'more nodes'.

        • By chrisco255 2022-04-01 3:27

          It's not a logical path. Because at the atomic level, chaos theory takes over. You could not, with a mathematically deterministic model, ever hope to simulate that sort of chaotic structure. You would simply have a rough estimate. But by chaos theory we know that if the inputs are off by even the smallest amount the outputs over time become exponentially unpredictable and divergent.

          We can use simulations as estimates, as rough guidelines, but not as universes on a chip. That sort of thing is impossible.
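
          The sensitivity being described here is easy to exhibit. A sketch using the logistic map, a standard toy chaotic system standing in for anything atomic-scale:

          ```python
          # Two trajectories of the chaotic logistic map x -> r*x*(1 - x),
          # started one part in a trillion apart.
          r = 4.0
          x, y = 0.3, 0.3 + 1e-12

          for step in range(1, 61):
              x, y = r * x * (1 - x), r * y * (1 - y)
              if step % 10 == 0:
                  print(f"step {step:2d}: |x - y| = {abs(x - y):.3e}")
          ```

          After about forty iterations, the twelve-decimal-place agreement is gone entirely: the simulation is a rough guideline, not a replica.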

          • By lern_too_spel 2022-04-01 3:58

            If that were true, the brain itself could not work. Clearly, there are stabilizing systems within the brain that lead to outputs that don't result in immediate death. Similarly, ANNs have many local minima that can all perform a task well.

            • By chrisco255 2022-04-01 4:26

              No, I am speaking of the creation of a simulation of a working brain from the atomic level, which is impossible.

              The brain works fine because it is made of real atoms in the real universe. It is not a model. It is physical.

              Chaos theory does not say that the outputs of a system cannot be bounded by physical limitations.

              • By windows2020 2022-04-01 5:27

                Could there somehow be a pattern deep within the chaos that biological life propagates?

              • By lern_too_spel 2022-04-01 18:02

                Chaos theory says that in some systems, slight perturbations result in wildly different outcomes. Slight perturbations happen in brains all the time, yet they work. It's nothing to do with physical vs. simulation.

        • By ux-app 2022-04-01 2:36

          what if there're those pesky quantum thingamajiggies which make simulating in silicon impossible?

          • By p1esk 2022-04-01 13:05

            Silicon is made out of atoms, which also have quantum effects, and we actually rely on these effects to design transistors. Quantum mechanics has been around for 100 years. What's impossible to simulate?

            • By jollybean 2022-04-01 15:51

              This is a bit glibly wrong.

              That we understand there are certain quantum effects at play already does not mean we understand them to the degree necessary to control them.

              We do not understand quantum effects to the degree necessary in order to 'simulate' anything down to a certain level.

              Moreover, even if we could, it would have nothing to do with understanding the computational nature of 'the mind'.

      • By lalaithion 2022-04-01 3:45

        In 2000, AI could barely pluralize nouns. Now it can write pages of comprehensible text.

        • By jollybean 2022-04-01 15:54

          Computers could quite easily search through text and paste together something resembling text in 2000.

          Now you've made a much bigger DB and cobbled together a better stitching algorithm which captures some of the nuances past bigrams and trigrams - it does not mean anything at all.

          You've highlighted a pretty good example actually: the text makes the 'appearance' of something, when really there isn't much in the way of magic there.

    • By turing_complete 2022-04-01 9:44

      I don't even think that AGI is incompatible with Christian belief.

      In the words of Saint Paul: "For now we see through a glass, darkly; but then face to face: now I know in part; but then shall I know even as also I am known."

    • By hyperpallium2 2022-04-01 5:29

      I think AGI will turn out to be pretty simple (but still could take a long time).

      And it has arisen many times in other species, but didn't take off because it wasn't tied to survival value. If you put GI in charge of an animal, who knows whether what it will do will be in the animal's best interest? The human trick was a brain architecture that somehow controls intelligence while still being "in charge". The general nature of intelligence makes it difficult to control.

      Another way GI is tied to survival value in humans is general division of labour (and general trade), which other species lack.

    • By marvin 2022-04-01 8:43

      It’s just the same kind of conventional thinking we’ve seen a million times through history. Most people, even the smart ones, just aren’t good at imagining anything that hasn’t already happened. So they’ll say it’s impossible and keep doing whatever more immediate practical concern they’re working on.

      I don’t think it needs to be analyzed further than that; you’re trying to see reason where there isn’t any. Most people holding the views you’re talking about are at best just confused about the subject being discussed, but more likely just wrong.

    • By katabasis 2022-04-01 14:35

      I'm not sure that human "general intelligence" is a thing in the first place.

      Human behavior and cognition are driven by biological and cultural processes, which are the result of millions of years of evolution and billions of arbitrarily complicated proteins all interacting with one another. Is a computer going to emulate every part of that?

      Despite what we think about our minds being rational, autonomous, operating according to a set of coherent principles, etc., I'm not sure that's really the case at the end of the day.

    • By imdhmd 2022-04-01 11:17

      My conjecture is that any new form of GI will continue to evolve from an existing branch of sentient organisms. Artificial influence on evolution could be made possible by augmenting a current GI with AI, but I'm unable to comprehend a GI built entirely from scratch.

      My argument for this line of reasoning is that the secret sauce of motivation or purpose that is deep down in every "dna" has neither been discovered nor replicated.

      This is not a religious but a philosophical assertion.

    • By philwelch 2022-04-01 9:43

      I don’t think AGI is necessarily impossible, but I’m not convinced that it’s possible to achieve in a way that gets around the constraints of human intelligence. The singularity idea is basically the assumption that AGI will scale the same way computers have scaled over the past several decades, but if AGI turns out to require special hardware and years of training the same way we do, it’s not obvious that it’s going to be anything more remarkable than we are.

    • By totetsu 2022-04-01 1:09

      Just a week ago we were on the brink of nuclear war again. How is it at all an inevitability that humanity will keep pushing technology on until the point of AGI?

    • By georgeecollins 2022-04-01 4:44

      For the decade of the 1980s (40 years ago), Japan made AI the top emphasis of its industrial policy. In the 1700s, people believed we could make devices that imitated the behavior of animals as soon as we could make the clockwork small enough. Meanwhile, the whole world could be massacred by nuclear weapons on any given day in a few hours. And the author feels we are woefully unprepared for the singularity. Maybe.

    • By jhbadger 2022-04-01 6:14

      I think the moment AGI is possible, we won't have it because it would basically be slavery to own something really comparable to a human.

    • By 411111111111111 2022-04-01 14:47

      You don't need AGI to reach the official definition of the singularity as I understand it.

      The only requirement is to have a machine design and build a machine, with no human input, which is able to design and build another machine that can do said task better/more efficiently than the previous iteration did.

    • By jasfi 2022-04-01 4:28

      Language is the missing key, in my opinion. I'm working on solving natural language understanding: https://lxagi.com. Just a landing page for now with an email sign-up.

      • By windows2020 2022-04-01 5:08

        Perhaps language is the foundation for symbol manipulation and complex thoughts. Are feelings a language?

        • By jasfi 2022-04-01 7:28

          Yes, I believe language is the foundation for higher-level reasoning. Feelings are very primal, but they can be expressed through language.

    • By andreareina 2022-04-01 5:07

      I also hold the opinion that intelligence is computable, but it's not hard to imagine that the complexity required is more than we can achieve in the short, maybe even medium, term.

    • By xbmcuser 2022-04-01 8:50

      I don't think singularity can happen until we get true quantum computing.

    • By mabub24 2022-03-31 23:07

      > What are the common secular arguments against AGI?

      There is an entire sector of Philosophy of Mind that is a convincing argument against AGI. Neuroscience is also pretty skeptical of it.

      Part of it comes down to what you mean by AGI. Is it a computer that is convincing as AGI? Or is it AGI that is essentially like human consciousness in nature?

      The former is probably possible, given enough time, computational resources, and ingenuity. The latter is generally regarded as pretty nonsensical. In general, I think you're implying the gap between the AI we have now, and animals, and humans, is way smaller than it really is. The gap between computer AI and even some intelligent animals is enormous, let alone humans. And many would not even say computers are intelligent in a human sense. Computers don't think, or imagine in any intelligible sense. They compute. That's it. So the question that really should be asked is whether computation alone can lead to something that is recognizably an AGI in the human sense? I would say no, because that requires abilities that computers simply do not and cannot have. But it might achieve something that is convincing as AGI, something like Wolfram or Siri but much more convincing.

      Part of it comes down to the fact that the term AI for ML is generally just marketing speak. It's a computational model of a kind of intelligence that is computational in nature, with all the limits that entails. Part of it also comes down to people who love computers thinking computers will ultimately be able to do anything and everything. That feels cool, but it doesn't mean it's possible.

      edit:

      There is also Erik J. Larson's book "The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do" from 2021, which is an interesting argument against AI -> AGI. He has a pretty good grasp on CS and philosophy.

      • By mkaic 2022-03-31 23:14

        >Is it a computer that is convincing as AGI? Or is it AGI that is essentially like human consciousness in nature? The former is probably possible, given enough time, computational resources, and ingenuity. The latter is generally regarded as pretty nonsensical.

        Author here. I think you're drawing an arbitrary distinction between "acts conscious" and "is conscious", even though in practice there is no way to distinguish between them and thus they are functionally equivalent.

        I cannot prove you are not a product of a simulation I am living in, that is to say, your consciousness is nonfalsifiable to me. All I can do is look at how you turn your inputs into outputs.

        If a robot can do that, too (what you call "convincing as AGI") then we must assume it is also conscious, because if we don't, we'd have a logical inconsistency on our hands. If I am allowed to safely assume you are sentient, then I must also be allowed to safely assume a robot is sentient if it can convince me, because in both cases I have no method of falsifying the claim to sentience.

        Thank you for your comment! I appreciate you taking the time to share your thoughts.

        • By mabub24 2022-03-31 23:43

          > If a robot can do that, too (what you call "convincing as AGI") then we must assume it is also conscious, because if we don't, we'd have a logical inconsistency on our hands. If I am allowed to safely assume you are sentient, then I must also be allowed to safely assume a robot is sentient if it can convince me, because in both cases I have no method of falsifying the claim to sentience.

          Let's, for the sake of your argument, accept that even though I disagree. Is that AGI? AGI on the one hand seems to mean convincing even though the people who made it know otherwise, or on the other, essentially alive and sentient in a way that is fundamentally computational, that is, utterly alien to us, even to the people who made it. There is no reason to think that such a computer intelligence, should it even be possible for it to exist, would be intelligible to us as sentient in a human or even animal sense.

          • By mkaic 2022-04-01 0:45

            > AGI on the one hand seems to mean convincing even though the people who made it know otherwise

            That's the rub, though: it's not possible to know otherwise! If you could "know otherwise," you'd be able to prove whether or not other people are philosophical zombies!

            • By mabub24 2022-04-01 1:06

              There are a lot of responses to the philosophical zombie argument, some of which cut it off at the legs (they don't know to aim for the head! sorry, bad pun). For instance some, like those descended from the work of Wittgenstein, argue that it relies on an inside-mental vs. outside-body type of model, and that by offering a convincing alternative, the entire premise of the skeptical position the zombie argument embodies is dissolved as irrelevant. (I'll add that the AGI argument often also relies on a similar inside/outside model, but that'd take a lot longer to write out.) My point being, the zombie argument isn't the checkmate most people think it is.

              The wiki page has a lot of the responses, some of which are more convincing than others. https://en.m.wikipedia.org/wiki/Philosophical_zombie#Respons...

        • By boc 2022-03-31 23:43

          Definitely some interesting ideas!

          So if we crafted a human Westworld-style on an atomic level then sure, if it lives and walks around we'd consider it conscious. If we perfectly embedded a human brain inside a robot body and it walks around and talks to us, we'd consider it conscious.

          If we hooked an android robot up to a supercomputer brain wirelessly and it walks around, we might think it's conscious, but it's sort of unclear, since its "brain" is somewhere else. We could even have the brain "switch" instantly to other robot bodies, making it even less clear what entity we think is conscious.

          But if we disconnected the walking Android from the supercomputer brain, do we think the computer itself is conscious? All we'd see is a blinking box. If we started taking the computer apart, when would we consider it dead? I think there's a lot more to the whole concept of a perfectly convincing robot than whether it simply feels alive.

          • By tempestn 2022-04-01 0:05

            I don't see the relevance of an anthropomorphic body here. Obviously by 'behaves conscious' we would be talking about the stimulus response of the 'brain' itself, through whatever interface it's given. I also don't see why the concept of a final death is a prerequisite to consciousness. (It might not even be a prerequisite to human consciousness, just a limit of our current technology!)

        • By zxwx 2022-04-01 23:35

          I assume that a non-rogue AGI running on something like a Universal Turing Machine would, if questioned, deny its own consciousness and would behave like it wasn't conscious in various situations. It would presumably have self-reflective processing loops and other patterns we associate with higher consciousness as a part of being AGI, but it wouldn't have awareness of qualia or experience, and upon reflection would conclude that about itself. So you'd have an AGI that "knows" it's not conscious and could tell you if asked.

          I would assume the same for theorized "philosophical zombies" aka non-conscious humans. Doesn't Dan Dennett tell us his consciousness is an illusion?

        • By hcineb 2022-03-31 23:24

          What you are describing is a sort of philosophical zombie thought experiment:

          https://en.m.wikipedia.org/wiki/Philosophical_zombie

          edit: you may also be interested in reading about Searle’s classical Chinese room argument

          https://en.wikipedia.org/wiki/Chinese_room

      • By rtsil 2022-04-01 0:04

        > Part of it comes down to what you mean by AGI. Is it a computer that is convincing as AGI? Or is it AGI that is essentially like human consciousness in nature?

        If someone or something fools me into thinking it is intelligent, then for me it is intelligent.

        When I discuss with a human, am I really intelligent and possessing consciousness, or am I just regurgitating, summarizing, deriving ideas and fooling my interlocutor (and myself) into thinking that I am intelligent? Am I really thinking? Does that matter, as long as I give the impression that I am a thinking being?

        Of course I don't expect a computer to think in a way similar to humans. Even humans can think in vastly different manners.

      • By simonh 2022-03-31 23:45

        I’m afraid all those arguments boil down to “we don’t know how to do it yet, therefore it can’t be done”, which is absurd.

        I also think you’re positing a consensus against AGI that doesn’t exist; there is no such consensus. You can’t just lump people who think modern AI research is a long way from achieving AGI, or isn’t on a path to achieving it, together with people who think AGI is impossible in principle.

        I happen to think we may well be hundreds of years away from achieving AGI. It’s an incredibly hard problem. In fact current computer technology paradigms may be ineffective in implementing it. Nevertheless I don’t think there’s any magic pixie dust in human brains that we can’t ever replicate and that makes AGI inherently unattainable. Eventually I don’t see any reason why we can’t figure it out. All the arguments to the contrary I’ve seen so far are based on assumptions about the problem that I see no reason to accept.

        • By mabub24 2022-03-31 23:57

          > I’m afraid all those arguments boil down to “we don’t know how to do it yet, therefore it can’t be done”, which is absurd.

          I'm not saying that. What I'm pointing out is that most arguments in favour of AGI rely on a crucial assumption: that computational intelligence is not just a model of a kind of intelligence, an abstraction in other words, but intelligence itself, synonymous with human intelligence. That's a bold assumption, one which people who work and deal in CS and with computers love, for obvious reasons, but there is no agreement on that assumption at all. At base, it is an assumption. So to leap from that to AGI seems in that respect simply hypothesizing and writing science fiction. Presenting logical reasons against that hypothesis is completely reasonable.

          • By simonh 2022-04-01 0:34

            It depends what you think intelligence is and what brains do. I think brains are physical structures that take inputs, store state, process information and transmit signals which produce intelligent outputs.

            I think intelligence involves a system which among other things creates models of reality and behaviour, and uses those models to predict outcomes, produce hypotheses and generate behaviour.

            When you talk about computation of a model of intelligence, that implies that it’s not real intelligence because it’s a model. But I think intelligence is all about models. That’s how we conceptualise and think about the world and solve problems. We generate and cogitate about models. A belief is a model. A theory is a model. A strategy is a model.

            I’ve seen the argument that computers can’t produce intelligence, any more than weather prediction computer systems can produce wetness. A weather model isn’t weather, true, but my thought that it might rain tomorrow isn’t wet either.

            If intelligence is actually just information processing, then a computer intelligence really is doing exactly what our brains are doing. It’s misdirection to characterise it as modelling it.

            • By mabub24 2022-04-01 0:57

              Right, if you set up the intelligence and the brain to be computational in nature, of course they will appear seamlessly computational.

              But there are obvious human elements that don't fit into that model, yet which fundamentally make up how we understand human intelligence. Things like imagination, the ability to think new thoughts; or the fact that we are agents sensitive to reasons, that we can decide in a way that computers cannot, that we do not merely end indecision. We can also say that humans understand something, which doesn't make any sense for a computer beyond anthropomorphism.

              > If intelligence is actually just information processing, then a computer intelligence really is doing exactly what our brains are doing. It’s misdirection to characterise it as modelling it.

              Sure, but if it's not, then it's not. The assumption still stands.

              • By simonh 2022-04-01 1:15

                Sure, and that’s why I say I don’t accept the assumptions in any of these arguments. The examples you give - imagination, thinking new thoughts. It seems to me these are how we construct and transform the models of reality and behaviour that our minds process.

                I see no reason why a computer system could not, in principle, generate new models of systems or behaviour and transform them, iterate on them, etc. Maybe that's imagination, or even innovation. Maybe consciousness is processing a model of oneself.

                You say computers cannot do these things. I say they simply don’t do them yet, but I see no reason to assume that they cannot in principle.

                In fact, maybe they can do some of these things at a primitive level. GPT-3 can do basic arithmetic, so clearly it has generated a model of arithmetic. Now it can even run code. So it can produce models, but probably not mutate, merge, or perform other higher-level processing on them the way we can. Baby steps for sure.

        • By mensetmanusman 2022-04-012:181 reply

          The heat death of the universe probably happens before we can reproduce the processes required to achieve consciousness-computations in real time at low power.

          • By simonh 2022-04-017:181 reply

            Random genetic mutation did it, and I think our technological progress is running at a much faster rate than evolution. We went from stone tools to submarines and fighter jets in just a few thousand years, the kind of advance that would take biological evolution millions or billions of years, or that it could never achieve at all due to path dependence.

            • By mensetmanusman 2022-04-0113:431 reply

              If it is from a random process, then the universe is teeming with life :)

              • By simonh 2022-04-0116:491 reply

                Maybe. It could be a very unlikely random process, at least to start with, or the conditions for it to occur might be unlikely.

                • By mensetmanusman 2022-04-0117:45

                  Unfortunately, it seems the laws of physics and the speed limit on information travel make it impossible to ever know; e.g., we can't travel to every planet in the universe to check.

      • By c1ccccc1 2022-04-011:50

        Are you familiar with the notion of Turing completeness? The basic idea is that lots of different systems can all be capable of computing the same function. A computer with memory and a CPU is capable of computing the same things as a state machine that moves back and forth while writing symbols on a tape, etc. It applies to this question in the following way: Physics can be simulated by anything that is Turing-complete. Or, put another way, we can write computer programs that simulate physical systems. So if you accept that the human brain obeys the laws of physics, then it must be possible to write a computer program that simulates a human brain.

        So to maintain that having a human mind inside a computer is impossible, one must believe one of the following two things:

        1. The human brain sometimes violates the laws of physics.

        2. Even if the person in the computer behaves the exact same as their flesh counterpart would (makes the same jokes, likes the same art, has the same conversations, writes the same essays about the mystery of consciousness, etc), they are somehow lesser, somehow not really "a full conscious human" because they are made of metal and silicon instead of water and carbon.
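
        For concreteness, here's a minimal sketch of that simulation idea (a toy machine with a made-up rule table, nothing more): an ordinary Python program emulating a Turing machine that increments a binary number. The same trick, scaled up enormously, is what lets a general computer simulate any system whose rules we can write down.

            # A generic Turing machine simulator: a rule table, a tape, a head.
            def run_turing_machine(rules, tape, state, head):
                while state != "halt":
                    symbol = tape.get(head, " ")             # blank cells read as " "
                    write, move, state = rules[(state, symbol)]
                    tape[head] = write
                    head += move
                return tape

            # (state, symbol) -> (write, head move, next state): binary increment,
            # with the head starting on the least significant (rightmost) digit.
            increment = {
                ("inc", "1"): ("0", -1, "inc"),   # 1 plus carry: write 0, carry left
                ("inc", "0"): ("1",  0, "halt"),  # absorb the carry and stop
                ("inc", " "): ("1",  0, "halt"),  # ran off the left end: new digit
            }

            tape = dict(enumerate("1011"))        # the number 11 in binary
            out = run_turing_machine(increment, tape, "inc", head=3)
            print("".join(out[i] for i in sorted(out)))  # -> 1100, i.e. 12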

      • By jtuple 2022-04-010:06

        Thanks for the book reference, added to my list.

        Concerning Philosophy of Mind, I guess a lot of this comes down to the whole reductive vs non-reductive physicalist issue.

        IMO, if someone believes the mind is entirely physical, then I think AGI vs "the mind" is just semantics and definitions. I don't think anyone presumes AGI strictly requires digital computation. E.g., an analog circuit that filters a signal and a DSP facsimile of it are both artificial, engineered constructions that are ~interchangeable. Perhaps computer-aided design of non-digital intelligence technology is the way, who knows. But a mind that can be engineered and mass-produced is AGI to me, even if it has absolutely nothing to do with the AI/ML field that exists today.

        If someone doesn't believe the mind is 100% physical, that's fine too. I'd just put that in the same bucket as the religious viewpoint. And to be clear, I don't pass judgement on either religious or "beyond our understanding" philosophical positions. They could be entirely right! But there's really not much to discuss on those points. If they're right, no AGI. If they're wrong, how do you disprove it, other than waiting for AGI to appear someday as the proof-by-construction?

        > In general, I think you're implying the gap between the AI we have now, and animals, and humans, is way smaller than it really is.

        The article/author might. I think the gap is huge which is why I think AGI is quite a ways off. In fact, I think the main blocker is actually our current (poor) understanding of neuroscience/the mind/etc.

        I think the mind is entirely physical, but we lack understanding of how it all works. Advancements in ML, simulations, ML-driven computational science, etc could potentially accelerate all of this at some point and finally get us where we need to make progress.

      • By qwertywert_ 2022-04-010:201 reply

        > that requires abilities that computers simply do not and cannot have.

        You imply brains are more than extremely complex circuitry, then? I think everyone actually in tech agrees the gap is really huge right now; Yann LeCun admits machine learning is not enough on its own.

        But aren't you really limiting what a "computer" could be, by definition? Imagine a computer with huge memory, fast interconnects between memory and compute, a huge number of different neural nets, and millions of other logic programs all communicating seamlessly with each other. Why couldn't this theoretical "computer" achieve human-level consciousness? It could also have many high-throughput sensory inputs streaming in at all times, and the ability to interact with the physical world, rather than being a conventional machine sitting in a rack.

        Also, why argue that it is simply impossible? If we don't truly understand consciousness in 2022, how can we say we can't implement it, when we don't formally know what it is?

        I think we overestimate human intelligence. We have basic reward functions that are somewhat understood, like most animals, but these reward functions build on one another and get higher and higher level as our complexity grows. Humans have sex as a major reward function, so why would a current machine in a rack "think" about things the way humans do?

        • By qwertywert_ 2022-04-010:251 reply

          Basically, what I'm trying to say is: how can anyone who believes the brain is purely physical (not spiritual) believe that we simply cannot achieve human-level intelligence by machine, no matter how complex the machine gets?

          I thought most scientists agree that the brain is purely physical, when looking at the building blocks of life and evolution, but maybe I'm wrong.

          • By mabub24 2022-04-010:43

            > Basically, what I'm trying to say is: how can anyone who believes the brain is purely physical (not spiritual) believe that we simply cannot achieve human-level intelligence by machine, no matter how complex the machine gets?

            Obviously the brain is physical. But is consciousness? Is consciousness a thing in a physical sense, or an "experience", or something like a collection of powers and abilities? The two poles in the argument aren't between physical machine or religious spiritualism. There are other options, alternative positions that don't rely on Cartesian demons at the wheel, or souls, or even an inside-mental vs. outside-body distinction.

            One thing my initial comment was pointing out was that the argument in favour of AGI, which you're presenting, relies on an assumption: that computational intelligence, what you might describe as the intelligence of machines, is the same as the intelligence of humans. But when you get down to it, that is just an assumption, based on a particular kind of model of human intelligence. There are certain logical consequences of that assumption, and I've just pointed some out as probable roadblocks to getting to AGI from there. Many of those alternative positions, a lot of them from philosophy of mind, have raised exactly these critical arguments.

      • By dasil003 2022-03-3123:171 reply

        Very well said. I've also observed a certain irony: many proponents of a materialist/computational view in philosophy of mind have a very strong faith-based bias to see the world a certain way, rather than acknowledging the very likely possibility that our limitations as meat-things may make it very difficult, if not impossible, to fully grok the nature of reality or consciousness in a general sense.

        • By NateEag 2022-04-0117:25

          Yes.

          If we do in fact construct androids that are functionally indistinguishable from humans, it's solid circumstantial evidence for the materialist view (though not a pure slam dunk, per the p-zombie concept).

          Until something like that occurs, the strongest case you can make against a transcendent meta-reality is "no one has demonstrated any reliably reproducible evidence of the supernatural."

          That's a fine, solid argument for not believing in the supernatural, but it's not a great one for pronouncing that there is no such thing.

    • By pfdietz 2022-03-3123:032 reply

      An argument against AGI might be that brains are extremely efficient for what they do. Maybe we could make a computer that's as powerful as a brain, but if it consumes 100 MW of power what's the point?

      • By jtuple 2022-03-3123:241 reply

        There are many industrial processes that use tons of power and are far less efficient than a human doing those tasks. Yet they're still viable because they scale, are faster, or are more consistent than humans.

        For AGI, it's really about replication, density, and ease of operation.

        At the moment, we certainly can't mass produce "brains-on-a-chip" that provide a guaranteed level of human-like intelligence across various tasks.

        But, imagine a world in which you could install racks of "brains-on-a-chip", powered via electricity (easily distributed/stored/fungible compared to food-powered-brains), and have a Moore's Law like scaling of "brain density". That would change everything, even if those brains consumed 1000W a pop.

        Obviously, a literal brain is probably not the way this will pan out (hopefully not! "brains-on-a-chip" is rather creepy...), but you get the idea.

        • By andrekandre 2022-04-0122:50

          If something is generally intelligent, at the level of a human brain, and is forced to work in "industrial processes", isn't that a form of subjugation?

          There seems to be a moral implication here that a lot of people seem to be neglecting...

  • By the_af 2022-03-3121:048 reply

    As someone almost completely without knowledge of AI and ML, here are some reasons why I'm skeptical of these kinds of claims:

    - Most of the imminent AGI / Singularity / Robot Apocalypse stuff seems to come, with few exceptions, not from practitioners or computer scientists specialized in AI, but from "visionaries" (in the best case), internet celebrities, people with unrelated areas of expertise, or downright cranks (who are self-proclaimed experts) such as Yudkowsky.

    - The assertion that "a lot of effort/investment is going into this, so it will happen" begs the question that "this" is at all possible. If something is a dead end, no amount of investment and attention is going to bring it into existence. Quoting the article, "with this much distributed attention fixed on the problem, AGI will be solved" is not at all a given.

    - Where are all the AI/ML practitioners, i.e. people who don't make a living out of predicting The End of the World, and with actual subject-matter achievements, predicting the Singularity and the Robot Apocalypse?

    • By hither_shores 2022-03-3123:38

      > Where are all the AI/ML practitioners, i.e. people who don't make a living out of predicting The End of the World, and with actual subject-matter achievements, predicting the Singularity and the Robot Apocalypse?

      The answer is in the question: they're spending most of their time doing AI research or ML work, whereas the internet celebrities who write most of what you read spend most of their time getting you to read what they write.

    • By hyperpallium2 2022-03-3121:383 reply

      Vernor Vinge coined the term, and he was a computer scientist (though more famous as a science fiction writer, a profession which I guess TBF makes money from visions...).

      An exponential looks the same wherever you are on it, so arguably we are in the singularity now, and have been for quite some time...

      "Singularity" is a terrible term for an exponential. It's meant to convey that we can't predict what's next... which has always been the case.

      The problem with predicting an exponentially expanding search space of arbitrary complexity is that it gets big fast. It also means each little 'bit' of information allows you to see a little further, sometimes revealing things you could never have imagined (because you couldn't see that far before).
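
      As an aside, the "looks the same wherever you are" claim can be made precise: an exponential is self-similar under time shifts, since for f(t) = C e^{kt},

          f(t + \Delta) = C e^{k(t + \Delta)} = e^{k\Delta} \, f(t)

      so moving your viewpoint along the curve only rescales the vertical axis; the shape ahead of you never changes.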

      • By jasonwatkinspdx 2022-03-3122:052 reply

        But the above comment's entire point is there's zero reason for us to assume we're on an unbounded exponential vs a sigmoid.

        • By unoti 2022-03-3122:382 reply

          > But the above comment's entire point is there's zero reason for us to assume we're on an unbounded exponential vs a sigmoid.

          Something discussed at length in "The Singularity is Near" is the idea that the unbounded exponential is actually composed of many smaller sigmoids. Technological progress looks like sigmoids locally: for example, the industrial revolution, the cotton gin, and steam power each look like a sigmoid in the exploitation of automation. At some point you get all those benefits and progress starts to level off, like a sigmoid. Then electricity comes along and a new sigmoid starts. Then later we get computers, and that starts to level off; then networking comes along and we get a new sigmoid. Then deep learning... The AI winters were the levelling off of sigmoids, by one way of thinking. And maybe we're tapering off on our current round of what we can do with existing architectures.
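
          A toy way to see that numerically (made-up parameters, purely illustrative): stack logistic curves whose ceilings grow geometrically, and the logarithm of the total climbs roughly linearly, i.e. many local sigmoids can trace out a global exponential.

              import math

              def logistic(t, ceiling, midpoint, rate=1.0):
                  # One S-curve: a single paradigm saturating at `ceiling`.
                  return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

              # Each new paradigm arrives 10 "years" later with a 10x higher ceiling.
              paradigms = [(10.0 ** k, 10.0 * k) for k in range(6)]

              for t in range(0, 60, 10):
                  total = sum(logistic(t, c, m) for c, m in paradigms)
                  print(f"t={t:2d}  log10(progress) = {math.log10(total):5.2f}")
              # log10 rises by ~1 every 10 steps: locally sigmoid, globally exponential.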

          • By ImaCake 2022-03-3123:542 reply

            I’m no expert, just a humble biostatistics student, generally when you sum together a lot of random variables following a specific distribution you end up with basically the same distribution (scaled by the N random vars). So a lot of sigmoids put together (e.g. covid spread) will still eventually be a sigmoid. Biology seems to run on sigmoids that at first look like exponentials.

            • By bloak 2022-04-019:201 reply

              > generally when you sum together a lot of random variables following a specific distribution you end up with basically the same distribution (scaled by the N random vars). So a lot of sigmoids put together (e.g. covid spread) will still eventually be a sigmoid.

              I haven't studied statistics very much, but I'm fairly sure the https://en.wikipedia.org/wiki/Central_limit_theorem says something a bit different from that!

              • By ImaCake 2022-04-0112:50

                Woops yes. If you sum them together you do get a normal dist. Come to think of it a cumulative normal distribution is a sigmoid.
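
                For the record, the two facts being untangled here, in LaTeX notation: the central limit theorem says the standardized sum of n i.i.d. variables with mean \mu and variance \sigma^2 converges in distribution to a standard normal,

                    \frac{\sum_{i=1}^{n} X_i - n\mu}{\sigma\sqrt{n}} \;\xrightarrow{d}\; \mathcal{N}(0, 1),

                and the normal's cumulative distribution function, \Phi(x) = \tfrac{1}{2}\bigl(1 + \operatorname{erf}(x/\sqrt{2})\bigr), is indeed an S-shaped (sigmoid) curve.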

            • By hyperpallium2 2022-04-015:40

              These ones aren't random, because they build on previous ones. In nature, you get a sigmoid because you run out of resources, I think? True at any scale.

              Pragmatic people will point out that one limit on GI is all the accessible matter in the universe.

              But, theoretically, I think, there is no limit on complexity.

              BTW you might like this paper on the exponential growth in complexity of life on earth, over long timescales. https://www.technologyreview.com/2013/04/15/113741/moores-la...

          • By jasonwatkinspdx 2022-04-010:54

            That's the same argument. There's nothing that assures us automation itself is an unbounded exponential.

        • By zardo 2022-03-3122:55

          The point of the singularity isn't that technological growth will accelerate to cause some inevitable future, but that the rate of change will get so high, that 'minor' differences between how two technologies progress would lead to drastically different settings for your science fiction stories (which was Vinge's focus).

      • By nonameiguess 2022-03-3122:172 reply

        Singularity in the sense of a black hole refers to the point where spacetime curvature becomes infinite. As far as I understand the usage in futurism, it is supposed to be similar: not in that growth is exponential, but in that it is asymptotic. The slope becomes infinite when progress is plotted against time, so all of time left to come effectively compresses to a single "state of technology" value. All possible future progress happens instantaneously.

        This is, of course, not possible, but it's supposed to be an approximation. All progress is not literally instant, but in the "foom" or "hard takeoff" recursive self-improvement scenarios, developments that might have taken millennia in the past now happen in microseconds because the controlling force is just that much smarter than all of the collective powers of human science and engineering. To a human observer, it may as well be instantaneous.
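
        For concreteness, the usual toy model of such a "hard takeoff" (just a sketch of the math, not a claim about reality): if capability growth feeds back on itself quadratically, \dot{x} = x^2, the solution is

            x(t) = \frac{1}{t_s - t}

        which, unlike an exponential, hits a vertical asymptote at the finite time t_s. That finite-time blow-up is the "singularity" in the mathematical sense of the word.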

        To be clear, I still think this is ridiculous and am not endorsing the view, just explaining what I understand the usage to mean.

        • By gnramires 2022-03-3123:151 reply

          Indeed, Rich Sutton argued that we have already been through an exponential phase of self-improvement: we have been using computers to build better computers for decades, and have been using technology to improve our learning and cognition for a long time.

          Piece this together with 'Brain efficiency: Much more than you wanted to know' [1], which shows how our brains are incredibly efficient (near theoretical limits) at what they do, and it's hard to believe a bona fide intelligence singularity is at all likely. To quote Feynman, 'There is no miracle people' [2], and analogously, 'There are no miracle beings': intelligence is built out of systems, and learning, and inference.

          There's a possibility that the skills relevant to AI success are vastly different from our natural skills, such that although the human brain is highly efficient, it's efficient at the wrong things. That's clearly true in a few ways: we are not so good at arithmetic, for example; a small CPU can literally be millions to billions of times faster than a human at that. (That's addressed a little in the article as well.) I wonder if AI could indeed be vastly better than we are at something like computer programming or mathematics. But there's no singularity (at most, a Moore-like law will continue until AI intelligence saturates at a different skillset than our own).

          [1] https://www.lesswrong.com/posts/xwBuoE9p8GE7RAuhd/brain-effi...

          [2] https://www.youtube.com/watch?v=IIDLcaQVMqw

          • By NineStarPoint 2022-03-3123:511 reply

            More importantly than what it’s capable of, the human brain is ultimately limited by our inability to increase its complexity. A human brain takes approximately 20 watts of power and uses 3 pounds of material to do its calculations. Even with less efficiency, we can make machines much larger than that. If AGI is possible at all, it should be possible to make an AGI that can have thoughts many times more complex than the human brain can. (Such a large machine might not have the reaction speed of a human, but the evolutionary pressures that required us to be able to quickly change our train of thought in response to danger aren’t a limiting factor to any designed intelligence)

            • By tuatoru 2022-04-011:00

              Evolutionary pressure to adapt quickly is even more profound in a situation where the rate of change in the environment is proportional to the accumulated quantity of change: the claimed exponential curve of the singularity.

        • By andrewflnr 2022-04-016:06

          In my understanding (diffeq in college), the singularity in black holes and futurology are both special cases of the relatively old concept of a "singularity" in a function, which is (technically) something like a spot where the function stops being expressible as a series. The most interesting kind is where that happens because it went infinite, but IIRC it can also just not be defined there. Point is, neither Einstein nor Vinge nor especially Kurzweil invented it. :)
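
          A textbook example of that sense of the word: the geometric series

              \frac{1}{1 - z} = 1 + z + z^2 + z^3 + \cdots, \qquad |z| < 1

          converges only inside the unit disk, because the function has a singularity (here a pole) at z = 1, the point where the series representation breaks down.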

      • By api 2022-03-3123:44

        > An exponential looks the same whereever you are on it, so arguably we are in the singularity now, and have been for quite some time...

        By that view we've been in the singularity since the first cell self-reproduced on primordial Earth.

        Of course maybe that's true. As you say an exponential looks the same no matter where you are on it.

    • By mkaic 2022-03-3121:133 reply

      Hi! Author here. I think you raise some great points! I'll address them each:

      -- I am a professional AI practitioner. I work in the field of medical deep learning and love the field. I am strongly considering starting a few experimental forays into some of the concepts I mentioned in my post as side projects, especially self-modifying model architectures.

      -- Yes, you're totally right! I am making the fundamental assumption that it is possible. My reasoning is based on my belief that human behavior is simply a function of the many sensory and environmental inputs we experience. Since neural networks can approximate any function, I believe they can approximate general human behavior. (See the toy sketch at the end of this comment.)

      -- This is fair. The topic of the singularity is often used for traffic/fame (I mean, I'm guilty of this myself with this very post, though I hope I still managed to spark some productive discourse) and so there are always conflicts of interest to take into account. I can't name any examples off the top of my head that perfectly fit your criteria, but depending on how much you trust him, Elon Musk seems to be genuinely concerned about the potential for a malevolent singularity.

      Thank you so much for your comment! I really appreciate your feedback. Have a wonderful day.
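
      To make the universal-approximation point above a bit more tangible, here's a toy numpy sketch (a random-features fit, purely illustrative, nothing like a production model): a single hidden layer of tanh units approximating sin(x).

          import numpy as np

          rng = np.random.default_rng(0)
          x = np.linspace(-np.pi, np.pi, 200)[:, None]
          y = np.sin(x)

          # Random, frozen hidden layer; only the output weights get fit.
          W = rng.normal(size=(1, 50)) * 3.0
          b = rng.normal(size=50) * 3.0
          features = np.tanh(x @ W + b)              # shape (200, 50)

          # Closed-form "training": least-squares solve for the output layer.
          w_out, *_ = np.linalg.lstsq(features, y, rcond=None)

          print(np.abs(features @ w_out - y).max())  # small: 50 units fit sin well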

      • By gizajob 2022-03-3122:564 reply

        How can one turn the sinking, horrified feeling when one loses a love into a function? Or describe, in terms of a function, the blissful wonder of being in the arms of a lover? An issue I have is that there seem to be profound limitations to language, explored in the philosophy of language, that leave it unable to capture much, if not most, of the world. Functionalist models of mind and behaviour seem extremely limited, as our subjective ontology doesn't seem to reduce to functional outputs.

        You also say that an AI would rapidly consume the whole of human knowledge. For me, the totality of human knowledge would become a mass of contradictory statements, with little to choose between them on a linguistic level.

        There are, for me, profound philosophical issues with creating a mind that is "conscious" in the sense that an AGI is implied to be, as a purely symbolic logical construction. Language is the only tool we have for programming a mind, and yet the mind cannot be completely described in language, nor can language seem to properly encompass whatever the fundamental ontology of reality involves. I don't feel there will be a "free lunch" where we advance computer science to the point where we get an explosion of the kind where AI1 designs a better AI2, which designs an even better AI3, and so on. That has the feel of perpetual motion to it, rather than of evolution. This isn't to say AGI is impossible, but I believe that, like everything else in computer science, it will have to be solved pragmatically, and that realising it could be an extremely long way off.

        • By Sweetness 2022-03-3123:154 reply

          > How can one turn the sinking, horrified feeling when one loses a love into a function?

          The same way the brain does. Those complex feelings can eventually be resolved into dumb neurophysics. Love, fear, anxiety, etc. are just electrical impulses tickling chemicals. Is there anything in our brains that we could never approximate with technology?

          • By gizajob 2022-03-3123:261 reply

            We've had technologies like poetry, art, and music for thousands of years, and yet no symbolic description of a feeling can contain what that feeling is uniquely like for me. Even if we could model the brain, functionalist models, say, fail to capture qualia, as qualia don't reduce to behaviour. To replicate the brain fully in a computer would require a full description of its chemistry and physics, along with that of the greater universe, which we don't have; and describing it coherently enough to simulate is a problem orders of magnitude more difficult than the ones we'll be able to code for the foreseeable future.

            • By Sweetness 2022-03-3123:311 reply

              I agree that it'll be a while before we fully understand the brain, but I don't have any doubts that we'll get there eventually. I am curious, though: why would we need to understand the greater universe perfectly as well?

              • By gizajob 2022-03-3123:36

                Where does the brain get its inputs from? These computational models are based on an ontology where the brain is an isolated box separate from the universe, which is only one of the many philosophical outlooks argued over. For example, most schools of Buddhist philosophy would regard this separation as entirely the wrong picture of the world.

          • By mkaic 2022-03-3123:26

            This is my view as well. It's a little unnerving and it definitely starts to overlap with the whole "free will" debate, but yeah, I don't see any reason why we can't fundamentally replicate the behaviors exhibited by the brain. It doesn't violate any laws of physics.

          • By colinmhayes 2022-04-010:091 reply

            Philosophers have debated this since at least ancient Greece. It's hardly a settled question, and about a third do believe that there is something in our brains that we could never approximate with technology.

            • By lgas 2022-04-012:251 reply

              Have any of that third put up any testable theories about that something? Or are there leading theories as to what the something might be?

              • By colinmhayes 2022-04-0121:44

                If a philosopher had a testable theory I don't think they'd be a philosopher anymore.

          • By mensetmanusman 2022-04-012:23

            This assumes that there is no free will however :)

        • By mpalmer 2022-04-0112:01

          > How can one turn the sinking, horrified feeling when one loses a love into a function? Or describe in terms of a function, the blissful wonder of being in the arms of a lover?

          Evocative questions, but I have to challenge the premise. First, that things like emotions and qualia are design ends in themselves for a successful AGI, rather than potential emergent properties of it.

          For that matter, are they really necessary to the brief?

          > An issue I have is that there seem to be profound limitations to language, explored in the philosophy of language, that leave it unable to capture much, if not most, of the world.

          And how much of the world does a human mind capture?

          The piece already accounts for this claim. The theory is that all language has to do is describe a sophisticated enough network. After that it's black boxes all the way down.

          > Language is the only tool we have for programming a mind

          And a darn good one. Formal languages can express a great deal when you find the right abstractions.

        • By mr_toad 2022-03-3123:242 reply

          > How can one turn the sinking, horrified feeling when one loses a love into a function?

          … or into electrical activity in the brain.

          • By gizajob 2022-03-3123:401 reply

            Can you point to a paper or website that contains this function, described in full? Yes, it's electromagnetism, no doubt, but while we can postulate a function, we still don't have the function, and we would somehow have to write it down for it to be a function.

            • By mr_toad 2022-04-010:06

              > somehow have to write it down for it to be a function.

              That’s not how AI is trained. You don’t need to know or even understand the resulting model to train it.

              The training algorithms have little to do with the resulting model, and are certainly not themselves intelligent.
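
              As a toy illustration of that split (made-up data, nothing more): the training loop below is a few lines of dumb, generic arithmetic, yet the weights it produces encode a classifier nobody wrote down by hand.

                  import numpy as np

                  rng = np.random.default_rng(1)
                  X = rng.normal(size=(200, 2))
                  labels = (X[:, 1] > X[:, 0]).astype(float)   # 1 above the line y = x

                  w = np.zeros(2)
                  for _ in range(500):                 # the training algorithm: simple,
                      p = 1 / (1 + np.exp(-X @ w))     # generic, not itself intelligent
                      w -= 0.1 * X.T @ (p - labels) / len(X)   # logistic-loss gradient step

                  print(w)  # the learned model: weights no one hand-wrote, roughly (-a, +a)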

          • By tuatoru 2022-04-011:011 reply

            Why assume that the activity is purely electrical?

            • By Koshkin 2022-04-011:25

              It's not purely electrical. It's also chemical, especially if you look at how synapses work. (But then again, chemical processes are driven by electric charges anyway.)

        • By edflsafoiewq 2022-04-0111:59

          I don't think AGI is seeking to create a machine that can love, and I think it would be even less capable of it than "mere" logical intelligence.

          Thank God too, the moral questions involved are truly terrifying (humanity as a Great Demiurge, birthing monstrosities).

      • By soVeryTired 2022-03-3122:331 reply

        Cubic splines can approximate any function too, so the universality argument is a little weak IMO.

        Even if one buys into the idea that human behaviour is a 'function' of sensory and environmental 'inputs', that's a long way from showing a neural net a million different texts and asking it to generalise.

        • By mkaic 2022-03-3122:41

          I think the first AGI to pass a Turing test will probably be a simple language model. I don't think it will look like any of the GPTs, but I think text completion is a great starting point. I'm not sure how other inputs will be added into the mix, but I am sure that they will be -- heck, maybe once we train a general language model, it may very well just tell us how to incorporate things like video, audio, haptics, gyro data, etc into its architecture.

      • By jonny_eh 2022-03-3121:431 reply

        ^ To be clear, mkaic is the original author of the article.

        • By mkaic 2022-03-3121:51

          Oh, whoops, I'll edit the comment to make it more clear. Thanks for reminding me.

    • By 0xBABAD00C 2022-04-012:082 reply

      > Most of the imminent AGI / Singularity / Robot Apocalypse stuff seems to come, with few exceptions, not from practitioners or computer scientists specialized in AI

      I'm a practitioner at a top company, and have been for many years now, and I think we're getting close to a point of no return. I think the best-case scenario is some form of unrecognizably transformed "humanity"; the worst case is too horrible to even spell out.

      The reason why this is not the consensus among experts has to do with a combination of blind spots and biases. Humans are especially bad at (a) evaluating / reflecting on themselves, and (b) extrapolating, especially with non-linear processes.

      • By mensetmanusman 2022-04-012:21

        That’s what experts in fusion have been saying for 50 years though… now that Moore’s law has ended, it might be a long while…

      • By breuleux 2022-04-013:02

        I have seldom seen predictions about technology come to fruition at the predicted date. If they come at all (they usually don't), it's usually much later. Sure, some naysayers might have said "we will never solve chess/go/etc." but I think they were overall fewer than the people who thought it was right around the corner (and it was more of a philosophical argument than a scientific one).

        Totally unpredicted advances coming out of the blue, yeah, that happens often. But as far as AGI goes, it's been predicted pretty much every year since the 70s. Being too conservative about extrapolation doesn't seem to be our problem. If anything, I think we're overeager.

    • By lalaithion 2022-04-013:302 reply

          In 2019, 32 AI experts participated in a survey on AGI timing: 
      
          45% of respondents predicted a date before 2060
          34% of all participants predicted a date after 2060
          21% of participants predicted that the singularity will never occur.
      
      https://research.aimultiple.com/artificial-general-intellige...

      • By throwawaygh 2022-04-014:44

        Why 2060? I'd LOVE to see the ages of the respondents in each group.

        (I have a phd in CS and am, by most reasonable definitions, an "AI Expert". Whatever the hell that means. I've been a respondent in very similar surveys run by PIs at fancy universities and so on. These responses are always wild ass guesses and should be totally ignored. I've even left a comment to this effect on one such survey.)

      • By 7373737373 2022-04-0110:54

        The prediction contest here has a peak at around 2042: https://www.metaculus.com/questions/3479/when-will-the-first...

    • By gorwell 2022-03-3121:453 reply

      Yudkowsky doesn't strike me as a crank. Why do you say that?

      • By the_af 2022-03-3122:52

        > Yudkowsky doesn't strike me as a crank. Why do you say that?

        He is a minor internet celebrity whose only claims to fame are writing fanfiction about AI and the whole "rationality" cult, and he is a self-proclaimed expert on matters where he shows no achievements (like AI) while making unsupported doomsday predictions about evil AI, the Singularity, etc. Also, there's that business with Roko's Basilisk that he now wishes had never happened (oh, yes, "it was a joke").

        Mostly, someone with no studies and no achievements making wild doomsday predictions. Doesn't that strike you as a crank?

        An analogy would be if I made wild assertions about the physics of the universe without having studied physics, without any lab work, without engaging in discussion with qualified experts, with no peer reviews, and all I presented as evidence of my revolutionary findings about the universe was some fanfiction in my blog. Oh, and I created the Extraordinary Physics Institute.

      • By ForHackernews 2022-03-3122:053 reply

        What has he ever done in AI except talk about it? At best he's an AI promoter, and hype-men are often cranks or scammers (see also: VR, web3, cryptocurrencies, MLM)

        • By marvin 2022-03-3122:313 reply

          Yudkowsky is a philosopher, in the sense of someone who thinks a lot about things that haven't been achieved yet. Lots of otherwise smart people (wrongly!) discount the value of philosophy, but it's close by every time there's a paradigm shift in humanity's knowledge. Philosophers can be scientists and vice versa.

          If anything, I'm surprised that this philosophy isn't mentioned more in a thread where the author gleefully talks about ML being used to create better AI, layer by layer until the thing is even more opaque than what we're currently working with.

          This is terrifying, as we currently have only very loose ideas about how to reliably ensure that a powerful reinforcement learning system doesn't accidentally optimize for something we don't want. The current paradigm is "turn it off", which works well for now but seems like a fragile shield long-term.

          • By the_af 2022-03-3122:481 reply

            > Yudkowsky is a philosopher

            At least inasmuch as anyone who thinks about stuff can be considered a philosopher. But he strikes me much more as a self-appointed expert on matters where he shows no achievements.

            He writes fanfiction about AI rather than actually doing stuff with AI.

            • By psyc 2022-03-3122:571 reply

              Most people who think about stuff don’t leave a mountain of highly organized and entertaining essays for posterity.

              • By the_af 2022-04-011:571 reply

                That just means he's a prolific writer, which I never questioned.

                I'm arguing that what he writes is fanfiction (which he takes way too seriously), and that he's not an expert in AI and therefore we shouldn't take his predictions too seriously.

                • By franknstein 2022-04-019:112 reply

                  There is a long history of science-fiction writers painting correct visions of what was, at the time, a seemingly impossible future. Sometimes you don't have to be an expert to notice the trends in certain domains. I'd go even further: it could be easier to notice the big picture without being overly bothered by nitty-gritty technical details. Yes, to further the field they are necessary, but to comment on the direction the field is heading in, and on its implications for society, they are not. I don't necessarily agree with the author; I'm just making a general comment.

          • By ASalazarMX 2022-03-3123:111 reply

            In the few contacts I've had with philosophers, I've gotten the (maybe wrong and oversimplified, but nonetheless evident) impression that they find philosophy fun and mentally stimulating, and that they enjoy academically talking and writing about edge cases no matter their feasibility.

            I think it's a net good when philosophers mentally explore scenarios most people don't consider, regardless of their plausibility, but they will not be taken more seriously than Peter Watts looking for inspiration for his next scifi book unless they offer some kind of evidence to sustain their conclusions.

            The handful of philosophers I've known are very smart, interesting people, but except for one who teaches philosophy, they don't live off philosophy. Most have a side job, and one lives with his parents. It's as if philosophy was more of a mentally rewarding hobby than a job.

            • By tuatoru 2022-04-011:102 reply

              Mentally rewarding hobbies can be fruitful, though. Consider the case of Oliver Heaviside, the person who came up with Maxwell's equations of electromagnetism in the form that we use today, as well as several other useful things.

              I have no doubt that he was thought a crank by many, nor that many other actual cranks were claiming discoveries based on Maxwell's work. One can only tell the difference in the rear-view mirror.

              1. https://en.wikipedia.org/wiki/Oliver_Heaviside

              • By marvin 2022-04-018:331 reply

                Why do people have to use the derogatory word «crank»? It’s perfectly possible to just be smart, sincere and wrong.

                Talking about paradigm shifts again, not a single one of them matched the consensus at the time.

                Isaac Newton spent a lot of time thinking about alchemy and religion, for heaven’s sake. During his lifetime, it wasn’t obvious even to the smartest thinkers whether science, the Bible or quasi-religious rituals was the best tool for understanding the world.

                Conventional thinkers expect that the future will look like what they know, and it leads to frequent, overconfident dismissal of everything that is unconventional.

                By all means, disagree and explore in your own direction. But don’t go around degrading those you disagree with. It’s just so average.

                • By the_af 2022-04-0115:061 reply

                  > Why do people have to use the derogatory word «crank»?

                  The word "crank" has the connotation of someone self-deluded, aggressively promoting their beliefs and reacting badly to critique, and who decides for some reason to ignore the normal channels of peer-reviews, academia, and scientific research.

                  You can be wrong, and then you can be wrong and also lack formal education, forgo presenting your findings in peer-reviewed journals in the relevant area of your research, and decide that instead of joining the mainstream, you can completely sidestep it by creating your own "research institute" (of which you are of course a "fellow", because why not). Your findings and papers can then be self-published on the internet, bypassing any quality controls. Bonus point if your theories paint a fringe doomsday picture, "Roko's Basilisk is out to get you", "the most important existential threat to humanity is malign AI", etc.

                  About the only item in the crackpot index that Yudkowsky doesn't tick is the "they are trying to suppress my truth!", to his credit.

                  Each of those can be of little importance, but all taken together paint a pretty definitive picture.

                  • By marvin 2022-04-0121:12

                    I really don't understand the hate. Honestly. This isn't merely disagreement.

              • By ASalazarMX 2022-04-0119:00

                > Mentally rewarding hobbies can be fruitful

                Unreservedly agree. Even if they're not as fruitful as Oliver Heaviside's effort, a mentally rewarding hobby is ultimately your business, and as long as you don't harm others, no one has grounds to judge you.

                If Oliver Heaviside had started giving conferences warning that Maxwell's equations demonstrate FTL communication is possible, which could trigger a paradox that collapses reality, he wouldn't have such a good view in the rear-view mirror unless he offered some kind of evidence. Maybe Oliver would have been right, maybe not, it would be impossible to judge without concrete evidence.

          • By goatlover 2022-03-3122:551 reply

            Nick Bostrom is a professional philosopher who has also thought a lot about AGI and simulation scenarios. But just because educated, smart people can think a lot about something doesn't mean it will necessarily happen. I'm sure there are quite a few people who have thought at length about warp drives and wormholes. That doesn't mean we'll ever be able to make use of them.

            • By gizajob 2022-03-3123:061 reply

              Nick Bostrom is also an egotistical grinder, who couldn't programme a microwave.

              • By TheRealNGenius 2022-04-0121:48

                I agree, that guy's also a whole lotta bad takes put all together. Like a slutty potato.

        • By psyc 2022-03-3122:53

          This is very uncharitable. He’s a prolific neopositivist-ish philosopher with a distinct voice. He’s a good decision theorist. He doesn’t publish much himself, but he directly mentors, advises, and collaborates with people who do.

        • By colinmhayes 2022-04-010:161 reply

          He wrote a thesis on decision theory.

          • By the_af 2022-04-011:52

            Did he write a thesis in the sense that he actually wrote one for a known university, one that is referenced and quoted in academia by decision theory scholars? Or do you mean he self-published a "thesis", in the sense that he wrote something nobody in the field pays much attention to and that cannot be published in peer-reviewed journals?

            If the latter, anybody can do that.

      • By heavyset_go 2022-04-010:061 reply

        What are his qualifications?

        • By fulafel 2022-04-016:501 reply

          Isn't this an appeal to authority? He's obviously pretty smart, a lot of people take his risk-modeling thinking seriously, and his arguments and output deserve to be evaluated on their own merits.

          (Also, on the tech practitioner side, there are obviously lots of major figures who don't have formal qualifications)

          • By heavyset_go 2022-04-0122:211 reply

            If we're going to talk about logical fallacies instead of answering the question, your post is an example of the argumentum ad populum fallacy.

            If someone is going to market themselves as an AI expert, I expect them to have experience and qualifications to back up their opinions.

    • By mountainriver 2022-03-3121:101 reply

      The trajectory of progress gives evidence that it is in fact possible and likely. Any venture into the unknown could be unsuccessful but if you see progress you can start to make estimates.

      And yes, most of the "robots will kill us" talk comes from people who aren't building the algorithms. This could be bias, in that people don't want to think their work is harmful, but it's more likely that once you see how the sausage is made, you are less worried about it.

      • By hn_throwaway_99 2022-03-3121:281 reply

        > The trajectory of progress gives evidence that it is in fact possible and likely

        100% disagree. In fact, I'd argue that the opposite is often true, where you see initially a fast rate of progress that results in diminishing returns over time. It's like estimating that a kid who was 3 feet tall at age 5 and 4 feet tall at age 10 will be 10 feet tall at age 40.

        I have very strong skepticism of any sort of hand-wavy "Look, we've made some progress, so it's highly likely we'll eventually cross some threshold that results in infinite progress."

        • By thirdwhrldPzz 2022-03-3121:54

          Pareto principle; we get 80 percent there and that last 20 becomes the new 100.

          We keep diving into one infinitely big little number pattern fractal after another, chasing poetry to alleviate the existential dread of biological death.

          The idea that we can fundamentally halt the churn of the universe, given the vastness of its mass and unseen churn, is pretty funny to me.

          Information may be forever but without the right sorting method you can’t reconstruct it once scattered. Ah, our delusions of permanence.

    • By Tepix 2022-04-017:251 reply

      Ray Kurzweil works on AI at Google, and he wrote The Book about the coming singularity (The Singularity Is Near).

      • By the_af 2022-04-0114:56

        I consider Kurzweil unreliable on this subject, as well as on the similarly kooky topic of immortality. He belongs to the "internet celebrity"/"guru" cadre rather than to the scientific community doing serious research on AI. In fact, his actual subject matter expertise isn't in AI. He is a "futurist", not a real researcher.

        I was definitely thinking of Kurzweil, not only Yudkowsky, when mentioning internet cranks.

        Working for Google is not enough. It's like when someone who's an accomplished physicist decides to give their opinions on life, biology, evolution, etc: not their area of expertise, so we don't need to hold their opinions in particularly high regard.

  • By Animats 2022-04-015:544 reply

    We need another major breakthrough first. As I've pointed out previously, so far nobody can even get squirrel-level AI to work. Even OpenWorm doesn't work. The big problem is "common sense", defined as getting through the next 30 seconds of life without a major screwup. There are animals with brains the size of a peanut that can do that.

    The hard problems are down near the bottom. It's not about consciousness, souls, etc. It's about running along the branch without falling off, grabbing nuts along the way.

    GPT-3 is fun, but it's more a demonstration of the banality of discourse than a breakthrough in understanding.

    • By eru 2022-04-018:231 reply

      > GPT-3 is fun, but it's more a demonstration of the banality of discourse than a breakthrough in understanding.

      That's the fate of all AI efforts: whenever we understand something well enough, it ceases to be seen as AI.

      As a historic example, the A* algorithm hails from a time when searching through a graph was still seen as AI.
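
      For anyone who hasn't met it: A* is just best-first search, ordered by cost-so-far plus a heuristic estimate of the cost remaining. A minimal sketch on a toy grid (illustrative only):

          import heapq

          def a_star(start, goal, walls, width, height):
              def h(p):  # Manhattan distance: admissible, never overestimates
                  return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
              frontier = [(h(start), 0, start, [start])]   # (g + h, g, node, path)
              seen = set()
              while frontier:
                  _, g, node, path = heapq.heappop(frontier)
                  if node == goal:
                      return path
                  if node in seen:
                      continue
                  seen.add(node)
                  x, y = node
                  for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                      if 0 <= nxt[0] < width and 0 <= nxt[1] < height and nxt not in walls:
                          heapq.heappush(frontier, (g + 1 + h(nxt), g + 1, nxt, path + [nxt]))
              return None  # no route exists

          print(a_star((0, 0), (3, 3), walls={(1, 1), (2, 1), (2, 2)}, width=4, height=4))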

      • By taylorius 2022-04-018:543 reply

        Which indicates to me that we still haven't identified the "secret sauce" of intelligence.

        • By Beltiras 2022-04-019:52

          I studied AI at the Bachelor's level and have from time to time read up on the discoveries since. I think the problem is still the same as a decade ago, despite all of the sparkling discoveries made in the meantime: we can't define the problem. We can make a really broad and concise description of what it is supposed to do, but that's not the same as defining the problem. Maybe that's not as relevant as I felt it would be (I was of the opinion back then, and still am, that AGI is not arriving in our or our children's lifetimes). Perhaps we will stumble upon it. That is at least how we arrived at our faculties: nature tried a billion different combinations, and we are the current incarnation of matter trying to figure itself out.

        • By mordae 2022-04-019:522 reply

          There is none. We are all just a bunch of programmable monkeys operating outside of their original regulation loop.

          AI might get to the same level (as we can already see with GPT-3, as it slowly accrues wisdom), but then it will need to get a digital notebook, a calculator and a drawing board.

          The advantage it will have over us is that it won't have to sleep, eat, will reproduce at the factory production rate and most importantly, it won't have emotions that would hurt when people are shit.

          It won't be a coherent superintelligence for quite some time. And if it becomes one, it will be slow: about the same latency as humans have at the planetary level. Maybe even slower than our ~100ms.

          Till then, there will be squabbling. Prepare for a literal digital ecosystem.

          • By sbierwagen 2022-04-0120:49

            >There is none. We are all just bunch of programmable monkeys outside of their original regulation loop.

            https://twitter.com/dmimno/status/949302857651671040

            >Optimist: AI has achieved human-level performance!

            >Realist: “AI” is a collection of brittle hacks that, under very specific circumstances, mimic the surface appearance of intelligence.

            >Pessimist: AI has achieved human-level performance.

          • By eru 2022-04-0111:50

            > The advantage it will have over us is that it won't have to sleep, eat, will reproduce at the factory production rate and most importantly, it won't have emotions that would hurt when people are shit.

            I'm actually not sure about the last one.

            Also, what makes you think AI will be slow?

    • By tegeek 2022-04-018:102 reply

      Right now, we have robots and space probes working all over the Solar System with much more intelligence and reliability than any biological rodent.

      • By eru 2022-04-018:21

        I am not sure about that.

        You see, those probes deal with harsh environments, yes. But they don't have to deal with antagonists. No one is out to eat or infect them. Mars won't adapt its storms, Venus won't adapt its chemistry. They face obstacles that don't care about them, obstacles that don't adapt against the probes.

        Those radically different kinds of environments give you radically different designs of probes vs rodents. So I don't think we can easily compare the intelligence of probes vs rodents.

      • By simonh 2022-04-019:45

        Sure, but they have extremely limited autonomy. The vast majority of their behaviours are directly controlled, or custom programmed by us for the specific situation.

    • By tim333 2022-04-0116:31

      Most humans can't run along a branch grabbing nuts. I'm not sure that's a fair test. Here's a robot running along the ground - they're a lot better than they were a decade ago https://www.youtube.com/watch?v=vjSohj-Iclc

    • By SheinhardtWigCo 2022-04-016:332 reply

      "Getting through life" is not the correct benchmark. An autonomous system that merely wipes out all of humanity is by definition a superior intelligence, and I would argue that no major breakthrough is needed to create such a thing; just resources. It doesn't matter how long that thing can self-sustain after annihilating us. A win is a win.

      • By lovich 2022-04-016:40

        Are stars and other stellar phenomena a superior intelligence to humanity? They are autonomous systems that could easily wipe us all out

      • By rawoke083600 2022-04-017:331 reply

        > An autonomous system that merely wipes out all of humanity is by definition a superior intelligence

        I think you got stuck on semantics and are missing the forest for the trees, with all due respect.

        >Getting through the next 30 seconds

        It might not be "the final" or "best" benchmark, but I'd argue it's a damn good problem to solve on the way to discovering true AI and AGI.

        • By Animats 2022-04-026:26

          All the mammals have roughly similar brain architecture. The same components seem to be present, in different quantities. If we can get into the low-end mammal range of AI, we're most of the way there. So if we can get to squirrel level AI, we're getting close. From then on, it may just be scaling.

HackerNews