There are a lot of responses to the philosophical zombie argument, some of which cut it off at the legs (they don't know to aim for the head! Sorry, bad pun). For instance, some, descended from the work of Wittgenstein, argue that it relies on an inside-mental vs. outside-body model, and that by offering a convincing alternative, the entire premise of the skeptical position the zombie argument embodies is dissolved as irrelevant. (I'll add that the AGI argument often relies on a similar inside/outside model, but that would take a lot longer to write out.) My point being, the zombie argument isn't the checkmate most people think it is.
The wiki page has a lot of the responses, some of which are more convincing than others. https://en.m.wikipedia.org/wiki/Philosophical_zombie#Respons...
Right, if you set up intelligence and the brain to be computational in nature, of course they will appear seamlessly computational.
But there are obvious human elements that don't fit into that model, yet which fundamentally make up how we understand human intelligence: things like imagination, the ability to think new thoughts; or the fact that we are agents sensitive to reasons, that we can decide in a way computers cannot, rather than merely ending indecision. We can also say that a human understands something, which doesn't make any sense for a computer beyond anthropomorphism.
> If intelligence is actually just information processing, then a computer intelligence really is doing exactly what our brains are doing. It’s misdirection to characterise it as modelling it.
Sure, but if it's not, then it's not. The point still stands: it's an assumption.
> Basically what I'm trying to say is how can anyone who believes the brain is purely physical (not spiritual), believe that we just simply cannot achieve human-level intelligence by machine (no matter how complex the machine gets).
Obviously the brain is physical. But is consciousness? Is consciousness a thing in the physical sense, or an "experience", or something like a collection of powers and abilities? The two poles of the argument aren't physical machine versus religious spiritualism. There are other options, alternative positions that don't rely on Cartesian demons at the wheel, or souls, or even an inside-mental vs. outside-body distinction.
One thing my initial comment was pointing out is that the argument in favour of AGI, the one you're presenting, relies on an assumption: that computational intelligence, what you might describe as the intelligence of machines, is the same as the intelligence of humans. But when you get down to it, that is just an assumption, resting on a particular model of human intelligence. That assumption has certain logical consequences, and I've just pointed some out as probable roadblocks to getting to AGI from there. Many of the alternative positions, a lot of them from philosophy of mind, have raised exactly those critical arguments.
> I’m afraid all those arguments boil down to “we don’t know how to do it yet, therefore it can’t be done”, which is absurd.
I'm not saying that. What I'm pointing out is that most arguments in favour of AGI rely on a crucial assumption: that computational intelligence is not just a model of a kind of intelligence, an abstraction in other words, but intelligence itself, synonymous with human intelligence. That's a bold assumption, one that people who work in CS and with computers love, for obvious reasons, but there is no agreement on it at all. At base, it is an assumption. Leaping from there to AGI is, in that respect, just hypothesizing, writing science fiction. Presenting logical reasons against that hypothesis is completely reasonable.
> If a robot can do that, too (what you call "convincing as AGI") then we must assume it is also conscious, because if we don't, we'd have a logical inconsistency on our hands. If I am allowed to safely assume you are sentient, then I must also be allowed to safely assume a robot is sentient if it can convince me, because in both cases I have no method of falsifying the claim to sentience.
Let's accept that for the sake of your argument, even though I disagree. Is that AGI? AGI seems to mean either merely convincing, even though the people who made it know otherwise, or essentially alive and sentient in a way that is fundamentally computational, that is, utterly alien to us, even to the people who made it. There is no reason to think that such a computer intelligence, should it even be possible, would be intelligible to us as sentient in a human or even animal sense.