AI Will Never Be Conscious

2026-02-24 11:26 www.wired.com

In his new book, “A World Appears,” Michael Pollan argues that artificial intelligence can do many things—it just can’t be a person.

The Blake Lemoine incident is remembered today as a high‑water mark of AI hype. It thrust the whole idea of conscious AI into public awareness for a news cycle or two, but it also launched a conversation, among both computer scientists and consciousness researchers, that has only intensified in the years since. While the tech community continues to publicly belittle the whole idea (and poor Lemoine), in private it has begun to take the possibility much more seriously. A conscious AI might lack a clear commercial rationale (how do you monetize the thing?) and create sticky moral dilemmas (how should we treat a machine capable of suffering?). Yet some AI engineers have come to think that the holy grail of artificial general intelligence—a machine that is not only supersmart but also endowed with a human level of understanding, creativity, and common sense—might require something like consciousness to attain. In the tech community, what had been an informal taboo surrounding conscious AI—as a prospect that the public would find creepy—suddenly began to crumble.

The turning point came in the summer of 2023, when a group of 19 leading computer scientists and philosophers posted an 88‑page report titled “Consciousness in Artificial Intelligence,” informally known as the Butlin report. Within days, it seemed, everyone in the AI and consciousness science community had read it. The draft report’s abstract offered this arresting sentence: “Our analysis suggests that no current AI systems are conscious, but also suggests that there are no obvious barriers to building conscious AI systems.”

The authors acknowledged that part of the inspiration behind convening the group and writing the report was “the case of Blake Lemoine.” “If AIs can give the impression of consciousness,” a coauthor told Science magazine, “that makes it an urgent priority for scientists and philosophers to weigh in.”

But what caught everyone’s attention was that single statement in the abstract of the preprint: “no obvious barriers to building conscious AI systems.” When I read those words for the first time, I felt like some important threshold had been crossed, and it was not just a technological one. No, this had to do with our very identity as a species.

What would it mean for humanity to discover one day in the not‑so‑distant future that a fully conscious machine had come into the world? I’m guessing it would be a Copernican moment, abruptly dislodging our sense of centrality and specialness. We humans have spent a few thousand years defining ourselves in opposition to the “lesser” animals. This has entailed denying animals such supposedly uniquely human traits as feelings (one of Descartes’s most flagrant errors), language, reason, and consciousness. In the last few years, most of these distinctions have disintegrated as scientists have demonstrated that plenty of species are intelligent and conscious, have feelings, and use language and tools, in the process challenging centuries of human exceptionalism. This shift, still underway, has raised thorny questions about our identity, as well as about our moral obligations to other species.

With AI, the threat to our exalted self‑conception comes from another quarter entirely. Now we humans will have to define ourselves in relation to AIs rather than other animals. As computer algorithms surpass us in sheer brainpower—handily beating us at games like chess and Go and various forms of “higher” thought like mathematics—we can at least take solace in the fact that we (and many other animal species) still have to ourselves the blessings and burdens of consciousness, the ability to feel and have subjective experiences. In this sense, AI may serve as a common adversary, drawing humans and other animals closer together: us against it, the living versus the machines. This new solidarity would make for a heartwarming story and might be good news for the animals invited to join Team Conscious. But what happens if AI begins to challenge the human—or animal, I should say—monopoly on consciousness? Who will we be then?



Comments

  • By WarmWash 2026-02-24 15:10 (2 replies)

    This article is dripping with the same kind of cringey techno-engineering naivete you find in Hollywood movies. The author is totally lost in the sauce of complex surface-level analyses mixed with romantic ideals of human exceptionalism, and completely blind to the deeper abstractions and common undergirding systems that an expertise in computation would reveal (and which have no care for emotional concepts).

    The takeaway seems to be "Only meat brains can be conscious because I can feel it and computers aren't made of meat". Which is basically the plot line of every human/robot movie for the last 80 years.

    • By adamzwasserman 2026-02-24 17:23 (2 replies)

      The interesting version of the argument isn't about substrate: it's about motivation.

      Present the trolley problem to GPT-4 and it gives you a philosophy survey answer.

      Present it to a human and their palms sweat. The gap isn't computation; it's that humans are value-making machines shaped by millions of years of selection pressure.

      Pollan lands on the wrong argument (biology vs. silicon) when the real one is: where do the values come from, and can they emerge without a reproductive lineage that stakes survival on getting them right?

    • By tim333 2026-02-25 10:10

      I'd hazard a counter-prediction that we'll have AI seeming pretty conscious within a decade. People will say it's not real, in the same way they said self-driving cars would never work even as the cars were driving around, but it will become hard to argue against when you can hang out with real examples.

  • By orbital-decay 2026-02-24 14:39 (3 replies)

    >The idea that the same consciousness algorithm can be run on a variety of different substrates makes no sense when the substrate in question—a brain—is continually being physically reconfigured by whatever information (or “algorithm of consciousness”) is run on it. Brains are simply not interchangeable, neither with computers nor with other brains.

    This is kind of self-contradictory. Then humans aren't conscious? Or each has their own consciousness? Then why not the machine? Not sure what point is being made here. Yes, the states of a human brain and a transformer are absolutely incompatible (humans at least share a common architecture), which is why any attempt to map a model's "emotions" onto humans', and the entire model welfare concept, is pretty dubious. That doesn't prove there's no (or can never be) consciousness in there, though.

    That's the most coherent argument in the entire article. It criticizes the Butlin report in particular and extrapolates that to "never," while ignoring more recent takes (e.g. interpretability studies showing a vague similarity between the two at a level deeper than just language) and any possible future evidence.

    In a sense the title is right: nobody has ever formally defined consciousness, so you and I and anyone else are free to make almost any argument and spin any narrative according to our beliefs, and it will be true! Ill-defined terms and baseless solipsism are the main problems with all these discussions. Good thing that in practice they matter about as much as the question of whether a submarine swims.

  • By mono442 2026-02-24 13:34 (3 replies)

    We don't really know what consciousness really is, and I think it is premature to dismiss the possibility of replicating the behavior with a mathematical model.

HackerNews