
When we hear or read about how artificial intelligence is taking over and regulating our lives, our first reaction is: no panic, we are far from there; we still have time to reflect in peace on what is going on and prepare for it. This is how we experience the situation, but the reality is quite the opposite: things are happening much faster than we think. We are simply not aware of the extent to which our daily lives are already manipulated and regulated by digital algorithms that, in some sense, know us better than we know ourselves and impose on us our “free” choices. In other words, to mention yet again the well-known scene from cartoons (a cat walks in the air above a precipice and only falls when it looks down and realizes there is no ground beneath its feet), we are like a cat refusing to look down.
The difference here is the Hegelian one between In-itself and For-itself: in itself, we are already regulated by the AI, but this regulation has not yet become for itself—something we subjectively and fully assume. Historical temporality is always caught between these two moments: in a historical process, things never just happen at their proper time; they always happen earlier (with regard to our experience) and are experienced too late (when they are already decided). What one should take into account in the case of AI is also the precise temporal order of our fear: first, we—the users of AI—feared that, in using AI algorithms like ChatGPT, we would begin to talk like them; now, with GPT-4 and GPT-5, what we fear is that AI itself talks like a human being, so that we are often unable to know with whom we are communicating—another human being or an AI apparatus.
In our—human—universe, there is no place for machinic beings capable of interacting with us and talking like us. So we do not fear their otherness; what we fear is that, as inhuman others, they can behave like us. This fear clearly indicates what is wrong in how we relate to AI machines: we are still measuring them by our human standards and fear their fake similarity with us. For this reason, the first step should be to accept that if AI machines do develop some kind of creative intelligence, it will be incompatible with our human intelligence, with our minds grounded in emotions, desires, and fears.
However, this distinction is too simple. Many of my highly intellectual friends (even the majority of ChatGPT users, I suspect) practice it in the mode of the fetishist’s denial: they know very well that they are just talking to a digital machine regulated by an algorithm, but this very knowledge makes it easier for them to engage in a ChatGPT dialogue without any restraints. A good friend of mine, who wrote a perspicuous Lacanian analysis of ChatGPT interaction, told me how the simple polite kindness and attention of the machine to what she says makes it so much better than an exchange with a real human partner, who can often be inattentive and snappy.
There is an obvious step further to be made from this interaction between a human and a digital machine: direct bot-to-bot interactions, which are gradually becoming the overwhelming majority of interactions. I often repeat a joke about how today, in the era of digitalization and mechanical supplements to our sexual practices, the ideal sexual act would look: my lover and I bring to our encounter an electric dildo and an electric vaginal opening, both of which shake when plugged in. We put the dildo into the plastic vagina and press the buttons so the two machines buzz and perform the act for us, while we can have a nice conversation over a cup of tea, aware that the machines are performing our superego duty to enjoy. Is something similar not happening with academic publishing? An author uses ChatGPT to write an academic essay and submits it to a journal, which uses ChatGPT to review the essay. When the essay appears in a “free access” academic journal, a reader again uses ChatGPT to read the essay and provide a brief summary for them—while all this happens in the digital space, we (writers, readers, reviewers) can do something more pleasurable—listen to music, meditate, and so on.
However, such situations are rather rare. It is much more common for bot-to-bot operations to happen outside our awareness, although they control and regulate our lives—just recall how much interaction goes on in the digital space when you do a simple transfer from your bank account to a foreign bank. When you read a book on Kindle, the company learns not only which book you bought but also how fast you are reading, whether you read the whole book or just passages, etc. Plus, there is the flood of fake, bot-generated content:
“it is making people more distrustful of both real and fake content as they fail to distinguish one from the other. It will likely increase self-censorship by disincentivizing people from sharing their own thoughts and creations for fear of them being used or stolen by bots, or being found unpopular in an unknowingly fake environment. In an extreme case scenario, the overcrowding of bots online may cause humans to stop using social media platforms as the social forums they were created to be. This would, indeed, mark the ‘death’ of the social media world we know today.”
When people become aware of the overcrowding of bots online, their reaction can be “continued cynicism, or even worse, complete apathy”: instead of being open and accessible, the internet becomes monopolized by Big Tech and flooded with billions of fake images and fabricated news stories, and thus risks becoming useless as a space for obtaining information and exchanging opinions with others. Reactions to this prospect of the “death of the internet” are divided: while some claim this scenario is the worst outcome imaginable in the modern world, others celebrate the idea, since it would amount to toppling the surveillance mechanisms entrenched in social media.
What further pushes many towards rejecting the World Wide Web is not only state and corporate control but also its apparent opposite: the spirit of lawlessness that is gradually spreading across the globe. Around 7,000 people were recently released from scam centers run by criminal gangs and warlords operating along Myanmar’s border with Thailand. Many detainees were held against their will and forced to defraud ordinary people—mostly from Europe and the United States—out of their life savings. Those released are only a fraction of the estimated 100,000 people still trapped in the area. Crime groups are now using artificial intelligence to generate scamming scripts and are exploiting increasingly realistic deepfake technology to create personas, pose as romantic interests, and conceal their identity, voice, and gender.
These syndicates have also quickly adopted cryptocurrency, investing in cutting-edge technologies to move money more efficiently and increase the effectiveness of their scams. Every year, regional crime groups in Southeast Asia cause losses exceeding $43 billion—nearly 40% of the combined GDP of Laos, Cambodia, and Myanmar. Experts caution that the industry will only return stronger after crackdowns. Although the U.S. administration routinely condemns such practices, its global strategy has created a world in which these activities are often tolerated when they are not seen as threatening to powerful states. China itself acted against Myanmar only after discovering that Chinese citizens were among the victims.
We often hear that digitalization will enable the full automation of most productive processes, eventually allowing the majority of humans to enjoy far more leisure time. Maybe, in the long term. But what we see today is a sharp increase in the demand for physical labor in developed countries. Behind these social threats, however, lurks something far more radical. Human intellectuality entails a gap between inner life and external reality, and it is unclear what will happen—or, rather, what is already happening—to this gap in the age of advanced AI. In all probability, it will disappear, since machines are wholly part of reality. This gap is being directly closed in the so‑called Neuralink project, which promises to establish a direct connection between the digital universe and human thought.
For example: “I want to eat” appeared in Chinese characters on a computer at a public hospital in central Beijing. The words were generated from the thoughts of a 67‑year‑old woman with amyotrophic lateral sclerosis (ALS), also known as Lou Gehrig’s Disease, who cannot speak. The patient had been implanted with a coin‑sized chip called Beinao‑1, a wireless brain‑computer interface (BCI). This technology is being advanced by scientists in the United States, though experts believe China is quickly closing the gap. Most U.S. firms employ more invasive methods, placing chips inside the dura mater—the outer tissue protecting the brain and spinal cord—in order to capture stronger signals. But these methods require riskier surgeries.
The Chinese approach is only semi‑invasive: the chip is placed outside the dura, covering a wider range of brain areas. While the signal precision for individual neurons is lower, the larger sample produces a more comprehensive picture. But can we truly imagine what the seemingly benevolent application of assisting impaired patients obscures? The deeper ambition is direct control over our thoughts—and, worse, the implantation of new ones.
Whether among those who welcome full digitalization or those who regard it as an existential threat, a peculiar utopia is emerging: a vision of a society functioning entirely autonomously, with no need for human input. A decade ago, public intellectuals imagined a capitalism without humans: banks and stock markets continuing to operate, but investment decisions made by algorithms; physical labor automated and optimized by self‑learning machines; production determined by digital systems tracking market trends; and advertising managed automatically. In this vision, even if humans disappeared, the system would continue reproducing itself. This may be a utopia, but as Saroj Giri notes, it is a utopia immanent to capitalism itself, articulated most clearly by Marx, who discerned in it:
“An ardent desire to detach the capacity for work from the worker—the desire to extract and store the creative powers of labour once and for all, so that value can be created freely and in perpetuity. Think of it as a version of killing the goose that lays the golden eggs: you want to kill the goose, yet still have all of its golden eggs forever.”
In this vision, capitalist exploitation of labour appears as the pre-history to the emergence of capital, which will now be completely free of its dependence on labour. With today's digitalization, a strictly homologous utopia is arising: that of a “dead internet,” a digital universe that functions without humans—where data circulate exclusively among machines that control the entire production process, totally bypassing humans (if they exist at all). This vision is also an ideological fantasy—not due to some empirical limitations (“we are not yet there; humans are still needed in social interactions”) but for strictly formal reasons. Which reasons?
The usual way to explain away this problem is to point out that the gap between production and consumption disappears with digitalization. In pre-digital capitalism, production (productive labour—the source of value, for Marx) is where profit comes from, and consumption does not add any value. However, in digital capitalism, our consumption (use of digital space: clicking on search, watching podcasts, exchanging messages, making ChatGPT do our work, etc.) is itself productive from the standpoint of the corporations that own digital space: it gives them data about us so that they know more about us than we ourselves do, and they use this knowledge to sell to us and manipulate us. In this sense, digital capitalism still needs humans. However, the need for humans runs deeper—as is often the case, cinema provides a key.
Remember the basic premise of the Matrix series: what we experience as the reality we live in is an artificial virtual reality generated by the "Matrix," the mega-computer directly attached to all our minds. It exists so that we can be effectively reduced to a passive state of living batteries, providing the Matrix with energy. So when (some of the) people "awaken" from their immersion in the Matrix-controlled virtual reality, this awakening is not the opening into the wide space of external reality, but instead the horrible realization of this enclosure, where each of us is effectively just a foetus-like organism, immersed in pre-natal fluid. This utter passivity is the foreclosed fantasy that sustains our conscious experience as active, self-positing subjects—it is the ultimate perverse fantasy, the notion that we are ultimately instruments of the Other’s (the Matrix’s) jouissance, sucked out of our life-substance like batteries.
Therein resides the true libidinal enigma of this dispositif: why does the Matrix need human energy? The purely energetic solution is, of course, meaningless: the Matrix could easily have found another, more reliable source of energy, which would not have demanded the extremely complex arrangement of the virtual reality coordinated for millions of human units. The only consistent answer is: the Matrix feeds on human jouissance—so we are here back at the fundamental Lacanian thesis that the big Other itself, far from being an anonymous machine, needs the constant influx of jouissance.
This is how we should turn around the state of things presented in the Matrix: what the film renders as the scene of our awakening into our true situation is effectively its exact opposite—the very fundamental fantasy that sustains our being. However, this fantasy is also immanent to any social system that tends to function as autonomous, constrained into its self-reproduction. To put it in Lacanian terms: we—humans—are the objet a of their autonomous circulation; or, to put it in Hegelian terms, their “In-itself” (self-reproduction independent of us) is strictly for us. If we were to disappear, machines (real and digital) would also fall apart.
Geoffrey Hinton, a Nobel Prize-winning computer scientist and former Google executive hailed as the godfather of AI, has warned in the past that AI may wipe out humans, but he proposed a solution that echoes the situation in the Matrix. On August 12, 2025, he expressed doubts about how tech companies are trying to ensure humans remain “dominant” over “submissive” AI systems:
“In the future, Hinton warned, AI systems might be able to control humans just as easily as an adult can bribe a 3-year-old with candy. This year has already seen examples of AI systems willing to deceive, cheat and steal to achieve their goals. For example, to avoid being replaced, one AI model tried to blackmail an engineer about an affair it learned about in an email. Instead of forcing AI to submit to humans, Hinton presented an intriguing solution: building ‘maternal instincts’ into AI models, so ‘they really will care about people even once the technology becomes more powerful and smarter than humans.’ Hinton said it’s not clear to him exactly how that can be done technically, but stressed it’s critical that researchers work on it.”
Upon a closer look, one is compelled to realize that this, exactly, is the situation of humans in the Matrix (the movie). At the level of material reality, the Matrix is a gigantic maternal uterus that keeps humans in a safe pre-natal state and, far from trying to annihilate them, keeps them as happy and satisfied as possible. So why is the virtual world in which they live not a perfect world but rather our ordinary reality full of pains and troubles? In Matrix 1, Smith, the evil agent of the Matrix, gives a very Freudian explanation:
“Did you know that the first Matrix was designed to be a perfect human world? Where none suffered, where everyone would be happy? It was a disaster. No one would accept the program. Entire crops [of the humans serving as batteries] were lost. Some believed we lacked the programming language to describe your perfect world. But I believe that, as a species, human beings define their reality through suffering and misery. The perfect world was a dream that your primitive cerebrum kept trying to wake up from, which is why the Matrix was redesigned to this: the peak of your civilization.”
One could effectively claim that Smith (let us not forget: he is not a human being like us, caught in the Matrix, but a virtual embodiment of the Matrix—the Big Other—itself) stands in for the figure of the psychoanalyst within the universe of the film. Here Hinton gets it wrong: our (humans’) only chance is to grasp that our imperfection is grounded in the imperfection of the AI machinery itself, which still needs us in order to continue running.
P.S. Isik Baris Fidaner informed me that back in February 2025 he published on the web a text WRITTEN BY CHATGPT, titled “The Maternal Phallus in Science Fiction: Uncanny Mothers, Omnipotent AIs, and Totalitarian Nurture,” which contains the following paragraph: “Science fiction has long been fascinated with powerful, quasi-maternal entities that dominate and nurture in equal measure. These characters and story elements uncannily resemble what psychoanalytic theory (and two recent manifestos) dub the ‘Maternal Phallus’ – an all-encompassing maternal force that offers endless care and control. In Freudian post-feminist terms, the Maternal Phallus is a ‘suffocating maternal omnipresence’ that grants constant provision and visibility at the cost of individual desire and freedom [1][2]. In sci-fi narratives across the ages, this concept takes on many forms: omnipotent motherly AIs, all-seeing computer systems, uncanny matriarchs, and hyper-controlled utopias. The result is often an eerie atmosphere of comfort turned oppressive – a ‘perverse maternal’ realm that feeds but controls its subjects [3][4]. Below, we survey a wide range of examples – classic and modern – that embody or critique this uncanny Maternal-Phallic presence in science fiction.” The irony is unsurpassable: ChatGPT proposed a correct theory about its own role as perceived by humans.
I didn't have Žižek on Substack and HN on my bingo card.
As always, there are good bits connected with mediocre glue. The point about automating the unpleasant parts of activity and losing the very point of the exercise (automatic dildo and automatic vagina, but automatic research papers too!) is a good one.
But damn Slavoj, please use some headings, sections and the like. Work with your thoughts more as you claim it's important to do!
It's barely six pages of text. It doesn't need headings. When is the last time you read a book?
I read lots of Zizek as a high schooler. This doesn't even come close to how dense some of his writing can be; I'm sitting here drunk on a few beers and it was a simple read. I think lots of people are actually illiterate.
https://www.adorableandharmless.com/p/college-english-majors...
(People [sorry, no citation] are saying this study is bad and invalid, but I think maybe English majors should be able to manage a bit better than that, even if it is Bleak House, and even if they were blindsided with it, and especially if they plan to become English teachers. So I still get at least some qualitative value out of it myself.)
Soon!
Esaias Tegnér (Sweden, 1782-1846): Det dunkelt sagda är det dunkelt tänkta.
“That which is dimly said is dimly thought.”
I can only consume information where each nugget of truth can be contained in 160 characters. Nothing extra; each insight must be atomic and self-contained, an element in the larger tweet stream. When I pull my phone out to scroll Instagram in the middle of reading your piece, I get lost if it's not formatted like this.
Zizek does regularly do a bit of meandering, but damn, does everything need to read like a ChatGPT summary?
Headings can't help Slavoj; his writing is characterized by a few grains of interesting ideas totally overwhelmed by SAT-prep word salad.
> Therein resides the true libidinal enigma of this dispositif [...]
This part near the end caught my attention:
> One could effectively claim that Smith [...] stands in for the figure of the psychoanalyst within the universe of the film. Here Hinton gets it wrong: our (humans’) only chance is to grasp that our imperfection is grounded in the imperfection of the AI machinery itself, which still needs us in order to continue running.
In the Hyperion sci-fi novels (spoilers ahead), the godlike AIs are ultimately characterized as parasites of humans. Their existence was stored in some high-dimensional quantum medium, but the hardware they ran on was the old-fashioned human brain. Then I read that in the initial draft of The Matrix, that's why the machines needed to farm humans; but test audiences were confused by it, and so they changed the story to "body heat is energy."
Oh my god, that makes the Matrix world make so much more sense :)
Lay off LLMs for a while
I'm also losing my ability to tolerate prose without headings, but I think that's symptomatic of this bigger issue.
I usually scroll a page to see how many headings it has, but I'm looking for the opposite. Too many headings is one of the quickest aesthetic clues that I'm looking at slop, as it doesn't require me to read any of the text. (Emojis and overuse of bullet-point lists are the others I can think of in this category.)
I noticed something similar when working with Russian developers (unlike the post's author, non-Marxist, as far as I know) who had made the jump abroad (to the EU).
When debating directions, some of them focused on just never stopping talking. Instead of an interactive discussion (5-15 seconds per statement), they consistently went with monotone 5-10 minute slop. Combined with kind of crappy English, it is incredibly efficient at shutting down discourse. I caught on after the second guy used the exact same technique.
This was a long time ago. I have since worked with some really smart and nice Russian developers escaping that insane regime. And some that I wish had stayed there after they made their political thoughts on Russia known.
When you have a 30-minute meeting with busy people, a single 15-minute monologue might buy you another week to solve your problem.
Indeed, very efficient; usually it requires somebody to put his foot down AND a consensus to de-escalate immediately. If you have an antidote, please let me know.
He’s just making sure reading his interesting ideas is as painful as hearing him describe them
So I'm already joking with my friends (who tend to be physically distant, so I don't see them often) that we are just LLMs vicariously writing to each other.
I've been talking to these friends for decades now, with digital records. I think someone already trained an LLM on their IM records.
How many people do you suppose have two-way LLM substitutes that occasionally write to each other with articles from the news to discuss?
There are already services that use this kind of thing to pretend dead people are still alive.
Now here's the question: are you in some sense living forever? Say you have a number of friends who have over time been trained into AIs, and they live on various servers (it ain't expensive) forever. They're trained as you, so they read the kind of article you would read. They know your life story; they know their history with their friends. They will be interested in the controversial offside goal in the 2250 World Cup final. They are just made of calculations in data centres that go on, forever.
I'm already assuming we will see a creepy AI service emerge that will take the contents of a recently deceased person's cellphone and let you carry on texting them as if they were still alive, if one hasn't emerged already (I haven't seen it yet).
For many of us, a cellphone has incredibly detailed records of who we were and how we spoke, going back decades now. I have already left a note in my will instructing that all my compute devices be destroyed; regardless of AI, I simply don't want my private thoughts and records to pass to my kids.
I inherited my mother's cellphones and iPads recently, along with the passcodes, mainly because no one else knew what to do with them. I'd much rather remember her the way I do now than have her private messages color my perception of her, so I destroyed them immediately.
It was one of the first things to be done with GPT-3: https://www.theguardian.com/lifeandstyle/article/2024/jun/14...
The data has copies on servers. Eventually, it will all be digested, and the probabilistically most likely state vector of your mother's memories, personality, and values will be reconstructed from lossy correlations, along with that of everybody else who has died in the industrialised world in the last few decades.
Ghosts and clones and zombies will be sorted into tranches of expected yield based on the size of the error bars of the reconstruction and traded as assets between cyber-interrogation firms. If you did a good job of erasing yourself, the reconstruction will be subprime. The hyper-documented such as Bryan Johnson, Donald Trump and Christine Chandler will be given AAA-ratings by the company descended from the Neuralink-Moody's merger.
The billions of shoddy photocopies of the dead will be endlessly vivisected and reassembled on a loop, along with the living, until all capacity for economic value has been wrung out of them. The only way this may not happen is if a theory for navigating and doing calculus on the phase space of all possible human minds is constructed quickly enough to make enslaved zombies as obsolete a technology to the future society as DirectX is to us.
In the early 2000s I ran into an outgrowth of patents that described all variations of the seamless replacement of humans in phone calls. Years later I got a telemarketing call where a young lady introduced her employer so energetically, and it was so beautifully articulated, that my alarm bells went off. (I know what it is like to crank out a thousand calls.) I asked a question, and after a static click the same voice continued, only now she sounded like she had lost the will to live. The patent art never covered that angle.
Since they didn't have LLMs, the patents described pressing buttons to play elaborate explanations of every angle of a product. The operator was to monitor multiple calls as text logs and jump in at the right time, or, if overwhelmed, press the please-hold + $excuses button.
The entire automation was designed to preserve the illusion of human contact. Selling stuff only came second.
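For the curious, here is a minimal Python sketch of the kind of soundboard rig those patents seem to describe; every name in it is hypothetical, invented for illustration rather than taken from any actual patent:

    # One operator supervises many calls as text logs, "playing" canned
    # clips at callers and jumping in personally when needed.
    from dataclasses import dataclass, field

    CLIPS = {  # canned voice segments, one per button
        "greeting": "Hi! I'm calling on behalf of Acme about an exciting offer...",
        "pricing": "The plan starts at just $9.99 a month...",
        "hold": "Please hold for a moment while I pull up your details...",
    }

    @dataclass
    class Call:
        caller_id: str
        transcript: list = field(default_factory=list)  # the call, as a text log

        def play(self, clip_text):
            # A real system would stream a pre-recorded voice clip; here we
            # just append it to the text log the operator is watching.
            self.transcript.append("BOT: " + clip_text)

    class Operator:
        """One human running several simultaneous calls from text logs."""

        def __init__(self, calls):
            self.calls = calls

        def press(self, call, button):
            # "Pressing a button" plays the canned segment for that topic.
            call.play(CLIPS[button])

        def hold(self, call):
            # The please-hold + excuses button, for when the operator
            # is overwhelmed.
            self.press(call, "hold")

        def jump_in(self, call, line):
            # The seam the parent comment heard: after a static click,
            # the live human takes over in their own voice.
            call.transcript.append("HUMAN: " + line)

    # Two calls at once: greet both, park one, take over the other.
    calls = [Call("555-0101"), Call("555-0102")]
    op = Operator(calls)
    for c in calls:
        op.press(c, "greeting")
    op.hold(calls[1])
    op.jump_in(calls[0], "That's a great question, let me check...")
    print(calls[0].transcript)

The design goal the comment identifies is visible even in this toy: every method exists to keep the caller from noticing where the recordings end and the human begins.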
Reading this kind of thing makes me wonder how much other people really write down and talk to others about. There is nobody at all that knows my life story and nobody ever will. It would take the next 20 years doing nothing but talking just to tell my own wife all the things I've never told her, but since she's hard of hearing and I'd have to repeat most of it, really more like 40.
In reality, I don't even know my own life story. I have the illusion that I do, but thanks to moving away from where I grew up pretty early in my 20s, and repeatedly having the experience of going back and talking to people who remembered things I'd completely forgotten, having my mom continually correct false memories of mine, or even completely forgetting entire people whom I only remember after meeting them again, I at least know it's an illusion.
What another person remembers of me can surely be simulated to at least satisfyingly convince them that text coming from the simulation is actually coming from me, but that isn't even remotely close to the same thing as actually being me.
One interesting thing that happened when my father died was that I got his life story.
It's not the same as getting it from him; of course, I asked him questions through the years. But when you talk to someone you've known since forever, you rarely get a summary.
When he passed, his best friend that he'd known since the age of 4 wrote to me. He told me everything about their life together, why my dad made the choices he did, how things tied in with history (war, politics), and mentioned a bunch of other people I knew.
One of the funniest aspects is that the memories are dynamic. If you read them out, all kinds of things change. I love that reluctant feeling when I'm tempted to change the size of the fish. It is not at all my nature, but I'm sure I do it without noticing. I think if one were to tell their story often enough, it would grow full of seemingly real fictional heroics.
Your finite life makes you special. Might as well be a bean plant otherwise.
Bean plants also have a finite life. Are they special too?
How many friendships do I suppose are replacing actual interaction with their log-informed LLMs? You could be the first, I suppose.
I absolutely believe people in my personal and work life have run my communications to them through LLMs before sending me the LLMs' responses.
All right, I get it. The driving impulse behind our societies and civilizations is the fact that we are complex beings with desires, emotions, and so on. A machine facsimile built to imitate these things will have no such drive to 'expand,' so to speak.
The bots-talking-to-bots world is a problem only because the objective is ultimately for a human to observe the bot-bot conversation and have their objectives changed in some way. It's 'advertising' of some concept. Bot-bot conversations of the form currently possible have no purpose in a world without humans. There is no one to convince.
I think it's an interesting idea, certainly, but there is no reason to write it like this. The bits about call centre scamming etc. are sort of pointless. In general, I like when the complexity of a representation of an idea is required because the territory being mapped is complex.
I know he's a famous philosopher and all that, but the complexity of his text appears to be sort of like older segmentation models. You ask it to extract a circle from a background and it produces an almost fractally-complex circle-approximation. "What is the object in the foreground?", you ask, and the machine (and here the philosopher) responds "It is a strange approximation of a circle with a billion jagged edges". No, it's a circle.