What is happening to writing? Cognitive debt, Claude Code, the space around AI

2026-02-18 14:59 · resobscura.substack.com

"Cognitive debt," Claude Code, and the negative space around AI

Last week, an essay laying out a case for how AI will change knowledge work went enormously viral on X, where it currently sits at 84 million views.

I agree with much of it, though parts were overstated and other parts were clearly submarine marketing. But what was most striking to me wasn’t the content, but the obvious fact that large swathes of it were written by AI. I suspect that the “AI slop” quality was actually part of why it did so well. In 2026, we must face the fact that slop is well-liked. Perfectly formatted writing, with statistics ready at hand, copious length, a chipper tone, and lots of antithetical phrasing — the infamous “it’s not that; it’s this” formula — is showing up everywhere for a reason. People might say they hate it, and I’m sure many do. But the truth is that many other readers clearly enjoy this style of machine-generated prose.

That revealed preference worries me more than anything else.

I learned to read and write quite late (worryingly late, by the standards of millennial parents like myself). At age 8, I was barely literate, way behind classmates. Something clicked at age 9. I read The Lord of the Rings twice before graduating to The Silmarillion. Fifth grade was the year of Michael Crichton. By age 12 it was Virginia Woolf and Tolstoy — both of whom I barely understood. But I loved the unfamiliar sound of the words and the unfamiliar sensations those words evoked. I loved feeling like I could transmute some of those feelings and images into my own words on a page (I even started a novel, which had something to do with an ancient Egyptian goldsmith searching for his father).

Writing became something I was self-evidently good at. It was the one thing in my life that truly opened doors for me.

My first real job after college in 2007 was as an assistant for a crusty New York labor lawyer who wrote exclusively on legal pads and called me “college boy.” He had an incredible memory and an agile mind, but he wasn’t much of a prose stylist. That task was offloaded to me. Every morning, the lawyer would toss yellow legal pads on my desk. It was my job to proofread and “punch up” these rough drafts. I would toil happily for a few hours in Microsoft Word, usually earning his gruff approval by around lunchtime. Then I spent the rest of the day reading about mellified man and cocaine mummies and the like on Wikipedia. I did well enough at this job that it felt like a viable, permanent career.

Needless to say, that career is gone now. Gemini is at least as good at transcribing a lawyer’s longhand as I am, and Claude will undoubtedly output more perfectly formatted legal footnotes than I could ever hope to achieve.

So what is left for human writing?

As a historian and teacher — distinct from my identity as a writer — I don’t experience much of the existential dread that essays like “Something Big is Happening” seem intent on getting me to feel. As I wrote here, historians rely on non-digitized archives and tacit, often physically-embodied knowledge. That is a very good thing when it comes to resisting replacement by the likes of Anthropic’s newly released Sonnet 4.6 model, which is remarkably good at “computer work” tasks.

When I think of AI-proof jobs, I think of people like electricians, plumbers, or the surf instructors of Santa Cruz. But I also think about history professors and anyone else whose output includes some combination of in-person engagement and travel-based or otherwise embodied work in a regulated industry. No less than surf instructors, historians are performing physical services in the real world, although we don’t tend to think of it in those terms. We are going into parish church basements to read through baptismal records, finding weird old non-digitized books in rare book shops, piecing together who called Margaret Mead on a certain day in 1954 by reading through her secretary’s notes. These are not the everyday tasks of a historian’s life, but they are the kinds of things we might do, say, once a week. Couple that with twice-weekly in-person classroom time, and I flatly disagree with anyone who thinks this combination will be replaced by a Sonnet 4.6-type model, no matter how good it gets at creating Excel spreadsheets, translating Latin, or explaining linear algebra.

Anyone who has led a class discussion — much less led students on a tour of Egypt or Okinawa, as my colleagues regularly do — knows that there is a huge gap between solo learning online and collective learning in meat space. And although it is possible to imagine a humanoid robot instructor with a brain powered by some future OpenAI model, it is very, very difficult for me to imagine such a thing being both popular with students and permissible within regulatory frameworks.

The claim that robot teachers will surely happen someday is one thing, and I can’t disagree with it. But culture and regulations change far more slowly than technologies. Those in the technology sector who are predicting time horizons of a few years for these changes are, I think, confusing the pace of technical change with the pace of social change. The image above, produced in 1890s France, was imagining machine-generated education by the year 2000. For my part, I can picture educational systems dominated by robot teachers or even direct-to-brain interfaces by the year 2100, and possibly a bit before that.

But it’s worth thinking about how surprised the creator of the image above might be to learn that in my classrooms in 2026, I am still using a blackboard and chalk.

So although I am very sure at this point that AI will be transformative for a broad range of human jobs and interests, I actually think historians and teachers are going to be fine in the medium term of the next couple decades.


So it may go for historians. What about writers?

There I’m less sure. Certainly, the bottom will drop out of the lower rungs of eyeball-grabbing written content, from pulpy novels to historical clickbait online. Probably, it already has. That GPT-4-level models could passably imitate the imitators of, say, Colleen Hoover or Dan Brown was known to anyone who experimented with them.1

These experiments could be interesting, but could never be mistaken for good writing. The thing is: the most recent Anthropic models actually write quite well.

For this reason, I find myself deeply sympathetic with software developers lately. Their struggles are my own, just from a different angle: what happens when something you are good at, a skill that is relatively rare and relatively valued, suddenly becomes commoditized? For both professional writers and professional developers, this is no longer a hypothetical question but a daily reality.

Like Andrej Karpathy and others, I find myself deeply energized by the potential of these new tools, while also wondering what their mass adoption will do to everyone’s brains (Karpathy celebrates the “phase shift” in capabilities that occurred over the past few months, but adds: “I've already noticed that I am slowly starting to atrophy my ability to write code manually”).

On the one hand, I feel a real sense of loss for the world of early-2000s New Yorker-style long-form writing and criticism, the world of The New York Review of Books and Lapham’s Quarterly and the amazingly good Denis Johnson novel I’m currently reading (Tree of Smoke).

Yet I am taking real delight in being able to create truly novel — though sometimes interesting only to me — hybrids of software, history, and writing. This Henry James simulator from last fall was, in truth, just the tip of a somewhat obsessive Claude Code iceberg for me.

Some recent examples of what I’ve been working on:

Drawing on my earlier work on historical simulation using ChatGPT prompts, I made a full-on history simulator (GitHub here) that randomly drops you into different historical eras and cultures, with pixel-art graphics augmenting an LLM-generated narrative engine grounded in real primary sources and a rather elaborate system for generating at least somewhat accurate historical context.

At left, I enter the prompt “Make me a toymaker in Renaissance Florence.” At right, I become Lorenzo Bellini, a 35-year-old toymaker in the year 1500.

I’ve already used this in my teaching. While it’s imperfect, the mechanic of allowing a user to generate a historical simulation based on text input — like “I want to be a salt miner in a frontier province of Ancient Rome” or “1930s Istanbul bike mechanic” or “Aztec farmer with a troubled past” or whatever else you can think of — is fascinating to me. More on this in a future post.
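For the technically curious, the grounding step is easy to sketch, even though the real system is more elaborate. Here is a minimal, hypothetical TypeScript version (not the simulator’s actual code; the retrieval function is a canned stand-in for whatever index of digitized sources a real system would query):

```typescript
// Hypothetical sketch of the grounding step: retrieve a few primary-source
// excerpts for the requested scenario, then fold them into the system prompt
// so the model narrates against real period context instead of free-associating.

interface SourceExcerpt {
  citation: string;
  text: string;
}

// Canned stand-in for a real retrieval layer (keyword index, embeddings, etc.);
// a real system would rank excerpts by relevance to the query.
async function findExcerpts(query: string, k: number): Promise<SourceExcerpt[]> {
  const corpus: SourceExcerpt[] = [
    {
      citation: "Workshop inventory, Florence, c. 1500 (invented example)",
      text: "...carved wooden figures, pigments, a small lathe, two apprentices...",
    },
  ];
  return corpus.slice(0, k);
}

async function buildSystemPrompt(userRequest: string): Promise<string> {
  const excerpts = await findExcerpts(userRequest, 3);
  const context = excerpts
    .map((e) => `[${e.citation}]\n${e.text}`)
    .join("\n\n");
  return [
    "You are the narrative engine for a historical simulation.",
    `The player asked: "${userRequest}".`,
    "Ground concrete details (prices, materials, institutions, names) in the",
    "excerpts below; where they are silent, stay period-plausible rather than",
    "inventing specifics.",
    "",
    "PRIMARY SOURCES:",
    context,
  ].join("\n");
}

buildSystemPrompt("Make me a toymaker in Renaissance Florence").then(console.log);
```

The design point is simply that the model narrates against retrieved excerpts rather than from its own free associations.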

I tried to make a game called MKULTRA (playable version here, GitHub here). The game is based directly on primary sources I had gathered for my book Tripping on Utopia, relating to George Hunter White’s now-infamous CIA-funded work dosing unwitting civilians with LSD. As with the history simulator, the text responding to player input is generated by Google’s Gemini 2.5 model. The difference here is that it gets more and more outlandish and surreal as you, playing as a procedurally generated person in 1950s San Francisco, are dosed with psychedelics. As your “trippiness” meter increases, the UI itself begins to break down and change. Compare the relatively sober player view at left with the altered one at right:

An imaginary San Franciscan named George Schmidt ends up in the actual safe house at 225 Chestnut Street where the CIA tested LSD on civilians.
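Mechanically, the trick is one meter driving two things at once. Here is a hypothetical TypeScript sketch (again, not the game’s actual code) of how a single “trippiness” value can warp both the narration prompt and the interface:

```typescript
// Hypothetical sketch: a single "trippiness" value (0-100) drives both the
// style instruction sent to the language model and a set of UI distortions,
// so narration and interface degrade together as the player is dosed.

interface Distortion {
  blurPx: number;       // text slowly loses focus
  hueRotateDeg: number; // palette drifts away from "sober" colors
  waviness: number;     // nonlinear: layout only warps at high doses
}

function distortionFor(trippiness: number): Distortion {
  const t = Math.min(Math.max(trippiness, 0), 100) / 100;
  return { blurPx: 4 * t, hueRotateDeg: 180 * t, waviness: t * t };
}

function styleInstruction(trippiness: number): string {
  if (trippiness < 30) return "Narrate in flat, sober, procedural 1950s prose.";
  if (trippiness < 70) return "Let mild synesthetic details creep into the narration.";
  return "The narration is surreal and unreliable; objects and people blur together.";
}

// Example: at trippiness 85 the prompt asks for surreal prose while the UI
// gets a 3.4px blur, a 153-degree hue rotation, and heavy layout waviness.
console.log(styleInstruction(85), distortionFor(85));
```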

Premodern Concordance (live page here, GitHub here) is the most promising use case for LLMs in historical research that I’ve come up with so far, and something I am currently seeking grant funding for along with my historian colleague Mackenzie Cooley. As a proof of concept for this larger project, I fine-tuned an embedding model to analyze premodern medical, scientific, and natural history texts, searching for non-obvious links between concepts that fuzzily match up across languages and eras. For instance, if you click “piedra Bezaar” on the page below, you end up at a cross-linguistic concordance page for bezoar stones that tracks mentions of them across texts in Portuguese, Spanish, Italian, French, and English.
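The core move is easy to sketch, even though the real work is in the fine-tuning and the corpus preparation. A minimal, hypothetical version (not the project’s actual code), with the embedding call left abstract so that any multilingual embedding model or API could slot in:

```typescript
// Minimal sketch of the concordance idea: embed short passages with a
// multilingual embedding model, then surface passages in other languages
// whose vectors sit close to a query concept,
// e.g. "piedra bezaar" ~ "bezoar stone" ~ "pedra bezoar".

interface Passage {
  lang: string;   // "pt", "es", "it", "fr", "en"
  source: string; // e.g. "Monardes, Historia medicinal (1574)"
  text: string;
}

// The embedding provider is injected: a fine-tuned local model, an API, etc.
type Embed = (text: string) => Promise<number[]>;

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

async function concordance(
  query: string,
  passages: Passage[],
  embed: Embed,
  k = 5,
): Promise<{ passage: Passage; score: number }[]> {
  const q = await embed(query);
  const scored = await Promise.all(
    passages.map(async (p) => ({ passage: p, score: cosine(q, await embed(p.text)) })),
  );
  // Highest-similarity passages first, regardless of language.
  return scored.sort((x, y) => y.score - x.score).slice(0, k);
}
```

The hard part is upstream of this loop: premodern spelling drifts (“bezaar,” “bezoar,” “bezar”) across five languages, which is the fuzziness the fine-tuned embedding model is there to absorb.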

There are actually several more projects I’ve been working on, but I won’t bore you with long descriptions. In brief:

• I made an apothecary simulator based on 17th century texts, where you treat patients in colonial Mexico City and potentially get in trouble with the Inquisition.

• I made a historical persona generator for use in history classroom activities, roleplaying, and the like, as a spinoff of the larger history simulator project.

• I made a literary canon explorer which contrasts mentions of Victorian authors on Google Books with their page views on Wikipedia (a rough sketch of this kind of data plumbing follows below).
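As promised, here is a rough, hypothetical sketch of the canon explorer’s data plumbing (not the project’s own code). Wikipedia attention can come from the public Wikimedia pageviews REST API, whose endpoint shape I follow below but which you should verify against current documentation; the Google Books side is assumed to be pre-extracted from Google’s downloadable Ngram datasets.

```typescript
// Hypothetical sketch of the canon-explorer comparison (not the actual code).

// Sum a year of monthly page views for one Wikipedia article via the public
// Wikimedia pageviews REST API. Verify the exact URL parameters (project
// string, timestamp format) against the current documentation.
async function wikipediaViews(article: string): Promise<number> {
  const url =
    "https://wikimedia.org/api/rest_v1/metrics/pageviews/per-article/" +
    `en.wikipedia/all-access/all-agents/${encodeURIComponent(article)}/` +
    "monthly/2025010100/2025120100";
  const res = await fetch(url);
  const data = (await res.json()) as { items: { views: number }[] };
  return data.items.reduce((sum, item) => sum + item.views, 0);
}

interface CanonRow {
  author: string;
  mentions: number; // Google Books mentions, from the downloadable Ngram TSVs
  views: number;    // Wikipedia page views over the sampled year
  viewsPerMention: number;
}

// `mentions` maps author name to Google Books mention counts, assumed to be
// loaded elsewhere from Google's Ngram exports.
async function canonGap(mentions: Map<string, number>): Promise<CanonRow[]> {
  const rows: CanonRow[] = [];
  for (const [author, count] of mentions) {
    const views = await wikipediaViews(author.replace(/ /g, "_"));
    rows.push({ author, mentions: count, views, viewsPerMention: views / count });
  }
  // Authors whose Wikipedia attention most outruns their print-era footprint first.
  return rows.sort((a, b) => b.viewsPerMention - a.viewsPerMention);
}
```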

These are all things which, in the past, I would have explored through the written word. Now, I’m exploring them through Claude Code, which I use to write the TypeScript code that summons them into being, and Google’s Gemini language models, which I use to populate them with “speaking” characters for the simulation aspects.

I have greatly enjoyed the challenge of learning how to build these games and tools over the past year. But it also seems to me that the work I have put into them — the enjoyment I have gained from them — is, in itself, a somewhat worrisome leading indicator.

It’s not just that AI slop is replacing some of the lower-rung forms of fiction and prose. It’s that the audience share for writing as a whole will be increasingly displaced by interactive “writing-adjacent” things like the MKULTRA game or interactive text-based simulations of life as a Florentine toymaker in the year 1500, or whatever else you can imagine. Gemini’s outputs for these scenarios read like a bad attempt at historical fiction. But the dynamic nature of these sorts of tools — their “choose your own adventure” quality — is genuinely new. That dynamism and hand-tailored quality will, I suspect, be more compelling for many than simply reading mediocre novels about MKULTRA or Florentine toymakers.

At the end of the day, I didn’t enjoy any of the projects described above as much as I enjoyed researching and writing Tripping on Utopia. Or as much as I am enjoying starting work on my new book, which, not coincidentally, is about how William James and his world reacted to the automation of life and thought in the first machine age.

I miss the obsessive flow you get from deep immersion in writing a book. Such work has none of the dopamine-spiking, slot-machine-like addictiveness of Claude Code — the rapid progress of typing two sentences into a terminal window, watching Opus 4.6 build a new feature over the course of ten minutes, and then seeing it come to life on a screen.

But the very lack of progress that defines serious writing — the staring at a blank page, the rewrites, the chapters you throw away — feels earned (and therefore intellectually meaningful) in a way that Claude Code simply never will.

“Cognitive debt” is a term that software engineers have recently adopted to describe the disconcerting feeling of losing touch with the ground truth of what is actually happening in one’s codebase. As Simon Willison recently put it: “I've been experimenting with prompting entire new features into existence without reviewing their implementations and, while it works surprisingly well, I've found myself getting lost in my own projects.” It is worth noting that this term didn’t originate in the world of code. It was coined last year by MIT researchers to describe “cognitive costs” from relying on ChatGPT to write essays. The frontlines of cognitive debt, in other words, are in the realms of both writing and coding.

I have about 50 ideas sitting in my Substack drafts folder. It is tantalizing to imagine that I could simply open a Claude Code terminal window, direct it to these drafts, and tell it to output a year’s worth of Res Obscura posts. This, of course, is not a hypothetical but simply what a lot of people are doing right now.

But if you do that, what is the point? What was I working toward when I was 10, or 20?

The work is, itself, the point.

My quasi-obsession with vibe coding over the past year or so — see above — made that less clear to me for a time. I was never tempted to use AI to write this blog (something I have never done and never will). Rather, what did tempt me was the illusion of productivity and innovation that one gets from using AI to create custom software. At times, it has felt more productive to produce digital humanities projects and historical educational games like Apothecary Simulator than to put words on a page. As I write that now, I feel some sense of shame, as if I am confessing to having become addicted to junk food or gambling.

Is Claude Code junk food, though? I also spent the past year learning about data taxonomies and single sources of truth and embedding models, and although I have barely written a line of code on my own, the cognitive work of learning the architecture — developing a new epistemological framework for “how developers think” — feels real. I also think that at least a few of my projects, especially the Premodern Concordance, are useful tools that I will be returning to and learning from for years.

And yet: I miss pre-AI writing and research. More than that, I miss thinking through writing in public.

The novelist Martin Amis once wrote: “style isn’t something you apply later; it’s embedded in your perception, and writers without that freshness of voice make no appeal to me. What is one in quest of with such a writer? Their views? Their theories?”

The current frontier AI models are fascinating because they are an entirely new kind of tool, untested and unexplored. They are even more fascinating as implicit puzzles about the nature of consciousness and selfhood.

What they aren’t are replacements for thinking itself, or for the elusive, deeply personal sense of style Amis was talking about.

It’s not just that “writing is thinking” (an oft-repeated phrase these days).

It’s that writing is a special, irreplaceable form of thinking forged from solitary perception and labor — an enormous amount of it — but tested against a reading public. That fusion of aloneness and interiority with togetherness and exteriority is what makes it fascinating to me. I fear for a world in which people are simply able to create a simulacrum of writing and of thought by inputting a prompt and singly experiencing the result.

In short: the production of writing is deeply solitary and personal, but the consumption of writing is just as deeply public and shared.

That combination seems to me to be the true negative space around AI capabilities. AI will get very, very good at creating compelling, hyper-customized content. Claude Code’s self-evident addictive qualities are a strong early signal of that. It will not, I suspect, get good at creating the public debates and shared intellectual communion that characterize great literature and great historical writing. And it certainly won’t be able to capture the perception-based, physically embodied sense of personal style and taste that Martin Amis described.

And so, reader: I say to you now, I will keep writing without AI, even as I explore what it’s possible to do with it. I will be doing it simply because I enjoy talking to you and thinking with you — not as a solitary individual in a chat transcript but as a collectivity of actual human readers reading actual human thoughts.

Thank you for creating that with me. Long may it last.

• “Two attempts to replicate Balabanova’s findings of cocaine failed, suggesting ‘that either Balabanova and her associates are misinterpreting their results or that the samples of mummies tested by them have been mysteriously exposed to cocaine’” (Wikipedia)

• Archaeologists report an ancient elephant bone which may be the first physical proof of Hannibal’s crossing of the Alps with a troop of North African war elephants — something I am still staggered by whenever I teach it to the students in my world history class (BBC).


Comments

  • By ericdykstra 2026-02-18 23:03 (3 replies)

    I won't ever put my name on something written by an LLM, and I will blacklist any site or person I see doing it. If I want to read LLM output I can prompt it myself; subjecting me to it and passing it off as your own is disrespectful.

    As the author says, there will certainly be a number of people who decide to play with LLM games or whatever, and content farms will get even more generic while having fewer writing errors, but I don't think that the age of communicating thought, person to person, through text is "over".

    • By SkyPuncher 2026-02-19 2:20 (5 replies)

      It's easy to output LLM junk, but I and my colleagues are doing a lot of incredible work that simply isn't possible without LLMs involved. I'm not talking about a 10-turn chat to whip out some junk. I'm talking about deep research and thinking with Opus to develop ideas. Chats where you've pressure-tested every angle, backed it up with data pulled in from a dozen different places, and have intentionally guided it towards an outcome. Opus can take these wildly complex ideas and distill them down into tangible, organized artifacts. It can tune all of that writing to your audience, so they read it in terms they're familiar with.

      Reading it isn't the most fun, but let's face it - most professional reading isn't the most fun. You're probably skimming most of the content anyways.

      Our customers don't care how we communicate internally. They don't care if we waste a bunch of our time rewriting perfectly suitable AI content. They care that we move quickly on solving their problems - AI lets us do that.

      • By NathanielK 2026-02-19 2:54

        > Reading it isn't the most fun, but let's face it - most professional reading isn't the most fun. You're probably skimming most of the content anyways.

        I find it difficult to skim AI writing. It's persuasive even when there's minimal data. It'll infer or connect things that flow nicely, but simply don't make sense.

      • By giraffe_lady 2026-02-19 2:28 (3 replies)

        I hear stories like this a lot (on here anyway) but I haven't seen any output that backs it up. Any day now I guess.

        • By mlinhares 2026-02-19 2:37

          Pretty sure people are trying to prompt chatgpt to write Brandon Sanderson-like stories and we'll see their successful prints anytime now.

        • By SkyPuncher 2026-02-19 13:49 (2 replies)

          I don't really understand this retort. I assume most of us work in a professional environment where it's difficult, if not impossible, to share our work.

          We've been discussing these types of anecdotes with code patterns, management practices, communication styles, pretty much anything professionally for years. Why are the LLM conversations held to this standard?

          • By giraffe_lady 2026-02-19 16:27

            Well, because I've worked in different places, and with different organizations, and can see for myself how different approaches to professional conduct manifest in the finished product, or the flexibility of the team, effectiveness of communication, etc.

            Especially with things like code and writing, I assess the artifacts: software and prose. These stories of the incredible facility of LLMs with code and writing are never accompanied by artifacts that back up the claims. The ones that I can assess don't meet the bar that is being claimed. So everyone who has it working well is keeping it to themselves, and only those with bad-to-mediocre output are publishing them, I am meant to believe? I can't rule it out entirely of course, but I am frustrated at the ongoing demands that I maintain credulity.

            FWIW I have sat out many other professional organization and software development trends because I wanted to wait and assess for myself their benefits, which then failed to materialize. That is why I hold LLMs to this standard, I hold all tools to this standard: be useful or be dismissed.

          • By YeGoblynQueenne 2026-02-19 20:39

            Because I have a proof of the Riemann hypothesis but I'm not showing it to you because I don't want you to steal my idea.

        • By throw10920 2026-02-19 4:45

          It's really interesting that I've only seen a few actual pieces of large-scale LLM output by people boasting about it, and most of them (e.g. the trash fire of a "web browser" by Anthropic) are bad.

      • By sonofhans 2026-02-19 3:05

        To build what, though? I’m truly curious. You talk about researching and developing ideas — what are you doing with it?

      • By habinero 2026-02-19 8:18

        > but I and my colleagues are doing a lot of incredible work that simply isn't possible without LLMs involved

        ...Which part is impossible? "Writing a bunch of ideas down" was definitely possible before.

    • By dw_arthur 2026-02-19 2:39 (2 replies)

      I assume if someone used an LLM to write for them that they must not be comfortably familiar with their subject. Writing about something you know well tends to come easily and usually is enjoyable. Why would you use an LLM for that, and how could you be okay with its output?

      • By NathanielK 2026-02-19 2:47 (1 reply)

        Writing a first draft may come easy, but there's more to the process than that. An LLM can go from outline to "article" in one step. I can't.

        I don't write often, so revising and rewriting is very slow for me. I'm not confident in my writing and it looks clunky to my eye.

        I see the appeal, though I want to keep developing my own skills.

        • By AlecSchueler 2026-02-19 10:13 (2 replies)

          > An LLM can go from outline to "article" in one step. I can't.

          But the point is that the results tend to be very grating.

          > I'm not confident in my writing and it looks clunky to my eye.

          AI writing is clunky!

          > I don't write often, so revising and rewriting is very slow for me.

          This is totally fair, but maybe consider editing the AI output once it's given you a second draft?

          • By NathanielK 2026-02-19 13:23

            I agree entirely. Seeing all llm garbage being published made me realize how insecure people are about their writing.

            Since realizing, I've been stubbornly improving my own writing and not touching LLMs. Takes a bit of work though.

          • By chairmansteve 2026-02-19 10:48

            "maybe consider editing the AI output once it's given you a second draft?".

            I would completely rewrite the LLM output. Use it as a researcher or idea generator.

      • By aleph_minus_one 2026-02-19 2:54 (1 reply)

        > I assume if someone used an LLM to write for them that they must not be comfortabley familiar with their subject.

        This statement assumes that the writer is a native speaker in the language in which he writes the text.

        • By habinero 2026-02-19 8:21

          If you're not a good enough speaker to write it, you're not good enough to proofread it, either.

    • By botusaurus 2026-02-19 0:21 (2 replies)

      some people might be better at prompting an LLM than you

      just like when you go to a restaurant to have a chef cook for you when you can cook yourself

      • By throawayonthe 2026-02-19 2:09

        a chef can only do so much with a frozen microwave meal

      • By trollbridge 2026-02-19 2:25

        Most restaurants, by volume, these days churn out ultra processed, mass-marketed slop.

        It’s true there is the occasional Michelin starred place or an amazing local farm to table place. There is also the occasional excellent use of LLMs. Most LLM output I have to read, though, is straight up spam.

  • By apsurd 2026-02-18 21:39 (3 replies)

    Axios got traction because it heavily condensed news into more scannable content for the twitter, insta, Tok crowd.

    So AI is this on massive steroids. It is unsettling, but there seems to be a recurring need to point out that many of the "it's because of AI" things were already happening across the board. "Post truth" is the one I'm most interested in.

    AI condenses it all on a surreal and unsettling timeline. But humans are still humans.

    And to me, that means that I will continue to seek out and pay for good writing like The Atlantic. btw I've enjoyed listening to articles via their auto-generated NOA AI voice thing.

    Additionally, not all writing serves the same purpose. The article makes these sweeping claims about "all of writing". Gets clicks I guess, but to the point, most of why and what people read is toward some immediate and functional need. Like work, like some way to make money, indirectly. Some hack. Some fast-forwarding of "the point". No wonder AI is taking over that job.

    And then there's creative expression and connection. And yes I know AI is taking over all the creative industries too. What I'm saying is we've always been separating "the masses" from those that "appreciate real art".

    Same story.

    • By ngriffiths 2026-02-18 22:55 (2 replies)

      > Additionally, not all writing serves the same purpose.

      I think this is a really important point and to add on, there is a lot of writing that is really good, but only in a way that a niche audience can appreciate. Today's AI can basically compete with the low quality stuff that makes up most of social media, it can't really compete with higher quality stuff targeted to a general audience, and it's still nowhere close to some more niche classics.

      An interesting thought experiment is whether it's possible that AI tools could write a novel that's better than War and Peace. A quick google shows a lot of (poorly written) articles about how "AI is just a machine, so it can never be creative," which strikes me as a weak argument way too focused on a physical detail instead of the result. War and Peace and/or other great novels are certainly in the training set of some or all models, and there is some real consensus about which ones are great, not just random subjective opinions.

      I kind of think... there is still something fundamental that would get in the way, but that it is still totally achievable to overcome that some day? I don't think it's impossible for an AI to be creative in a humanlike way, they don't seem optimized for it because they are completely optimized for the sort of analytical mode of reading and writing, not the creative/immersive one.

      • By anon-3988 2026-02-19 1:36 (1 reply)

        > An interesting thought experiment is whether it's possible that AI tools could write a novel that's better than War and Peace. A quick google shows a lot of (poorly written) articles about how "AI is just a machine, so it can never be creative," which strikes me as a weak argument way too focused on a physical detail instead of the result. War and Peace and/or other great novels are certainly in the training set of some or all models, and there is some real consensus about which ones are great, not just random subjective opinions.

        I am sure it could, but then what is the point? Consider this: let's assume that someone did manage to use an LLM to produce a very well-written novel. Would you rather have the novel that the LLM generated (the output), or the prompts and process that led to that novel?

        The moment I know how it's made, the exact prompts and process, I can then have an infinite number of said great novels in 1000 different variations. To me this makes the output way, way less valuable compared to the input. If great novels are cheap to produce, they are no longer novel; they become the norm, expectations rise, and we will be looking for something new.

        • By kartoffelsaft 2026-02-19 4:02 (2 replies)

          I'm inclined to believe that the difference that makes the upper bound of human writing (or creativity) higher than that of an LLM comes from having experiences in the real world. When someone is "inspired" by others' work or is otherwise deriving ideas from them, they inevitably and unavoidably insert their own biases and experiences into their own work, i.e. they also derive from real-world processes. An LLM, however, is derived directly and entirely from others' work, and cannot be influenced by the real world, only a projection of it.

          > Would you rather have the novel that the LLM generated (the output), or the prompts and process that lead to that novel?

          The "process", in many cases, is not necessarily preferable to the novel. Because an important part of the creative process is real-world experiences (as described above), and the real world is often unpleasant, hard, and complex, I'd often prefer a novel over the source material. Reading Animal Farm is much less unpleasant than being caught in the Spanish Civil War, for example.

          • By malnourish 2026-02-19 4:29 (1 reply)

            I agree with you.

            I also think it's a matter of time before we start constructing virtual worlds in which we train AI. Meaning, representations of simulated world-like events, scenarios, scenery, even physics. This will begin with heavy HF, but will move to both synthetic content creation and curation over time.

            People will do this because it's interesting and because there's potential to capitalize on the result.

            I thought of this in jest, but I now see this as an eventuality.

            • By habinero 2026-02-19 8:54 (1 reply)

              > People will do this because it's interesting and because there's potential to capitalize on the result.

              I don't know why anyone admits to thinking this. For one, there's nothing stopping you from making movies or writing stories now. You're not suddenly going to develop creativity or interesting ideas using LLMs, either.

              Also, think it through. If everyone can yell at computer until movie fall out, there will be millions of them and nobody will pay for anything.

              • By malnourish 2026-02-19 12:30 (1 reply)

                I don't want AI content, but there's a market in the belief that people do.

                • By habinero 2026-02-20 8:19

                  Is there? It sounds like a bunch of uncreative people wishcasting.

          • By anon-3988 2026-02-19 5:38 (1 reply)

            > The "process", in many cases, is not necessarily preferable to the novel. Because an important part of the creative process is real-world experiences (as described above), and the real world is often unpleasant, hard, and complex, I'd often prefer a novel over the source material. Reading Animal Farm is much less unpleasant than being caught in the Spanish Civil War, for example.

            I think you misunderstood what I meant by "prompts and process that led to that novel". I am talking about the process that the "author" used to generate that novel as output. I am more interested in the technique that they used; the moment that technique is known, I can produce billions of War and Peace.

            I suppose the argument is that, the moment there's an LLM that can produce a unique and interesting novels, what stops it from generating another billion similarly interesting novels?

            • By habinero 2026-02-19 8:44 (1 reply)

              > Then, I can produce billions of War And Peace

              You cannot and will never lol.

              This so fundamentally misunderstands (1) the point of writing a novel and (2) what makes a novel interesting.

              A novel isn't just a buncha words slapped together, bing bam slop boom, done.

              What makes a novel interesting is the author and the author's choices, like all art. It's the closest you can get to experiencing what it's like to be someone else. You can't generate that, it's specific to a person.

              • By anon-3988 2026-02-19 11:09 (1 reply)

                The GP assumes that an LLM is able to write such a novel. So I was working from there. My thesis is that even IF LLMs are able to produce "novelty", it will become the norm and we will simply demand even more exotic novelty.

                > An interesting thought experiment is whether it's possible that AI tools could write a novel that's better than War and Peace. A quick google shows a lot of (poorly written) articles about how "AI is just a machine, so it can never be creative," which strikes me as a weak argument way too focused on a physical detail instead of the result. War and Peace and/or other great novels are certainly in the training set of some or all models, and there is some real consensus about which ones are great, not just random subjective opinions.

                • By habinero 2026-02-20 8:17

                  It can have anything you like in a training set; you still can't build specific human experiences.

                  I haven't read War & Peace -- I don't have the patience for Russian literature -- but a much more accessible example is the Vorkosigan series by Lois Bujold. She uses a lot of Tolstoy lol.

                  While you can read them as fun military scifi, that's not why the series is so good and so famous. In her books, humanity invented two critical things: wormhole FTL travel and uterine replicators.

                  A lot of the series is exploring how people actually would use and abuse those two things. And then on another layer the books are about her thoughts on parenting, marriage, power, inheritance, and so on.

                  Good art isn't about accepting someone's opinion that it's good art. Good art impacts you. I think about things differently after those books.

                  You cannot write a good novel using the algorithmic mean of a lot of different stories.

      • By lich_king 2026-02-18 23:18 (4 replies)

        > Today's AI can basically compete with the low quality stuff that makes up most of social media, it can't really compete with higher quality stuff

        But compete in what sense? It already wins on volume alone, because LLM writing is much cheaper than human writing. If you search for an explanation of a concept in science, engineering, philosophy, or art, the first result is an AI summary, probably followed by five AI-generated pages that crowded out the source material.

        If you get your news on HN, a significant proportion of stories that make it to the top are LLM-generated. If you open a newspaper... a lot of them are using LLMs too. LLM-generated books are ubiquitous on Amazon. So what kind of competition / victory are we talking about? The satisfaction of writing better for an audience of none?

        • By nonameiguess 2026-02-18 23:38

          Tens of millions of people, if not hundreds now thanks to the popularity of the television adaptation, have been waiting 15 years now for Winds of Winter to get published. If AI is such a good writer and can replace anything, write Winds of Winter for George. I don't really give a shit what's ubiquitous on Amazon. Nobody will remember any of it in a century the way we remember War and Peace. People will remember the Song of Ice and Fire books.

          I think it's fine. As said above, most reading isn't done because people are looking for thought-provoking, deeply emotional multi-decade experiences with nearly parasocial relationships to major characters. They're just looking to avoid the existential dread of being alone with their thoughts for more than a few minutes. There's room for both twinkies and filet mignon in the world and filet mignon alone can't feed the entire world anyway. By the same token, if we expected all journalists to write like H.L. Mencken, a lot of people wouldn't get any news, but the world still deserves to have at least a few H.L. Menckens and I don't think they'll have an audience of "none" even if their audience is smaller than Stephenie Meyer or whoever is popular today.

          If it were me, I don't know man, does nobody on Hacker News still care about actually being good at anything as opposed to just making sales and having reach? Personally, I'd rather be Anthony Joshua than Jake Paul, even though Jake Paul is richer. Shit, I think Jake Paul himself would rather be Anthony Joshua

        • By apsurd 2026-02-18 23:36 (1 reply)

          > if you get your news on HN, significant portion that make it to the top are LLM-generated.

          You mean this anecdotally I assume.

          This makes me think of the split between people who read the article and people who _only_ read the comments. I'm in the second group. I'd say we were preemptive in seeking the ideas and discussion, less so achieving "the point" of the article.

          FWIW, AI infiltrates everything, i get that, but there's a difference between engagement with people around ideas and engagement with the content. it's blurry i know, but helps to be clear on what we're talking about.

          edit: in this way, reading something a particular human wrote is both content engagement and engagement with people around an idea. lovely. engaging with content only, is something else. something less satisfying.

          • By shimman 2026-02-18 23:49 (1 reply)

            There are very few things worth reading submitted to this site. The only meaningful thing I'm glad to have read was the "I sell onions on the internet" blog post. Everything else I've forgotten, mostly VC marketing fluff or dev infighting in open source; hardly anything worth noting.

            This place is up there with reddit, it's all lowish calorie info; 90% forgettable, 10% meaningful but you have to dig quite quite deep to find it.

            • By iugtmkbdfil834 2026-02-19 0:14

              To be fair, it has gotten harder, but when the meaningful stuff does happen, it is hard to beat. Some of the audience can have rather pointed takes. And if it is then somehow topped by 'off the beaten path' guy, it really makes it for me (in the sense that maybe not all is lost quite yet). I still sometimes reel from 'manifest bananas' guy.

        • By aaaasmile 2026-02-19 0:01

          >The satisfaction of writing better for an audience of none?

          The satisfaction of writing for an engine. The last of what could still be recognized as a real human being writing. There’s no competition with AI, but also no resignation and no fear of being limited compared to the vast knowledge of an LLM. Even in a context of an "audience of none", somewhere there will be a scraper tool interested in my writing. And if it gets hallucinated... wow!

        • By fdefitte 2026-02-18 23:45 (1 reply)

          [dead]

          • By iugtmkbdfil834 2026-02-19 0:18

            << most writing was already bad before LLMs.

            I am not sure this is the problem. The problem, as it were, is that writing muscles will atrophy and in a year or two we will be looking at those tiktok reels as long-lost havens of enlightenment. Personally, if anything, I write a lot more now, but then I am fascinated by LLMs and how they work, so... I test, and that requires writing. I might be bad, but there is hope I won't need an ugh-to-English LLM translator.

    • By plastic-enjoyer 2026-02-18 21:43 (4 replies)

      > "Post truth" is one I'm most interested in.

      I have this theory that the post-truth era began with the invention of the printing press and gained iteratively more traction with each revolution in information technology.

      • By robot-wrangler 2026-02-18 22:38

        Doesn't matter when post-truth started because it's now over, and it's more accurate to characterize this era as "post-rationality". Most people do seem to understand this, but we are in different stages of grief about it.

      • By Finbel 2026-02-18 21:50

        So slightly before 1440 was peak Truth for humanity?

      • By atommclain 2026-02-19 13:33

        Maybe I’m viewing truth too narrowly, but I feel like the printing press brought us as close as we could come to a “truth era”. Authorship of text, and the friction and cost involved with publishing seems to bend towards transmitting truth. I guess how are you evaluating or measuring truth?

      • By yannyu 2026-02-18 22:04 (1 reply)

        I think you're right, but I also think it's worthwhile to look at Edward Bernays in the early 1900s and his specific influence on how companies and governments to this day deliberately shape public opinion in their favor. There's an argument that his work and the work of his contemporaries was a critical point in the flooding of the collective consciousness with what we would consider propaganda, misinformation, or covert advertising.

        • By plastic-enjoyer 2026-02-18 22:23

          > There's an argument that his work and the work of his contemporaries was a critical point in the flooding of the collective consciousness with what we would consider propaganda

          I would rather say that Bernays was a keen observer and understood mass behavior and the potential of mass media like no one else in his time. Soren Kierkegaard wrote about the role of public opinion and mass media in the 19th century and had a rather pessimistic outlook on it. You have stuff like the Dreyfus Affair, where mass media already played a role in polarizing people and playing into the ressentiments of the people. There were signs that people were overwhelmed by mass media even before Bernays. I would say that Bernays observed these things and used those observations to develop systematic methods for influencing the masses. The problem was already there; Bernays just exploited it systematically.

    • By meetingthrower 2026-02-18 21:50 (3 replies)

      Same. The New Yorker is the other mag I subscribed to.

      Until 3 weeks ago I had a high-cortisol-inducing morning read: NYT, WSJ, Axios, Politico. I went on a weeklong camping trip with no phone and haven't logged into those yet. It's fine.

      • By KittenInABox 2026-02-18 22:01

        I agree with this in general but with caveats. For example, I think reading national-sized news every day sucks. But if you're of a specific demographic it might be useful to keep pretty up to date on nuanced issues, like if you're a gun owner you will probably want to keep up to date on gun licensing in your area. Or if you're a trans person it's pretty important nowadays to be very aware of laws being passed that dictate which bathroom you can legally use, or something.

      • By jihadjihad 2026-02-18 21:54 (1 reply)

        People think I'm nuts when I tell them I ditched subscriptions for those sites and only check them maybe once a week, if that.

        But what you said is 100% true, it's fine. When things in your life provide net negative value it's in your best interest to ditch them.

        • By aleph_minus_one 2026-02-19 3:17

          > When things in your life provide net negative value it's in your best interest to ditch them.

          Let's ditch politicians. :-)

      • By paulryanrogers 2026-02-18 23:46 (1 reply)

        [flagged]

        • By onionisafruit 2026-02-19 1:19 (1 reply)

          you don’t need any of the mentioned periodicals for that.

          • By paulryanrogers 2026-02-19 14:51

            Fair point. But I was addressing leaving the phone at home to "check out". Because without a phone you'll just have to hope you see the masked men before they see you.

  • By dtf 2026-02-18 21:42 (5 replies)

    "Is Claude Code junk food, though? ... although I have barely written a line of code on my own, the cognitive work of learning the architecture — developing a new epistemological framework for “how developers think” — feels real."

    Might this also apply to learning about writing? If I have barely written a line of prose on my own, but spent a year generating a large corpus of it aided by these fabulous machines, might I also come to understand "how writers think"?

    I love the later description of writing as a "special, irreplaceable form of thinking forged from solitary perception and [enormous amounts of] labor", where “style isn’t something you apply later; it’s embedded in your perception" (according to Amis). Could such a statement ever apply to something as crass as software development?

    • By girvo 2026-02-18 21:56 (5 replies)

      My current bugbear is how art is held up as creativity, worthy of societal protection, with scorn for AI muscling in on it

      While the same people in the same comments say it’s fine to replace programming with it

      When pressed they talk about creativity, as if software development has none…

      • By jarjoura 2026-02-18 22:43

        I haven't heard writers take any kind of stance on software engineering, but Brandon Sanderson has very publicly renounced AI writing because it lacks the authentic journey of an author's own writing. Just as we would cringe at our first software projects, he cringes at his first published novel.

        I think that's a reasonable argument to make against generative art in any form.

        However, he does celebrate LLM advancements in health and accessibility, and I've seen most "AI haters" handwave away its use there. It's a weird dissonance to me too that its use is perfectly okay if it helps your grandparents live a longer, and higher quality of life, but not okay if your grandparents use that longer life to use AI-assisted writing to write a novel that Brandon would want to read.

      • By SpaceManNabs 2026-02-18 22:13 (1 reply)

        a lot of artists don't mind using AI for art outside their field

        I was in a fashion show in tokyo in 2024.

        i noticed their fashion was all human designed. but they had a lot of posters, video, and music that was AI generated.

        I point blank asked the curator why he used AI for some stuff but didn't enhance the fashion with AI. I was a bit naive because I was actually curious to see if AI wasn't ready for fashion or maybe they were going for an aesthetic. I genuinely was trying to learn and not point out a hypocrisy.

        he got mad and didn't answer. i guess it is because they didn't want to pay for everything else. big lesson learned in what to ask lol.

        • By shimman 2026-02-19 0:07 (1 reply)

          How do you know he used AI in one area but not another?

          • By SpaceManNabs 2026-02-19 20:19 (1 reply)

            cause i asked him where he used comfyui and he mentioned the things i mentioned, but he didn't mention the fashion and then i asked my question.

            • By shimman 2026-02-19 21:24

              ah that makes sense. I thought it was maybe a scenario where they are just good at fashion designs but make "average" looking posters.

      • By arctic-true 2026-02-18 22:06

        The easiest job to automate is someone else’s.

      • By yason 2026-02-18 22:12 (2 replies)

        Art has two facets. First is if you like it. If you do, you don't need to care where it came from. Second is the art as cultured and defined by the artistic elites. They don't care if art is liked or likable, they care about the pedigree, i.e. where it came from, and that it fits what they consider worthy art. Between these two is what I call filler art: stuff that's rather indifferent and not very notable, but often crosses over some minimum bar that it's accepted by, and maybe popular among average people who aren't that seriously interested in art.

        In the first category, AI is no problem. If you enjoy what you see or hear, it doesn't make a difference if it was created by which kind of artist or AI. In the second category, for the elite, AI art is no less unacceptable than current popular art or, for that matter, anything at all that doesn't fit their own definition of real art. Makes no difference. Then the filler art.. the bar there is not very high but it will likely improve with AI. It's nothing that's been seriously invested in so far, and it's cheaper to let AI create it rather than poorly paid people.

        • By marcus_holmes 2026-02-19 0:28 (1 reply)

          Commercial art has literally nothing to do with art, and everything to do with commerce. Art is not stored in freeport bunkers and used as collateral for loans.

          All art aspires to the condition of music. It evokes an emotional reaction. If it does that, it doesn't matter where it came from.

          • By caseyohara 2026-02-19 2:31 (1 reply)

            > If it does that, it doesn't matter where it came from.

            Personally, it matters to me quite a lot where art comes from, especially music. I have a hard time "separating the art from the artist". If I find out a musician is a creep/abuser/rapist, I can't enjoy their music anymore.

            This belief obviously isn't widespread given artists like Michael Jackson, Chris Brown, R. Kelly, and Jimmy Page are still wildly popular. But I assume I'm not alone in this.

            As for AI music, it's hard for me to imagine an "AI Musician" ever becoming very popular because I reckon most humans want some human-ness in their music. And I think if an existing artist ever put out AI music as their own, they'd lose some fans pretty quickly.

            • By marcus_holmes 2026-02-19 7:36

              No, fair point. I'm the same, I can't enjoy the music if I know the artist is not a good person. Though I do think this gets taken too far; I can enjoy Pink Floyd even though I have huge disagreements with Roger Waters' politics.

              I'm not sure I could tell the difference between AI and human music already. In a few years I'm pretty sure I couldn't. This is the bit where I'm not sure it matters. I mostly listen to music for the nostalgic emotions now anyway.

        • By habinero 2026-02-19 9:11

          My dude, there is no artistic elite deciding what art is. I think you just don't understand the critiques around this topic, and so it sounds like snobbery ("real art") to you.

      • By zozbot234 2026-02-18 23:00

        Maybe that's because AI "art" looks just as cringe as written AI slop.

    • By benbreen 2026-02-18 22:21

      Thank you, this sort of insight is exactly why I've felt such kinship with what software engineers like Karpathy and Simon Willison have been writing lately. It seems obvious to me that there is something special and irreplaceable about the thought processes that create good code.

      However, I think there is also something qualitatively different about how work is done in these two domains.

      Example: refactoring a codebase is not really analogous to revising a nonfiction book, even though they both involve rewriting of a sort. Even before AI, the former used far more tooling and automated processes. There is, e.g., no ESLint for prose which can tell you which sentences are going to fail to "compile" (i.e., fail to make sense to a reader).

      The special taste or skillset of a programmer seems to me to involve systems thinking and tool use in a different way than the special taste of a writer, which is more about transmuting personal life experiences and tacit knowledge into words, even if tools (word processor) and systems (editors, informants, primary sources) are used along the way.

      Sort of half formed ideas here but I find this a really rich vein of thought to work through. And one of the points of my post is that writing is about thinking in public and with a readership. Many thanks for helping me do that.

      I don't have a good answer to your question, but I do think it might be comparable, yes. If you had good taste about what to get Opus 4.6 to write, and kept iterating on it in a way that exposes the results to public view, I think you'd definitely develop a more fine grained sense of the epistemological perspective of a writer. But you wouldn't be one any more than I'm a software developer just because I've had Claude Code make a lot of GitHub commits lately (if anyone's interested: https://github.com/benjaminbreen).

    • By randusername 2026-02-18 23:04

      > Could such a statement ever apply to something as crass as software development?

      Absolutely. I think like a Python programmer, a very specific kind of Python programmer after a decade of hard lessons from misusing the freedom it gives you in just about every way possible.

      I carry that with me in how I approach C++ and other languages. And then I learned some hard lessons in C++ that informed my Python.

      The tools you have available definitely inform how you think. As your thinking evolves, so does your own style. It's not just the tool, mind, but also the kinds of things you use it for.

    • By raincole 2026-02-19 2:57

      "My AI usage is justified, but what others are doing is generating slop."

      I'm still waiting for a famous person to say this so we can have a name for this psychological phenomenon.
