Thoughts on thinking

2025-05-16 19:09 · dcurt.is

I have been stuck. Every time I sit down to write a blog post, code a feature, or start a project, I come to the same realization: in the context of AI, what I’m doing is a waste of time. It’s horrifying. The fun has been sucked out of the process of creation because nothing I make organically can compete with what AI already produces—or soon will. All of my original thoughts feel like early drafts of better, more complete thoughts that simply haven’t yet formed inside an LLM.

I used to write prolifically. I’d have ideas, write them down, massage them slowly and carefully into cohesive pieces of work over time, and then–when they were ready–share them with the world. I’d obsess for hours before sharing anything, working through the strengths and weaknesses of my thinking. Early in my career, that process brought a lot of external validation. And because I think when I write, and writing is how I form opinions and work through holes in my arguments, my writing would lead to more and better thoughts over time. Thinking is compounding–the more you think, the better your thoughts become.

But now, when my brain spontaneously forms a tiny sliver of a potentially interesting concept or idea, I can just shove a few sloppy words into a prompt and almost instantly get a fully reasoned, researched, and completed thought. Minimal organic thinking required. This has had a dramatic and profound effect on my brain. My thinking systems have atrophied, and I can feel it–I can sense my slightly diminishing intuition, cleverness, and rigor. And because AI can so easily flesh out ideas, I feel less inclined to share my thoughts–no matter how developed.

I thought I was using AI in an incredibly positive and healthy way, as a bicycle for my mind and a way to vastly increase my thinking capacity. But LLMs are insidious–using them to explore ideas feels like work, but it’s not real work. Developing a prompt is like scrolling Netflix, and reading the output is like watching a TV show. Intellectual rigor comes from the journey: the dead ends, the uncertainty, and the internal debate. Skip that, and you might still get the insight–but you’ll have lost the infrastructure for meaningful understanding. Learning by reading LLM output is cheap. Real exercise for your mind comes from building the output yourself.

The irony is that I now know more than I ever would have before AI. But I feel slightly dumber. A bit more dull. LLMs give me finished thoughts, polished and convincing, but none of the intellectual growth that comes from developing them myself. The output from AI answers questions. It teaches me facts. But it doesn’t really help me know anything new.

While using AI feels like a superhuman brain augmentation, when I look back on the past couple of years and think about how I explore new thoughts and ideas today, it looks a lot like sedation instead.

And I’m still stuck. But at least I’m here, writing this, and conveying my raw thoughts directly into your brain. And that means something, I think, even though an AI could probably have written this post far more quickly, eloquently, and concisely. It’s horrifying.

This post was written entirely by a human, with no assistance from AI. (Other than spell- and grammar-checking.)


Comments

  • By abathologist 2025-05-17 0:10 · 7 replies

    I think we are going to be seeing a vast partitioning in society in the next months and years.

    The process of forming expressions just is the process of conceptual and rational articulation (as per Brandom). Those who misunderstand this -- believing that concepts are ready made, then encoded and decoded from permutations of tokens, or, worse, who have no room to think of reasoning or conceptualization at all -- they will be automated away.

    I don't mean that their jobs will be automated: I mean that they will cede sapience and resign to becoming robotic. A robot is just a "person whose work or activities are entirely mechanical" (https://www.etymonline.com/search?q=robot).

    I'm afraid far too many are captive to the ideology of productionism (which is just a corollary of consumerism). Creative activity is not about content production. The aim of our creation is communication and mutual-transformation. Generation of digital artifacts may be useful for these purposes, but most uses seem to assume content production is the point, and that is a dark, sad, dead end.

    • By cameldrv 2025-05-17 7:30 · 11 replies

      I've personally noticed this as a big trend. For example, I had become more and more reliant on my GPS in the car. I've not really been the outer control loop of the vehicle. An automated system tells me what to do.

      I recently got a running watch. It suggests workouts that will help me improve my speed (which honestly I don't even care about!). If you turn that feature on, it will blare at you if you're going too fast or too slow.

      When you use any social media, you're not really choosing what you're looking at. You just scroll and the site decides what you're going to look at next.

      Anyhow recently I've been reducing my usage of these things, and it's made me feel much better. Even navigating the car without the GPS makes me feel much more engaged and alive.

      Ultimately one of the core things that makes us human is making decisions for ourselves. When we cede this in the name of efficiency, we gain something but we also lose something.

      Marshall Brain wrote an interesting short book about this called Manna.

      • By empiricus 2025-05-17 8:15 · 2 replies

        For GPS, I start by looking at the overall route, and compare with potential alternatives. Then during the driving the GPS just manages the local details, I still have some understanding and agency over where to go and how to get there.

        • By immibis 2025-05-17 12:49 · 1 reply

          I start by looking at the map. I go in the direction of the place I want to be. If I want to know the technically fastest route then I let my device calculate that. I don't always take that route. It's an assistant, not a boss. It's more interesting to walk down different streets sometimes. (And while I'm preaching to Americans, it's also good to walk down streets sometimes. It breaks away a few layers of abstraction that you have when driving.)

          Looking at the map actually helps you learn the city layout. As of right now (literally as I'm typing this) the train was delayed, so I chose to get off at the next big station before everyone crowds on, and walk the rest of the way. I can do this without checking a map because I know where it is and where I am, because I don't let the machine think for me.

          I don't drive (non-car-worshipping cities are amazing) but I do this when walking and also with train routes. I don't memorize the bus routes, since the train is better and has fewer routes, so I also sometimes ask my device for a route if I think there's a faster bus route than train (usually not the case).

          • By immibis 2025-05-18 15:17

            btw on that walk I happened across a store selling lots of varieties of some of the things I wanted - more than the place I was actually going to. Exploring pays off. Randomly.

        • By liugongqx 2025-05-17 16:02

          I am obsessed with comparing different routes from the GPS for EACH trip, long or short, and bragging to my wife about how much time I saved.

          She doesn't use GPS. Guess what, she always beats me.

          My guess: 1. The algorithm does not favor me. 2. Monitoring the GPS adds subtle distraction and pressure, which makes my driving less smooth than brain-driven action.

      • By bartread 2025-05-17 13:55 · 1 reply

        > I've not really been the outer control loop of the vehicle. An automated system tells me what to do.

        That’s not really true, is it? Who tells the GPS where you’d like to go? You, I imagine. You don’t just follow GPS instructions unless you’ve first told it where you’d like to go. And, indeed, unless you tell it, it won’t give you any instructions (though it might suggest common destinations for you to choose from).

        You are still the outer control loop of the vehicle: you’re just thinking at the wrong level of abstraction, or thinking of the wrong loop as the outer loop.

        • By ajuc 2025-05-17 18:01 · 1 reply

          It's not a loop. It's the main() function.

          The loop is driven by the system and that makes a lot of difference.

          • By bartread 2025-05-17 21:38 · 1 reply

            > It's not a loop. It's the main() function.

            No it isn't. I regularly use my car to travel to multiple destinations in a single "session".

            The reason I use GPS is because the apps built on top of it often know about traffic issues along the way - even those that have recently developed - as well as normal patterns of traffic flow of which I may not be aware, or may only have a tenuous grasp of (and don't want to waste a lot of time studying).

            But be in no doubt, when the machine creates a route for me it is very much doing what I tell it to do, not the other way around. I am in control at all times and will sometimes deviate from the prescribed route or choose a different destination along the way (e.g., if I've forgotten an errand I need to run that I remember and which could be conveniently achieved).

            I just don't buy this argument that the car or the GPS system is the one in control, because it's simply not the case.

            • By ajuc 2025-05-18 9:02

              You are calling the main function with arguments "New York, Chicago, Washington". Then the system runs the loop and you are just an effector managed by the system.

              Running the loop takes hours. Choosing the arguments for the main function takes seconds.

              Which means that for the vast majority of the time you delegate decision-making to the system. Which means your decision-making circuits atrophy. This is the problem people are talking about.
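              The main()-versus-loop disagreement in this thread can be made concrete with a toy sketch (all names here are hypothetical; this is not real navigation code, it only illustrates where the decision-making sits):

```python
# Hypothetical sketch of the GPS metaphor: the human supplies the
# arguments to main() once; the system then drives the loop and
# issues every subsequent instruction.

def plan_route(stops):
    # Stand-in for the route planner: one instruction per leg.
    return [f"drive toward {stop}" for stop in stops]

executed = []

def execute(step):
    # The human "effector": carries out whatever the system says.
    executed.append(step)

def main(stops):
    instructions = plan_route(stops)  # seconds of human choice
    for step in instructions:         # hours of system-driven looping
        execute(step)

main(["New York", "Chicago", "Washington"])
print(executed)
```

              Both commenters are describing the same program; the argument is over whether "control" lives in the one-off call to main() or in the loop that runs for hours afterward.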

      • By globular-toast 2025-05-17 8:55 · 1 reply

        For road navigation it might be worth seeing if your country has a proper system in place and learning how to use it. In the UK, for example, there is a simple "algorithm" to get you where you need to go. The signage is hierarchical starting from motorways and trunk routes and descending down to primary and secondary local routes. So to navigate anywhere you go via trunk routes and follow the signs to the nearest trunk destination beyond where you are trying to go. Then as you get closer you should start to see your actual destination appear on the signs as a primary route. Once you learn the system it's really quite possible to navigate by yourself anywhere.

        The nice thing is you won't end up routed down some ridiculous difficult road just because the GPS says so and it calculated it would save 0.2 seconds if you were somehow going at the speed limit the whole way. Your brain includes a common sense module, and it's usually right.

        • By robrorcroptrer 2025-05-17 9:01

          But then again you are relying on an information system to navigate.

      • By bsenftner 2025-05-17 13:32 · 1 reply

        More people need to read Marshall Brain's book "Manna"; the main character's thoughts examine and put to bed the majority of the sophomoric thinking surrounding AI and its impacts on civilization. Plus, it is one of the rare balanced views, with very positive and very negative outcomes simultaneously coexisting.

      • By boppo1 2025-05-17 9:51 · 3 replies

        >When you use any social media, you're not really choosing what you're looking at. You just scroll and the site decides what you're going to look at next.

        Not necessarily. I'm into a very particular sort of painting and I have been totalitarian with Instagram about showing me that content and not other stuff. It works splendidly as long as I'm consistent.

        Thanks to Instagram, I have been introduced to tons of painters I would not have been otherwise.

        • By bonoboTP 2025-05-17 10:12 · 2 replies

          Is it better to be introduced to tons of painters vs fewer but in more detail? Or being told about a painter by someone in person vs by an algorithm?

          In the 90s you only had certain songs if you knew someone who had it on cassette and you borrowed it and put it on your mixtape. Throughout the interaction, you also got initiated deeper into the culture of that thing in person.

          I also notice that families rarely sit together nowadays to look through vacation photos. The pictures are taken, but people don't have the time to sort and curate them. When film had a price, you took fewer pictures, but each was more intentional. And the fact that you only saw the picture once you were back home generated excitement that you could share, and let you relive candid moments. Now people upload stuff to Instagram, but it's intended for a generic audience, much unlike browsing through an album on the couch.

          • By throwaway2037 2025-05-17 11:21 · 1 reply

                > In the 90s you only had certain songs if you knew someone who had it on cassette and you borrowed it and put it on your mixtape.
            
            I knew lots of people who recorded 120 Minutes on MTV and listened to college radio.

            • By bonoboTP 2025-05-17 11:35

              I meant the niche long tail stuff, since the commenter mentioned "tons of painters I would not have been otherwise". The equivalent in music would not be on MTV.

          • By tsumnia 2025-05-17 14:07

            > Then the fact that you only saw the picture once you were back at home, generated excitement that you could share and relive candid moments

            Or you do like me and go see Interstellar 5 times in IMAX because the story was so good

        • By vidar 2025-05-17 10:34

          I applaud your consistency and effort to curate your feed, which is certainly technically possible, but I am quite sure you are the exception to the rule.

        • By throwaway2037 2025-05-17 11:19 · 1 reply

              > I'm into a very particular sort of painting
          
          Can you share some of your favourites that you follow? This sounds interesting.

      • By abathologist 2025-05-18 16:37

        I think it was indeed the use of GPS that first made me aware of how much the systems we are building are trying to automate me. From the socials to Jira to Slack to GitHub to crypto to GenAI... it is now painfully obvious that all these systems press upon us to react to stimulus in predictable ways, to execute a part of some larger movement which -- probably -- no one has actually designed or considered the purpose of.

        I think we are in the third act of a very long play:

        > If mechanical thinking and ingenious experiment produced the machine, regimentation gave it a soil to grow in: the social process worked hand in hand with the new ideology and the new technics. Long before the peoples of the Western world turned to the machine, mechanism as an element in social life had come into existence. Before inventors created engines to take the place of men, the leaders of men had drilled and regimented multitudes of human beings: they had discovered how to reduce men to machines. (https://en.wikipedia.org/wiki/Technics_and_Civilization)

      • By BeFlatXIII 2025-05-17 12:35 · 1 reply

        The big benefits I find in modern satnav have little to do with route planning. That can be done with maps and dead reckoning. Where it shines is:

        1. Having knowledge that cannot be acquired ahead of time, such as traffic conditions

        2. Providing a countdown timer until my next turn

        • By cameldrv 2025-05-18 4:17 · 1 reply

          This is the paradox. The computer knows the best route, better than you do, most of the time. But, you are not in control anymore. You are not the one making the decisions, figuring out where the turn is, and whether you want to turn there or at the next intersection. All I can say is, try leaving your phone at home for a day and get where you're going without it. You'll probably get there a few minutes later, but it will feel completely different.

          • By BeFlatXIII 2025-05-18 14:15

            I wish GPS systems had a “shut up until I get to the highway” mode and an “I've entered my neighborhood” button—the former is more important, as the latter can be solved with “end route”. I do not need their (inaccurate) instructions on how to get in and out of my house.

      • By huijzer 2025-05-17 9:40

        > When you use any social media, you're not really choosing what you're looking at. You just scroll and the site decides what you're going to look at next.

        Yeah, it’s crazy. I held a commonly shared viewpoint until last week. Then I started watching more videos from the opposite viewpoint and boom, now my whole YT feed is full of it. I wish the feed had sprinkled in some opposing views before last week. (Having said that, I appreciate individual content creators much more now, since people like Lex can decide to show both sides independent of some algorithm.)

      • By nthingtohide 2025-05-17 15:16 · 2 replies

        I think this is just how applied-science real-time feedback loops should work. Earlier we studied only planets, atoms, and bacteria; now systems are studying us and guiding us to the best outcomes.

        • By cameldrv 2025-05-18 4:13

          Yes. Clearly these systems are smart enough to know what the best outcomes are, and also they are the ones that lead to the maximum ad revenue for the site.

        • By abathologist 2025-05-17 20:26

          The surrender is largely voluntary, and is especially enabled by those who think "the systems" somehow know the "best outcomes".

      • By bonoboTP 2025-05-17 10:03

        > When you use any social media, you're not really choosing what you're looking at. You just scroll and the site decides what you're going to look at next.

        This was even more true with TV, and especially before there were a million cable channels.

        And it makes me think about the even wider time scale. A few generations ago, "the outer control loop" was also not in the individual's hand, but instead of computers, it was built on social technology. The average person didn't have much to decide about their lives. They likely lived within a few (or few dozen) km of where their ancestors did, in the part of town and a type of home fitting for their social class, likely doing the same job as their father, following a rigid life script, hitting predefined ritualized milestones. Their diet was based on whatever was available at that time of the year based on local production, cooked essentially the same way, as handed down by mothers and grandmothers. There was very little to the tune of letting their inner true self blossom through taking fun colorful decisions. They couldn't choose from some endless repository of stories. It was mostly a rotation of the local folk stories and the stories of the dominant religion.

        Just wanting to "consume" and follow a script without the weight of decision making isn't some modern "disease".

        The key difference is a new kind of fragmentation of culture (and the non-local nature of it). A long time ago, culture was also fractally fragmented, in a way where "neighboring" villages in a mountainous area would have their own dialects. Then with long-distance travel and electronic communication and media, globalization happened where distant parts of the world started to sync up and converge on some shared part of culture (of course fused with a continuation of the local one), everyone wearing T-shirts, listening to Michael Jackson and rooting for their football/soccer team. If you were dropped to some random place on the planet, you could likely converse with them about some fairly recent cultural cornerstones in entertainment and basic global news topics. But you still likely weren't "dropped" there.

        Then the internet appeared and you could suddenly talk to all those people in other parts of the world (or just other parts of your country). But search and discoverability weren't so great so there was friction. You build communities around shared interests and compatibility of personality and it required effort and participation. Usenet, forums, IRC. But these isolate you from your neighbors and local connections. And people often explicitly wanted that. Nosy neighbors and know-it-all gossipy townfolk weren't such a rosy thing, people wanted to escape that to find peers who understand and validate them and can build a shared culture with.

        In schools, subcultures already existed from the 70s and 80s onwards for sure, but they were few, like maybe 2 main camps or 3 or so, and information flow was slow therefore change was slow. Some new album of a popular band was released, then it was the thing for a long time, you didn't get an endless stream shoved in your face, you got the album and listened to it over and over. Today subcultures can't even be meaningfully counted because people follow personalized streams and come together in random configuration in streamer chats etc.

        So basically, in the old internet model, there were lots of opportunities to choose from, but it needed effort to find it and to forge belonging. Then with more commercialization, things started to consolidate on fewer platforms. It made it easier for creators to reach a wider pool of users simultaneously, and made it simpler for users to just learn to use one or a few platforms. But this made it also easier to pick and choose your "content diet", buffet style. A little from here, a little from there, with little friction. But with so much on offer, how do you choose? Discoverability was still an issue until recommendation algorithms became strong enough to know what will drive engagement. Turn that up to 11 and you get the current day where even the front page grid of options is obsolete and you get a single linear feed again, which is like watching TV and channel surfing (pressing the "next channel" button over and over), except it's personalized and never boring.

        Of course this applies to many other things as well, such as dating apps etc, which also feed you an algorithmic stream of options with the goal of maximizing profits for the company.

        I don't think individual people's rejection of the trend because it "makes me feel much better" will make a dent. In many cases the use of these things isn't mere convenience but implicitly mandatory, because other things are designed around the assumption that people use them. Schools announce things to parents in Facebook groups. There are fewer traffic reports on the radio, because people use Waze and Google Maps, which have real-time traffic info and reroute you automatically.

        ---

        But then what might happen? I think we're seeing glimpses of it in the rejection of AI in certain circles of cultural thought leaders, which might grow into a rejection of more tech. But instead of "makes me feel better", the only mechanism that will actually work is social shame, similar to what often appears nowadays when some product turns out to have used AI. If it becomes established that you're obviously a loser if you Shazam a song, or open TikTok, it could flip. Of course companies won't sit idly by. What's more likely is that the "rejection" of tech will just lead to other levels of meta-grift and engagement optimization. It may just fizzle out in a whimper of angry malaise and meta-ironic apathy.

      • By js8 2025-05-17 12:11 · 1 reply

        Another example is free market ideology. This was a question I posed to libertarians - how can you claim that free market enhances human freedom, when it always tells you what to do in the name of efficiency?

        • By norome 2025-05-17 12:46

          I don't think the claim is that it enhances human freedom, necessarily; rather, that by giving people more freedom to e.g. set prices, they will use their particular knowledge of their area of concern to set those prices correctly.

          It does coincidentally align with John Stuart Mill's reasoning for why Liberty is fundamentally necessary: that only at the level of the individual is it possible to know what is good and right for that individual.

    • By fennecbutt 2025-05-17 1:28 · 16 replies

      99% if not 100% of human thought and general output is derivative. Everything we create or do is based on something we've experienced or seen.

      Try to think of an object that doesn't exist, and isn't based on anything you've ever seen before, a completely new object with no basis in our reality. It's impossible.

      Writers made elves by adding pointy ears to a human. That's it.

      • By parodysbird 2025-05-17 6:03 · 1 reply

        To emphasize again part of the post above: "The aim of our creation is communication and mutual-transformation".

        When I write a poem in a birthday card for my wife to give her on her birthday, very little of the "meaning" that will be communicated to (and more importantly with) her is really from some generic semantic interpretation of the tokens. Instead, almost all of the meaning will come from it being an actual personal expression in a shared social context.

        If I didn't grasp that second part, I might actually think that asking ChatGPT to write the poem and then copying it in my handwriting to give to her is about the same as if the same tokens had come from genuine personal creation. Over prolonged interaction, it could lead to a shared social context in which she generally treats certain things I say little differently than if ChatGPT returned them as output. Thus the shared social context and relationship degenerate into something fairly inhuman (or "robotic", as the above post calls it).

        • By jonplackett 2025-05-17 7:25 · 2 replies

          Someone just the other day told me about how they used to have a group WhatsApp where they’d share these handmade memes. Just a bunch of guys photoshopping dumb stuff. It went on for years.

          One day one of them discovers AI and posts something made with AI - initially it’s great, it’s much better quality than what they could photoshop. Everyone jumps on board.

          But after a day or so, the joke is over. The love has gone. The whole thing falls apart and no-one posts anything anymore.

          It turns out - as you say - that the meaning - founded on the insight and EFFORT to create it - was more important than the accuracy and speed.

          • By fennecbutt 2025-05-19 1:37 · 2 replies

            Do all of you replying to me also weep for expensive hand woven fabrics? You're all wearing the fruits of automatic looms. It's the same thing.

            You own cars, do you weep for the horse industry? Stables, farriers, blacksmiths, leatherworkers/saddlers, ostlers, grooms, etc.

            What is an art now becomes a functional ubiquity later. Everything took "insight and effort" because everything was handmade, it's an innate feature. And now those things are mass manufactured and it's what the people want. Humanity will move on to calling whatever current thing we're manually doing an art.

            • By parodysbird 2025-05-20 17:34

              How is what the same thing? Fabric production vs social communication? I don't really see what is the relation here.

            • By jonplackett 2025-05-20 8:42

              This is also a pointless comparison.

              Horses weren’t art. They were a way to get somewhere.

              The whole problem with AI is it’s trying to automate the things people LIKE to do and take their life meaning from instead of the things that no one likes doing - like pulling a carriage or a plough.

          • By parodysbird 2025-05-17 7:51

            Oh yeah this is exactly how my group chats went. We still can post some good (in our context) memes and have fun, but not like an avalanche of poorly filtered slop. A joke for a group can still be crafted via an LLM when used judiciously and as intentionally as part of the bit. But by judicious it's important that the human is the one doing the sending and in the right moment, and so the human is still the one communicating.

            When WhatsApp originally inserted their AI bot in the chats, it got very annoying very quickly and we agreed to all never invoke it again. It's just a generative spam machine without the curation.

      • By Tallain 2025-05-17 2:34 · 1 reply

        This is an alarmingly reductionist statement that I cannot believe is made in good faith. If it somehow is, it's based on an abundance of ignorance that only highlights the importance of education.

        Are you genuinely arguing that LLM output is derivative, and human output is derivative, therefore they're equal? Why don't you pop that thesis into ChatGPT and see how it answers.

        • By fennecbutt 2025-05-19 1:39

          Oh, is agreement binary? Are thoughts binary?

          I didn't realise that by comparing derivative human output to derivative machine output I was inherently arguing that they are equally derivative at this point in time.

          It is true that human output has higher entropy, but that will not be the case forever.

      • By voidhorse 2025-05-17 2:47 · 2 replies

        It astonishes me sometimes how completely stupid and reductive some HN takes on arts and creativity can be. I am astounded continually at how we can produce humans who are so capable in one sphere of life and so ignorant and oblivious of others...yet all too willing to make dismissive claims about them...

        Creativity is much more than the derivative production of artifacts. What the OP is driving at is that creativity is a process of human connection and communication—you can see this most clearly in the art of interpretation. A single literary work has an almost uncountable number of possible interpretations, and a huge element of its existence in the world as a piece of art is the discussions and debates that emerge over those interpretations, and how they shape us as individuals, instill morals, etc. Quite a lot more than "making elves by adding pointy ears to humans".

        Your post stinks of the very gross consumerist mindset the OP called out. The creation and preservation of meaning is about far more than the production of fungible decontextualized objects--it's about the mediation and maintenance of human relationships through artifacts. The fact that the elves have pointy ears doesn't even begin to scratch at their actual meaning (e.g. they exist in a world with very big real problems that affect you and me too, such as race relations, and exaggerated features estrange these relations so as to make them more discernible to us and allow us to finally see the water we swim in).

        If humans stop engaging with these processes, it's reasonable to believe that a lot of that semiotic richness, which is much of what, in my opinion, makes us human and not just super smart animals in the first place, will be lost.

        • By krelian 2025-05-17 9:08 · 1 reply

          In full agreement with you on the flagrant incapability of a sizeable part of the HN crowd to understand and value the arts.

          Throughout history man has been celebrated and distinguished as the rational animal. As master of the earth, this animal in our days dedicates its brightest minds to the continual increase of economic growth. Ask the rational man what growth is good for, and after a few exchanges he will perhaps say that it ultimately improves our quality of life and even extends it. It might even allow the human race to flourish beyond earth and thus prevail long after resources on earth are depleted. But ask him then why improving the quality of life is a good thing at all. Is it just a meaningless cycle in which we improve the quality of life so that we can then improve it even further? No. Ask an individual human (in contrast to the ultra-rationalist who thinks they represent the human race as a whole) what they work for, what they strive to achieve, what quality of life ultimately means to them, and you will end up with happy times spent among family and friends. With meaningful moments listening to music, watching a film, reading a book. With time spent on creative endeavors that are totally their own. The rational animal in its hubris forgot what it thinks for and trapped itself in an endless cycle where the true meaning of being human is hidden from the sight of many.

          But I think a wake-up call is due very soon. The rational animal is about to discover that the rationality it prides itself on was merely a sample of the true possibility. From the rational animal we have been relegated to just another animal with some rational capability. As we slowly realize how futile our attempts at thinking are, we'll realize to our horror that the gift we are left with is the ability to recognize the futility and inadequacy of those attempts. Hopefully then we'll decide to retreat back into what truly makes us human, to what is ours, to what quality of life really means.

          • By jofla_net 2025-05-17 14:05

            I'm reminded of that My dinner with Andre monologue, and totally agree.

        • By fennecbutt 2025-05-19 2:12

          No, my post reeks of my thinking that machines are or will be capable of the same creativity that we attribute to ourselves.

          I never said that creative thought mostly being derivative means it's not creative, or that it's a bad thing.

          I merely made a point as we see a machine, pointing and laughing at it, that we shouldn't throw stones when we live in glass houses.

          But the reaction is certainly interesting. I wonder how many of you believe that humans are magical and that it's impossible to create a sentient machine, even given millennia, or infinite time. I believe it's possible. I don't believe that we're any more magic than a plant; there are many species with much more to their DNA than we have.

      • By bccdee 2025-05-17 4:06 · 4 replies

        No, that's not true.

        Quick, what's 51 plus 92?

        Now: Did you think back to a time someone else added these numbers together, or are you doing it yourself, right now, in your head? I'm sure it's not the first time these numbers have ever been summed, but that doesn't matter. You're doing it now, independently.

        Just because something isn't unique, doesn't make it derivative. We rediscover things every day.

        • By treebirg 2025-05-17 11:20 · 1 reply

          But I do know what numbers are. I've also done addition before, so I know what the steps are. The result of 51 + 92 is derivative from (at least) these two concepts, which derive from others and so on. Maybe I'm stretching the meaning of derivative here, but to me derivative doesn't mean strictly recalling something verbatim.

          • By bccdee 2025-05-17 14:34 · 1 reply

            I do think you're stretching the meaning of derivative. At that point, what can ever be called original? Every idea depends on pre-existing concepts. Even Newton stood on the shoulders of giants.

            • By fennecbutt 2025-05-19 1:48 · 1 reply

              That's my overarching point. That most of human thought is derivative.

              Even emotion. Sure, you're happy that you ate a tasty food, but you're happy because your body is flooding you with chemicals to reward you for it; as with so much human behavior over time, it's how it was selected.

              I'm not saying it's a bad thing. It's just incredibly interesting to me to compare the way I think we act to the way these early models are acting. That's all.

              • By bccdee 2025-05-19 17:04

                You've redefined the word "derivative" so broadly as to make it meaningless. If everything is derivative, then we can no longer draw distinctions in originality. How, then, can we discuss the fact that some things are plainly more original than others?

                This just feels evasive to me.

        • By BeFlatXIII 2025-05-17 12:43 · 1 reply

          > Just because something isn't unique, doesn't make it derivative. We rediscover things every day.

          This is the argument I use to dunk on ranters who spam conversations with “How can you say Christopher Columbus discovered the new world when there were already people living there?”

          • By bccdee 2025-05-17 14:44 · 1 reply

            In fairness, Columbus thought he had found India, even after other, smarter people had told him otherwise. You can't give him too much credit, especially given that he was considered a monster even by contemporary monsters like Isabella I of Spain, his sponsor, who founded the Spanish Inquisition and still thought his treatment of the Taino natives was unconscionable. She wanted him to convert them to Christianity, and instead he exterminated them.

            • By BeFlatXIII 2025-05-18 14:13 · 1 reply

              Those are all good arguments against praising Columbus. His insistence that he had found India, especially, is a strong argument against saying he discovered America.

              The specific argument “but it's not a discovery because it was already inhabited” is that of a particularly literal-minded child applying their teacher’s prohibitions on plagiarism to the real world.

              • By bccdee 2025-05-19 17:05

                It just feels like an odd gotcha to me. You're not wrong, but the literal meaning of "discovery" was never what those conversations were about.

        • By fennecbutt 2025-05-19 1:44

          That is the nature of current models. Don't pretend that it will always be this way.

          Your argument is moot anyway: how exactly does adding numbers together prove that such a thing isn't derivative?

          I add numbers together because I was taught to in school, because my teachers were taught to, because humans learnt to a very long time ago, because they observed collections of things in different numbers. Perhaps the first human to count enjoyed an original thought. Or any who did so independently; communication is the origin of derivation. And we humans are alllll about communicating.

          Adding numbers is not an original thought, not in this context.

        • By griffzhowl 2025-05-17 16:27 · 1 reply

          Pointing out that we can mechanically apply an algorithm on novel inputs is possibly the worst defence of human creativity I can think of in this context.

          • By bccdee 2025-05-18 21:39 · 1 reply

            Good thing I'm not nebulously defending creativity, then.

            I'm directly replying to the notion that something is derivative if it is "based on something we've experienced or seen." The fact that rote math calculations are generally less original than creative expressions makes my argument stronger, not weaker.

            • By griffzhowl 2025-05-18 22:00

              Right, but the general context of the thread is a discussion of how human thought, or creative activity more generally, contrasts with (current) machine capabilities. That's the sense in which I thought that adding two numbers is the worst possible example. It's notably the paradigmatic thing that machines are better at than any human.

      • By Nevermark 2025-05-17 3:18 · 1 reply

        Go with 99.9%. But not 100%.

        Someone imagined space and time could be a deformed fabric. That was new.

        In minor and major ways, new ideas are found or emerge from searches for solutions to problems from science to art. Or exploration of things in new combinations or from a previously untapped viewpoint.

        Most people are not looking hard for anything beyond what they know, so they are not likely to find anything new.

        But many people try new things, or try to improve or vary something in a direction that is not easy, and learning something nonobvious and new is the “price” they must pay to succeed. Or a bonus they are paid for pushing through a thicket, even if they don’t succeed at what they set out to do.

        • By pwndByDeath 2025-05-17 7:04 · 1 reply

          Not really new; it came from observations, or imagination of observed things.

          • By Nevermark 2025-05-17 16:15

            General patterns don’t actually exist. We create a pattern from multiple observations.

            What is the point in saying there is nothing new? So the word “new” is like “utopia”, with only a mythical, aspirational meaning?

            History has never stopped proving how inane such an utterly unsupported opinion is.

            Just keep up with quantum physics. It is constantly finding new, strange, counterintuitive effects that require intense research and explorations of combinations of ideas and imagination to envision. And then the challenging invention of new, highly controlled artificial conditions, not found in nature, to create them.

            Mathematics as a profession is all about discovery of new things (from exploration) and invention (from imaginative questions and quests).

            Art produces new things all the time, but you have to have familiarity with art to see what is new. If all you see are materials and color, you are not literate enough to read it. Any more than a unique story is “just words”.

            For that matter, in slower timescales, nature has never stopped creating new creatures with new attributes or behaviors.

            Perhaps you have set your own sights much too low. Try doing something nobody has been able to do before, and stick with it. You will discover something if you really do stick with it. Even understanding the barriers better than before reveals things, and forces new concepts into being.

      • By MadcapJake 2025-05-17 2:34

        > Try to think of an object that doesn't exist, and isn't based on anything you've ever seen before, a completely new object with no basis in our reality. It's impossible.

        This is an outrageous thought experiment. Novelty is creating new connections or perceiving things in new ways; you can't just say "try to have a eureka moment. See! Impossible." You can't prompt-engineer your own brain.

        In fact, there's some research about eureka moments rewiring our brain. https://neurosciencenews.com/insight-memory-neuroscience-289...

      • By bowsamic 2025-05-17 5:50

        You have fallen into the very trap he is criticising: you are entirely focussed on the product and how it differs from other ones, and have no sense of your individual journey of thinking being relevant.

      • By mr_toad 2025-05-17 1:50 · 3 replies

        > Writers made elves by adding pointy ears to a human.

        Now that’s reductionist to the point of being diminutive.

        • By milliams 2025-05-17 11:54

          Elves are wonderful. They provoke wonder. Elves are marvellous. They cause marvels. Elves are fantastic. They create fantasies. Elves are glamorous. They project glamour. Elves are enchanting. They weave enchantment. Elves are terrific. They beget terror. The thing about words is that meanings can twist just like a snake, and if you want to find snakes look for them behind words that have changed their meaning. No one ever said elves are nice. Elves are bad.

          ― Terry Pratchett, Lords and Ladies

        • By BobbyTables2 2025-05-17 2:42

          Hey, no need to get short!

          We should try to be the bigger person.

          That’s really the long and short of it.

        • By TechDebtDevin 2025-05-17 2:29

          It's not; that's why the term humanoid exists.

      • By musicale 2025-05-17 1:34

        > Writers made elves by adding pointy ears to a human. That's it.

        Humans have been interested in supernatural beings for thousands of years. Their appearance is usually less important than their powers and abilities.

        The word is present in Old English and Old Norse, and elves appear in Norse mythology.

      • By DavidPiper 2025-05-17 1:57 · 2 replies

        That is a nonsense definition of creativity. The parent also wasn't suggesting - as far as I can read - that creativity is defined solely in the realm of the "truly novel" (or "isn't based on anything you've ever seen before").

        All creativity is a conversation between our own ideas and what already exists.

        Consider the unused soundtrack to James Cameron's Avatar [0][1], where ethnomusicologists set out to create a kind of music that had never been heard before.

        They succeeded. But it was ultimately scrapped for the film because - by virtue of it being so different to any music anyone has ever heard before - it was not remotely accessible to audiences and the movie suffered as a result.

        To argue that work is not creative because it is still based on "music" is absurd.

        [0] https://www.youtube.com/watch?v=tL5sX8VmvB8

        [1] https://ethnomusicologyreview.ucla.edu/journal/volume/17/pie...

        • By myko 2025-05-17 2:21

          Incredibly interesting, thanks for sharing

        • By fennecbutt 2025-05-19 1:55

          Where was I arguing that human output is not creative at all?

          Human creativity is for the most part derivative in some way, that is the nature of it and my point. I was contrasting it to illustrate why I think the idea that machines cannot be creative is wrong. Especially given that current models will only get more complex and nuanced in the future.

          I never said, at any point, that humans cannot be creative because creativity is derivative. We are engaging in another human pastime: tribalism.

      • By jen729w 2025-05-17 3:06 · 1 reply

        > Try to think of an object that doesn't exist, and isn't based on anything you've ever seen before, a completely new object with no basis in our reality. It's impossible.

        Pick up an Iain M. Banks book, my friend.

        • By fennecbutt 2025-05-19 2:00

          I have read plenty of his work. If you like his stuff I would recommend The Quantum Thief trilogy, and the Void trilogy. I find many people haven't read them, and they're great sci-fi.

      • By jonplackett 2025-05-17 7:21

        I think you misunderstand the point. It’s about intention. Are you creating this thing for the purpose of transforming or communicating? Or are you just making it for some businessy reason?

        Yes, elves are derivative, as was a lot of the Tolkien world in a way - being intentionally based on WWI - but its intention was to create something beautiful and amazing and communicative and transformational.

      • By vaylian 2025-05-17 8:36

        > Try to think of an object that doesn't exist, and isn't based on anything you've ever seen before, a completely new object with no basis in our reality. It's impossible.

        That's easy. The hard part is to explain it to other people, because we lack a shared background and terminology to explain it.

      • By alganet 2025-05-17 1:49 · 1 reply

        I'm not so sure about it.

        Maybe it's like that because there aren't many novel opportunities for varied experiences nowadays.

        The pointy ear sounds trivial in our experience, but it is radically different than ordinary everyday thought when observed as a piece of a whole imagined new world.

        Of course, pointy ears are not a novelty anymore. But that's beside the point. By the time they were conceived, human experience was already homogenized.

        The idea space for what an object is has been depleted by exploration. People have already tried everything. It's kinda the same thing as saying that it is impossible to come up with a new platonic solid (also an idea space that has been exhausted).

        Any novel thought is bound to be nameless at first, and it becomes novel by trying to use derivation to define an unknown observation, not as a basis for it.

        • By NobodytheHobbit 2025-05-17 8:26

          You're trying to expand the human experience instead of individual human experience, which is really yours from your perspective and mine from mine, if I can be redundant by enumerating. The frustration comes from the sacrifice of individual experience to this weird aggregated experience in the machine. It will push the capability of technology, but does that serve the aim of luxury made easy for the many to acquire, as tech is supposed to do? What does it profit a person to gain the whole world but lose the very thing that makes them them? It feels systemically dehumanizing.

      • By musicale 2025-05-17 1:57 · 1 reply

        > Everything we create or do is based on something we've experienced or seen.

        I would add a couple of things to that. First, humans (like other animals) have instincts and feelings; even newborns can exhibit varying personality traits as well as fears and desires. It's certainly useful to fear things like spiders, snakes, or abandonment without prior experience.

        Second, an important part of experience is inner life - how you personally perceive, feel, and experience things. This may be very different from person to person.

        • By Andrex 2025-05-17 2:26 · 3 replies

          What really fascinates me is gender-based toy preferences at <2 years old. It's very consistent that boys like race cars and action figures, even though it's their first exposure.

          (I do not participate in culture wars, this fact just straight up fascinates me as a non-masculine gay guy.)

          • By socalgal2 2025-05-17 5:19 · 1 reply

            I'd be curious how we know they aren't exposed - 1 year is a long time to see TV shows, TV commercials, toys with pictures of target audience, picture books, etc...

            • By gitremote 2025-05-17 13:21 · 2 replies

              Cars were invented in the early 1900s and the vast majority of human existence was in a world without cars. There cannot be an innate preference for cars, which were a very recent invention.

              • By musicale 2025-05-18 23:03 · 1 reply

                Toy cars (tiny wheeled carts) date back at least 5000 years, but tools/technology date to the dawn of humanity.

                • By gitremote 2025-05-19 0:04

                  The spinning wheel (https://en.m.wikipedia.org/wiki/Spinning_wheel), which creates yarn from threads, has existed far longer than cars. I doubt you would argue that boys would have a preference for playing with toy versions of these wheeled tools, given that women in various Western and Eastern cultures were the main users of them.

              • By socalgal2 2025-05-19 2:10

                So you're agreeing with me that any preference seen is probably from exposure, and not innate like the studies claim?

          • By fennecbutt 2025-05-19 2:06

            I am also gay, but not overly feminine at all. Just a gentle nerd lmao.

            And I think there's the possibility that those signals are there early on in our brains. If not race cars, then it was swords and bows that were lying around and were appealing. We are products of evolution.

            But as to OP's insistence that behaviors innate to our brains are not derivative: I think that in order to qualify, a behaviour must be conscious. And even so, innate behaviours are derivative in that they came about in direct response to our environs. Were they not, those traits would not have been selected as a response to environmental pressure.

          • By musicale 2025-05-18 22:53

            > action figures

            "No sir! I did not see you playing with your dolls again."

    • By emporas 2025-05-17 2:43 · 4 replies

      It is knowledge that gets automated, rather than reasoning.

      I was thinking of the first solar civilization, which lives totally in space. Near a star, but not on a planet, and with no gravitational pull anywhere. They build tubes 10 km long; a dartboard is put at one end, and the players at the other. They shoot darts at the board, and each shot takes 5 hours to reach the target. That's their national sport.

      Problem is, I have never played darts and I don't know anyone who plays it, so I will ask the LLM to fill in the blanks of how a story based on that game could be constructed. Then I will add my own story on top of that; I will fix anything that doesn't fit in, add some stuff, remove some other stuff, and so on.

      For me it saves time: instead of asking people about something, hearing them talk about it, or watching them do it, I do data mining on words. Maybe more shallow than experiencing it myself or asking people who know about it first hand, but the time it takes to get information that is good enough collapses down to 5 minutes.

      Depending on how you use it, it can enhance human capabilities or, indeed, mute them.

      • By jen729w 2025-05-17 3:06 · 2 replies

        Oh, turns out ChatGPT generates exactly the level of banality that one would expect.

        https://chatgpt.com/canvas/shared/6827fcdd3ec88191ab6a2f3297...

        I don't want to read this story. I probably want to read one that a human author laboured over.

        • By visarga 2025-05-17 5:13 · 3 replies

          It would be a mistake to take the banality of current LLM outputs and extrapolate that into the future. Of course they are going to get better. But that is not the point - it is that in the chat room the human and LLM spark ideas off each other. Humans come with their own unique life experience and large context, LLMs come with their broad knowledge and skills.

          • By aorloff 2025-05-17 5:54

            There is a Borges short story written in the 1930s about "the Library", a supposed collection of all possible permutations of language, even misspellings and gibberish. In many ways, it is extremely prescient of AI.

            To cut it short: in the end what Borges proposed is that the meaning comes from the stories, and that all the stories are really repetitions and permutations of the same set of human stories (the Order), and that is what makes meaning.

            So all a successful literary AI needs to do is figure out how to retell the same stories we have been telling but in a different context that is resonant today.

            Simple, right?

          • By bccdee 2025-05-17 14:57 · 2 replies

            > It would be a mistake to take the banality of current LLM outputs and extrapolate that into the future.

            Imagine a chef, congenitally unable to taste or smell food, who has nevertheless studied a million recipes. Can they reproduce existing recipes? Sure, if they follow the instructions perfectly. Can they improvise original recipes? I doubt it. Judging by the instructions alone, the recipes they invent may be indistinguishable from real recipes, but this chef can never actually try their food to see if it tastes good. The only safe flavour combinations are the ones they reuse. This is a chef who cannot create.

            LLMs are structurally banal. The only plausible route to a machine which can competently produce original art requires the development of a machine which can accurately model humans' aesthetic sensibilities—something which humans themselves cannot do and have no need for, since we already have those aesthetic sensibilities built in.

            This is the fundamental error of using an LLM as a ghostwriter. Humans don't only bring inspiration to the table—they also bring the aesthetic judgement which shapes the final product. Sentences written by an LLM are banal sentences, no matter how you prompt it.

            • By imperfect_blue 2025-05-17 22:33 · 1 reply

              As an amateur home cook, I find current LLMs incredibly useful as a sounding board for on-the-fly recipe modifications - for allergies and food sensitivities, adapting preparation methods to available equipment, or substituting produce not available in season. It may not be able to taste the final product, but its reasoning on what's likely to work (and what isn't) has not led me astray so far.

              • By bccdee 2025-05-19 16:56

                Sure, but allergies, substitutions, and such are very different from artistic creativity.

            • By emporas 2025-05-17 17:56 · 2 replies

              Head over to groq.com, use the qwen-qwq-32b model, and take these examples [1] and put them at the start, before the prompt. After that, use the following prompt:

              write chapter 1 for a new Novel in Progress, take inspiration from the example Novel but DO NOT Repeat Example. Add vivid imagery, in a dark comedy style. dial up the humor and irony and use first person narration. Fracture sentences and emphasize the unusual: use unusual word orders, such as placing adjectives after nouns or using nouns as verbs, use linguistic voice pyrotechnics, telegraphically leaned and verbal agility in plot building intention, reflection, dialog, action, and describe solar civilization, which lives totally in space. Near a star, but not in a planet, and no gravitational pull anywhere.

              [1] https://gist.github.com/pramatias/953f6e3420f46f31410e8dd3c8...

              • By bccdee 2025-05-18 21:32

                These are illegible.

              • By techno_tsar 2025-05-17 18:09 · 1 reply

                This is unreadable slop.

                • By emporas 2025-05-17 18:45

                  Depending on the story, the examples have to be adjusted. But of course, logical reasoning from humans cannot be replicated just like that by the machines.

                  The real question is this: suppose a person of the last 100 years was great at reasoning, but with zero knowledge. That person might not have attended any school and be almost illiterate, but his reasoning is top notch. I don't know if you are familiar with Sultan Khan [1], for example.

                  With no formal training to absorb a lot of knowledge, that person is totally crushed economically. There is no chance of being competitive at anything, not involving muscles anyway. Now suppose that this person can complement his lack of knowledge with a magical knowledge machine. Suddenly he is ahead of the competition, even people with 10 PhDs, or doctors with 30 years of experience.

                  [1] https://en.wikipedia.org/wiki/Sultan_Khan_(chess_player)

          • By parodysbird 2025-05-17 6:45

            This is basically a contemporary reframing of the core purpose of Renaissance magic. I suppose aspiring to be a 21st century John Dee from talking to some powerful chatbot of the future, rather than angels or elemental beings, does sound a bit exciting, but it is ultimately mysticism all the same.

        • By WhyIsItAlwaysHN 2025-05-17 6:44 · 1 reply

          o3's story is not amazing, but it sure is orders of magnitude more interesting than your example:

          https://chatgpt.com/share/68282eb2-e53c-8000-853f-9a03eee128...

          I don't think it's possible to generate an acceptable story without reasoning.

          That is not to say that I disagree with you. I would prefer to read human authors even if the AI was great at writing stories, because there's something alluring about getting a glimpse into a world that somebody else created in their head.

          • By randcraw 2025-05-17 17:07

            > I don't think it's possible to generate an acceptable story without reasoning.

            If I look back at any article, book, movie, or conversation that I liked, it always had this essential ingredient: it had to make sense, AND it had to introduce some novel fact (or idea) that led to implications that were entertaining somehow (intriguing, revelatory, amusing, etc).

            Would this be possible without the author having some idea of how reasoning works? Or of what facts are novel or could lead to surprise of some kind? No is the obvious answer to both. Until I see clear evidence that LLMs have mastered both logic and the concept of what knowledge is and is not intriguing to a human, I foresee little creative output from any LLM that will 'move the needle' creatively.

            Until then, LLM-generated fare will remain the uninspired factory produce of infinite monkeys and typewriters...

      • By campers 2025-05-17 6:46

        There is a huge focus on training LLMs to reason; that ability will slowly (or not that slowly, depending on your timeframe!) but surely improve in the AI models, given the gargantuan amount of money and talent being thrown at the problem. To what level, we'll have to wait and see.

      • By 8note 2025-05-17 3:53

        Hmm.

        I've been thinking that the knowledge isn't written down, so it can't be automated, which also makes knowledge sharing hard, but the reasoning is automated.

        So I've been trying to figure out patterns by which the knowledge does get written down, and so can be reasoned about.

      • By jrvarela56 2025-05-17 5:11

        My initial hunch, and many answers on this site, say ‘it’s boring, I wouldn’t read that’.

        There’s something to that: a good author synthesizes experiences into sentences/paragraphs, making the reader feel things via text.

        I have a feeling LLMs can’t do that because they are trained on all the crap that’s been written, and it’s hard to fake being genuine.

        But I agree you can generate any amount of filler/crap. It is useful, but what I got from GP was ‘ultimately, what’s the point of that?’. Hopefully these tools help us wake up to what is important.

    • By whyage 2025-05-17 7:34 · 2 replies

      > The aim of our creation is communication and mutual-transformation

      That's a myopic point of view. Personal transformation is as significant, if not more. Production-oriented pastimes like painting, gardening, or organizing your stamp collection can do wonders for the mind. Their goals can be remaining sane in this crazy world, not producing the best painting ever, growing conversation-starting plants, or showing off your stamp collection. It's about doing for the sake of being.

      • By techno_tsar 2025-05-17 18:07

        I was thinking about how the point of reading isn't to literally 'internalize' what you've read. When you're engrossed in a piece of literature, you don't remember the specifics of the last page you read. What's more important is what the book is doing to your mind as you read -- it triggers a set of processes that force you to imagine and therefore form connections that you haven't made before, even if subtle and unrelated to the content of the book.

        It's the same with writing. Writing isn't just a way to produce a good piece of writing, it's what the process of writing does to your brain as you think aloud, connecting words and sentences together. The same with painting, gardening, and organizing your stamp collection. The final 'product' isn't actually important. The significance lies in the process of immersion from the creator and the people who witness it.

      • By abathologist 2025-05-17 20:29

        > Personal transformation is as significant, if not more

        I would include personal transformation. I think it should be clear that my point is not against "production-oriented pastimes" in any way.

        > It's about doing for the sake of being.

        Yea, this is totally aligned with my view too. I'd just note that being is not separable from becoming and changing.

        I'm afraid you either didn't read me well, or I didn't write my intended sense well, or both. But I think we are pretty much in agreement in any case, assuming I understand what you've written here :D

    • By bowsamic 2025-05-17 5:47

      > I don't mean that their jobs will be automated: I mean that they will cede sapience and resign to becoming robotic.

      Exactly, there’s a huge section of humanity that actively wants to give away its humanity. They want to reduce themselves to nothing. Because, as you say, they cannot understand anything as having value other than economic artefacts

    • By Aerbil313 2025-05-17 22:50 · 1 reply

        ...The industrial-technological system may survive or it may break down. If it survives, it may eventually achieve a low level of physical and psychological suffering, but only after passing through a long and very painful period of adjustment and only at the cost of permanently reducing human beings and many other living organisms to engineered products and mere cogs in the social machine. Furthermore, if the system survives, the consequences will be inevitable: There is no way of reforming or modifying the system so as to prevent it from depriving people of dignity and autonomy.
      
      - Industrial Society and Its Future, Ted Kaczynski (1995)

      • By abathologist 2025-05-19 3:07

        As a matter of fact, I don't think it's true that

        > if the system survives, the consequences will be inevitable: There is no way of reforming or modifying the system so as to prevent it from depriving people of dignity and autonomy.

        But I also think that this claim is (1) practically impossible to prove and (2) a claim we morally ought to attempt to disprove.

        • By Aerbil313 2025-05-20 22:28

          I highly recommend reading the full manifesto. It lays out the game-theoretic reason why human civilization will increasingly automate until human autonomy is reduced to nil, which is the state of suffering he is talking about. If you don't automate, $ENEMY will. So you automate. If you still choose not to automate, $ENEMY wins and controls you, and they will automate ever more with your resources. In fact, these three sentences are a good summary of the past 1000 years of history, possibly more. See the recent AI 2027 paper (https://ai-2027.com/) for a more detailed approach. And Kaczynski talked about this stuff in the 70s.

          As for (2), I want to ask: where did you get the idea that people have a moral responsibility to unconditionally defend the progress of technology?

          Do you think new technologies' good or bad impact depends entirely on the virtue of the people using them? Do you think new tools dictate their own usage on society when they make contact with constants like the immediate-reward-seeking human nervous system?

    • By zaphirplane 2025-05-18 23:19

      There are a lot simpler ways of expressing what you're trying to say. I played around with feeding the text to an LLM to simplify it, then converting the simple version back into denser, more complex wording.

      What’s the point of my post, you ask? I like stories. - Homer Simpson

      • By abathologist 2025-05-20 15:05

        How do you know that generated text is actually a compression of what the author is trying to say?

  • By don_neufeld 2025-05-16 19:29

    Completely agree.

    From all of my observations, the impact of LLMs on human thought quality appears largely corrosive.

    I’m very glad my kid’s school has hardcore banned them. In some classes they only allow students to turn in work that was done in class, under the direct observation of the teacher. There has also been a significant shift toward “on paper” work vs. work done on a computer.

    Lest you wonder “what does this guy know anyways?”, I’ll share that I grew up in a household where both parents were professors of education.

    The effectiveness of different methods of learning (my dad literally taught Science Methods) was a frequent topic. Active learning (creating things using what you’re learning about) is so much more effective than passive, reception-oriented methods. I think LLMs largely support the latter.

    • By zdragnar 2025-05-16 20:12

      Anyone who has learned a second language can tell you that you aren't proficient just by memorizing vocabulary and grammar. Having a conversation and forming sentences on the fly just feels different, either as a different skill or as using a different part of the brain.

      I also don't think the nature of LLMs being a negative crutch is new knowledge per se; when I was in school, calculus class required a graphing calculator but the higher end models (TI-92 etc) that had symbolic equation solvers were also banned, for exactly the same reason. Having something that can give an answer for you fundamentally undermines the value of the exercise in the first place, and cripples your growth while you use it.

      • By JackFr 2025-05-16 23:04

        Well I can extract a square root by hand. We all had to learn it and got tested on it.

        No one today learns that anymore. The vast, vast majority have no idea how, and I don’t think people are dumber because of it.

        That is to say, I think it’s not cut-and-dried. I agree you need to learn something, but sometimes it’s okay to use a tool.

        • By zdragnar 2025-05-17 0:31

          Extracting a square root by hand is rather different in scope from reducing and simplifying equations in general. The TI-92 could basically do all of your coursework for you up to college level, if memory serves.

          The real question isn't "is it okay to use a tool" but "how does using a tool affect what you learn".

          In the cases of both LLMs and symbolic solving calculators, I believe the answer is "highly detrimental".

        • By mistercow 2025-05-17 11:52

          > No one to day learns that anymore. The vast, vast majority have no idea and I don’t think people are dumber because of it.

          Arguably, the kind of person who was helped by learning to do that by hand still learns to do it by hand, but because of curiosity rather than because a teacher told them to.

          I remember being thirteen and trying to brute force methods for computing the square root. I didn’t have the tools yet to figure out how to do it in any systematic way, and the internet wasn’t at a point yet where it would have even occurred to me to just search online. Wikipedia wouldn’t exist for another two years.

          I probably finally looked it up at some point in high school. I’m not sure exactly when, but I remember spending a lot of time practicing doing a few iterations in my head as a parlor trick (not that I ever had the opportunity to show it off).

          If I were thirteen and curious about that now, I’d probably just ask ChatGPT. Then I’d have a whole follow up conversation about how it was derived. It would spit a lot of intimidating LaTeX at me, but unlike with Wikipedia, I’d be able to ask it to explain what those things meant.

          This is the thing I don’t get when people talk about LLMs’ impact on education. Everybody focuses on cheating, like learning is inherently a chore that all students hate and must be carefully herded into doing despite themselves.

          But that’s a problem with school, not learning. If your actual, self-motivated goal is to learn something, LLM’s are an incredible tool, not a hindrance.

          • By gofreddygo 2025-05-17 15:19

            You put my thoughts into words I couldn't.

            Any school's #1 job is to motivate learning. Schools clearly suck at this.

            LLMs are a fascinatingly effective learning tool. Early learning would be better off embracing them as such.

            I imagine a combination of a video of a good tutor explaining a concept, followed up with an LLM to quiz the student and explain the concept, would be far better than what we have today.

            • By mistercow 2025-05-17 15:34

              I think we need to find a way to teach without grades. That is, we do know how to teach without grades at younger levels, and many schools do it successfully.

              The problem is that eventually you need to measure for placement, and Goodhart’s Law kicks in and destroys the enjoyment of learning. It’s very hard to be intrinsically motivated when the external pressure dominates.

              The fact that “study for the test” is basically synonymous with “study” for most people is indicative that we’re doing something extremely wrong in education.

              • By Grosvenor 2025-05-17 20:16

                > The problem is that eventually you need to measure for placement

                Why do you need to be "placed"? I know the answer - resources.

                But why can't each individual have sort of their own bespoke governess tutoring them all through school? Pay Gilbert Strang a million dollars to do a 32 week course in LinAlg. Yes I know a normal course is shorter, expand it to cover everything in exquisite detail, so even I can keep up. Alan Kay, Donald Knuth, and John Carmack can teach computer science. Continue on for each subject.

                Then let students go as fast as they want. I'd have finished some subjects of HS by the time I was in 9th grade, and others I might still be trying. ¯\_(ツ)_/¯

                Where you'd still need to be "placed" is social development. Which I think should actually be a subject.

        • By smcleod 2025-05-17 0:31

          I very much agree with your sentiment here.

          I tried to encapsulate that to some degree (perhaps poorly?) in something I wrote recently: https://smcleod.net/2025/03/the-democratisation-paradox-what...

        • By mattigames 2025-05-17 5:20

          Yes, they are dumber because of it; not in a clinical sense, but in a more nuanced way. Among other things, they miss the mental work you put into trying to find a simpler way than the one the professor is teaching you, and the understanding of numbers such attempts can give you, even when they are unsuccessful.

        • By Mikhail_Edoshin 2025-05-17 3:09

          Using a tool like that is the opposite of mastering the skill. There's no royal road to mastery and never will be. One does not have to master all skills, of course, and may do well not mastering any (or mastering dark ones).

        • By BobbyTables2 2025-05-17 2:48

          The manual methods are also the foundation for higher approaches involving approximation and iterative solutions. These are widely used in engineering and science.

          Pressing a calculator key doesn’t give the same insight.

          • By SoftTalker 2025-05-17 17:08

            Memorization and manual methods also help develop intuition about what is a plausible answer and what is not. It helps build a brain that is able to sanity-check what it is being told.

        • By drdeca 2025-05-17 2:14

          Huh? While I essentially never need to compute a square root by hand (unless it is a perfect square, of course), shouldn’t one know how one would?

          • By johnmaguire 2025-05-17 4:13

            Why should one? Perhaps they should if it's relevant to their work, daily routine, or interests. But if they have no need for it?

            • By mattigames 2025-05-17 5:26

              Needs are all fabricated. Ludwig Wittgenstein said "the limits of my language are the limits of my world"; the same applies to logical thinking and all its tools, including math.

            • By drdeca 2025-05-17 21:59

              Well, I think one should be able to come up with a way of doing so on the fly just from knowing that the square root function is monotonically increasing, and knowing binary search? Of course, doing it another way might be more efficient.
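              (A minimal sketch of that on-the-fly approach, with illustrative function name and tolerance: since x * x is monotonically increasing for non-negative x, binary search over [0, max(a, 1)] homes in on the root.)

```python
# Square root via binary search, using only the monotonicity of x -> x*x.
def bisect_sqrt(a: float, tol: float = 1e-9) -> float:
    lo, hi = 0.0, max(a, 1.0)  # sqrt(a) always lies in this interval for a >= 0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if mid * mid < a:
            lo = mid  # root is above mid
        else:
            hi = mid  # root is at or below mid
    return (lo + hi) / 2
```

As the comment notes, other methods converge faster, but this one needs nothing beyond monotonicity and halving an interval.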

      • By skydhash 2025-05-16 21:44

        Same with drawing, which is easy to teach but hard to master because of the coordination between eye and hand. You can trace a photograph, but that bypasses the whole point, and you don’t exercise any of the knowledge.

      • By socalgal2 2025-05-17 5:23

        I am waiting for the day (maybe it's already here) when I can talk to an LLM to practice my 2nd language. It can correct everything I say, it can talk forever, it can challenge me to use new grammar or vocabulary. Note: I can speak all day in my 2nd language with friends, but I wouldn't give a business presentation, nor could I explain, as a native would, how something technical works. If I watch a TV show I might understand 30%-99%, but the more lawyer/military/government/science content there is, the more it's beyond my current level.

        Getting exposure there is hard. Talking to friends just means more practice with what I already know but an LLM could help me practice things outside that area.

        • By edanm 2025-05-17 6:07

          For many languages, this is already something you can do.

      • By makeitdouble 2025-05-17 3:50

        > the higher end models (TI-92 etc) that had symbolic equation solvers were also banned

        I'm surprised it was a problem in the first place. Don't equation-solving exercises require you to show your intermediate steps, so that you can't just put "x=5" as a one-liner answer?

        • By max51 2025-05-19 5:59

          We used the TI-Nspire CX CAS at my university, but it was banned in the first half of the calculus 1 course. It would really make no sense to allow you to bring one to an exam where many questions are one-liners, or close to it (e.g. "find the derivative of f(x)=7x^5+10"). IIRC, they also banned it in a calc 2 exam. Almost all other courses allowed them, and the exams were designed for it (e.g. 8 big problems instead of 2).

        • By nbernard 2025-05-17 9:55

          I don't remember if it was the case for the TI-92+, but some calculators can show the intermediate steps, or at least some of them.

      • By fennecbutt 2025-05-17 1:31

        Feels different, comes naturally, without conscious thought, just like we don't focus on beating our hearts.

        And I agree about learning by practicing a skill being best. But you and I both know the school system has run on rote memorisation for hundreds of years at least, and still does.

      • By flysand7 2025-05-16 22:43

        Another case in point: memorizing vocabulary and grammar, although it could seem like an efficient way to learn a language, is incredibly unrewarding. I've been learning Japanese from scratch, using only real speech to absorb new words, without using dictionaries or much of anything else. The first feeling of reward came immediately, when I learned that "arigatou" means thanks (although I terribly misheard how the word sounded, but hey, at least I heard it). Then after 6 months, when I could catch and understand some simple phrases. After 6-7 years I can understand about 80% of any given speech, which is still far from fluent, but I gotta say it was a good experience.

        With LLMs giving you ready-made answers, I feel like it's the same. It's not as rewarding because you haven't obtained the answer yourself. Although it did feel rewarding when I was interrogating an LLM about how CSRF works: it said I had asked a great question when I asked whether CSRF only applies to forms, since fetch seems to have a different kind of browser protection.

        • By BlueTemplar 2025-05-17 11:13

          If you used subtitles over audio, then why would you avoid dictionaries too? Purely for the reward of treating it as a puzzle? (Since you would have to figure out which word corresponds to which concept in a phrase.)

        • By layer8 2025-05-17 0:26

          How many hours would you estimate you watched (I assume it was video, not just audio) in those years? What kind of material? Just curious.

          • By flysand7 2025-05-17 12:07

            Mostly anime. Surprisingly, not that much; I think somewhere in the ballpark of 100 titles. In the beginning I was also watching some grammar tutorials on YouTube to get started with grammar quicker (otherwise convergence would have been too slow).

            Contrary to what I said, I actually did use dictionaries, but the point I was trying to make is that rather than memorizing phrases in advance, I used them to translate something I thought I heard.

      • By genewitch 2025-05-17 3:59

        Yeah and I'm of the age when teachers in all grades would say "you're not going to carry around a calculator your whole adult life"

        Hilarious miscalculation.

    • By avaika 2025-05-16 22:24

      This reminds me how, back in my school days, I was not allowed to use the internet to prepare research on some random topic (e.g. a history essay). It was the late 90s, when the internet was starting to spread. Anyway, teachers forced us to use offline libraries only.

      Later, at university, I studied engineering, and we were forced to prepare all technical drawings manually in the first year of study, literally with pencil and ruler, even though computer graphics were widely used and were the de facto standard.

      Personally, I don't believe a hardcore ban will help with any sort of thing. It won't stop progress either. It's much better to help people learn how to use things than to force them to deal with "old school" stuff only.

      • By don_neufeld 2025-05-16 23:40

        I was expecting some response like this, because schools have “banned” things in the past.

        While this is superficially similar, I believe we are talking about substantially different things.

        Learning (the goal) is a process. In the case of an assignment, the resulting answer / work product, while it is what is requested, is critically not the goal. However, it is what is evaluated, so many confuse it with the goal (“I want to get a good grade”)

        Anything which bypasses the process makes the goal (learning) less likely to be achieved.

        So, I think it is fine to use a calculator to accelerate your use of operations you have already learned and understand.

        However, I don’t think you should give 3rd graders calculators that just give them the answer to a multiplication or division when they are learning how those things work in the first place.

        Similarly, I think it’s fine to do research using the internet to read sources you use to create your own work.

        Meanwhile, I don’t think it’s fine to do research using the internet to find a site where you can buy a paper you can submit as your own work.

        Right now, LLMs can be used to bypass a great deal of process, which is why I support them not being used.

        It’s possible, maybe even likely that we’ll end up with a “supervised learning by AI” approach where the assignment is replaced by “proof of process”, a record of how the student explored the topic interactively. I could see that working if done right.

        • By pca006132 2025-05-17 9:51

          Yeah, I remember someone saying you wouldn't use a forklift in a gym. I think this is the same idea.

          The problem is really about how to evaluate performance or incentivize students to actually work on their exercise.

    • By hammock 2025-05-16 23:39

      > I’m very glad my kid’s school has hardcore banned them.

      What does that mean, I’m curious?

      The schools and university I grew up in had a “single-sanction honor code” which meant if you were caught lying or cheating even once you would be expelled. And you signed the honor code at the top of every test.

      My more progressive friends at other schools that didn’t have an honor code happily poo-pooed it as a repugnantly harsh, old-fashioned standard. But I don’t see a better way today of enforcing “don’t use AI” in schools.

      • By don_neufeld 2025-05-17 0:58

        The school has an academic honesty policy which explicitly bans it, under “Cheating”, which includes:

        “Falsifying or inventing any academic work, including the use of AI (ChatGPT, etc)”

        Additionally, as mentioned, the school is taking actions to change how work is done to ensure students are actually doing their own work - such as requiring written assignments be completed during class time, or giving homework on physical paper that is to be marked up by hand and returned.

        Apparently this is the first year they have been doing this, as last year they had significant problems with submitted work not being authored by students.

        This is in an extremely competitive Bay Area school, so there can be a lot of pressure from parents on students to make top grades, and sometimes that has negative side effects.

        • By djhn 2025-05-17 7:23

          Asking as a non-American non-school-pupil-parent: what does it mean for a school to be competitive in this context? Competitive entry into a school I understand, but that threshold has been cleared. Isn’t US college admission based on essays and standardised tests like GMAT, SAT, GRE?

        • By BlueTemplar 2025-05-17 13:27

          Physical paper isn't going to save them.

          (Also, typing was only appropriate for some classes anyway.)

      • By garrickvanburen 2025-05-16 23:56

        I don’t see the problem.

      I’m not sure how LLM output is any different from Wikipedia or World Book here.

      Maybe? And if the question is “did the student actually write this?” (which is different from “do they understand it?”), there are lots of different ways to assess whether a given student understands the material that don’t involve submitting typed text but still involve communicating clearly.

      If we allow LLMs, like we allow calculators, just how poor LLMs are will become far more obvious.

        • By bccdee 2025-05-19 17:11

          Do you really not see the problem? A student who pastes an essay prompt into an input box and copies out the response has learned nothing. Even direct plagiarism from Wikipedia would typically need to be reworked; there will rarely be a Wikipedia page corresponding to your teacher's specific essay prompt.

          Students are also poor writers. Often LLM-generated essays can be spotted in elementary school because they write too well for that grade level. A good student will surpass a chatbot, but not if they use it as a crutch while it's still a stronger writer than they are.

        • By BobbyTables2 2025-05-17 2:51

          Oral presentation without notes and a live Q&A would be some ways…

        • By hammock 2025-05-17 0:08

          If LLMs are allowed then sure. However, when LLMs are explicitly banned from use, is the case I am talking about.

      • By StefanBatory 2025-05-17 13:00

        That's a surprisingly "strict" (in quotes for obvious reasons) honor code.

        I'm at a uni in Poland; not top tier, but at the same time not bad either, slightly above average.

        The amount of cheating I saw - it's almost mundane. Teachers know this, so do we...

        • By hammock 2025-05-17 17:45

          It worked. There was still cheating (caught and uncaught), but 50-100x less than what I saw at other schools.

          And it gave students a sense of pride in their education.

          • By StefanBatory 2025-05-19 14:28

            I'm genuinely happy to hear that, that it can be done. Really.

            I have a huge respect for you and your university.

      • By BobbyTables2 2025-05-17 2:50

        Today such infractions might result in a verbal warning…

    • By johnisgood 2025-05-16 22:43

      You can learn a lot from LLMs though, same with, say, Wikipedia. You need curiosity. You need the desire to learn. If you do not have it, then of course you will get nowhere, LLMs or no LLMs.

      • By layer8 2025-05-17 0:30

        From the article:

        “The irony is that I now know more than I ever would have before AI. But I feel slightly dumber. A bit more dull. LLMs give me finished thoughts, polished and convincing, but none of the intellectual growth that comes from developing them myself. The output from AI answers questions. It teaches me facts. But it doesn’t really help me know anything new.”

        I think the thesis is that with AI there is less need and incentive to “put the work in” instead of just consuming what the AI outputs, and that in consequence we do the needed work less and atrophy.

        • By johnisgood 2025-05-17 8:29

          I know, that is why you need the desire, the will to learn. I have been using LLMs for this, so I know it is possible. I understand what you are saying though, and it is indeed a sad state of affairs, but then again, this was the case due to search engines, Wikipedia, and so forth, long before LLMs.

          Again, you can truly learn a lot using LLMs, but you have to approach it properly. It does not have to be just "facts", and sometimes, even learning "facts" is learning.

          I can use LLMs and learn nothing, but I can use LLMs to learn, too!

          • By layer8 2025-05-17 13:44

            Yes, but previously you didn’t need that desire as much, because you were forced into it; there was no easy way. The fact that you now need internal motivation means learning will happen less, where previously it happened by default.

            • By johnisgood 2025-05-17 14:32

              I agree, it is sort of like a double-edged sword, I would say.

      • By creata 2025-05-17 1:42

        Honestly, I doubt that LLMs are great for learning. Too often, they output plausible-sounding things that turn out to be completely wrong. I know Wikipedia can have its problems with factuality, but this is on an entirely different level. (And yes, they do this even when they're allowed to do web searches and "reason".)

        The effort of verifying everything it claims may or may not outweigh the effort of other means of learning.

      • By azinman2 2025-05-16 22:46

        Never underestimate laziness, or willingness to take something 80% as good for 1% of the work.

        So most are not curious. So what do you do for them?

        • By johnisgood 2025-05-16 22:56

          You have to somehow figure out the root cause of the laziness, or if it really is laziness, and not something else, e.g. a mental health issue.

          Plus, many kids fail school not because of laziness, but because of their toxic environment.

          • By Swizec 2025-05-17 0:21

            > if it really is laziness, and not something else, e.g. a mental health issue.

            Kids optimize. When I was in high school I was fully capable of getting straight F's in a class I didn't care about and straight A's in a class I enjoyed.

            Why bother learning chemistry when you could instead spend that time coding cool plugins and websites in PHP that thousands of internet strangers are using? I really did build one of the most popular phpBB plugins, and I knew I was going to be a software engineer. Not that my chemistry professor cared about any of that or even understood what I was talking about.

            • By johnisgood 2025-05-17 11:45

              What you just described is irrelevant to what we are discussing.

              As for what you said: yeah, I got 1s (Fs) because I was too busy coding and reading books on philosophy as a 14-year-old.

              • By Swizec 2025-05-17 14:24

                How is it irrelevant? Kids will always cheat their way through classes they feel are a distraction. Even the super smart Type A kids.

                Hell, all humans do that. You use every resource available to get out of dealing with things that are not your priority. This means you will never be good at those things and that’s fine. You can’t be good at everything.

                • By johnisgood 2025-05-17 14:36

                  They will, but we were talking about the will or motivation to learn. If someone has a curious mind, and actually wants to learn, then they can definitely use LLMs to do that.

                  I don't disagree with you though.

        • By BobbyTables2 2025-05-17 2:58

          Realistically, putting them into trades sooner could almost be a good thing. Kids who don’t want to learn end up dragging down the class and distracting those who do.

          But, these are kids… Hard to argue that adults should selectively deny education when it is their responsibility to do otherwise.

          We don’t neglect the handicapped because it is inconvenient to provide them with assistance.

        • By BeFlatXIII 2025-05-17 12:52

          Leave ‘em behind and win the race.

          • By azinman2 2025-05-18 19:14

            Why would you advocate for society falling apart? What race is there to win if most lose?

        • By AstroBen 2025-05-17 15:21

          That's on them? Why do I have to take responsibility for someone else's growth?

          • By azinman2 2025-05-18 19:16

            Because we, as a society, need to create a future of mutual benefit. If you don’t, only terrible things happen, including to you.

            • By AstroBen 2025-05-18 19:31

              Maybe my comment was a little harsh. I do believe we should have the structure in place for people to learn from as they wish. We should show them it's available, and the benefits of it.

              But the lazy, uncurious person who wants to shortcut everything, whom you described? They're not even trying to help themselves. Maybe the best thing to do there is to let a little time pass and let them see the result of their actions. You can lead a horse to water...

              • By azinman2 2025-05-18 20:31

                > Maybe the best thing to do there is to let a little time pass and for them to see the result of their actions

                This is most kids, myself included. I’m curious and have done well, but if I could have had essays written for me as a kid for sure I’d have tried. And I’d be off worse for it.

                Kids also just want to eat sugar and play video games. As parents and a society, we both provide and force alternatives because as adults we know better.

                You can lead a horse to water, sure, but you could also lead it to something that feeds its worst instincts. Let’s avoid the latter.

      • By hooverd 2025-05-17 3:13

        Wikipedia isn't going to write your paper for you. I don't see the difference between an LLM and one of those paper writing services in this context.

        • By johnisgood 2025-05-17 8:47

          We are talking about learning. You can learn much more from LLMs than from Wikipedia, because if you do not understand something, you can always ask an LLM about it, and it will reply in whatever way helps you learn best.

      • By snackernews 2025-05-16 22:48

        Can you learn a lot? Or do you get instant answers to every question without learning anything, as OP suggests?

        • By calebkaiser 2025-05-17 0:16

          You can learn an incredible amount. I do quite a bit of research as a core part of my job, and LLMs are amazing at helping me find relevant research to help me explore ideas. Something like "I'm thinking of X. Does this make sense and do you know of any similar research?" I also mentor some students whose educational journey has been fundamentally changed by them.

          Like any other tool, it's more a question of how they're used. For example, I've seen incredible results for students who use ChatGPT to interrogate ideas as they synthesize them. So, for example, "I'm reading this passage PASSAGE and I'm confused about phrase X. The core idea seems similar to Y, which I am familiar with. if I had to explain X, I'd put it like this ATTEMPT Can you help me understand what I'm missing?"

          The results are very impressive. I'd encourage you to try it out if you haven't.

          • By vendiddy 2025-05-17 0:26

            I've used it these past few months to better understand the PDF format, Nix, and a few other technical concepts.

            I try to use AI to automate things I already know and force myself to learn things I don't know.

            It takes discipline/curiosity but it can be a net positive.

            • By johnisgood 2025-05-17 8:33

              Thank you, and the previous commenter. I am tired of trying to convince people that LLMs can be a really good tool for learning. :/

              They should just simply try it. Start with something you actually know to see how useful it might be to you with your prompts.

        • By johnisgood 2025-05-16 22:55

          You can learn a lot, if you want to. I can ask it a question regarding the pharmacodynamics of some medication, then ask more and more questions, and learn. Similarly, I could pick up a book on pharmacology, but LLMs can definitely make learning easier.

    • By BobbyTables2 2025-05-17 2:46 | 3 replies

      Ironically, states now use AI to grade student essays in standardized tests.

      English teachers even recommend Grammarly.

      Students are given a “prompt” for writing.

      I wish other schools had the conviction you describe…

      • By guyfhuo 2025-05-17 3:19

        > Students are given a “prompt” for writing

        Students were always given a “prompt” for writing.

        That’s why tech companies used that term, rather than the other way around.

      • By GeoAtreides 2025-05-17 10:24 | 1 reply

        > states now use AI to grade student essays in standardized tests.

        citation needed

        • By BrawnyBadger53 2025-05-17 18:23

          This was already happening a decade ago lol. Happened to me at least.

      • By raincole 2025-05-17 5:09

        > Students are given a “prompt” for writing.

        What do you think "prompt" mean?

        Or you're saying the students are asked to mimic AI's style?

      • By mr_toad 2025-05-17 1:54

      > I’m very glad my kid’s school has hardcore banned them.

      Schools will ban anything they think of as sinister.

  • By jebarker 2025-05-16 19:52 | 3 replies

    > nothing I make organically can compete with what AI already produces—or soon will.

    No LLM can ever express your unique human experience (or even speak from experience), so on that axis of competition you win by default.

    Regurgitating facts and the mean opinion on topics is no replacement for the thoughts of a unique human. The idea that you're competing with AI on some absolute scale of the quality of your thought is a sad way to live.

    • By steamrolled 2025-05-16 20:06 | 2 replies

      More generally, prior to LLMs, you were already competing with 8 billion people alive (plus all of our notable dead). Any novel you could write probably had some precedent. Any personal story you could tell probably happened to someone else too. For any skill you wanted to develop, there was probably another person more capable at it.

      It was never a useful metric to begin with. If your life goal is to be #1 on the planet, the odds are not in your favor. And if you get there, it's almost certainly going to be unfulfilling. Who is the #1 Java programmer in the world? The #1 topologist? Do they get a lot of recognition and love?

      • By harrison_clarke 2025-05-16 22:18 | 1 reply

        A fun thing about having a high-dimensional fitness function is that it's pretty easy not to be strictly worse than anyone.

      • By musicale 2025-05-17 2:06

        > Who is the #1 Java programmer in the world?

        James Gosling, of course[1]. Next question...

        > The #1 topologist?

        I'm not a mathematician, but... maybe Akshay Venkatesh, who won the Fields Medal in 2018?

        [1] https://news.ycombinator.com/item?id=44005008

    • By imhoguy 2025-05-17 13:14

      But inevitably you lose in the flood of enshittified creations made with LLMs.

      I think we will come back to our roots: simple, in-person creation such as pen and paper, declamation, theatre, live performance, hand painting, improvisation, and handmade work.

      Maybe not for everybody, but it will be for (mentally) free people.

    • By computerthings 2025-05-16 22:29

      [dead]

HackerNews