The L in "LLM" Stands for Lying

2026-03-05 4:02 · acko.net

Questioning the frame of inevitability in the use of AI

So it's no wonder artists would denounce generative AI as mass-plagiarism when it showed up. It's also no wonder that a bunch of tech entrepreneurs and data janitors wouldn't understand this at all, and would in fact embrace the plagiarism wholesale, training their models on every pirated shadow library they can get. Or indeed, every code repository out there.

If the output of this is generic, gross and suspicious, there's a very obvious reason for it. The different training samples in the source material are themselves just slop for the machine. Whatever makes the weights go brrr during training.

This just so happens to create the plausible deniability that makes it impossible to say what's a citation, what's a hallucination, and what, if anything, could be considered novel or creative. This is what keeps those shadow libraries illegal, but ChatGPT "legal".

Labeling AI content as AI generated, or watermarking it, is thus largely an exercise in ass-covering, and not in any way responsible disclosure.

It's also what provides the fig leaf that allows many a developer to knock off for early lunch and early dinner every day, while keeping the meter running, without ever questioning whether the intellectual property clauses in their contract still mean anything at all.

This leaves the engineers in question in an awkward spot, however. In order for vibe-coding to be acceptable and justifiable, they have to consider their own output disposable, highly uncreative, and not worthy of credit.

* * *

If you ask me, no court should have ever rendered a judgement on whether AI output as a category is legal or copyrightable, because none of it is sourced. The judgement simply cannot be made, and AI output should be treated like a forgery unless and until proven otherwise.

The solution to the LLM conundrum is then as obvious as it is elusive: the only way to separate the gold from the slop is for LLMs to perform correct source attribution along with inference.

This wouldn't just help with the artistic side of things. It would also reveal how much vibe code is merely copy/pasted from an existing codebase, while conveniently omitting the original author, license and link.

With today's models, real attribution is a technical impossibility. The fact that an LLM can even mention and cite sources at all is an emergent property of the data that's been ingested, and the prompt being completed. It can only do so when appropriate according to the current position in the text.

There's no reason to think that this is generalizable, rather, it is far more likely that LLMs are merely good at citing things that are frequently and correctly cited. It's citation role-play.

The implications of sourcing-as-a-requirement are vast. What does backpropagation even look like if the weights have to be attributable, and the forward pass auditable? You won't be able to fit that in an int4, that's for sure.

Nevertheless, I think this would be quite revealing, as this is what "AI detection tools" are really trying to solve for backwards. It's crazy that the next big thing after the World Wide Web, and the Google-scale search engine to make use of it, was a technology that cannot tell you where the information comes from, by design. It's... sloppy.

To stop the machines from lying, they have to cite their sources properly. And spoiler, so do the AI companies.


Read the original article

Comments

  • By raincole, 2026-03-05 8:34 (27 replies)

    > Video games stand out as one market where consumers have pushed back effectively

    No, it's simply untrue. Players only object to AI art assets, and only when they're painfully obvious. No one cares about how the code is written.

    If you actually read the words used in Steam's AI survey, you'll know Steam has completely caved on AI-gen code as well. It's specifically worded like this:

    > content such as artwork, sound, narrative, localization, etc.

    No 'code' or 'programming.'

    If game players are the most anti-AI group then it's crystal clear that LLM coding is inevitable.

    > This stands in stark contrast to code, which generally doesn't suffer from re-use at all, or may even benefit from it, if it's infrastructure.

    Yeah, exactly. And LLMs help developers save time by not writing the same thing that has been done by other developers a thousand times. I don't know how one can spin this as a bad thing.

    > Classic procedural generation is noteworthy here as a precedent, which gamers were already familiar with, because by and large it has failed to deliver.

    Spore is well acclaimed. Minecraft is literally the most-sold game ever. The fact that one developer fumbled it doesn't make the idea of procedural generation bad. This is a perfect example of how a tool isn't inherently good or bad. It's up to the tool's wielder.

    • By bartread, 2026-03-05 12:04 (5 replies)

      > Classic procedural generation is noteworthy here as a precedent, which gamers were already familiar with, because by and large it has failed to deliver.

      Yes, this is a wildly uneducated perspective.

      Procedural generation has often been a key component of incredibly successful, even iconic games going back decades. Elite is a canonical example here, with its galaxies being procedurally generated. Powermonger, from Bullfrog, likewise used fractal generation for its maps.

      More recently, the prevalence of procedurally generated rogue-likes and Metroidvanias is another point against. Granted, people have got a bit bored of these now, but that's because there were so many of them, not because they were unsuccessful or "failed to deliver".

      • By bombcar, 2026-03-05 12:30 (2 replies)

        Procedural generation underlies the most popular game of all time (Minecraft) and is foundational for numerous other games of a similar type - Dwarf Fortress, et al.

        And it's used to powerful effect where you might not expect it (Stardew Valley mines).

        What procedural generation does NOT work at is generating "story elements", though perhaps even that will fall: Dwarf Fortress already does decently enough, given that the player will fill in the blanks.

        • By optionalsquid, 2026-03-05 12:42 (1 reply)

          > And it's used to power effect where you might not expect it (Stardew Valley mines).

          Apparently Stardew Valley's mines are not procedurally generated, but rather hand-crafted. Per their recent 10 year anniversary video, the developer did try to implement procedural generation for the mines, but ended up scrapping it:

          https://www.stardewvalley.net/stardew-valley-10-year-anniver...

          • By bombcar, 2026-03-05 12:44 (1 reply)

            They're quasi-generated with random elements and fixed elements - similarly to early Diablo procedural generation.

            • By xerox13ster, 2026-03-05 15:27 (1 reply)

              That’s not the same procedural generation as GPT or diffusion and you know it.

              It’s not even in the same ballpark as Elite, NMS, Terraria, or Minecraft.

              The levels are all hand drawn, not generated by an algorithm, even if they’re shuffled. Eric Barone, the developer, has publicly said as much. Are you calling him a liar?

              It’s like the difference between Sudoku/crosswords and Conway’s Game of Life.

              • By cweagans, 2026-03-06 0:21 (1 reply)

                The parent comment didn't seem to say anything offensive. Why so hostile?

                • By xerox13ster, 2026-03-07 1:40 (1 reply)

                  Honestly, just white knighting for one of my favorite developers and biggest inspirations.

                  Someone lying about the pseudo randomization of the hand drawn efforts to make it seem entirely algorithmically generated rubs me really wrong, especially when that dev has publicly broadcast the reasoning of the decision to eschew procedurally generated mines.

                  • By cweagans, 2026-03-07 2:37

                    I think that's what I was confused about: I don't see the lie in the comments above. optionalsquid said "[...]did try to implement procedural generation for the mines, but ended up scrapping it"

                    bombcar said "They're quasi-generated with random elements and fixed elements - similarly to early Diablo procedural generation." (which is true - you confirmed as much in the very next comment - "The levels are all hand drawn, not generated by an algorithm, even if they’re shuffled.". That's all early Diablo was doing.)

                    "Quasi-generated" seems like an appropriate descriptor here - stringing together level building blocks algorithmically is still "generating" a level in a sense. You're right - it's not correct to say that they were generated in the same way that an LLM generates things, but a) nobody claimed that and b) there is an undeniable element of procedural generation here.
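The "quasi-generated" approach described in this subthread (fixed anchor pieces plus a randomized arrangement of hand-drawn templates) can be sketched in a few lines. This is an illustrative toy, not code from Diablo or Stardew Valley; all names are made up:

```python
import random

# Hand-authored room templates; strings stand in for full level data.
TEMPLATES = ["cavern", "armory", "shrine", "crypt", "barracks"]

def quasi_generate(n_rooms, seed=None):
    """Arrange fixed anchor rooms around a random pick of hand-made rooms.

    Nothing here is generated from scratch: the randomness only decides
    which pre-built templates appear, and in what order.
    """
    rng = random.Random(seed)
    middle = rng.sample(TEMPLATES, n_rooms)  # distinct templates, random order
    return ["entrance", *middle, "stairs_down"]
```

The shuffle supplies the variety while the hand-drawn pieces guarantee the quality, which is exactly the distinction being argued over.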

        • By morissette, 2026-03-05 14:32 (2 replies)

          And here I thought the most popular game of all time was Soccer or Super Mario Bros 3

          • By bee_rider, 2026-03-05 16:22 (1 reply)

            I think they meant videogame, ruling out soccer.

            It looks like the Super Mario Bros series has a good showing, but it is the first one. I bet 3 falls into an unlucky valley where the game-playing population was not quite as large as it is now, but it isn’t early enough to get the extreme nostalgia of the first one.

            https://en.wikipedia.org/wiki/List_of_best-selling_video_gam...

            Of course this assumes sales=popularity, but the latter is too hard to measure.

            • By bartread, 2026-03-05 23:27

              I dunno: I think SMB 3 gets plenty of plaudits and is widely agreed to be the best SMB on the NES.

              Frankly, all three of the NES games hold up well even today, but 3 is for me the pinnacle even though very late in the life of the console, particularly in North America and Europe. This wouldn’t have helped sales with the SNES about to drop but it nevertheless was very successful.

              Absolutely one of the greats.

          • By 6510, 2026-03-05 16:02

            Quality is the same thing as popularity. That is why McDonald's has 12 Michelin stars.

      • By dkersten, 2026-03-05 16:35

        Almost every 3D game in the past 20 years uses procedural foliage generation (eg SpeedTree and similar). Many use procedural terrain painting. Many use tools like Houdini.

        So procedural generation is extremely prevalent in most AAA games and has been for a long time.

      • By nikitau, 2026-03-05 12:37 (1 reply)

        Roguelikes/lites are one of the most popular genres of indie games nowadays. One of the genre's main characteristics is randomization and procedural generation.

        • By tanjtanjtanj, 2026-03-05 14:07 (1 reply)

          While there are many Roguelikes with procedural generation, I think the most popular ones don't use it. Slay the Spire, Risk of Rain 2, Hades 1/2, BoE etc. are all handmade stages in a random order, with randomized player powers, rather than procedurally generated levels.

          • By banannaise, 2026-03-05 16:50 (1 reply)

            I've seen a couple roguelike developers report that they played around with procedural generation, but it was difficult to prevent it from creating dungeons that were bad, unfun, or just straight-up killscreens. Turns out it's often easier to simply hand-draw good maps than to get the machine to generate okay-to-good ones.

            Procedural generation is good when variety matters more than quality, which is a relatively rare occurrence.

            • By htek, 2026-03-05 18:12

              That says more about the developer than procedural generation as a whole. Using procedural generation IS difficult: it requires understanding how to set up constraints on your pseudo-randomly generated elements and ensuring the code validates that you have a "good" level/puzzle/whatever before dumping the PC into it.
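A minimal sketch of that generate-and-validate loop, assuming a grid dungeon where "good" simply means the exit is reachable (the names and the single constraint are illustrative, not from any real engine):

```python
import random

def generate_candidate(w, h, wall_p, rng):
    # Unconstrained random pass: each cell is a wall with probability wall_p.
    grid = [[rng.random() < wall_p for _ in range(w)] for _ in range(h)]
    grid[0][0] = grid[h - 1][w - 1] = False  # keep entrance and exit open
    return grid

def is_playable(grid):
    # Validation constraint: flood-fill from the entrance and check that
    # the exit at the opposite corner is reachable.
    h, w = len(grid), len(grid[0])
    stack, seen = [(0, 0)], {(0, 0)}
    while stack:
        y, x = stack.pop()
        if (y, x) == (h - 1, w - 1):
            return True
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and not grid[ny][nx] \
                    and (ny, nx) not in seen:
                seen.add((ny, nx))
                stack.append((ny, nx))
    return False

def generate_level(w=16, h=16, wall_p=0.35, seed=None, max_tries=1000):
    # Generate-and-validate: reject candidates until one passes the check.
    rng = random.Random(seed)
    for _ in range(max_tries):
        grid = generate_candidate(w, h, wall_p, rng)
        if is_playable(grid):
            return grid
    raise RuntimeError("no valid level found; loosen the constraints")
```

Real games layer on many more constraints (difficulty pacing, loot balance, no dead ends), and each one makes the rejection loop slower, which is part of why hand-drawing can end up cheaper.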

      • By techpression, 2026-03-05 12:46 (3 replies)

        I’m a hard-core rogue-like player (easily over a thousand hours across all the games I’ve played), but even so I can admit that they have nothing compared to a well-crafted world like you’d find in From Software titles or Expedition 33, or classic Zelda games for that matter. Making a great world is an incredibly hard task, though, and few studios have the capabilities to do so.

        • By angry_octet, 2026-03-05 18:43 (1 reply)

          Rogue-like games use the simplest randomisation to generate the next room, and I burnt hundreds of hours in Mines of Moria before I forced myself to quit.

          Now with an LLM I could have AD&D-like campaigns and photorealistic renders of my character and the NPCs. I could give it the text of an AD&D campaign as a DM and have it generate walking and talking NPCs.

          The art of those great fantasy artists is definitely being stolen in generated images, and application of VLMs should require payment into some sort of art funding pool. But modern artists could well profit by being the intermediary between user and VLM, crafting prompts, both visual and textual, to give a consistent look and feel to a game.

          The essay author is smoking crack.

          • By tadfisher, 2026-03-05 22:41 (1 reply)

            Artists want to create. They do not want to tweak prompts and click "Generate" repeatedly until the output matches their vision. I would find this maddening.

            But this wouldn't make sense anyway. Game companies won't foot the bill for real-time renders of your character, let alone a world of generated NPCs. If/when costs are low enough, and players accept a recurring subscription to play games, then this could happen, sure. No way in hell will artists be available in real-time to keep the generated imagery consistent.

            • By angry_octet, 2026-03-06 10:01

              Why would game companies be paying for rendering on my computer? My computer can fantasise player specific details, in a palette created by game artists, and render them itself.

              Game artists could indeed be working in real time in MMORPGs to tweak the world, impresarios of the shared experience. Paying for live human shaped performance art is a great way to keep human creativity central to the experience.

        • By bee_rider, 2026-03-05 16:35

          It’s a different type of thing, really. I like rogue-likes because they are a… pretty basic… story about my character, rather than a perfectly crafted story about somebody else’s.

          Even when I play a game like Expedition 33 or Elden Ring, my brain (for whatever reason) makes a solid split between the cutscene version of a character and the gameplay version. I mean, in some games the gameplay character is a wandering murderer, while the cutscene character has all sorts of moral compunctions about killing the big-bad. They are clearly different dudes.

        • By b0rsuk, 2026-03-05 14:49

          [dead]

      • By Dumblydorr, 2026-03-05 12:31 (2 replies)

      Is it wildly uneducated to not know any of the games you mentioned? I didn’t realize education covered lesser-known video games. Wouldn’t a better example be No Man’s Sky, if we’re talking procedural gen and eventually a good game?

        In any case, I agree that gamers by and large don’t care to what extent the game creation was automated. They are happy to use automated enemies, automated allies, automated armies and pre-made cut scenes. Why would they stop short at automated code gen? I genuinely think 90% wouldn’t mind if humans are still in the loop but the product overall is better.

        • By Ensorceled, 2026-03-05 12:53 (1 reply)

          > Is it wildly uneducated to not know any of the games you mentioned? I didn’t realize education covered less known video games?

          Yes. It is "wildly uneducated" to have, and express, strong opinions about ANY field of endeavour where you are unfamiliar with large parts of that field.

          • By Almondsetat, 2026-03-05 13:21 (2 replies)

            Large? That's your opinion

            • By mikkupikku, 2026-03-05 14:37

              If you haven't heard of the modern roguelike genre you've probably been living under a rock, it seems like every other game these days at least calls itself such. Usually the resemblance to Rogue is so remote that it strains the meaning of the term, but procedural generation of levels is almost universal in this loosely defined genre.

              Elite is a bit more obscure, but really anybody who aims to be familiar with the history of games should recognize the name at least. Metroidvania isn't a game, but is a combination of the names of Metroid and Castlevania and you absolutely should know about both of those.

              Powermonger is new to me.

              And while the comment in question didn't mention it, others have: Minecraft. If you're not familiar with Minecraft you must be Rip Van Winkle. This should be the foremost game that comes to mind when anybody talks about procedural generation.

            • By Ensorceled, 2026-03-05 14:22 (1 reply)

              Of course it is.

              • By Almondsetat, 2026-03-05 14:37 (1 reply)

                Then it is "wildly uneducated" to have, and express, strong opinions about ANY field of endeavour where you cannot substantiate your claims.

                • By Ensorceled, 2026-03-05 20:28 (1 reply)

                  Honest question: are you enjoying this? I looked at your comment history and you don't seem like a troll. What is going on right now?

                  • By antonvs, 2026-03-06 4:58 (1 reply)

                    The person you’re replying to has only posted two short comments in this thread.

                    The reason a few different people are arguing this point is because it is in fact wrong, or at least poorly expressed, to refer to someone’s unfamiliarity with some aspect of a field like the gaming market as “wildly uneducated.”

                    Ironically, the person using that phrase is demonstrating a lack of understanding of its common meaning, suggesting that they may be a better fit for the word “uneducated”. See e.g: https://www.merriam-webster.com/dictionary/uneducated

                    > What is going on right now?

                    As Wittgenstein put it, we’re playing language games.

                    • By Ensorceled, 2026-03-06 12:56 (1 reply)

                      > The person you’re replying to has only posted two short comments in this thread.

                      FYI: You can click on the user name and from there see their full comment history on hacker news.

                      Wild that you even mentioned this, considering the context.

                      > As Wittgenstein put it, we’re playing language games.

                      Yes.

                      • By antonvs, 2026-03-06 14:30

                        I can see how you allowed your own stalking tendencies to confuse yourself, but I wasn't referring to that.

                        Your comment made it sound as though they were being unreasonable in this thread. I don't see that in the two short comments you responded to. Perhaps you were having a bad day.

    • By Sharlin, 2026-03-05 10:55 (3 replies)

      > Yeah, exactly. And LLM help developers save time from writing the same thing that has be done by other developers for a thousand times.

      Before LLMs we already had a way to "save developers time from writing the same thing that has been done by other developers for a thousand times", you know? An LLM doing the same thing the 1001st time is not code reuse. Code reuse is code reuse.

      • By raincole, 2026-03-05 11:58 (5 replies)

        Because code reuse is hard. Like, really hard. If it weren't, we wouldn't be laughing at left-pad. If it weren't hard, we wouldn't have so many front-end JavaScript frameworks. If it weren't, Unreal wouldn't still have its own GC and std-like implementation today. Java wouldn't have been reinventing its build system every five years.

        The whole history of programming tools is an exploration of how to properly reuse code: are functions or objects the fundamental unit of reuse? is diamond inheritance okay? should a language have an official package manager? build system? should the C++ std have network support? how about GUI support? should editors implement their own parsers or rely on a language server? And none of these questions has a clear answer after thousands if not millions of smart people have attempted them (well, perhaps except the function-vs-object one).

        Electron is the ultimate effort of code reuse: we reuse the tens of thousands of human-years invested in making a markup-based render engine that covers 99% of use cases. And everyone complains about it, the author of the OP article included.

        LLM-coding is not code reuse. It's more like throwing our hands up and admitting humans are not yet smart enough to properly reuse code, except for some well-defined low-level cases like compiling C for different ISAs. And I'm all for that.

        • By Garlef, 2026-03-05 14:08 (1 reply)

          I think you could also argue that LLMs in coding are actually just a novel approach at code reuse: At the microscopic level, they excel at replicating known patterns in a new context.

          (Many small dependencies can be avoided by letting the LLM just re-implement the desired behavior; with tradeoffs, of course)

          The issue is orchestrating this local reuse into a coherent global codebase.

          • By bluefirebrand, 2026-03-05 17:01 (1 reply)

            LLMs in coding are like code reuse in the same way your neighbor hotwiring your car you parked in your driveway is just borrowing it

            You didn't park your car in your driveway so anyone could take it to get groceries

            • By plagiarist, 2026-03-05 19:19

              I didn't accept "copyright infringement is literal property theft" when the corporations were trying to convince us it was.

        • By layer8, 2026-03-05 14:24

          The problems with leftpad are a problem with the NPM ecosystem, not with code reuse as such. There are other dependency ecosystems that don't have these problems.

        • By thunky, 2026-03-06 1:24

          > Because code reuse is hard.

          > humans are yet not smart enough to properly reuse code

          All of these difficulties you outline are because program designs cater to the human developer experience over the machine.

          If this weren't the case Python wouldn't exist because it's clearly not made with mechanical sympathy in mind. Dynamic languages as a whole would probably not exist.

          It's because humans will never agree with each other or with machines, and that's what has led us to the proliferation of complexity and lack of reuse we see everywhere.

        • By FpUser, 2026-03-05 14:05 (1 reply)

          >"well perhaps except the function vs object one"

          If this is what I think it is, I consider it a very lopsided view: a failure to recognize which model fits which case, looking at everything from a hammer's point of view.

          • By raincole, 2026-03-05 15:32 (1 reply)

            I think the function is the fundamental unit and the object is an extra level over it (which doesn't mean there is no use for objects). Thinking objects/classes are the fundamental/minimal level is straight up wrong.

            Of course it's just my opinion.

            • By FpUser, 2026-03-05 18:01

              My opinion: the fundamental levels are data and operations (your functions). It's not my view that the class is a foundation. It is a representation convenient for some cases and not so much for others.

        • By bandrami, 2026-03-05 14:33

          I have terrible news: LLMs don't actually make it easier, though it feels like they do at first

      • By foobarbecue, 2026-03-05 11:41 (3 replies)

        Hard agree. Before LLMs, if there was some bit of code needed across the industry, somebody would put the effort into writing a library and we'd all benefit. Now, instead of standardizing and working together we get a million slightly different incompatible piles of stochastic slop.

        • By edgyquant, 2026-03-05 14:31 (2 replies)

          This was happening before llms in webdev

          • By deltaburnt, 2026-03-05 15:30

            I don't think we should use webdev as an example of why lossy copy and paste works for the industry.

          • By lenkite, 2026-03-06 19:10

            webdev is a special world with people going nuts and creating JS libraries for every color code.

        • By remich, 2026-03-05 16:45

          Yeah and then when that library stops being maintained or gets taken over, everything breaks.

        • By mexicocitinluez, 2026-03-05 14:02

          Before LLMs companies and people were forced to use one-size-fits-all solutions and now they can build custom, bespoke software that fits their needs.

          See how it's a matter of what you're looking at?

      • By porridgeraisin, 2026-03-05 11:27 (1 reply)

        Oh come on, you don't have to be condescending about function calls.

        https://news.ycombinator.com/item?id=47260385

        • By Sharlin, 2026-03-05 11:47 (1 reply)

          I was talking about libraries, higher-level units of reuse than individual functions. And your "syntactic" vs "semantic" reuse makes zero sense. Functions are literally written and invoked for their semantics – what they make happen. "Syntactic reuse" would be macros if anything, and indeed macros are very good at reducing boilerplate.

          You might have a more compelling argument if instead of syntax and semantics you contrasted semantics and pragmatics.

          • By porridgeraisin, 2026-03-05 12:05 (1 reply)

            A library is a collection of data structures and functions. My argument still holds.

            > Syntactic reuse would be macros

            Well sure. My point is that what can be reused is decided ahead of time and encoded in the syntax. Whereas with LLMs it is not, and is encoded in the semantics.

            > Pragmatics

            Didn't know what that is. Consider my post updated with the better terms.

            • By runarberg, 2026-03-05 14:08 (2 replies)

              I’m not sure your logic is sound. It sounds like you are insisting on some nuance which simply isn’t there. LLM generates unmaintainable slop, which is extremely difficult to reason about, uses wrong abstractions, violates DRY, violates cohesion, etc.

              The industry has known how to reuse code for two decades now (npm was released 16 years ago; pip, 18 years ago). Using LLMs for code reuse is a step in the wrong direction, at least if you care about maintaining your code.

              • By porridgeraisin, 2026-03-05 15:56

                Oh sure, the quality is extremely unreliable and I am not a fan of its style of coding either. It requires quite a bit of hand-holding and sometimes it truly enrages me. I am just saying that LLM technology opens up another, broader dimension of code reuse. Still a ways to go, not in the foundation models (those have plateaued) but in refining them for coding.

              • By naasking, 2026-03-05 14:33

                > LLM generates unmaintainable slop

                LLMs generate what you tell them to, which means it will be slop if you're careless and good if you're careful, just like programming in general.

    • By dannersy, 2026-03-05 11:17 (3 replies)

      You're cherry picking. The open world games aren't as compelling anymore since the novelty is wearing off. I can cherry pick, too. For example, Starfield in all its grandeur is pretty boring.

      And the users may not care about code directly, but they definitely do indirectly. Less optimized, more off-the-shelf solutions have brought a stark decrease in performance, while allowing game development to be more approachable.

      LLMs saving engineers and developers time is an unfounded claim, because immediate results do not mean a net positive. Actually, I'd argue that any software engineer worth their salt knows intimately that more immediate results usually come at the expense of long-term sustainability.

      • By whywhywhywhy, 2026-03-05 13:38 (2 replies)

        Starfield is boring because of the bad writing, and because they made a space exploration game where there are loading screens between the planet and space and you don’t actually explore space.

        They fundamentally misunderstood what they were promising, it’s the same as making a pirate game where you never steer the ship or drop anchor.

        You can prove people are not bored with the concept: new gamers still start playing Fallout: New Vegas or Skyrim today, despite them being old and janky.

        • By alexpotato, 2026-03-05 18:35 (1 reply)

          This is why Sid Meier's Pirates [0] remains such a great game.

          It was really a combination of mini-games:

          - you got to steer a ship (or fleet of ships) around the Caribbean

          - ship to ship combat

          - fencing

          - dancing (with the Governors' daughters)

          - trading (from port to port or with captured goods)

          - side quests

          Each time I played it with my oldest, it felt like a brand new game.

          [0] https://en.wikipedia.org/wiki/Sid_Meier%27s_Pirates!

          • By whywhywhywhy, 2026-03-06 10:03

            Played this as a kid, genuinely great gameplay loop and felt very immersive at the time.

        • By dannersy, 2026-03-05 18:40 (2 replies)

          I think my point stands. Procedural generation is a tool that usually works best when it is supplementary. What makes New Vegas an amazing game is all the hand built narratives and intricate storylines. So yeah, I agree, Starfield is boring because of the story. But if the procedural vastness was interesting enough to not be boring, then we wouldn't be talking about this to begin with.

          • By whywhywhywhy, 2026-03-06 14:23

            Starfield wasn't procedural vastness, though; No Man's Sky is. What Starfield offered was handmade content, then a loading screen, then a minigame, then a loading screen, then a small procedural "instance"/"dungeon", not a vast seamless world to explore.

          • By pojzon, 2026-03-05 21:48

            I'm inclined to say that if Bethesda used LLMs for a story based on known best-seller books, it would be better than the garbage created by so-called "modern script writers".

            The same could be said about Hollywood movies and series.

            When agenda is more important than fun, books, movies, and games are not a labour of love but neglect.

      • By Zarathruster, 2026-03-05 16:22 (1 reply)

        Yeah I mean, I think procgen is cool tech, but there's a reason we don't talk about Daggerfall the same way we talk about Morrowind

      • By mexicocitinluez, 2026-03-05 14:09 (1 reply)

        > Starfield in all its grandeur is pretty boring.

        And yet "No Man's Sky" is massively popular.

        > any software engineer worth their salt knows intimately that more immediate results is usually at the expense of long term sustainability.

        And any software engineer worth their salt realizes there are 100s if not 1000s of problems to be solved, and that trying to paint a broad picture of development is naive. You have only seen 1% (at best) of the current software development field, and yet you're confidently saying that a tool being used by a large part of it isn't actually useful. You'd have to have a massive ego to categorically tell thousands of other people that what they're doing is both wrong and not useful, and that the things they are seeing aren't actually true.

        • By dannersy, 2026-03-05 18:36 (1 reply)

          No Man's Sky got better as they were more intentional with their content. The game has more substance and a lot of that had to be added by hand. It is dropped in procedurally but they had to touch it up, manually, to make it interesting. Let's not revise history.

          I don't think it has anything to do with ego. There are studies on the topic of AI and productivity and I assume we have a way to go before we can say anything concretely. Software workflows permeate the industry you're in. You're putting words in my mouth, I said nothing about what people are doing is wrong or not useful. I said the claim that generative AI is making engineers more productive is an unfounded one. What code you shit out isn't where the work starts or ends. Using expedient solutions and having to face potentially more work in the future isn't even something that is a claim about software, I can make that claim about life.

          You need to evaluate what you read rather than putting your own twist on what I've said.

          • By mexicocitinluez 2026-03-0518:541 reply

            You said:

            > LLMs saving engineers and developers time is an unfounded claim

            By whom exactly? If I say it saves me time, and another developer says the same, and so on, then it is categorically not unfounded. In fact, it's the opposite.

            You've completely missed the point if you don't see the problem with telling other people that their own experience in such a large field is "unfounded" simply because it doesn't line up with yours.

            > we have a way to go before we can say anything concretely

            No YOU do. It's quite apparent to me how it can save time in the myriad of things I need to perform as a software developer (and have been doing).

            • By dannersy 2026-03-066:051 reply

              Anecdotal evidence, how scientific of you. When I say it's unfounded, I'm saying it hasn't been proven with actual research and data. So when you ask "by whom?", that's exactly my point: it is unfounded. That's what the word means; no one has made a claim, backed by data, that AI is making significant waves in productivity. I don't think I've missed the point at all, but it seems I hit an emotional nerve with you, so the conversation is over.

              • By mexicocitinluez 2026-03-0611:45

                Do I have to explain to another adult (presumably) what the word "unfounded" means? Are you purposely ignoring the hundreds of articles popping up on this site demonstrating the capabilities of these tools? Are they all lying?

    • By dec0dedab0de 2026-03-0514:433 reply

      No, it's simply untrue. Players only object against AI art assets. And only when they're painfully obvious. No one cares about how the code is written.

      This reminded me of a conversation about AI I had with an artist last year. She was furious and cursing and saying how awful it is for stealing from artists, but then admitted she uses it for writing descriptions and marketing posts to sell her art.

      • By Lord-Jobo 2026-03-0515:491 reply

        Which I would point out isn’t necessarily hypocrisy on their part.

        I can rage against guns and gun manufacturers for their negative effects on our nation and hate when they are used for monstrous evil, but also believe that police should have firearms and that the second amendment is important. It’s a tool. You can hate the way it’s made and marketed, and hate many of its popular use cases, and still think there are acceptable ways to use and market it without requiring a total abolition.

        • By Aushin 2026-03-0517:18

          I mean, the police probably shouldn't have firearms and the second amendment is one of the worst legal creations in human history.

      • By WarmWash 2026-03-0516:402 reply

        Everyone is in it for themselves.

        The world makes way more sense when you really internalize that. It doesn't necessarily mean people are selfish; large groups often have aligned interests, but when an individual's interest alignment changes, their group membership almost always changes too.

        I'd bet she has a bunch of pirated content and anti-copyright remarks from the golden age of piracy as well.

        • By tovej 2026-03-0517:371 reply

          That's not true. Most people are interested in fostering a community, even when it means sacrifice.

          There _have_ however been studies that show that this attitude is prevalent in (neoclassical) economics students and others who are exposed to (neoclassical) economic thinking: https://www.sciencedirect.com/science/article/abs/pii/S22148...

          It's very effective propaganda, and we have a good example of it here. (Not saying you're spreading it maliciously, but you are spreading it.)

          • By Throaway1985123 2026-03-0519:501 reply

            People are in it for themselves...when it comes to participating in our capitalist economic system. The 2nd part is often left unsaid.

            • By WarmWash 2026-03-0520:242 reply

              Humans overwhelmingly sort themselves into groups that offer them the best value proposition. When an individual's circumstances change, and with them the value proposition of the group, people overwhelmingly move to a new group. It's not a capitalist or socialist thing.

              • By tovej 2026-03-066:40

                I'm in the groups I am because I like the people and what they're doing. I don't generally hang out with people because I expect it to benefit me.

              • By ozmodiar 2026-03-0523:25

                I'm having trouble thinking of groups of people who are even able to change members outside of modern capitalism. Through most of human history we've been stuck with our group or tribe. Heck, I see people stick with groups that are toxic to them just out of the sense of connection it gives.

        • By shadowgovt 2026-03-0518:11

          If she's a practicing artist, she almost certainly cut her teeth doing tracing at some point. And if a digital artist, she almost certainly used a cracked copy of a tool.

          The big eye-opener for me in college was taking a class that put me up-close with artists and learning that there were, in the whole class, a grand total of two students who hadn't started doing 3D modeling on a cracked copy of Maya (and the two, if memory serves, learned on Blender).

      • By raincole 2026-03-0515:26

        Sinix even explicitly says that AI is an IP theft machine but it's okay to use AI to generate 360 rotation video to market your 2D works[0].

        To summarize this era we live in: my AI usage is justified but all the other people are generating slop.

        [0]: https://www.youtube.com/watch?v=z8fFM6kjZUk

        [1]: Disclaimer: I deeply respect Sinix as an art educator. If it weren't for him I wouldn't have learnt digital painting. But it's still quite a weird take from him.

    • By theshrike79 2026-03-058:532 reply

      Also "AI" has been in gaming, especially mobile gaming, for a literal decade already.

      Household name game studios have had custom AI art asset tooling for a long time that can create art quickly, using their specific style.

      AI is a tool and as Steve Jobs said, you can hold it wrong. It's like plastic surgery, you only notice the bad ones and object to them. An expert might detect the better jobs, but the regular folk don't know and for the most part don't care unless someone else tells them to care.

      And then they go around blaming EVERYTHING as AI.

      • By keyringlight 2026-03-0511:241 reply

        Another example is upscaled texture mods, which were a trend long before large language models took off. Mods to improve a game's textures are definitely not new, and probably always involved borrowing from other sources, but the ability to automate/industrialize that (plus, presumably, a lot of available training material) meant there was a big wave of that mod category a few years back. My impression is that gamers will overlook a lot so long as it's 'free', or are at least very anti-business (even if the industry they enjoy relies upon it); the moment money is involved, they suddenly care a lot about the whole fabric being handmade and need verification that everyone involved was handsomely rewarded.

        • By KellyCriterion 2026-03-0511:532 reply

          This should be completely crushed by Nano Banana models?

          • By theshrike79 2026-03-0511:571 reply

            The issue isn't objective quality or realism, it's sticking to a specific style consistently.

            _Everyone_ (and their grandmother) can instantly tell a ChatGPT generated image, it has a very distinct style - and in my experience no amount of prompting will make it go away. Same for Grok and to a smaller degree Google's stuff.

            What the industry needs (and uses) is something they can feed a, say, wall texture into and the AI workflow will produce a summer, winter and fall variant of that - in the exact style the specific game is using.

            • By mejutoco 2026-03-0513:461 reply

              I think txt2img and img2img are terms to find those uses.

              • By bavell 2026-03-0514:432 reply

                And ComfyUI workflows. People have been doing this for a while now.

                • By mejutoco 2026-03-069:20

                  And stablediffusion-web-ui before that and others, yes.

                  When googling, txt2img and img2img, or txt2video img2video etc. (for video) are useful terms, since they encapsulate the usage in a few terms. One could search img2video comfyui workflows, for example.

                  I thought it would be useful for the conversation to provide these terms, not mentioned before in the thread.

                • By theshrike79 2026-03-0517:18

                  ComfyUI is relatively new, but pretty good at what it does

          • By raincole 2026-03-0512:42

            If we're talking about texture upscaling alone (I suppose that's what the parent comment means), Nano Banana is a huge overkill.

      • By delaminator 2026-03-059:542 reply

        "I hate CGI video"

        "So you hated the TV Series Ugly Betty then?"

        "What? that's not CGI!"

        This video is 15 years old

        https://www.youtube.com/watch?v=rDjorAhcnbY

        • By wormpilled 2026-03-0510:342 reply

          I think that's a different category, though. Those backgrounds are actual video recordings of real places, not 3D environments modeled from scratch. It looks 'real' because the background actually exists.

        • By runarberg 2026-03-0514:201 reply

          Your case would have been better if you had used Mad Max: Fury Road, or even Titanic, as an example, rather than a mediocre TV show nobody remembers. Ugly Betty used green screens to make production cheaper; that did not improve the show (although it may have improved the profit margins). Mad Max: Fury Road, on the other hand, used CGI to significantly improve the visual experience. The added CGI probably increased the cost of the production, and it is one of the greatest, most awesome movies ever made.

          Actually, if you look at the scene from Grey's Anatomy [0:54], you can see where CGI is used to improve the scene (rather than cut costs), and you get that amazing shot of the Washington State Ferry crash.

          I think you can see the parallels here. When people say they hate AI, they are generally referring to the sloppy stuff it generates. It has enabled a proliferation of cheap slop, and with few exceptions it seems like generating cheap slop is all it does (those exceptions being specialized tools, e.g. in image processing software).

          • By delaminator 2026-03-0515:001 reply

            > mediocre TV show

            Won 3 Primetime Emmys

            52 wins & 124 nominations total

            https://www.imdb.com/title/tt0805669/awards/

            I guess it's just too lowbrow for you.

            • By runarberg 2026-03-0516:01

              Winning awards does not stop a show or movie from being a forgettable cash grab.

              However, my counter examples included Grey’s Anatomy, Mad Max, and Titanic. None of these are considered high literature exactly (and all of them are award winning as well).

    • By trashymctrash 2026-03-058:52

      If you read the next couple of paragraphs, the author addresses this:

      > That said, Steam's policy has been recently updated to exclude dev tools used for "efficiency gains", but which are not used to generate content presented to players.

      I only quoted the first paragraph, but there is more.

    • By tovej 2026-03-059:163 reply

      An LLM has never saved me time. It has always produced something that doesn't quite work, has the rough shape of what I want, but somehow always gets all the details wrong.

      I can type up what I want much faster and be sure it's at least solving the right problem, even if it may have bugs.

      There are also tools to generate boilerplate that work much much better than LLMs. And they're deterministic.

      • By dntrshnthngjxct 2026-03-0510:233 reply

        If you do not plan out the architecture soundly, no amount of prompting will fix it. I know this because my "handmade" project, built for backward compatibility on horrible architecture, keeps being badly patched by the LLM, while the ones that relied on preemptive planning of features and architecture end up working right.

        • By dncornholio 2026-03-0511:43

          LLMs keep messing up even on a plain Laravel codebase.

        • By mikkupikku 2026-03-0511:171 reply

          I think that's true, but something even more subtle is going on. The quality of the LLM output depends on how it was prompted in a way more profound than I think most people realize. If you prompt the LLM using jargon and lingo that indicate you are already well experienced with the domain space, the LLM will roleplay an experienced developer. If you prompt it like you're a clueless PHB who's never coded, the LLM will output shitty code to match the style of your prompt. This extends to architecture: if your prompts are written with a mature understanding of the architecture that should be used, the LLM will follow suit, but if not, the LLM will just slap together something that looks like it might work but isn't well thought out.

          • By simonask 2026-03-0514:103 reply

            This is magical thinking.

            LLMs are physically incapable of generating something “well thought out”, because they are physically incapable of thinking.

            • By mikkupikku 2026-03-0517:32

              I don't care if the machine has a soul, I only care what the machine can produce. With good prompting, the machine produces more ""thoughtful"" results. As an engineer, that's all I care about.

            • By Marha01 2026-03-0516:461 reply

              It is magical thinking to claim that LLMs are definitely, physically incapable of thinking. You don't know that. No one knows that, since such large neural networks are opaque black boxes that resist interpretation, and we don't really know how they function internally.

              You are just repeating that because you read that before somewhere else. Like a stochastic parrot. Quite ironic. ;)

              • By tovej 2026-03-0520:20

                They really aren't that mysterious. We can confidently say that they function at the lexical level, using Monte Carlo principles to carve out a likely path in lexical space. The output depends on the distribution of n-grams in the training set and the composition of the text in its context window.

                This process cannot produce reasoning.

                1) an LLM cannot represent the truth value of statements, only their likelihood of being found in its training data.

                2) because it uses lexical data, an LLM will answer differently based on the names / terms used in a prompt.

                Both of these facts contradict the idea that the LLM is reasoning, or "thinking".

                This isn't really a very hot take either; I don't think I've talked to a single researcher who thinks that LLMs are thinking.
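
                As a toy illustration of the sampling process being described (this is not any real model's code; the bigram table and logit values are invented for the example, and a real LLM conditions on the whole context window rather than one previous token):

```python
import math
import random

def softmax(logits, temperature=1.0):
    # Turn raw scores into a probability distribution over next tokens.
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Toy "model": next-token scores depend only on the previous token.
BIGRAM_LOGITS = {
    "the": {"cat": 2.0, "sat": 0.0, "the": -5.0},
    "cat": {"sat": 2.5, "the": 0.0, "cat": -3.0},
    "sat": {"the": 1.0, "cat": -1.0, "sat": -3.0},
}

def sample_next(context, temperature=1.0):
    scores = BIGRAM_LOGITS[context[-1]]
    vocab = list(scores)
    probs = softmax(list(scores.values()), temperature)
    # Sample proportionally to probability: one step along a "likely path".
    return random.choices(vocab, weights=probs, k=1)[0]

context = ["the"]
for _ in range(3):
    context.append(sample_next(context))
```

                Lowering the temperature concentrates probability on the most likely continuation; raising it flattens the distribution. Either way, the loop only ever scores continuations by likelihood, which is the point being made above.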

        • By tovej 2026-03-0520:111 reply

          You're just strawmanning now. I've prompted extremely well-specced, contained features, and the LLM has failed nonetheless.

          In fact, the more details I give it about a specific problem, the more it seems to hallucinate. Presumably because the problem is further outside the training set.

          • By dntrshnthngjxct 2026-03-139:47

            Because you need to consider the context window, and thus separate the prompts by task. Separating by task and planning things out is still your own work; no AI can replace that. Assuming you do that properly, AI-generating the code may save you up to 15% of your full work time. Please reread my comment ("If you do not plan out the architecture soundly"): planning includes breaking the task down and making multiple prompts.

            Our job is to break problems down into simpler ones until they are easily solvable, and if a machine simplifies the last steps, it is fine.

      • By bendmorris 2026-03-0516:09

        You're going to get a lot of "skill issue" comments, but your experience basically matches mine. I've only found LLMs to be useful for quick demos where I explicitly didn't care about the quality of implementation. For my core responsibility it has never met my quality bar, and getting it there has not saved me time. What I'm learning is that different people and domains have very different standards for that.

      • By vntok 2026-03-0510:001 reply

        > An LLM has never saved me time. It has always produced something that doesn't quite work, has the rough shape of what I want, but somehow always gets all the details wrong.

        This reads like a skill issue on your end, in part at least in the prompting side.

        It does take time to reach a point where you can prompt an LLM sufficiently well to get a correct answer in one shot, developing an intuitive understanding of what absolutely needs to be written out and what can be inferred by the model.

        • By Jooror 2026-03-0510:164 reply

          I’m curious how you landed on “git gud; prompt better” and not “maybe the domain I work in is a better fit for LLM code”. Or, to be a bit less generous: consider the possibility that the code you’re generating is boilerplate, marshaling, and/or API calls; a facade of perceived complexity over something that’s as complex as a filter-map or two.

          • By 3371 2026-03-0510:492 reply

            Sharing my 2 cents.

            In the past 2 months I've been using all the SOTA models to help me design a new DSL for narrative scripting (such as game storytelling) and a C# runtime implementation of the script player engine.

            The language spec and design are about 95% authored by me up to this point; I have the LLMs work on the 2nd layer (the implementation specs/guidelines) and the 3rd layer (the concrete C# implementation).

            Since it's a new language, I consider it a somewhat new/novel task for LLMs (at least, not boilerplate stuff like an HTTP API or CRUD service). I'd say these LLMs have been very helpful. You can tell they sometimes get confused and have trouble complying with the foreign language spec and design, but they are mostly smart enough to carry out the objectives, and they get better and better once the project is on track and has plenty of files/resources to read and reference.

            And I'd also say "prompt better" is an important factor, just much more nuanced/complicated. I started with zero experience with LLM agents and have learned a lot about how to tame them, and developed a protocol for collaborating with agents. This all comes from countless trials and errors, but in the end it boils down to "prompt better".

            • By Jooror 2026-03-0514:561 reply

              I wonder if my intuition here is correct; I would posit that “PL implementation” is a far more popular and well-explored field than it seems. How many toy/small/labor-of-love langs make it to Show HN? How many more simply don’t?

              I’ve never personally caught the language implementation bug. I appreciate your perspective here.

              • By 3371 2026-03-0515:061 reply

                I totally agree, and I was fully aware of how common people make language for fun when I replied.

                But I feel like the rationale still stands: considering LLMs' nature, common boilerplate tasks are easy because they can kind of just "decompress" them from training data. But for a new language design, unless the language is almost identical to some other language captured by the model, "decompression" would just fail.

                • By tovej 2026-03-0516:441 reply

                  As someone who has implemented a fair few DSLs, lexical and syntactic analysis is pretty much the same anywhere, and the structure of the lexer/parser does not really depend on the grammar of the language.

                  And even semantic analysis is at least very similar in most PLs. Even DSLs. Assuming you're using concepts like variables and functions.

                  When it comes to codegen / interpreter runtimes, things start to diverge. But this also depends on the use case. More often than not a DSL is a one-to-one map to an existing language, with syntactic sugar on top.
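
                  To illustrate how interchangeable that front-end structure is, a minimal lexer skeleton looks roughly the same for any such language (this is a generic sketch; the token names are illustrative and not from any project in this thread):

```python
import re

# Swap out the token table and the rest of the lexer stays the same
# for almost any language or DSL.
TOKEN_SPEC = [
    ("NUMBER", r"\d+"),
    ("IDENT",  r"[A-Za-z_]\w*"),
    ("OP",     r"[+\-*/=]"),
    ("LPAREN", r"\("),
    ("RPAREN", r"\)"),
    ("SKIP",   r"\s+"),
]
MASTER_RE = re.compile("|".join(f"(?P<{name}>{pat})" for name, pat in TOKEN_SPEC))

def tokenize(src):
    tokens = []
    for m in MASTER_RE.finditer(src):
        kind = m.lastgroup
        if kind != "SKIP":  # whitespace is discarded
            tokens.append((kind, m.group()))
    return tokens
```

                  Targeting a different DSL mostly means editing TOKEN_SPEC (and the grammar rules one layer up); the scanning loop itself does not change.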

                  I'm curious, what's the DSL you're working on?

                  • By 3371 2026-03-0518:541 reply

                    It's pretty much WIP but if you are interested here is the repo. https://github.com/No3371/zoh

                    The points you brought up all are valid. Lexer, parser and general concepts are not language-specific, yes, and I wasn't talking about how the implementation is different.

                    When I said "you can tell they sometimes get confused and have trouble complying with the foreign language spec and design", I was thinking about the many times they just fail to write in my language even when provided with the full language spec. LLMs don't "think", and boilerplate is easy for them because highly similar syntax structures, even identical code, exist in their training data; they are kind of just copying stuff. But that doesn't work so well when they are tasked with writing in an original language that is... too creative.

                    • By tovej 2026-03-066:49

                      I see, so the LLM was generating text _in_ the DSL. Makes sense.

            • By tovej 2026-03-0516:36

              I am prompting better. It doesn't help the LLM be more productive than me on a regular Tuesday.

              Sure, I can get the task done by delegating everything to an agentic workflow, but it just adds a bunch of useless overhead to my work.

              I still need to know what the code does at the end of the day, so I can document it and reason about it. If I write the code myself, it's easy. If an LLM does it, it's a chore.

              And even without those concerns, the LLM is still slower than me. Unless it's trivial boilerplate, in which case other tools serve me better and cheaper.

              I'll note that a compiler is one of the most well understood and implemented software projects, much of it open source, which means the LLM has a lot of prior art that it can copy.

          • By rybosworld 2026-03-0511:321 reply

            When web search first arrived, the same thing happened. That is, some people didn't like using the tool because it wasn't finding what they wanted. This is still true for a lot of folks today, actually.

            It's less "git gud; prompt better", and more, "be able to explain (well) what you want as the output". If someone messages the IT guy and says "hey my computer is broken" - what sort of helpful information can the IT guy offer beyond "turn it on and off again"?

            • By tovej 2026-03-0516:451 reply

              I can assure you I give LLMs all the information they need. Including hints to what kind of solution to use. They still fail.

              • By rybosworld 2026-03-0517:502 reply

                So how do you rectify your anecdotal experience against that of public figures in the industry who we can all agree are at least pretty good engineers? I think that's important because, if we want to stay ~anonymous, neither you nor I can verify the other's reputation (and therefore the other's relative mastery of the "Craft").

                Here are some well known names who are now saying they regularly use LLM's for development. For many of these folks, that wasn't true 1-2 years ago:

                - Donald Knuth: https://www-cs-faculty.stanford.edu/%7Eknuth/papers/claude-c...

                - Linus Torvalds: https://arstechnica.com/ai/2026/01/hobby-github-repo-shows-l...

                - John Carmack: https://x.com/ID_AA_Carmack/status/1909311174845329874

                My point being: some random guy on the internet says LLMs have never been useful for him and only output garbage, vs. some of the best engineers in the field using the same tools and saying the exact opposite of what you are.

                • By bendmorris 2026-03-0520:37

                  >Here are some well known names who are now saying they regularly use LLM's for development. For many of these folks, that wasn't true 1-2 years ago:

                  This is a huge overstatement that isn't supported by your own links.

                  - Donald Knuth: the link is him acknowledging someone else solved one of his open problems with Claude. Quote: "It seems that I’ll have to revise my opinions about “generative AI” one of these days."

                  - Linus Torvalds: used it to write a tool in Python because "I know more about analog filters—and that’s not saying much—than I do about python" and he doesn't care to learn. He's using it as a copy-paste replacement, not to write the kernel.

                  - John Carmack: he's literally just opining on what he thinks will happen in the future.

                • By tovej 2026-03-0520:37

                  You are overstating those sources. That alone makes me doubt that you're engaging in this discussion in good faith.

                  I read them all, and in none of them do any of the three say that they "regularly use LLMs for development".

                  Carmack is speculating about how the technology will develop. And Carmack has a vested interest in AI, so I would not put any value on this as an "engineer's opinion".

                  Torvalds has vibe coded one visualizer for a hobby project. That's within what I might use to test out LLM output: simple, inconsequential, contained. There's no indication in that article that Linus is using LLMs for any serious development work.

                  Knuth is reporting about somebody else using LLMs for mathematical proofs. The domain of mathematical proofs is much more suitable for LLM work, because the LLM can be guided by checking the correctness of proofs.

                  And Knuth himself only used the partial proof sent in by someone else as inspiration for a handcrafted proof.

                  I don't mind arguing this case with you, but please don't fabricate facts. That's dishonest.

          • By mikkupikku 2026-03-0511:211 reply

            > I’m curious about how you landed “git gud; prompt better” and not “maybe the domain I work in is a better fit for LLM code”.

            1. Personal experience. Lazy prompting vs careful prompting.

            2. They're coincidentally good at things I'm good at, and shit at things I don't understand.

            3. Following from 2, when used by somebody who does understand a problem space which I do not, they easily succeed. That dog vibe-coding games succeeded in getting Claude to write games because his master knew a thing or two about it. I, on the other hand, have no game dev experience, and almost no hobby experience with games specifically, so I struggle to get any game code that even remotely works.

            • By Jooror 2026-03-0514:462 reply

              Irrespective of the domain you specifically listed in 3 (game dev is, believe it or not, one of the “more complex” domains), you have completely missed the point.

              > 2. They're coincidentally good at things I'm good at, and shit at things I don't understand.

              This may well be! In the perfect world this would be balanced with the knowledge that maybe “the things you’re good at” are objectively* easier than “things you don’t understand”. Speaking for myself, I’m proficient in many more easy things than hard things.

              *inasmuch as anything can be “objectively” easier

              • By mikkupikku 2026-03-0517:25

                I have definitely considered the possibility that I'm simply good at easy things and the LLM is good at easy things, and that hard things are hard for both of us. And there certainly must be some element of that going on, but I keep noticing that different people get different quality results for the same kind of problems, and it seems to line up with how good they themselves would be at that task. If you know the problem space well, you can describe the problem (and approaches to it) with a precision that people unfamiliar with the problem space will struggle with.

                I think you can observe this in action by making vague requests, seeing how it does, then rolling back that work and making a more precise request using relevant jargon, and comparing the results.

                For example, I asked Claude to make a system that recommends files with similar tags. It gave me a recommender that just orders files by how many tags they have in common with the query file. This is the kind of solution somebody may think up quickly, but it doesn't actually work great in practice. Then I reverted all of that and instead specified that it should use a vector space model with cosine similarity. It did pretty well, but there was something subtly off. That, however, is about the limit of my expertise in this direction, so I tabbed over to a session with ChatGPT and discussed the problem at a high level for about 20 minutes, then asked ChatGPT to write up a single terse, technically precise paragraph describing the problem. I told ChatGPT to use no bullet points and write no pseudocode, telling it the coding agent was already an expert in the codebase, so let it worry about the coding. I gave that paragraph to Claude and suddenly it clicked; it banged out a working solution without any drama. So I conclude the quality of the prompting determined the quality of the results.
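
                For reference, the vector-space approach mentioned above looks roughly like this (a sketch of the technique with invented tag data, not the code the agent actually produced):

```python
import math
from collections import Counter

def cosine_similarity(a, b):
    # Treat each file's tags as a sparse vector and compare by angle,
    # so heavily tagged files don't win just by having more tags.
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def recommend(query_tags, library, k=3):
    # library maps filename -> list of tags (invented layout for illustration).
    q = Counter(query_tags)
    scored = sorted(
        ((cosine_similarity(q, Counter(tags)), name)
         for name, tags in library.items()),
        reverse=True,
    )
    return [name for score, name in scored[:k] if score > 0]
```

                Unlike ordering by raw tag overlap, this ranks a file sharing 2 of its 2 tags above one sharing 2 of 10, which is the behavioral difference between the first and second solutions described.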

          • By vntok 2026-03-0511:181 reply

              The parent is specifically talking about producing boilerplate code (a domain at which LLMs excel) and not having had any success at it. It's therefore not a leap of logic to assume they haven't put (enough) effort into getting better at prompting first, which is perfectly fine per se, but it leans towards a skill issue rather than an immutable property of gen AI.

            The uncomfortable fact remains that one cannot really expect to get much better results from an LLM without putting some work themselves. They aren't magical oracles.

            • By tovej 2026-03-0516:47

              That is not at all what I said, please read my post more carefully before speculating.

              I am talking about using LLMs in general, not for boiler plate specifically.

              My point about boilerplate is that I have tools that solve this for me already, and do it in a more predictable way.

    • By BloondAndDoom 2026-03-0513:061 reply

      On the topic of procedural generation: roguelikes are all about it, new-generation Diablo-like games definitely have similar things, and so do well-respected new games like Blue Prince. There has never been a more successful period for procedural generation in games than now, and all of these are pre-AI. AI-powered procedural generation is the wet dream of roguelike lovers.

      • By hiddevb 2026-03-0513:231 reply

        I don't think I agree with this take.

        I love procedural generation, and there is definitely a craft to it. Creating a process that generates a playable level or world is just very interesting to explore as an emergent system. I don't think LLMs will make these systems more interesting by default. Of course there are still things to explore in this new space.

        It's similar to generative/plotter art compared to a midjourney piece of slop. The craft that goes into creating the code for the plotter is what makes it interesting.

        • By 1899-12-30 2026-03-0515:02

          The key to non-disruptive LLM integration is using it in a purely additive way, supplementing a feature with functionality that couldn't be done before rather than replacing an existing part. Like adding AI-generated images to accompany the Dwarf Fortress artifact descriptions: it could be completely togglable, wouldn't disrupt any existing mechanics, but would provide value to those who don't mind the slop.

    • By Nursie 2026-03-0511:20

      > Spore is well acclaimed.

      Spore was fun (IMHO) but at the time of release was considered a disappointment compared to its hype.

    • By Izkata 2026-03-0516:24

      > Spore is well acclaimed.

      Its creature creator was, but as a game it was always mediocre to bad. They had to drop something like 90% of the features and drastically dumb down the stages to get it released.

      It was also what introduced a lot of us to SecuROM DRM - it bricked my laptop in the middle of a semester.

    • By fzeroracer 2026-03-0510:131 reply

      > Yeah, exactly. And LLM help developers save time from writing the same thing that has be done by other developers for a thousand times. I don't know how one can spins this as a bad thing

      Do you ever ask why you're writing the same thing over and over again? That's literally the foundational piece of being an engineer; understanding when you're reinventing the wheel when there's a perfectly good wheel nearby.

      • By porridgeraisin 2026-03-0511:261 reply

        When you make a function

          f(a, b, c)
         
        It is reusable only if simply changing a, b, c is enough to produce the function that you want. An options object etc. _parameterises_ that function further. It is useful only if the variability in reuse you desire is spanned by the parameters. This is syntactic reuse.

        With LLMs, the parameterisation goes into semantic space. This makes code more reusable.

        A model trained on all of GitHub can reuse all of that code regardless of whether it is syntactically reusable or not. This is semantic reuse, which is naturally much broader.
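[Editor's note: the syntactic-reuse limit the commenter describes can be made concrete with a toy example of my own, not from the thread. A parameterised function covers only the variation its parameters span; a semantically different variant needs new code.]

```python
# Syntactic reuse: lo and hi parameterise clamping, so any
# saturating clamp is covered by the same function.
def clamp(x, lo, hi):
    return max(lo, min(hi, x))

# But "clamp, except wrap around instead of saturating" is a
# semantic variation the parameters do not span: no choice of
# lo/hi turns clamp into wrap. Syntactic reuse fails and new
# code must be written (or, per the comment, produced by a model
# that has seen both patterns).
def wrap(x, lo, hi):
    return lo + (x - lo) % (hi - lo)

print(clamp(15, 0, 10))  # saturates at the upper bound
print(wrap(15, 0, 10))   # wraps past the upper bound instead
```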

        • By fzeroracer 2026-03-0511:442 reply

          There are two important failures I see with this logic:

          First, I am not arguing for reusability. Reusability is one of the most common mistakes you can make as a software engineer because you are over-generalizing what you need before you need it. Code should be written for your specific use case, and only generalized as problems appear. But if you can recognize that your specific use case fits a known problem, then you can find the best way to solve that problem, faster.

          Second, when you're using an LLM to make your code more 'reusable' you are taking full responsibility for everything that LLM vomits out. You're no longer assembling a car from well known parts, taking care to tailor it to your use case as needed. You're now building everything in said car, from the tires to the engine and the rearview mirror.

          Coding is a constant balance between understanding what you're solving for and what can solve it. Using LLMs takes the worst of both worlds, by offloading both your understanding of the problem and your understanding of the solution.

          • By raw_anon_1111 2026-03-0514:30

            > Second, when you're using an LLM to make your code more 'reusable' you are taking full responsibility for everything that LLM vomits out. You're no longer assembling a car from well known parts, taking care to tailor it to your use case as needed. You're now building everything in said car, from the tires to the engine and the rearview mirror.

            If you are anything above a mid-level ticket taker, your responsibility exceeds what you personally write. When I was an “architect” responsible for the implementation and integration work of multiple teams at product companies - mostly startups - and now as a tech lead in consulting, I'm responsible for knowing how a lot of code works that I didn't write, and I'm the person called on the carpet by the director/CTO then and the customer now.

            I was responsible for what the more junior developers “vomited out”, for the outside consulting company doing the Salesforce integration, or, God forbid for a little while, for the contractors in India. I no more care about whether the LLM decided to use a for loop or a while loop than I cared about the OSQL (not a typo) that the Salesforce consultants used. I care about whether the resulting implementation meets the functional and non-functional requirements.

            On my latest two projects, I understood the customer from talking to sales before I started, I understood the business requirements from multiple calls with the customer, and I understood the architecture because I designed it myself, from the diagrams and 8 years of working with (and, in a former life, at) AWS, and reviewed it with the customer.

            As far as reusability? I’ve used the same base internal management web app across multiple clients.

            I built it (with AI) for one client. I extracted the reusable parts, removed the client-specific parts, deployed a demo internally (with AI), and modified it and added features (with AI) for another client. I haven't done web development seriously since 2002, except a little copy-paste work. I didn't look at a line of code. I used AWS Cognito for authentication. I verified the database user permissions.

            Absolutely no one in the value chain cares if the project was handcrafted or written by AI - as long as it was done on time, on budget and meets requirements.

            Before the gatekeeping starts: I've been working for 30 years across 10 jobs, and before that I was a hobbyist for a decade who started programming in 65C02 assembly in 1986.

          • By porridgeraisin 2026-03-0512:08

            I am not talking about using an LLM to make code reusable in the sense you're arguing.

            My point is that the very act of training an LLM on any corpus of code automatically makes all of that code reusable, in a much broader semantic way rather than through syntax, because the LLM uses a compressed representation of all that code to generate the function you ask it for. It is like having an npm registry that already contains, in compressed form, the code specific to your situation (like you were saying) that you want to write.

    • By larodi 2026-03-0512:581 reply

      > No one cares about how the code is written.

      I would overstate:

      No one even cares how architecture is done. Unless you are the one fixing it or maintaining it.

      Sorry, no one. We all know Apple did some great stuff with their code, but we care more about the awful work done on the UI, right? I mean, the UI seems not to be breaking in these new OSes, which is an amazing feature... for a game perhaps, and most likely the code is top notch. But we care about other things.

      This is the reality, and the blind notion that so many people care about code is simply untrue. Perhaps someone putting money on developers cares, but we already have so many examples of money put on implementation no matter what the code is. We can see funds thrown everywhere at obnoxious implementations, particularly in large enterprises, sustained only by the weird ecosystem of white-collar jobs that keeps this impression alive.

      Very few people care about the code in total, and this can be observed very easily; perhaps it can even be proved that it could be no other way.

      • By TimTheTinker 2026-03-0513:201 reply

        This is overstating it. Computers are amazing machines, and modern operating systems are also amazing. But even they cannot completely mask the downstream effects of poor quality code.

        You say you don't care, but I bet you do when you're dealing with a problem caused by poor code quality or bad choices made by the developer.

        • By miningape 2026-03-0516:491 reply

          Yep willing to bet that the majority of people saying "users don't care how well the code is written" will crash out when some software they're using is slow and buggy, even more extremely if it glitches and deletes their work.

          Just like how most people don't care how well a bridge is designed... until it collapses.

          • By larodi 2026-03-0521:23

            Yes, they will hit the TV, or throw the remote, or bang the mouse or the box. I mean, if you tell me this is 'care', no... this is outrage, and it would probably not even be focused on one particular component.

            And for what it's worth, the reason for failure may not be one particular nut, but the combination of them all.

            The whole idea that most software is done in good faith is just plainly wrong. Most of the software we rely on daily - all the enterprise bullshit - is very, very often not done in good faith, but rather just made, seamed together, and helped into some weird equilibrium of temporary performance.

            Perhaps hardware is done right more often than software ever is.

    • By simianparrot 2026-03-068:52

      Let's not forget No Man's Sky here. Or Elite Dangerous' planet-scale procedural generation using solar system properties to fuel the deterministic but procedural generation of tectonic plates that again seed how a planet's surface is deterministically generated, even down to impact craters over millennia, for a universe of billions of consistent deterministically generated full-scale planets you can land on. Something you couldn't do without proc-gen because there's not enough disk space to store it.
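[Editor's note: the reason "there's not enough disk space" is no obstacle is that deterministic proc-gen regenerates content from seeds on demand. A minimal sketch of the idea, with hypothetical names; this is an illustration of the general technique, not Elite Dangerous' actual algorithm.]

```python
import hashlib
import random

def feature_rng(world_seed: int, *path) -> random.Random:
    """Derive a deterministic RNG for one feature (e.g. one crater
    on one planet) from a global seed plus a coordinate path.
    Nothing is stored on disk: revisiting the same path always
    regenerates identical data."""
    key = f"{world_seed}/" + "/".join(map(str, path))
    digest = hashlib.sha256(key.encode()).digest()
    return random.Random(int.from_bytes(digest[:8], "big"))

# The same crater always gets the same radius, on every visit...
r1 = feature_rng(42, "system-7", "planet-3", "crater-99").uniform(1, 500)
r2 = feature_rng(42, "system-7", "planet-3", "crater-99").uniform(1, 500)
# ...while a neighbouring crater draws from an independent stream.
r3 = feature_rng(42, "system-7", "planet-3", "crater-100").uniform(1, 500)
print(r1 == r2, r1 == r3)
```

Hashing the path rather than reusing one global RNG is what makes generation order-independent: any planet can be generated in isolation, which is essential when players can land anywhere.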

    • By larsiusprime 2026-03-0514:03

      Also RE: procgen, one of the hit games right now, Mewgenics, is doing super well and uses it extensively. Obviously it's old school procgen that makes use of tons of authored content, but it's still procgen.

    • By AnotherGoodName 2026-03-0516:54

      Games with AI art assets are some of the most popular right now in any case, Arc Raiders being a great example where some of the voice assets are AI-generated.

      Be careful of reading too much into any viewpoint on the internet. Apparently no one used Facebook or Instagram, and everyone boycotts anything with AI in it.

      In reality I think you'd be foolish not to make use of the tools available. Arc Raiders did the right thing by completely ignoring those sorts of comments. There may be a market for 100% organic video games, but there's also a market for mainstream 'uses every AI tool available' games.

    • By Throaway1985123 2026-03-0519:47

      Spore was not well-acclaimed precisely because it failed to live up to its promises as a world-builder. Only the first two stages were any good.

    • By vor_ 2026-03-060:16

      > If you actually read the words used in Steam AI survey you'll know Steam has completely caved in for AI-gen code as well.

      And if you actually read the article, you'd see it addressed that.

      > Yeah, exactly. And LLM help developers save time from writing the same thing that has be done by other developers for a thousand times.

      Like a library?

    • By krige 2026-03-0511:051 reply

      > Spore is well acclaimed

      And yet it also effectively ended Will Wright's career. Rave press reviews are not a good indicator of anything, really.

      • By h2zizzle 2026-03-0513:04

        Tbf Spore's acclaim comes with the caveat that it completely failed to live up to years of pre-release hype. Much of the goodwill it's garnered since, which is reflected in review scores, only came after the storm of controversy over Spore not being "the ultimate simulator which would mark the 'end of history' for gaming" died down.

        And you wouldn't really have any idea this was the case if you weren't there when it happened.

    • By amiga386 2026-03-0514:241 reply

      > Players only object against AI art assets. And only when they're painfully obvious.

      Restaurant-goers only object against you spitting in their food if it's painfully obvious (i.e. they see you do it, or they taste it)

      Players are buying your art. They value it based on how you say you made it. They came down hard on asset-flipping shovelware before the rise of AI (where someone else made the art and you just shoved it together, and the combination didn't add up to much), and they come down hard on AI slop today, especially if you don't disclose it and get caught.

      • By riversflow 2026-03-0515:391 reply

        > They came down hard on asset-flipping shovelware before the rise of AI

        That’s not what I remember, I remember PUBG being a viral hit that extensively used asset flipping.

        • By amiga386 2026-03-0516:31

          The more nuanced take is that, if somehow your game is actually good or interesting despite being full of other people's assets, players will see the value that you created (e.g. making a fun game). This is missing in most "asset-flip" games.

          Another example comes from Getting Over It with Bennett Foddy: despite using a lot of pre-bought art assets, the entire game bears the indisputable hallmark of Bennett Foddy. It has a ridiculously tricky control mechanism, and the whole game world you play in, should you make any mistakes, has a strong likelihood of dropping you right back at the start, and it's all your own fault for not being able to recover from your mistakes under pressure. You can see this theme in his other games, like QWOP and Baby Steps.

    • By pojzon 2026-03-0521:40

      3/4 of all code written now is auto-complete. Code was never the hard part.

    • By SirMaster 2026-03-0514:172 reply

      >No one cares about how the code is written.

      People definitely do care. Nobody wants vibe-coded buggy slop code for their game.

      They want well designed and optimized code that runs the game smoothly on reasonable hardware and without a bunch of bugs.

      • By llm_nerd 2026-03-0514:23

        Your second paragraph does not follow, at all, from the first. These are completely orthogonal demands.

        The gaming industry is absolutely overwhelmed with outrageously inefficient, garbage, crash-prone code. It has become the norm, and it has absolutely nothing to do with AI.

        Like https://nee.lv/2021/02/28/How-I-cut-GTA-Online-loading-times.... That something so outrageously trashy made it into a hundreds-of-millions-of-dollars game, cursing millions of players to 10+ minute waits, should shame everyone involved. It's actually completely normal in that industry: trash code, thoughtlessly and lazily implemented, is the norm.

        Most game studios would likely hugely improve their game, har har, if they leveraged AI a lot more.

      • By roesel 2026-03-0514:232 reply

        No one wants _buggy slop code_ for their game, but ultimately no one cares whether it has been hand-crafted or vibe-coded.

        As proof, ask yourself which of the following two options you would prefer:

        1. buggy code that was hand-written
        2. optimized code that was vibe-coded

        I'll bet most people will choose 2.

        • By SirMaster 2026-03-0514:271 reply

          I've never seen something as complex as a video game vibe coded that was actually well optimized. Especially when the person doing the prompting is not a software developer.

          So I personally do care and I am someone, so the answer is not no one.

          • By NeutralCrane 2026-03-0522:40

            Vibe coding as we know it has only been a thing for the last 12-18 months, so by definition the vibe-coded games you have seen are the ones that were rushed out.

        • By bigstrat2003 2026-03-0523:25

          > No one wants _buggy slop code_ for their game, but ultimately no one cares whether is has been hand crafted or vibe-coded.

          Right. And vibe coding is only capable of producing buggy slop code, therefore people won't want something which is vibe coded.

    • By mathgradthrow 2026-03-0513:193 reply

      localization? Why would you oppose LLMs doing localization?

      • By fhd2 2026-03-0514:01

        I guess the chain of reasoning would be: AI for art is bad -> Writing is art -> Translation is writing.

        Personally, I do appreciate good localisation, Nintendo usually does a pretty impressive job there. I play games in their original language as long as I actually speak that language, so I don't have too many touch points with translations though.

      • By JadeNB 2026-03-0514:03

        In case they hallucinate? There's no point having content in a wide variety of languages if it's unpredictably different from the original-language content.

      • By teamonkey 2026-03-0517:00

        It’s bad at it. At least, it can’t be guaranteed to get nuance or context correct in a way that doesn’t feel artificial to a fluent speaker.

        My favourite example was where Google translated an information page of the Italian branch of a large multinational as “this is the UK branch of [multinational]”, presumably because the LLM thought that was more contextually appropriate in English.

    • By hamdingers 2026-03-0515:11

      At least to some extent, the anti-ai folks don't care about ai assisted programming because they see programmers as the "techbro" boogieman pushing ai into their lives, not fellow creatives who are also at a crossroads.

    • By lxgr 2026-03-0510:092 reply

      > I don't know how one can spins this as a bad thing.

      People spin all kinds of things if they believe (accurately or not) that their livelihood is on the line. The knee-jerk "AI universally bad" movement seems just as absurd to me as the "AGI is already here" one.

      > Spore is well acclaimed. Minecraft is literally the most sold game ever.

      Counterpoint: Oblivion, one of the first high-profile games to use procedural terrain/landscape generation, seemed very soulless to me at the time.

      As I see it, it's all a matter of how well it's executed. In the best case, a skilled artist uses automation to fill in mechanical rote work (in the same way that e.g. renaissance artists didn't make every single brushstroke of their masterpieces themselves).

      In the worst (or maybe even average? time will tell) case, there are only minimal human-made artistic decisions flowing into a work and the output is a mediocre average of everything that's already been done before, which is then rightfully perceived as slop.

      • By mikkupikku 2026-03-0511:121 reply

        > Counterpoint: Oblivion, one of the first high-profile games to use procedural terrain/landscape generation, seemed very soulless to me at the time.

        Is that even a counter point? Nobody in their right mind would ever claim that procedural generation is impossible to fuck up. The reason Minecraft/etc are good examples is because they prove procedural generation can work, not that it always works.

        • By lxgr 2026-03-0511:26

          True, I should have said "counterexample". Procedural generation is just another tool, in the end, and it can be used for great or mediocre results like any other.

      • By zimpenfish 2026-03-0510:541 reply

        > Oblivion, one of the first high-profile games to use procedural terrain/landscape generation

        I might be misremembering but wasn't the Oblivion proc-gen entirely in the development process, not "live" in the game, which means...

        > "In the best case, a skilled artist uses automation to fill in mechanical rote work"

        ...is what Bethesda did, no?

        • By lxgr 2026-03-0511:261 reply

          Yes, but I beg to differ on the "skilled" part. I find the result very jarring somehow; the scale of the world didn't seem right. (Probably because it was too realistic; part of the art of game terrain design is reconciling the inherently unrealistic scales.)

          • By bombcar 2026-03-0512:42

            WoW had this but you never really thought about it - even the massive capital cities were a few blocks at most.

            The problem with procedural generation is it's hard to make it as action-packed and desirable as WoW zones, and even those quickly become fly-over territory.

  • By wolvesechoes 2026-03-0512:278 reply

    I am bit tired of such discussions.

    I don't care if LLMs are good at coding or bad at it (in my experience the answer is "it depends"). I don't care how good they are at anything else. What matters in the end is that this tech is not here to empower the common person (although it could). It is not here to make our lives better, more worthwhile, more satisfying (it could do that as well). It is here to reduce our agency, to make it easier to fire us, to put us in an even more precarious position, to funnel even more wealth from those that have little to those that have a lot.

    Yet what I see are pigs discussing the usefulness of a bacon-making machine just because it also happens to be able to produce tasty soybean feed. They forget that it is not soybean feed that their owner bought this machine for, and that their owner expects a return on such an investment.

    • By slibhb 2026-03-0515:053 reply

      > What matters in the end is that this tech is not to empower a common person (although it could).

      How do you figure? 20 dollars/month is insanely cheap for what OpenAI/Anthropic/Google offer. That absolutely qualifies as "empowering a common person". It lowers barriers!

      A lot of the anti-AI sentiment on HN concerns people losing their jobs. I don't think this will happen: programmers who know what they're doing are going to be way, way more effective at using AIs to generate code than others.

      But even if it is true and we do see job losses in tech: are software devs really "in a precarious position"? Do they really qualify as "those that have little"? Seems like a fantasy to me. Computer programmers have done great over the past 30 years.

      More broadly, anti-AI sentiment comes from people who dislike change. It's hard to argue someone out of that position. You're allowed to prefer stasis. But the world moves on and I think it's best to remain optimistic, keep an open mind, and make the most of it.

      • By bunderbunder 2026-03-0516:322 reply

        It's also, for example, the studies finding that when companies adopt AI, employees' jobs get worse: more multitasking, more overtime, more burnout, more skills you're expected to learn (on your own time if necessary), more interpersonal conflict among colleagues. And this is not offset by anything tangible like an increase in pay.

        $20/month in return for measurable reductions in quality of life is not an amazing deal. It's "Heads I win, tails you lose."

        Or maybe, if you're thinking of it as an enabler for a side hustle or some other project with a low probability of a high payoff, it can slightly more optimistically be regarded as a moderately expensive lottery ticket.

        That's not pessimism; it's just a realistic understanding of how the tech industry actually works, informed by decades' worth of experience.

        • By slibhb 2026-03-0517:241 reply

          > It's also, for example, the studies finding that when companies adopt AI employees' jobs get worse. More multitasking, more overtime, more burnout, more skills you're expected to learn (on your own time if necessary), more interpersonal conflict among colleagues. And this is not being offset by anything tangible like an increase in pay.

          Can you share those studies? I'm pretty skeptical of this effect. I find that AI has made my job easier and less stressful.

          In general, I think your attitude is not realistic; it's just general pessimism about the world ("everything new is bad") that is basically unfounded.

        • By pigpop 2026-03-0612:39

          >More multitasking, more overtime, more burnout, more skills you're expected to learn (on your own time if necessary), more interpersonal conflict among colleagues. And this is not being offset by anything tangible like an increase in pay.

          Similar things happened with the adoption of computers in the workplace. Perhaps there's a case for banning all digital technology and hiring typists and other assistants to perform the work using typewriters and mechanical calculators? There would certainly be less multitasking when you have 8 hours worth of documents to retype and file/mail. Perhaps there would be less overtime when your boss can see you have a high workload by the state of papers piled upon your desk. Or maybe we can solve these problems in a different way.

      • By ezst 2026-03-0611:53

        > How do you figure? 20 dollars/month is insanely cheap for what OpenAI/Anthropic/Google offer. That absolutely qualifies as "empowering a common person".

        This must be sarcasm. This has to be.

      • By vips7L 2026-03-0515:173 reply

        > I don't think this will happen

        Block just laid off 40% of their company citing AI.

        • By CPLX 2026-03-0515:25

          > Block just laid off 40% of their company

          Because the company was being horribly run and over hired and "pivoted to blockchain" for no fucking reason.

          > citing AI.

          Because it's 2026 and they thought that would work to bullshit a few people about point one, which apparently it did.

        • By slibhb 2026-03-0515:303 reply

          Tech companies have been laying off employees for a while now. I think it's mostly due to pandemic overhiring and higher interest rates but I suppose we'll see.

          • By vips7L 2026-03-0516:17

            I agree that AI was not the _actual_ reason, however, it did allow them to do massive layoffs without admitting they are doing poorly and not taking a massive hit to their stock price.

          • By miyoji 2026-03-0517:411 reply

            > I think it's mostly due to pandemic overhiring and higher interest rates

            It's not because of pandemic overhiring, and if that were true, the layoffs in 2021-2022 would have handled it. It's 2026. The people getting laid off (on average) haven't worked at these companies since before the pandemic, they got hired in ~2023 (average tenure at a tech company is ~3 years).

            It's not because of AI either. Nobody is replacing jobs with AI, AI can't do anyone's job.

            It's not because of interest rates. People hired like crazy when interest rates were this high in the oughts.

            It's because Elon Musk's Twitter purchase and subsequent management convinced every executive in tech that you can cut to the bone, fuck your product's quality completely, and be totally fine. It's not true, but the downsides come later and the cash influx comes now, so they're doing it anyway.

            • By glitch13 2026-03-0519:17

              > It's because Elon Musk's Twitter purchase and subsequent management convinced every executive in tech that you can cut to the bone, fuck your product's quality completely, and be totally fine.

              I agreed with you up to this point. Twitter largely operated in the red for its entire existence prior to his "restructuring" to make it leaner and profitable. In my opinion, twitter went to shit when the incentive for creating engagement switched from gaining social capital to gaining... erm... actual capital. The laissez-faire attitude about allowing fairly terrible behavior on there gave it a PR black eye that probably didn't help either in the eyes of advertisers.

              If I had to guess what happened with Block (and that's what we're all doing, guessing): a CEO's job is to make the line go up, and saying you introduced tools to increase productivity with half the staff (especially if you're overstaffed) seems to me a pretty easy way to do that. I saw someone on here refer to it as "Vibe CEOing", which I think is pretty on point. Again, just my opinion/guess.

          • By FuckButtons 2026-03-066:57

            Junior dev hiring is down 60%; that's not just a post-pandemic correction.

        • By Lumping6371 2026-03-1118:39

          "Juan said his dad works at Nintendo"

    • By spacecadet 2026-03-0512:321 reply

      Demand full automation. Demand universal basic income. Notice how the latter is nearly absent from the conversation.

      Another distraction is the idea that AGI is a danger to humanity; the only danger is people...

      • By pixl97 2026-03-0514:541 reply

        > the only danger is people...

        Simply put, no it is not.

        But on the reverse, the first danger with AI is people.

        Over the longer term it will look like this: the rich 'win' the world by using AI to enslave the rest of mankind and claim ownership over everything. This will suck, and a lot of us will die.

        The problem is this doesn't solve the greed that caused the problem in the first place. The world will still be limited in resources, which will end with the rich in a dick-measuring contest, and to win that contest they will put more and more power into AI as they connive and fight each other. Eventually the AI has enough power that it kills us all, intentionally or not.

        We'll achieve nearly unlimited capability long before we solve the problem of unlimited greed and that will spell our end.

        • By spacecadet 2026-03-0518:031 reply

          This is entirely assumptions about a future that has not happened.

          I've worked in "AI" for 20 years, through 2 winters, and run an alignment shop and AIRT... The problem is people. People will use the problem as a scapegoat.

          • By pixl97 2026-03-0520:101 reply

            Dinosaurs lived 100 million years, before they didn't.

            And walls between France and Germany were effective, until they weren't.

            Hell, even "the people are the problem" doesn't work well for things like Moloch problems. Which people? The problem can no longer be pinned on any individual; it requires a super-organizational response. Once you have an issue that is abstracted from its base components, any agent capable of parsing the abstraction can influence it and become part of Moloch.

            • By spacecadet 2026-03-0522:371 reply

              I'm familiar. Our group employs game theory in our research... In practice, if you are at the point of blame, yes, you have failed.

              • By pixl97 2026-03-060:451 reply

                God damnit eukaryotes, I told you this bullshit was going to have long term ramifications 2 billion years ago!

                • By spacecadet 2026-03-0613:29

                  Now that's good! :)

                  I think you and I likely agree more than we disagree. It's just out of context...

    • By wepple 2026-03-0513:124 reply

      > It is there to reduce our agency, to make it easier to fire us, to put us in even more precarious position

      Could be. It could also end up freeing us from every commercial dependency we have. Write your own OS, your own mail app, design your own machinery to farm with.

      It’s here, so I don’t know where you’re going with “I’m unhappy this is happening and someone should do something”

      • By idopmstuff 2026-03-0514:261 reply

        It's also worth noting that the "our" in that sentence is just SWEs, who are a pretty small group in the grand scheme of things. I recognize that's a lot of HN, but it still bears considering in terms of the broader impact outside of that group.

        I'm a small business owner, and AI has drastically increased my agency. I can do so much more - I've built so many internal tools and automated so many processes that allow me to spend my time on things I care about (both within the business but also spending time with my kids).

        It is, fortunately, and unfortunately, the nature of a lot of technology to disempower some people while making lives better for others. The internet disempowered librarians.

        • By wolvesechoes 2026-03-05 14:37 (1 reply)

          > It's also worth noting that the "our" in that sentence is just SWEs

          It isn't; it's just a matter of seeing ahead of the curve. Delegating tasks to AI and agents necessarily leads to atrophy of the skills being delegated. Using AI to write code reduces people's capability to write code. Using AI for decision-making reduces the capability to make decisions. Using AI for math reduces the capability to do math. Using AI to formulate opinions reduces the capability to formulate opinions. Using AI to write summaries reduces the capability to summarize. And so on. And, by nature, less capability means less agency.

          "Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them."

          Not to mention utilizing AI for control, spying, surveillance and coercion. Do I need to explain how control is opposed to agency?

          • By idopmstuff 2026-03-05 19:27 (1 reply)

            I'll grant that it does extend beyond SWEs, but whether AI atrophies skills is entirely up to the user.

            I used to use a bookkeeper, but I got Claude a QuickBooks API key and have had it doing my books since then. I give it the same inputs and it generates all the various journal entries, etc. that I need. The difference between using it and my bookkeeper is I can ask it all kinds of questions about why it's doing things and how bookkeeping conventions work. It's much better at explaining than my bookkeeper and also doesn't charge me by the hour to answer. I've learned more about bookkeeping in the past month than in my entire life prior - very much the opposite of skill atrophy.

            Claude does a bunch of low-skill tasks in my business, like copying numbers from reports in different systems into a centralized Google Sheet. My muscle memory at running reports and pulling out the info I want has certainly atrophied, but who cares? It was a skill I used because I needed the outcome, not because the skill was useful.

            You say that using AI reduces all these skills as though that's an unavoidable outcome over which people have no control, but it's not. You can mindlessly hand tasks off to AI, or you can engage with it as an expert and learn something. In many cases the former is fine. Before AI ever existed, you saw the same thing as people progressed in their careers. The investment banking analyst gets promoted a few times and suddenly her skill at making slide decks has atrophied, because she's delegating that to analysts. That's a desirable outcome, not a tragedy.

            Less capability doesn't necessarily mean less agency. If you choose to delegate a task you don't want to do so you can focus on other things, then you are becoming less capable at that skill precisely because you are exercising agency.

            Now in fairness I get that I am very lucky in that I have full control of when and how I use AI, while others are going to be forced to use it in order to keep up with peers. But that's the way technology has always been - people who decided they didn't want to move from a typewriter to a word processor couldn't keep up and got left behind. The world changes, and we're forced to adapt to it. You can't go back, but within the current technological paradigm there remains plenty of agency to be had.

            • By wolvesechoes 2026-03-06 14:08

              > but whether AI atrophies skills is entirely up to the user

              The thing with society is that we cannot simply rely on the self-discipline and self-control of individuals. For the same reason we have a universal and legally enforced education system. We would still live in a mostly illiterate society if people were not forced to learn or to send their children to school.

              Analogies to past inventions are limited because AI doesn't automate physical labor, hard or light - it automates, or at least its overlords claim it automates, a lot of cognitive and creative labor. Thinking itself, at least in some of its aspects.

              From a sociological and political perspective there is a huge difference between the majority of the population losing the capability to forge swords or sew dresses by hand and losing the capability to formulate coherent opinions and communicate them.

      • By wolvesechoes 2026-03-05 13:18 (1 reply)

        > It could also end up freeing us from every commercial dependency we have

        Yeah, companies that develop and push this tech definitely have this in mind.

        > I don’t know where you’re going with “I’m unhappy this is happening and someone should do something”

        I am not surprised because I didn't write anything like it.

        • By margalabargala 2026-03-05 15:31

          > > I don’t know where you’re going with “I’m unhappy this is happening and someone should do something”

          > I am not surprised because I didn't write anything like it.

          You're right, there was no "someone should do something" call to action in your original comment.

      • By LetsGetTechnicl 2026-03-05 14:37 (2 replies)

          It could also end up freeing us from every commercial dependency we have. Write your own OS, your own mail app, design your own machinery to farm with.
        
        
        Lmfao, LLMs can barely count rows in a spreadsheet accurately; this is just batshit crazy.

        edit: also, the solution here isn't that everyone writes their own software (based on open source code available on the internet, no doubt); it's that we just use that open source software, and people learn to code and improve it themselves instead of off-loading it to a machine

        • By margalabargala 2026-03-05 15:33 (1 reply)

          This is one of those things where people who don't know how to use tools think they're bad, like people who would write whole sentences into search engines in the 90s.

          LLMs are bad at counting the number of rows in a spreadsheet. LLMs are great at "write a Python script that counts the number of rows in this spreadsheet".
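
          For what it's worth, the script in question is only a few lines. A minimal sketch, assuming the "spreadsheet" is a CSV file with a header row (the function name here is made up for illustration):

```python
import csv

def count_rows(path: str) -> int:
    """Count the data rows in a CSV file, excluding the header row."""
    with open(path, newline="") as f:
        reader = csv.reader(f)
        next(reader, None)            # skip the header, if present
        return sum(1 for _ in reader)
```

          The count is then exact, because the counting is done by Python rather than by the model.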

          • By teolandon 2026-03-05 15:43 (2 replies)

            Do you think asking any LLM in the next 100 years to "write a Python script that generates an OS" will work?

            • By antonyh 2026-03-05 16:03 (1 reply)

              Yes, for some definition of OS. It could build a DOS-like or other TUI, or a list of installed apps that you pick from. Devices are built on specifications, so that's all possible. It could define and refine a system API as it goes. General utilities like file management are basically a list of objects with actions attached. And so on... the more that is rigidly specified, the better it will do.

              It'll fail miserably at making it human-friendly though, and attempt to pilfer existing popular designs. If it builds a GUI, it'd be a horrible mashup of Windows 7/8/10/11, various versions of OSX / MacOS, iOS, and Android. It won't 'get' the difference between desktop, laptop, mobile, or tablet. It might apply HIG rules, but that would end up with a clone at best.

              In short, it would most likely make something technically passable but nightmarish to use.

              • By margalabargala 2026-03-05 16:38

                Given 100 years though? 100 years ago we barely had vacuum tubes and airplanes.

                Given a century, the only unreasonable part is oneshotting with no details, context, or follow-up questions. If you tell Linus Torvalds "write a python script that generates an OS", his response won't be the script, it'll be "who are you and how did you get into my house".

            • By margalabargala 2026-03-05 16:22

              Considering how simple "an OS" can be, yes, and in the 2020s.

              If you're expecting OSX, AI will certainly be able to make that and better "in the next 100 years". Though perhaps not oneshotting off something as vague as "make an OS" without followup questions about target architecture and desired features.

        • By wepple 2026-03-05 17:52 (2 replies)

          Batshit crazy?

          3 years ago LLMs couldn’t solve 7x8.

          Now they’re building complex applications in one shot, solving previously unsolved math and science problems.

          Heck, one company built a (prototype but functional) web browser

          And you say it’s crazy that in the future it’ll be able to build a mail app or OS?

          • By ezst 2026-03-06 9:27 (1 reply)

            JFYI, LLMs still can't solve 7x8, and quite possibly never will. A more rudimentary text processor shoves that into a calculator for consumption by the LLM. There's a lot going on behind the scenes to keep the illusion flying, and that lot is a patchwork of conventional CS techniques that has nothing to do with cutting-edge research.

            To many interested in actual AI research, LLMs are known as the very flawed and limiting technique they are, and the growing disconnect between this and the narrative in which they are front and center of every AI shop, carrying a big chunk of global GDP on their back, is annoying and borderline scary.

            • By stratos123 2026-03-06 14:09

              This is false. You can run a small open-weights model in ollama and check for yourself that it can multiply three-digit numbers correctly without having access to any tools. There's even quite a bit of interpretability research into how exactly LLMs multiply numbers under the hood. [1]

              When an LLM does have access to an appropriate tool, it's trained to use the tool* instead of wasting hundreds of tokens on drudgery. If that's enough to make you think of them as a "flawed and limiting technique", consider instead evaluating them on capabilities there aren't any tools for, like theorem proving.

              * Which, incidentally, I wouldn't describe as invoking a "more rudimentary text processor" - it's still the LLM that generates the text of the tool call.

              [1] https://transformer-circuits.pub/2025/attribution-graphs/bio...
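
              To make the footnote concrete, here is a hypothetical sketch (the tool name and JSON shape are invented for illustration, not any vendor's actual API) of how the LLM generates the text of the tool call while conventional code does the arithmetic:

```python
import json

# Hypothetical: the model's sampled output is just this string of JSON text.
model_output = '{"tool": "calculator", "expression": "7 * 8"}'

# A conventional runtime (not the model) parses the call...
call = json.loads(model_output)

# ...executes it with ordinary code...
a, _, b = call["expression"].split()
result = int(a) * int(b)

# ...and feeds the result back into the model's context, again as text.
tool_reply = json.dumps({"tool": call["tool"], "result": result})
```

              The model only ever produces and consumes text; the multiplication itself happens in ordinary code.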

          • By bigstrat2003 2026-03-05 23:28 (1 reply)

            > Heck, one company built a (prototype but functional) web browser

            No, they built something which claimed to be a web browser but which didn't even compile. Every time someone says "look an LLM did this impressive sounding thing" it has turned out to be some kind of fraud. So yeah, the idea that these slop machines could build an OS is insane.

            • By wepple 2026-03-06 0:29

              I personally observe AI creating phenomenally good code, much better than I can write. At insane speed, with minimal oversight. And today's AI is the worst we will ever have.

              Progress in AI can easily be measured by the speed at which the goalposts move - from "it can't count" to "yeah, but the entire browser it wrote didn't compile in the CI pipeline".

      • By ModernMech 2026-03-05 15:26 (1 reply)

        What happens when they decide it's a national security threat and an act of domestic terrorism to use AI to undermine commercial dependencies? We're all acting like AI isn't being invented within the context of and used by a fascist regime.

        • By ajewhere 2026-03-06 7:43 (1 reply)

          Look, from the point of view of a person outside of the US, you are all fascists, "democrats" and Trumpists alike. Don't take this as "trolling", but as a sincere opinion. (I don't care about your internal brawls, I care about what you do to others.)

          • By ModernMech 2026-03-06 17:28

            I think it's a widely held understanding that liberals will always side with fascists over leftists when it comes down to it.

    • By tptacek 2026-03-05 16:32 (4 replies)

      This argument can be used, and has been used, about every innovation in automation since the dawn of the industrial revolution.

      • By ezst 2026-03-06 12:06

        No it hasn't. Work mechanisation throughout history has resulted in a shift from manual labour to labour that's more intellectual in nature. Modern AI believers claim that it will soon take over those jobs as well.

        This would essentially bring us to a crossroads between, on one hand, a utopia with UBI and people not needing to work (because their labour is unnecessary), and on the other, a dystopia where a few technocratic "lords" own the means of work automation and rule over a submissive world.

        I don't think it takes a genius to guess where this is heading in our current political climate.

        Personally, I'm not scared about any of that, because I don't believe LLMs to be very potent as an AI tool. Robotic militias (remotely controlled by BI or AI) seem a much more tangible threat.

      • By GreenWatermelon 2026-03-07 4:19 (1 reply)

        Yeah, and for good reason. The invention of the light bulb meant factory owners could force workers into 16-hour shifts. The main beneficiaries of new tech were always the capital owners. Workers had to literally fight and die for our 8-hour workday and 5-day workweek.

        This is still going on today: the massive gains from automation are being hoarded by the wealthy capital owners, while workers struggle to make ends meet.

        • By tptacek 2026-03-07 4:21

          Ok but also I want to light my house with lightbulbs, not with animal-fat candles.

      • By ducttapecrown 2026-03-05 16:57

        It is not the technology that sucks ever more money out of the populace, it's the people at the top!

    • By phyzix5761 2026-03-05 14:28 (2 replies)

      At some point, if most people lose their jobs, you have no market to sell your services to. So either new jobs have to be created in order to keep the capitalism machine running, or you have to provide for the needs of every human being from whatever you're doing with your AI. Otherwise, a lot of hungry people revolt and you have violence against these businesses.

      I think new jobs will be created because AI is always limited by hardware and its current capabilities. Businesses, in order to compete, want to do things their competitors aren't currently doing. Those business needs always go beyond the current technological capabilities until the tech catches up and then they lather, rinse, repeat.

    • By Gagarin1917 2026-03-05 16:43

      >It is there to reduce our agency

      Complete bullshit.

      The individual has never had as much ability to take on large projects as they do now. They’ve never been able to learn as easily as they can now.

      >to make it easier to fire us

      As of now, the technology increases productivity in the average user. The companies that take advantage of that and expand their offering will outperform the ones that simply replace workers and don’t expand or improve offerings.

      More capable employees make companies more money in general. Productivity increases lead to richer societies and yes, even more jobs, just as it always has.

    • By simmerup 2026-03-05 12:29 (1 reply)

      I guess you didn't read the article?

    • By visarga 2026-03-05 17:01

      > It is there to reduce our agency, to make it easier to fire us, to put us in even more precarious position, to suck even more wealth from those that have little to those that have a lot.

      You could say this is the story of society, it makes us dependent on each other, reduces our agency, puts us in precarious positions (like WW2). But nobody would argue against society like that.

      What happens here is that we become empowered by AI and gain some advantages which we immediately use and become dependent on, eventually not being able to function without them - like computers and even thermostats.

      Does anyone think about how the economy would operate without thermostats? No fridges, no data centers, no engines... they all need thermostats. We have lost some freedom by depending on them. But we have also gained.

  • By noemit 2026-03-05 11:08 (14 replies)

    Many people don't know this, but the Luddites were right. I studied Art History and this particular movement. One of the claims of the Luddites is that quality would go down, because their craft took half a lifetime to master (it was passed down from parent to child).

    I was able to feel wool scarves made in Europe in the Middle Ages (in museum storage, under the guidance of a curator). They are a fundamentally different product than what is produced in woolen mills. A handmade (in the old tradition) woolen scarf can be pulled through a ring, because it is so thin and fine. Not so for a modern mill-made scarf.

    Another interesting thing is that we do not know how they made them so fine. The technique was never recorded or documented in detail, as it was passed down from parent to child. So the knowledge is actually lost forever.

    Weavers in Kashmir work at a similar level of quality, but their wool is different, their needs and techniques are different, so while we still have craftsmen who can produce wool by hand, most of the traditions and techniques are lost.

    Is it a tragedy? I go back and forth. Obviously the heritage fabrics are phenomenal and luxurious. Part of me wishes that the tradition could have been maintained through a luxury sector.

    Automation is never a 1:1 improvement. It's not just about the speed or process. The process itself changes the product. I don't know where we will net out on software, and I do think the complaints are justified - but the Luddites were also justified. They were *Right*. Their whole argument was that the mills could not produce fabric of the same quality. But being right is not enough.

    I'm already seeing vibe-coded internal tools at an org I consult at saving employees hundreds of hours a month, because a non-technical person was empowered to build their own solution. It was a mess, and I stepped in to help optimize it, but I only optimized it partially, making it faster. I let it be the spaghetti mess it was for the most part - why? because it was making an impact already. The product was succeeding. And it was a fundamentally different product than what internal tools were 10 years ago.

    • By forinti 2026-03-05 12:34 (3 replies)

      Your comment made me think of the Japanese. They have a highly industrialised society, but they also value greatly hand-made products from food and clothes to woodwork and houses.

      And they also like to emphasise how long it takes for someone to become a master at a given trade.

      • By layer8 2026-03-05 14:29

        Japan is struggling to get new blood into the traditional crafts, unfortunately. They are slowly losing that as well.

      • By casey2 2026-03-07 8:04

        That's because they spent a couple hundred years with a society structured around low/no growth and enforced familial occupations. There isn't much to do in such a society other than refine your craft aesthetically and wait for the growth-structured countries to come knocking.

        LLMs are yet another example of science and technology leading to growth. Which means it's STUPID to restructure society into a low/no-growth model like the smelly European degrowthers want. On our current trajectory, the only time it makes sense to degrow is a few hundred years after we make a Dyson sphere, or once FTL travel becomes possible.

      • By aosnsbbz 2026-03-05 12:52 (3 replies)

        It was really eye-opening seeing they're able to eat raw eggs and (to maybe a lesser degree of safety) raw chicken, because their society requires high standards of cleanliness in food production. We are literal cattle over here in the States.

        Though, given Amodei and Altman’s behavior (along with the rest of the billionaire class) that shouldn’t be a surprise to anyone.

        • By Aurornis 2026-03-05 15:25

          You could eat raw eggs in most modern countries and be mostly fine. It’s not as uncommon as you would think. There are many drink recipes with raw eggs as an ingredient. You just happened to be exposed to it in Japan.

          Eating raw chicken is risky even in Japan. There are cultures that eat raw chicken, pork, and other meat products by choice but it’s always a risk. There are outbreaks of serious food borne illness in Japan from raw chicken: https://pubmed.ncbi.nlm.nih.gov/18406474/

        • By anonymous_sorry 2026-03-05 13:12 (1 reply)

          Eggs in the UK are safe to eat raw (and I presume the EU as well [but please verify before doing so!]).

          • By Ekaros 2026-03-05 13:49

            Finland too. Fundamentally it is putting food safety over profits. Eggs being salmonella-free is based on regular testing and culling infected flocks. It is a process that needs constant work.

        • By raincole 2026-03-05 14:11

          Germans even eat raw pork. Plus it has nothing to do with the parent comment.

    • By hermannj314 2026-03-05 11:53 (2 replies)

      The only code anyone will be touching in a museum in 800 years will be the good code. I hope they don't talk about what great craftsmen we all were because someone saw an original Fabrice Bellard at the Louvre.

      Survivor bias plays a role in glorifying the past.

      • By noemit 2026-03-05 13:16

        You're right in that we kept the best examples (as coding museums will do in the future) but the best of something is a benchmark. It is striking that modern automation, even hundreds of years later, can't touch what a skilled craftsman could do in the past.

        With programming, we documented a lot of it, so it's unlikely to go the way of fine weaving. People will always be able to learn to think and be great programmers.

        Maybe if the wool weavers had the internet, they could have blogged, made YouTube videos, and cataloged their profession so it could last millennia.

      • By salad-tycoon 2026-03-05 12:51 (2 replies)

        Agreed. I think the good gained from wool mills - little Timmy being less likely to lose a leg to frostbite - outweighs the loss of my scarf not passing through a ring.

        Long term though, I’ve always wondered if the Amish turn out to be the only survivors.

        • By kevstev 2026-03-05 16:56 (1 reply)

          I am kind of lost here on this whole scarf through a ring thing as well. This is just a function of the thickness of the scarf? My wife went through a scarf phase about a decade ago, and I am pretty sure a Pucci scarf could easily fit through a typical sized ring meant to go on a finger?

          It's entirely possible that old manufacturing methods produced things that are different, but I would be surprised if they are entirely better overall. If the defining metric for scarves is how well they fit through rings, I am sure they would all be made so you could fit 3 through a ring if people were willing to pony up for that. If you look at a lot of old clothes, they are generally a lot heavier, but I am not sure I would really want to wear them; they look quite uncomfortable. I also think it's wonderful that today you can get a set of clothes for a few hours of minimum-wage work, while in the past this was a major investment. You can also choose to pay thousands for a shirt if you wish, but from 10 feet away it's going to be hard to tell the difference.

          • By noemit 2026-03-05 17:55 (1 reply)

            A full-size wool scarf cannot go through a ring. You are probably thinking of a silk scarf. I have a wool scarf next to me from Kashmir and it went down about 25 cm. The full scarf is a bit over a meter.

            Looked up Pucci - looks like a designer that makes silk scarves. Silk is a totally different material. The Luddites were wool and cotton weavers.

            Making wool thin enough for a meter-long scarf to go through a ring requires the individual strand of wool to be very thin. Both making it that thin and weaving that thin strand is the craft that was lost. Go look at wool yarn next time you are at a store and see how thin they can get it.

            As for "Are they better?" Yes. Thinner wool is incredible, soft. High quality merino wool is one of the most expensive fabrics. Look up this brand "Made in Rosia Montana" if you are curious. It's not like what the Luddites made, but its as good as it gets in the modern world. Getting stuff from the Kashmir region is difficult - I got mine because I knew someone who ran a school in the area. Most "Cashmere" stuff in department stores is fake/chemically processed for fake softness which makes it nice but it doesn't last. Real quality wool lasts a lifetime. The chemically processed stuff is ok if you want to see how it "feels"

            EDIT: also, wool is naturally waterproof! I can walk in the rain with my scarf from Kashmir on my head; it's pretty thin but absolutely no water goes through even in heavy rain. It has to do with the springiness of the fibers and their natural oils. I will stop nerding out on fibers now!!

            • By kevstev 2026-03-05 21:18

              Appreciate the clarification. I guess it's a case of I don't know what I don't know, but the choice of metric around quality was just an odd one. And yeah, I assumed silk because I can't imagine a wool scarf going through a ring.

        • By GeoAtreides 2026-03-05 18:51

          oh man wait until you find out what happened to little timmy in the textile mills...

    • By stanko 2026-03-05 11:16 (1 reply)

      I think you are going to enjoy this talk by Jonathan Blow - Preventing the Collapse of Civilization:

      https://www.youtube.com/watch?v=ZSRHeXYDLko

      • By sph 2026-03-05 11:23

        I've had this talk in mind during the past 2-3 years of AI boom, and it feels like rewatching a video from the 80s about the dangers of global warming. Prescient, and perhaps a bit quaint in its optimism that somehow we won't make things even worse for ourselves.

        Now we're way past the point of no return.

    • By oxag3n 2026-03-05 19:27 (1 reply)

      Software engineers are anything but Luddites.

      This labeling tactic has become pretty common and tries to build a narrative that software engineers are going away - artisan coders, craftsmen, and so on.

      First and foremost, wool craftsmen are not engineers (which doesn't make their work less valuable).

      Second, most software engineers, especially outside FAANG-like companies, don't engineer shit. My spouse worked at a large telecom company in the US, and employees with the "software engineer" title were doing mechanical tasks following scripts, like a daily system reload: run the script, verify status, open a ticket for a sub-contractor if anything is wrong, support the contractor via the ticket system until it's resolved. To be fair, two of my close family members work in FAANG and say the COVID over-hire created a similar landscape there too.

      My point is, creating CRUD internal tools was not engineering to begin with; it was a craft, matching most craftsman features such as small-scale, bespoke work, hands-on practice, tacit knowledge, apprenticeship-like learning (even if via SO or tutorials), iterative refinement, tool mastery, and adaptation during the build.

      • By Lumping6371 2026-03-11 18:47

        So only FAANG does engineering now? Pretty elitist take. Would you happen to have a FAANG company in your resume, by any chance?

        Yes, non tech companies tend to care less about the technical end of things. They, we, "don't do engineering" in the sense of dealing with large scale systems, optimization, etc. Still have to understand the product, and translate business requirements into code and systems, often running with budget constraints. If that's not engineering, I don't know what is.

    • By MagicMoonlight 2026-03-05 13:55 (1 reply)

      Before mass production, the women of the household would be forced to spend every free moment of their day, outside of their other work, making fabric.

      Before mechanised farming, the men were forced to spend all day in the fields.

      Never again.

      • By vips7L 2026-03-05 14:47 (4 replies)

        And yet many of us would prefer to be in a field instead of behind a laptop screen all day.

        • By nayroclade 2026-03-05 15:08

          There's a big difference between being in a field versus working in a field, from dawn to dusk, every day, regardless of the weather or sickness, in order to produce just enough food to feed you and your family, knowing that a single failed harvest (due to conditions like weather and pests that you have no control over) will leave you starving, watching helplessly as your children, spouse, friends and neighbours slowly weaken and die, knowing that even if you survive, you will face the same thing again the next year, and the next, for the rest of your lives, which will likely be short, due to the constant, exhausting labour, frequent bouts of malnutrition, and nonexistent medical care.

        • By Vegenoid 2026-03-05 19:01

          This is just unrealistic daydreaming. You can go be in a field picking produce for work - we have a shortage of these laborers. Most people don’t actually want to do that, they want a cushy office job that doesn’t wear down their body and that offers them the ability to increase their skill and value over time.

          The software engineer who thinks they’d be happier working in a field is largely just a grass is always greener phenomenon. It turns out that for most people, they don’t like work whatever it is, because work is done not by choice but by necessity.

        • By mritterhoff 2026-03-05 15:11

          No one is stopping you, and maybe it's worth trying out!

        • By moffkalast 2026-03-05 17:36

          Have you ever tried it? I did, and sure as hell wouldn't.

          Nobody's really stopping you from studying agriculture and working in that proverbial field either.

    • By angry_octet 2026-03-05 19:04

      A set of clothing used to cost a month's wages. Yearning for the pre-industrialised era is an unintended paean to aristocracy, whitewashed by fiction and movies to be clean and virtuous.

      At the moment, a single line of production code costs hundreds of dollars. I'm not talking about the bedrock of technology, like compilers, MySQL, the Linux kernel, which represent hundreds of billions of value. I'm talking about the shitty code that powers Salesforce and ERP integration, Drupal modules, intranet customisation, insurance company call centre agent policy workflows, the thrice-cursed apps that ship with cheap Chinese android phones, the putrid code to analyse our shopping loyalty card purchases and turn it into business insights.

      All that code is shit, and it costs a fortune. Meanwhile regular people have no code. Even I run my life on almost no code, I have to use SaaS (like Gmail and Docs). If I want something like a financial analysis to be understood by my family I don't code it in python, I use Excel. I use whatever automation comes in my car. But once simple code becomes a process of thinking about what you want rather than knowing esoterica like calling conventions, allocation lifetimes etc, then we have made custom software accessible to billions of people, people who are clever and industrious.

      So stay in your cathedral and illuminate your manuscript if you like, there is a need for excellent code, and tooling like Lean that can define what correct means, but let the people eat.

    • By whazor 2026-03-05 12:48 (1 reply)

      I found that it normally takes one prompt early on to go from 'vibe-coded spaghetti' to something with a decent architecture.

      • By noemit 2026-03-05 13:19

        I cap my effort at two or three prompts: one to investigate obvious mistakes with a top model, and one or two to try to fix them.

    • By jamesjolliffe 2026-03-05 15:59

      I love this comment. Thank you for your provocative first sentence, esoteric historical anecdote, and nuanced take. Goddamn Hacker News rules.

    • By Aurornis 2026-03-05 14:50 (1 reply)

      > Another interesting thing is that we do not know how they made them so fine. The technique was never recorded or documented in detail, as it was passed down from parent to child. So the knowledge is actually lost forever.

      This is a rather extreme failure on their part. There’s nothing admirable about hoarding knowledge and forcing it to only be passed down in person.

      I don’t see this as the Luddites being right at all because they were clearly incredibly wrong about their chosen method of knowledge storage. This was a highly predictable and preventable outcome. If we were talking about a company today that forgot how to manage their servers because they refused to document anything and only passed it down from person to person we wouldn’t be speaking in awe and wonder, we’d be rightly criticizing their terrible decision making.

      That aside, every time I hear that knowledge has been lost forever it turns out to be an exaggeration from those trying to amplify the mystique of the past. If we wanted to make ultra-thin scarves we could do it. We could study those ultra-thin museum pieces with our endless array of modern tools and then use our vast quantities of modern wool to experiment until we got something similar.

      But you missed the reason why we wouldn’t want to: An ultra-thin scarf isn’t going to work as well as a thicker one for keeping someone warm. It will be less durable. It would be a fundamentally inferior product. It’s interesting to see as a museum antique that has to be treated with utmost delicacy, but not so much as a practical garment.

      • By aszen 2026-03-05 15:41

        If you buy real handcrafted scarves, they are both thinner and warmer than anything factory-made because of their choice of pashmina wool.

    • By aosnsbbz 2026-03-05 12:49 (2 replies)

      A big difference: cutting quality for the sake of mass production, when it enables creating more necessities for people to live, is a good tradeoff. Cutting quality to make previously deterministic software more non-deterministic does not improve anyone's life except Sam Altman's, Dario Amodei's, and the rest of the billionaire class's.

      I have no doubt in the future there will be a class of vibe software and it will be known as distinctly lower quality than human understood software. I do think the example you describe is a good use of vibing. I also think tech orgs mandating 100% LLM code generation are short sighted and stupid.

      A lot of this push for “slop” is downstream of our K shaped economy. Give the people more money and quality becomes a lot more important. Give them less, and you’re selling to their boss who is often insulated from the effects of low quality.

      • By moffkalast 2026-03-05 17:43

        Agreed. Those expensive silk scarves are worth exactly zilch to the average person if they'll never see one in their entire life. They might as well not exist.

        Mass production makes things accessible, and if the handmade product cannot compete with it, then it's clearly not that much better, or some people would still pay the premium to keep it around.

        With programming that's very much the case, nobody's gonna vibe code a self driving car stack or a production grade DBMS. Even the cheapest scarf still works as a scarf in 99.9% of use cases though, if maybe not for as long.

      • By adeelk93 2026-03-05 14:20 (1 reply)

        Was software authorship ever deterministic? Whether it be human or AI, the output can vary wildly, and is constrained by the finite specifications provided.

        • By aosnsbbz 2026-03-05 20:09

          Sure, technically nothing is. My primary point is that adding AI reduces the level of determinism.

    • By spicyusername 2026-03-05 12:25 (2 replies)

      > luddites were right

      And yet in the 200 years since, human civilization has improved by every imaginable metric, in most cases by orders of magnitude. The difference between 2026 and 1826 is nearly incomprehensible. I suspect most people can scarcely imagine how horrific the average life was in 1826, relatively speaking. And between then and now came the industrial revolution, multiple world wars, and generally some of the most terrible events, crooked politicians, and life-changing technological forces. And here we are, mostly fine in most places.

      I get there are many things happening today that are frustrating or moving some element of human life in negative or ambiguous directions, but we really have to keep perspective on these things.

      Nearly every problem today is a problem with a solution.

      The feelings of panic we have that things are going wrong are useful signals to help guide and motivate us to implement those solutions, but we really must avoid letting the doomerism dominate. Just because we hear constant negative news doesn't mean things are lost. Doesn't even mean things are bad.

      It just means we have been hearing a lot of negative news.

      This is what it looks like for progress to not be monotonically increasing.

      • By lopis 2026-03-05 13:33

        If progress had been limited to solving people's problems, we would be fine.

        > The feelings of panic

        > It just means we have been hearing a lot of negative news.

        This is part of the problem at hand, not just a footnote.

      • By boesboes 2026-03-05 12:51 (1 reply)

        try reading :)

    • By imiric 2026-03-05 12:43

      You're right. Automation often trades quality for speed and quantity.

      The difference between automating the creation of software and automating the creation of physical products is that software is everywhere. It is relied on for most tools and processes that keep our civilization alive. Cutting corners on that front, and deciding to entrust our collective future to tech bros and VC firms fiending for their next payout, seems like an incredibly dumb and risky proposition.

    • By mr_toad 2026-03-05 11:52 (2 replies)

      > One of the claims of the Luddites is that quality would go down, because their craft took half a lifetime to master (it was passed down from parent to child.)

      Sounds like a tautology. If you deliberately hoard knowledge of course it’s going to be hard to obtain.

      • By ulbu 2026-03-05 11:59

        closed source

      • By recursive 2026-03-05 17:45

        You talking about the billion dollar model training efforts done in secret?

    • By llm_nerd 2026-03-05 14:27

      >the Luddites were right

      The Luddites were right in the sense that the social order had changed in a negative way. In a careless way.

      In the same way, we look at an America that has effectively put a plutocracy in absolute control of the country at the same time that a massive devaluing of labour is coming. Elon Musk likes to talk about the coming golden age of automation, but I hope Americans realize that unless they happen to be a billionaire, they will enjoy zero fruits of that advance. Quite the contrary: plump yourself up to be Soylent Green, because it turns out that giving a bunch of psychopaths/sociopaths absolute control of government isn't good for the average person.

      >One of the claims of the Luddites is that quality would go down

      Then people will choose the better quality items and it will be easy for them to compete? Right?

HackerNews