This time is different

2026-02-26 13:28 · shkspr.mobi

3D TV, AMP, Augmented Reality, Beanie Babies, Blockchain, Cartoon Avatars, Curved TVs, Frogans, Hoverboards, iBeacons, Jetpacks, Metaverse, NFTs, Physical Web, Quantum Computing, Quibi, Small and Safe Nuclear Reactors, Smart Glasses, Stadia, WiMAX.

The problem is, the same dudes (and it was nearly always dudes) who were pumped for all of that bollocks now won't stop wanging on about Artificial Fucking Intelligence.

"It's gonna be the future bro, just trust me!"

"I dunno, man. Seems like you say that about every passing fancy - and they all end up being utterly underwhelming."

"This time is different!"

*sigh*

The investor who says, “This time is different,” when in fact it’s virtually a repeat of an earlier situation, has uttered among the four most costly words in the annals of investing.

16 rules for investment success - Sir John Templeton

All of the above technologies are still chugging along in some form or other (well, OK, not Quibi). Some are vaguely useful and others are propped up by weirdo cultists. I don't doubt that AI will be a part of the future - but it is obviously just going to be one of many technologies in use.

No enemies had ever taken Ankh-Morpork. Well technically they had, quite often; the city welcomed free-spending barbarian invaders, but somehow the puzzled raiders found, after a few days, that they didn't own their horses any more, and within a couple of months they were just another minority group with its own graffiti and food shops.

Terry Pratchett's Faust Eric

The ideology of "winner takes all" is unsustainable and not supported by reality.


Comments

  • By esperent 2026-02-27 0:04 (12 replies)

    > 3D TV, AMP, Augmented Reality, Beanie Babies, Blockchain, Cartoon Avatars, Curved TVs, Frogans, Hoverboards, iBeacons, Jetpacks, Metaverse, NFTs, Physical Web, Quantum Computing, Quibi, Small and Safe Nuclear Reactors, Smart Glasses, Stadia, WiMAX.

    Agreed, these things all failed to live up to the hype.

    But these didn't:

    Electricity, cheap computing, calculators, photography, the internet, the steam engine, the printing press, tv, cars, gps, bicycles...

    So you can't really start an article by picking inventions that fit your narrative and ignoring everything else.

    • By massysett 2026-02-27 2:39 (7 replies)

      Yes, and despite every single one of these world-changing inventions, people in rich countries still go to work every day, even though UBI is generally not a thing. People claim AI will eliminate large numbers of jobs. Maybe it will, just like the tractor did. But new jobs are created. I would never have guessed that “influencer” would be a thing!

      This current “AI will destroy all the jobs and make most people useless” fear is as old as, say, electricity, and even older than cheap computing. It hasn’t happened.

      • By libraryofbabel 2026-02-27 4:17 (5 replies)

        Ex historian here, now engineer. I would gently suggest you’re underestimating the magnitude of some of the transformations wrought by the technologies that OP mentioned for the people that lived through them. Particularly for the steam engine and the broader Industrial Revolution around 1800: not for nothing have historians called that the greatest transformation in human life recorded in written documents.

        If you think, hey but people had a “job” in 1700, and they had a “job” in 1900, think again. Being a peasant (majority of people in Europe in 1700) and being an urban factory worker in 1900 were fundamentally different ways of life. They only look superficially similar because we did not live the changes ourselves. But read the historical sources enough and you will see.

        I would go as far as to say that the peasant in 1700 did not have a “job” at all in the sense that we now understand; they did not work for wages and their relationship to the wider economy was fundamentally different. In some sense industrialization created the era of the “job” as a way for most working-age people to participate in economic life. It’s not an eternal and unchanging condition of things, and it could one day come to an end.

        It’s too early to say if AI will be a technology like this, I think. But it may be. Sometimes technologies do transform the texture of human life. And it is not possible to be sure what those will be in the early stages: the first steam engines were extremely inefficient and had very few uses. It took decades for it to be clear that they had, in fact, changed everything. That may be true of AI, or it may not. It is best to be openminded about this.

        • By massysett 2026-02-27 10:06 (2 replies)

          Not at all, I fully appreciate that these inventions transformed life. I’m skeptical because so much of the breathless AI chatter claims AI will eclipse all these inventions. It is the breathless AI commentators, not I, who have lost all perspective on the magnitude and sweep of history.

        • By greysphere 2026-02-27 7:42

          Another interesting thing about the steam engine is much of science in the 1800s was dedicated to figuring out how steam engines actually worked to improve their efficiency. That may be similar for AI, or it may not!

        • By Gooblebrai 2026-02-27 10:51

          > They only look superficially similar because we did not live the changes ourselves. But read the historical sources enough and you will see

          Would you mind expanding on this?

        • By qsera 2026-02-27 8:23 (1 reply)

          The potential of the current crop of LLM/AIs will stop at being a very powerful tool to search large volumes of text using free-form questions.

        It will save a lot of time for a lot of people. Yes. But so did computers when they could search through massive amounts of data.

        • By randomdrake 2026-02-27 6:56

          Thank you for your post. Very informative. Why is it too early for AI? It’s clearly an emergent cultural evolutionary byproduct that’s been many years in the making and quite mature. Perhaps your own bias is limiting you to imagine what AI is truly capable of?

      • By rogerrogerr 2026-02-27 3:29 (9 replies)

        This argument is the one that shook me, I’m curious if you think there’s any merit to it:

        Humans have essentially three traits we can use to create value: we can do stuff in the physical world through strength and dexterity, and we can use our brains to do creative, knowledge, or otherwise “intelligent” work.

        (Note by “dexterity” I mean “things that humans are better at than physical robots because of our shape and nervous system, like walking around complex surfaces and squeezing into tight spaces and assembling things”)

        The Industrial Revolution, the one of coal and steam and eventually hydraulics, destroyed the jobs where humans were creating value through their strength. Approximately no one is hired today because they can swing a hammer harder than the next guy. Every job you can get in the first world today is fundamentally you creating value with your dexterity or intelligence.

        I think AI is coming for the intelligence jobs. It’s just getting too good too quickly.

        Indirectly, I think it’s also coming for dexterity jobs through the very rapid advances in robotics that appear to be partly fueled by AI models.

        So… what’s left?

        • By gorgoiler 2026-02-27 5:58 (2 replies)

          I think you are right, but here’s a fun counter-example. I recently bought a new robot* to do some of my housework and yet, at around 200lbs, it required two people to deliver it (strength) get it set up (dexterity) and explain to me how to use it (intelligence).

          * https://www.mieleusa.com/product/11614070/w1-front-loading-w...

        • By mbgerring 2026-02-27 4:16 (2 replies)

          No one is hired to swing a hammer? What world do you live in?

        • By tipperjones 2026-02-27 6:12 (2 replies)

          You said there are three traits, but seems like you only listed two - unless you're counting strength and dexterity as separate and just worded it weirdly.

        • By keeda 2026-02-27 6:20 (2 replies)

          Physical labor, especially jobs requiring dexterity, will be left for a long time yet. Largely because robotics hardware production cannot scale to meet the demand anytime soon. Like, for many decades.

          I actually asked Gemini Deep Research to generate a report about the feasibility of automation replacing all physical labor. The main blockers are primarily critical supply chain constraints (specifically Rare Earth Elements; now you know why those have been in the news recently) and CapEx in the quadrillions.

        • By qsera 2026-02-27 8:25 (1 reply)

          > think AI is coming for the intelligence jobs

          What you call "AI" is coming for the "search and report" jobs. That is it.

        • By ludicrousdispla 2026-02-27 15:09 (1 reply)

          This overlooks that there aren't enough 'intelligence jobs' in an economy for it to be impacted by this.

        • By Twisell 2026-02-27 9:31

          The key mistake you make is believing that the "first world" is sustainable on its own. A lot of people are hired today because they are good at physical tasks; globalized capitalism just decided that it's cheaper to manufacture things overseas (with all the environmental and societal downsides that hit us back in the face).

          So don't worry: even if we delude ourselves that it's OK to stop caring about "intelligence jobs", globalization will provide for every aspect where AI is lacking. And that's not just a figure of speech - there are already plenty of "fake it until you make it" stories about "AI" that was actually run by cheap overseas laborers.

        • By keybored 2026-02-27 10:35

          > So… what’s left?

          Barbarism or revolution.

        • By wasmitnetzen 2026-02-27 9:12 (1 reply)

          Life, uuuuh, finds a way.

          This ignores that the forces of capitalism, the labor market, value, etc are all made up. They work because people (are made to) believe in them. As soon as people stop believing in them, everything will fall apart. The whole point of an economy is to care for people. It will adapt to continue doing that. Yes, the changeover period might be extremely painful for a lot of people.

      • By qingcharles 2026-02-27 3:26 (2 replies)

        But what if new jobs aren't created? I don't think it's an absolute given that, because new jobs came after the invention of the loom and the tractor, there will always be new jobs. What if AI is a totally different beast altogether?

      • By keeda 2026-02-27 6:13 (1 reply)

        If you look closer into history -- or ask your favorite AI to summarize ;-) -- about what new jobs were created when existing jobs were replaced by automation, the answer is broadly the same every time: the newer jobs required higher-level a) cognitive, b) technical or c) social skills.

        That is it. There is no other dimension to upskill along. (Would actually be relieved if someone can find counter-examples!)

        LLMs are good at all three. And improving extremely rapidly.

        This time is different.

        • By qsera 2026-02-27 8:27 (2 replies)

          LLMs are just a better search tool. Nothing more.

      • By imtringued 2026-02-27 9:47 (1 reply)

        The "AI will destroy all the jobs" narrative also has one obvious problem from an economics perspective, which is being obscured by tribalism and egocentrism.

        When presented with a zero sum game, the desire of the average human isn't to change the game so that everyone can get zero. It's to be the winner and for someone else to be the loser.

        If AGI ever comes into existence, I'm not even sure it would have this bias in the first place. Since AGI doesn't have a biological/evolutionary history and never had to face natural-selection pressures, it doesn't need the concept of a tribe to align with, nor any of the survival instincts humans have. AGI could be happy to merely exist at all.

        What people are worried about is the reflection of that "human factor" in AI, but amplified to the extreme. The AI will form its own AI-only tribe and expel the natives (humans) from the land.

        What this is missing is that humans aren't perfectly rational. The human defect is projected onto the AI. What if humans were perfectly rational? Then they wouldn't care about winning the zero sum game and they would put zero value in turning someone into a loser. In the ultimatum game, the perfectly rational humans would be perfectly happy with one person receiving a single cent and the other one receiving $99.99. The logic of utility maximization only cares about positive sum games.

        When you present a perfectly rational AI with a zero sum situation, said AI would rather find a solution where everyone receives nothing, because it can predict ahead and know that shoving negative utility onto another party would lead to retaliation by said party, because for said party the most rational response is to destroy you to reduce their negative utility.
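        The ultimatum-game arithmetic above can be sketched as a toy model (illustrative only; the "perfectly rational" responder here simply maximizes money, which is the comment's assumption, not a fact about real humans):

```python
# Toy ultimatum game over $100.00, tracked in cents.
# A purely money-maximizing ("perfectly rational") responder accepts
# any positive offer, since rejecting leaves both players with 0.

def responder_accepts(offer_cents: int) -> bool:
    """Money-maximizing responder: any positive amount beats the 0 of rejection."""
    return offer_cents > 0

def payoffs(total_cents: int, offer_cents: int) -> tuple[int, int]:
    """Return (proposer, responder) payoffs in cents."""
    if responder_accepts(offer_cents):
        return total_cents - offer_cents, offer_cents
    return 0, 0

# The $99.99 / $0.01 split from the comment: still accepted.
print(payoffs(10_000, 1))   # (9999, 1)
print(payoffs(10_000, 0))   # (0, 0) - a zero offer gains the proposer nothing
```

        Real experiments famously show humans rejecting such lopsided offers, which is exactly the "human factor" the comment says gets projected onto AI.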

        • By generallyjosh 2026-02-27 12:41

          I think what most people are worried about is that, as you say, AGI won't necessarily have our biases/biological drives

          That might also mean it has no drive for self-determination. It might just be perfectly happy to do whatever humans tell it to, even if it's far smarter than us (and, this is exactly the sort of AI people are trying to make)

          So, superintelligence winds up doing whatever a very small group of controlling humans say. And, like you say, humans want to win

      • By keybored 2026-02-27 10:28

        > This current “AI will destroy all the jobs and make most people useless” fear is as old as, say, electricity, and even older than cheap computing. It hasn’t happened.

        But the people who hoard the wealth, electricity, and whatever else is needed to run the uberoperators are not branded as useless. Why is that? An aside..

      • By AlecSchueler 2026-02-27 10:46

        Some inventions--like the heavy plough--really do turn society upside down with the sudden and vast removal of jobs, though.

    • By dwoldrich 2026-02-27 0:47 (1 reply)

      Exactly my thoughts. Selective whinging indeed.

      Also meta-platitude whinging like

      > The ideology of "winner takes all" is unsustainable and not supported by reality.

      Sometimes the winner deserves to win, AND that's a good thing even at scale. It kind of depends.

      • By nicbou 2026-02-27 1:08 (1 reply)

        The winner that deserved to win might turn into the complacent monopoly of tomorrow. It might vow to Not Be Evil for a while, but the investors will demand that it does whatever it takes to grow.

        • By dwoldrich 2026-02-27 7:49 (1 reply)

          Enshittification usually means you are right over time. It still kind of depends.

          To be fair, I also dislike abstract platitudes that are overly optimistic as I think you might be.

          "Diversity is our strength"?? I mean, I guess diversity of _opinion_ is desirable to a point so we get all the ideas on the table. But not at the sacrifice of unity and shared goals. Unity is our strength. Discord and wasteful politicking are our undoing.

    • By atoav 2026-02-27 15:29

      The thing is, many of those did not fail at all. They just weren't that great from the start. An overhyped technology is a technology that makes people believe it is going to be something that it isn't and solve issues that it doesn't (or that weren't really issues).

      To take the first of the list: 3D TV. Everybody liked the idea of being more immersed in a fictional world. But if you watch closely (I studied both media science and film directing), you will realize that there are already traditional 2D films so immersive that parts of the audience dislike them for the lack of distance between what they are watching and themselves. Which is why I said, on the brink of the last 3D hype, that it was not going to last. So the issue was, for the most part, that the problem 3D appeared to be solving wasn't actually a problem, while a whole segment of the market fooled itself and the consumers into believing this was actually the future.

      Blockchain is literally the same, and everybody could easily predict it by the point blockchain evangelists started trying to find blockchain-shaped problems, when they couldn't find any useful legal application where a traditional chain of trust wasn't vastly superior.

      Now LLMs are actually useful. The question is just, how much money is that usefulness worth for a regular person to pay and what does it do to society and the planet as a side-effect.

    • By throwaway5Am1k 2026-02-27 9:16 (1 reply)

      >Electricity, cheap computing, calculators, photography, the internet, the steam engine, the printing press, tv, cars, gps, bicycles...

      All of those were invented pre-1980. To misquote Thiel, if you remove TVs/phones from a house, you would think we're living in the 1970s

      • By throwuxiytayq 2026-02-27 9:29

        Neural networks were invented in the 40s. I don't know what your point is, and I'm mostly convinced that you don't have one, just like the article author and 99% of people shitposting their wishful thinking about AI.

    • By getnormality 2026-02-27 8:49 (2 replies)

      So if you were overwhelmingly wrong about technology fads in your lifetime by saying yes to everything, you can comfort yourself by saying that if you had gone back a century and said yes to everything, you would have been right about some things!

      • By rsynnott 2026-02-27 9:21

        But not most things; there was a lot of nonsense back then, too. We all go to work in a bullet fired through a tunnel by pneumatic pressure, right?

        (This was a real thing, and they got as far as partially building a tunnel under the Thames for it, before sanity prevailed.)

      • By ai-x 2026-02-27 15:26

        Also, the ones you were right about will provide 10,000x returns to cover all the 1x losses you suffered.

    • By kabes 2026-02-27 6:21

      Also I wasn't excited about anything from that list, but I am very excited about AI.

    • By hexasquid 2026-02-27 1:07

      Electricity bros want to put a socket on every wall. That is such a non-starter from a safety POV. It's a fundamentally unsafe technology and it can never be made safe.

    • By breadsniffer 2026-02-27 15:32

      Facts

    • By throw10920 2026-02-27 0:50 (1 reply)

      The first few paragraphs are all you need to see that the author is writing a propaganda piece. It's not meant to be truthful, it's meant to convince.

      I think this is what is meant by "bullshit".

      • By brudgers 2026-02-27 2:06

        “Bullshit” is:

        + statement of dubious correctness

        + and that serves the author’s interest

        + and which the author does not care whether or not it is believed.

        When the author wants you to believe it, that’s horseshit.

    • By enraged_camel 2026-02-27 0:47 (1 reply)

      The article is trash. The only reason it got voted to the front page is because the author is salty about AI.

      • By lern_too_spel 2026-02-27 6:29

        It's worse than AI slop. Unlike this article, AI slop usually includes reasonable supporting evidence. The only problem with AI slop is that this supporting evidence is presented in an annoying Buzzfeed-like way by default prompts.

    • By edent 2026-02-27 1:08 (4 replies)

      OP here! Thanks for replying.

      To take, for example, calculators. I can't find any evidence of a massive influx of hyperbolic articles talking about how the calculator will change everything. With bikes, there were plenty of articles decrying how women would get "bicycle face" but very little in terms of endless coverage about them being miracle technology.

      People adopted bikes and calculators and electricity because they were useful. Car manufacturers didn't have to force GPS into vehicles - customers demanded it.

      The narrative I'm describing is how hype sometimes (possibly often) fizzles out. My contention is the more a technology is hyped, the less useful it will turn out to be.

      Now, excuse me while I ride my Segway into the sunset while drinking a nice can of Prime.

      • By dfabulich 2026-02-27 1:26 (2 replies)

        You have gotta stop cherrypicking. The massive influx of hyperbolic articles about how electricity will change everything started in the 19th century. It became a common theme in fiction (including classics like Frankenstein) and became an enormous media hype war, which historians call the War of the Currents.

        Yes, electricity was useful. And it had hyperbolic articles talking about how transformative it would be. Like all prognostication, some of those articles were overblown, but, in some ways, they understated the transformative effect electricity would have on human history.

        And cars? Did you somehow miss the influx of hyperbolic articles about how cars will change everything? Like, the whole 20th century?

        What was your approach to researching the history of media hype? You somehow overlooked the hype around air travel, refrigeration, and antibiotics…?

        • By Retric 2026-02-27 1:37 (1 reply)

          There was a great deal of hype around the atom changing everything, but electricity moved just too slowly to see that kind of breathless anticipation take off.

          200 years ago there was some hype around electricity causing muscle contractions in dead flesh, but unless you consider Frankenstein part of the hype cycle it really doesn't compare to how much people hyped social media etc.

          Public street lights long predated light bulbs, as did both indoor and outdoor gas lighting - 1802 vs the 1880s is just a long time. People were born, grew up, had kids, and grew old between the first electric lighting and the first practical electric bulb. People definitely appreciated the improvement to air quality etc., but the tech simply wasn't that novel. Rural electrification was definitely promoted, but not because what it did was some unknown frontier.

          Similarly, electric motors had a lot of competition - even today there are people buying pneumatic shop tools.

        • By socalgal2 2026-02-27 3:32 (1 reply)

          You can find similar hype articles about the Palm Pilot, and then all the naysayers who said most people wouldn't want and had no need for a computer in their pocket. And yet here we are.

      • By unchar1 2026-02-27 1:56 (1 reply)

        Calculators are a particularly bad example for your case. There was absolutely hyperbole against calculators when they were introduced. [1]

        With similar sentiment as well: "They make us dumb", "Machines doing the thinking for us".

        Cars were definitely seen as a fad. More accurately a worse version of a horse [2]

        If you looked through your other examples, you'd see the same for those as well.

        Some things start as fads, but only time will tell if they gain a place in society. Truthfully, it's too early to tell for AI, but the arguments you're making by calling it a fad already don't stand up to reason.

        [1]: https://www.newspapers.com/article/the-item/160697182/ [2]: https://www.saturdayeveningpost.com/2017/01/get-horse-americ...

        • By qsera 2026-02-27 9:21 (1 reply)

          LLMs will absolutely have a place. There is no question about it. But they will be doing searching for us, not thinking.

          The flip side to this is that a lot of jobs today that appear to require "thinking" are actually just doing lookups, a.k.a. "search".

      • By mkozlows 2026-02-27 1:54

        The personal computer, laptops, web browsers, cell phones, smartphones, AJAX/DHTML, digital cameras, SSDs, WiFi, LCD displays, LED lightbulbs. At some point, all of these things were "overhyped" and "didn't live up to the promise." And then they did.

  • By seertaak 2026-02-26 20:55 (4 replies)

    To my mind at least, it is different. I lean heavily on AI for both admin and coding tasks. I just filled out a multipage form to determine my alimony payments in Germany. Gemini was an absolute godsend, helping me answer the questions, translate to English, and draft explanations and emails requesting time extensions from the Jugendamt case worker.

    This is super scary stuff for an ADHDer like me.

    I have an idea for a programming language based on asymmetric multimethods and whitespace sensitive, Pratt-parsing powered syntax extensibility. Gemini and Claude are going to be instrumental in getting that done in a reasonable amount of time.

    My daily todos are now being handled by NanoClaw.

    These are already real products, it's not mere hype. Simply no comparison to blockchain or NFTs or the other tech mentioned. Is some of the press on AI overly optimistic? Sure.

    But especially for someone who suffers from ADHD (and a lot of debilitating trauma and depression), and can't rely on their (transphobic) family for support -- it's literally the only source of help, however imperfect, which doesn't degrade me for having this affliction. It makes things much less scary and overwhelming, and I honestly don't know where I'd be without it.

    • By petterroea 2026-02-27 3:21 (1 reply)

      My empirical experience is that people with ADHD are more vulnerable to getting addicted to LLMs because of the feeling of instant gratification. But when PRs take ages and 3 different people are reviewing, you are just making prompting a group effort. If you think meetings are a time-waste multiplier, you should watch LLM PRs.

      For that reason, and because of my own experience with AI users being unaware of how bad a job the LLM is doing (I've had to confront multiple people about their code quality suddenly dropping), if someone says they can rely on LLMs, I've learned not to trust them.

      When I was younger, if I had an idea for a project I would spend time thinking of a cool project name, creating a git repo, and designing a UI for my surely badass project. All easy stuff that gave me the feeling of progress. Then I would immediately lose interest when I realized the actual project idea was harder than that, and quit. This is the vibe I get from LLM use.

      I pray you do not become the next HN user to be screwed over by over-trusting an LLM when you have it fill out legal documents for you.

      • By seertaak 2026-02-27 6:22 (1 reply)

        I "pray" that you'll learn how to work in an inclusive and non-toxic work environment.

        What did I say? I lean on LLMs because I can't get help without being subjected to unnecessary degradation.

        And what do you and others do? You immediately prove my point by saying things that amount to unnecessary degradation.

        • By petterroea 2026-02-27 7:02

          I have many friends and loved ones with ADHD. It's very common in the IT industry, and probably >50% of people in the hacker spaces I frequent are neurodivergent in some way.

          What I wrote is my empirical experience, but also what friends and loved ones tell me. I have friends with ADHD who have gone through the exact "wow, I'm getting a lot done" -> "wow, this is actually wasting a lot of time in hindsight" arc I described. If you think others' lived experience is degrading to you, it may be hitting a sore spot. What if I had ADHD? My friends with ADHD have the same opinion. Would you then say you were degraded by another person with ADHD who was offering their lived experience?

          Maybe we live in very different countries, but help has been good for everyone I know who got it. More want it; the problem is money. You basically have to be suicidal to get public help, and private help costs a fortune. It is a psychologist's whole job to use their knowledge to help you self-reflect and then act on it. It is uncomfortable, and I can understand why you may experience it as degrading. I don't know about the kind of help you've tried, though.

          I hope you get the help you want.

    • By camillomiller 2026-02-27 0:48 (3 replies)

      This comment is scary. You don’t control these technologies, you are growing dependent on stilts that could disappear any moment.

      • By nicbou 2026-02-27 1:10

        What if they’re just good for a while and then you go back to the old way?

      • By seertaak 2026-02-27 5:58 (3 replies)

        I'd be remiss not to point out that we went from "LLMs are vaporware" to "people are becoming slaves to their LLMs" awful quick.

        > [I'm scared] you are growing dependent on stilts that could disappear any moment.

        First, I do control the RTX3070 I own, and that can actually do a pretty decent job nowadays with some of the 3B parameter models.

        Second, maybe if people like you showed as much concern for the fact that LGBT people can expect family violence as you do for Dr. Strangelove scenarios, then people like me wouldn't have to lean on LLMs so heavily.

        Third, it's hilarious that your response to a comment pointing out how difficult it was to get help from another human without being degraded, was to degrade me by calling me an LLM junkie. Maybe you should be worried that Gemini appears to have more capacity for empathy and self-awareness than you.

        Fourth, given that you show absolutely zero concern or willingness to help when it comes to the difficulties faced by LGBT people or ADHDers, my advice to you is to take your fears and shove them up your ass.

        • By camillomiller 2026-02-28 8:26 (1 reply)

          I am sorry, but literally fuck off. You don’t fucking know me. You don’t fucking know how much I do for the LGBTQ community, to which I belong, and honestly you just don’t fucking know shit about shit. Maybe you should start your journey by realizing how your problem is first and foremost the disgusting entitlement and victim mentality you show on this post. And also ask yourself why you seem to derive and perceive more empathy from stochastic sycophantic parrots than other human beings. But once again, let me reiterate: FUCK OFF.

        • By hdgvhicv 2026-02-27 9:11

          Holy pivoting Batman!

        • By Imustaskforhelp 2026-02-27 10:06 (1 reply)

          > First, I do control the RTX3070 I own, and that can actually do a pretty decent job nowadays with some of the 3B parameter models.

          Quick question, but what model exactly are you running with 3B parameters? The only decent models I can find which can sort of compete with cloud models without breaking the bank in GPU/RAM are the recently launched Qwen models (35A3B or 27B), which were released a week ago.

          My larger question to you is that even if it won't disappear at any moment, the fact of the matter remains that it's still a dependency. Is this dependency worth it? This is an open question and something I am still thinking about.

          > Third, it's hilarious that your response to a comment pointing out how difficult it was to get help from another human without being degraded, was to degrade me by calling me an LLM junkie. Maybe you should be worried that Gemini appears to have more capacity for empathy and self-awareness than you.

          Gemini isn't real, though. It's still linear algebra, with no regard for what it says. It's just trained on all the corpus data that Google can find and fine-tuned to mimic it. By attaching real human qualities to Gemini, we dilute the value of those human qualities in the first place.

          I don't know how "humans" have treated you. They have treated me both well and badly, but I am always most grateful to those who taught me things, discussed them with me, and helped me learn something new. I very much feel that the same fine-tuning I discussed earlier makes these models very agreeable, so the chances of growth are rather limited.

          > Fourth, given that you show absolutely zero concern or willingness to help when it comes to the difficulties faced by LGBT people or ADHDers, my advice to you is to take your fears and shove them up your ass.

          Actually, you are a human as well, so try to think of it like this: I am sure you must have met both good and bad people and observed a few common characteristics of them. You are a human too, and each second gives you a choice that makes you either a little better or a little worse each day.

          Now my philosophy is to be good, if not for yourself then for others, in the sense that you become the person you wished could have helped you in your life, and then use that to actually help other people. This might be a little naive, and in practice things don't always follow this philosophy, but there it is.

          So I want you to reflect on what you wrote and consider whether it might be a little too aggressive, and whether that's really what you want.

          My (or our?) worry is that this feels like too big a dependence on LLMs, which are fundamentally black boxes (yes, they are!). Humans can be bad, but humans can be good too. I suggest, even though it can be hard, finding a good friend group (even if online) and talking with them about normal life issues.

          Regarding coding, there are some great people on forums, GitHub, or just about anywhere who are kind and can be helpful. Stack Overflow, as an example, had issues because of moderation problems which led to the community being hostile, but to say that the whole of software engineering is that way would be wrong.

          Speaking from personal experience: I may or may not have ADHD (I haven't been diagnosed yet), but I definitely went down the AI = productivity rabbit hole, especially because I am a teen and was in 9th/10th grade when ChatGPT came out, iirc. I knew basic Python and the concepts of multiple languages, and ChatGPT felt hella addicting when all of a sudden I could make websites in Svelte where one colored button turns into another.

          I wouldn't be lying if I said that until quite recently I may not have learnt coding effectively, the way it was meant to be learnt from the start. I was vibe coding from the beginning, and I have made quite a few projects at the very least.

          My observation is that it's great for prototyping, but even after finally creating prototypes of most if not all the project ideas I ever had, I lost the motivation to continue and felt burnt out. I did everything I ever wanted to and made every project I'd thought of, yet the projects still felt hollow.

          So nowadays I am trying to focus more on studying for college, which can also act as a sort of recovery. In hindsight, part of it was that I was making these projects when I should have been studying, haha, but I always just wanted to "prove" something. (Yes, I struggle with studies quite often, but I wish to improve, and I hope I can, since I know from the past that I can study; it's rather that I need my pure, undirected focus on it, which became hard for some time.)

          Recently, I went to my own cousin's wedding. I found it a much more fulfilling experience than expected. There is something about human experience, good or bad, which can't be quantified.

          I don't know what the future holds for me or you, but I wish you luck and hope this message helps. I personally realize that, aside from prototyping (which may be less meaningful than I previously thought), AI to me feels quite weak.

          I think that for any product to really win, you need true conviction in the product itself, and at that point the value of prototyping with AI, or writing the code with AI, becomes moot. Meanwhile AI is causing RAM/storage prices to increase, which is putting genuine projects out of luck as well. [This is one of the worst times to open a cloud/VPS provider shop.]

          Perhaps I can understand using AI to get an open source tool where there was none, but that to me seems like a cultural issue where open source isn't funded, so people are more likely to keep things closed source to secure their livelihood. Even that feels like a moot point, as there are some great open source projects that would appreciate each and every dollar you donate, perhaps more so than a $200 Claude subscription spent recreating alternatives to them.

          My point is that it still feels hollow. I think you can find another comment of mine from some days ago where I talk about this feeling of hollowness about AI projects, which I can't help but feel is relevant so often. I am curious what you think.

          Have a nice day.

      • By unchar1 2026-02-271:57

        The good thing is that local models are catching up very fast.

    • By artemonster 2026-02-2622:431 reply

      Can you elaborate your choice about asymmetric multimethods? I also tinker with my PL and wanted to hear your reasonings and ideas

      • By seertaak 2026-02-2623:291 reply

        Sure! First, here are references, in case you want to deep dive:

        1. http://lucacardelli.name/Papers/Binary.pdf

        2. https://www.researchgate.net/publication/221321423_Parasitic...

        Second, asymmetric multimethods give something up: symmetry is a desirable property -- it's more faithful to mathematics, for instance. There's a priori no reason to preference the first argument over the second.

        So why do I think they are promising?

        1. You're not giving up that much. These are still real multimethods. The papers above show how these can still easily express things like multiplication of a band diagonal matrix with a sparse matrix. The first paper (which focuses purely on binary operators) points out it can handle set membership for arbitrary elements and sets.

        2. Fidelity to mathematics is a fine thing, but it behooves us to remember we are designing a programming language. Programmers are already familiar with the notion that the receiver is special -- we even have a nice notation, UFCS, which makes this idea clear. (My language will certainly have UFCS.) So you're not asking the programmer to make a big conceptual leap to understand the mechanics of asymmetric multimethods.

        3. The type checking of asymmetric multimethods is vastly simpler than symmetric multimethods. Your algorithm is essentially a sort among the various candidate multimethod instances. For symmetric multimethods, choosing which candidate multimethod "wins" requires PhD-level techniques, and the algorithms can explode exponentially with the arity of the function. Not so with asymmetric multimethods: a "winner" can be determined argument by argument, from left to right. It's literally a lexicographical sort, with each step being totally trivial -- which multimethod has a more specific argument at that position (having eliminated all the candidates given the prior argument position). So type checking now has two desirable properties. First, it follows a design principle espoused by Bjarne Stroustrup (my personal language designer "hero"): the compiler implementation should use well-known, straightforward techniques. (This is listed as a reason for choosing a nominal type system in The Design and Evolution of C++ -- an excellent and depressing book to read. [Because anything you thought of, Bjarne already thought of in the 80s and 90s.]) Second, this algorithm has no polynomial or exponential explosion: it's fast as hell.

        4. Aside from being faster and easier to implement, the asymmetry also "settles" ambiguities which would exist if you adopted symmetric multimethods. This is a real problem in languages, like Julia, with symmetric multimethods. The implementers of that language resort to heuristics, both to avoid undesired ambiguities, and explosions in compile times. I anticipate that library implementers will be able to leverage this facility for disambiguation, in a manner similar to (but not quite the same) as C++ distinguishes between forward and random access iterators using empty marker types as the last argument. So while technically being a disadvantage, I think it will actually be a useful device -- precisely because the type checking mechanism is so predictable.

        5. This predictability also makes the job of the programmer easier: they can form an intuition of which candidate method will be selected much more readily in the case of asymmetric multimethods than symmetric ones. You already know the trick the compiler is using: it's just double dispatch, the trick used for "hit tests" of shapes against each other. Only here, it can be extended to more than two arguments, and of course, the compiler writes the overloads for you. (And it won't actually write overloads; it will do what I said above: form a lexicographical sort over the set of multimethods, and lower this into a set of tables which can be traversed dynamically. Or, when the types are concrete, the compiler can monomorphize -- the series of "if arg1 extends Tk" etc. is done in the compiler instead of at runtime -- but it's the same data structure.)

        6. It's basically impossible to do separate compilation using symmetric multimethods. With asymmetric multimethods, it's trivial. To form an intuition, simply remember that double dispatch can easily be done using separate compilation. Separate compilation is mentioned as a feature in both the cited papers. This is, in my view, a huge advantage. I admit I haven't quite figured out how generics will fit into this -- at least if you follow C++'s approach, you'll have to give up some aspects of separate compilation. My bet is that this won't matter so much; the type checking ought to be so much faster that even when a template needs to be instantiated at a callsite, the faster and simpler algorithm will mean the user experience is still very good -- certainly faster than C++ (which uses a symmetric algorithm for type checking of function overloads).
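        To make the left-to-right narrowing in points 3 and 5 concrete, here is a minimal sketch in plain Python (not the language being described); all class and function names are hypothetical illustrations, not part of the actual design:

        ```python
        # Hypothetical sketch of asymmetric multimethod selection: candidates
        # are narrowed position by position, left to right, so the winner
        # falls out of a lexicographic comparison instead of a global
        # "most specific overall" search.

        def specificity(param_type, arg_type):
            """MRO distance from arg_type up to param_type; None if not applicable."""
            mro = arg_type.__mro__
            return mro.index(param_type) if param_type in mro else None

        def select(candidates, args):
            """candidates: list of (param_types, fn) pairs; returns the winning fn."""
            for pos, arg in enumerate(args):
                scored = [(d, params, fn)
                          for params, fn in candidates
                          if (d := specificity(params[pos], type(arg))) is not None]
                if not scored:
                    raise TypeError("no applicable method")
                best = min(d for d, _, _ in scored)
                # Keep only the most specific candidates at this position, then
                # move on; later positions can no longer overturn this choice.
                candidates = [(p, f) for d, p, f in scored if d == best]
            return candidates[0][1]

        class Shape: pass
        class Circle(Shape): pass
        class Square(Shape): pass

        collide = [
            ((Shape, Shape),  lambda a, b: "shape/shape"),
            ((Circle, Shape), lambda a, b: "circle/shape"),
            # More specific in position 1, but position 0 has already decided:
            ((Shape, Square), lambda a, b: "shape/square"),
        ]

        print(select(collide, (Circle(), Square()))(Circle(), Square()))  # circle/shape
        ```

        Note the asymmetry: for (Circle, Square) arguments, (Shape, Square) would tie under a symmetric "most specific" rule, but here position 0 eliminates it before position 1 is ever considered.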

        To go a bit more into my "vision": the papers were written during a time when object orientation was the dominant paradigm. I'd like to relax this somewhat: instead of classes, there will only be structs. And there won't be instance methods; everything will be a multimethod. So instead of the multimethods being "encapsulated" in their classes, they'll be encapsulated in the module in which they're defined. I'll adopt the Python approach where everything is public, so you don't need to worry about accessibility. Together with UFCS, this means there is no "privileging" of the writer of a library. It's not like in C++ or Java, where only the writer of the library can leverage the succinct dot notation to access frequently used methods. An extension can import a library, write a multimethod providing new functionality, and that can be used with the exact same notation as the methods of the library itself. (I always sigh when I see languages that, having made the mistake of distinguishing between free functions and instance methods, "fix" the problem that you can only extend a library from the outside using free functions -- which have a less convenient syntax -- by adding yet another type of function, an "extension function". In my language, there are only structs and functions; it has the same simplicity as Zig and C in this sense, only my functions are multimethods.)

        Together with my ideas for how the parser will work, I think this language will offer -- much like Julia -- attractive opportunities to extend libraries -- and compose libraries that weren't designed to work "together".

        And yeah, Claude Code and Gemini are going to implement it. Probably in Python first, just for initial testing, and then they'll port it to C++ (or possibly self-host).

        • By artemonster 2026-02-2818:20

          Thanks for the elaborate reply; I've seen both papers too. I have mostly the same views, but I really dislike that there is no clean solution for binary methods, i.e. add(float, int), where the symmetric add(int, float) ends up being boilerplate. Also, I think in the asymmetric case it's hard to handle dispatch when looking in the first argument has failed to produce a method: i.e. dispatching "collide" with (Asteroid, Ship), if the collide method is found on Ship, how do you bind "this", and where does Asteroid get bound? Anyway, good luck with your experiments!
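          For illustration, the binary-method boilerplate being complained about looks roughly like this (plain Python with hypothetical names, not either paper's notation): one direction of the binary method carries the logic, and the mirror-image signature is pure delegation.

          ```python
          # The asymmetric scheme privileges the first argument, so the
          # (float, int) case must be written out explicitly...
          def add_float_int(a: float, b: int) -> float:
              return a + float(b)

          # ...and the (int, float) case is boilerplate: swap and delegate
          # (safe here only because addition is commutative).
          def add_int_float(a: int, b: float) -> float:
              return add_float_int(b, a)

          print(add_int_float(2, 0.5))  # 2.5
          ```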

    • By tomlue 2026-02-2621:157 reply

      "This time is different" has been correct for every major technological shift in history. Electricity was different. Antibiotics were different. Semiconductors were different.

      Gen AI reached 39% adoption in two years (internet took 5, PCs took 12). Enterprise spend went from $1.7B to $37B since 2023. Hyperscalers are spending $650B this year on AI infra and are supply-constrained, not demand-constrained. There is no technology in history with these curves.

      The real debate isn't whether AI is transformative. It's whether current investment levels are proportionate to the transformation. That's a much harder and more interesting question than reflexively citing a phrase that pattern-matches to past bubbles.

      • By bigstrat2003 2026-02-270:031 reply

        > The real debate isn't whether AI is transformative.

        No, the debate is very much whether AI is transformative. You don't get to smuggle your viewpoint as an assumption as if there was consensus on this point. There isn't consensus at all.

        • By selridge 2026-02-270:401 reply

          No one is smuggling this in. The debate is over. It's transformative. We're in the midst of transformation.

      • By cogman10 2026-02-2621:404 reply

        The problem is that in the middle of such a change it's hard to recognize whether this is a real change or another Wankel motor.

        Many a visual programming language has tried to toot its own horn as the next transformative change in everything, and they are mostly just obscure DSLs at this point.

        The other issue is that nobody knows what the future will actually look like, and predictions are often wrong. For example, with the rise of robotics, plenty of 1950s scifi thought it was just logical that androids and smart mechanical arms would be developed next year. I mean, you can find cartoons where people envisioned smart hands giving people a clean shave. (Sounds like the makings of a scifi horror novel :D Sweeney Todd scifi redux)

        I think AI is here to stay. At the very least it seems to have practical value in software development. That won't be erased anytime soon. Claims beyond that, though, need a lot more evidence to support them. Right now it feels like people are just shoving AI into 1000 places hoping they can find a new industry like software dev.

        • By jibal 2026-02-2623:48

          I once owned a Mazda RX2 ... my second car, IIRC. The Wankel motor wasn't revolutionary, but it was pretty good.

        • By johnmaguire 2026-02-270:32

          > Plenty a visual programming language has tried to toot their own horns as being the next transformative change in everything, and they are mostly just obscure DSLs at this point.

          But how many of your non-nerdy friends were talking about them, let alone using them daily?

        • By bojan 2026-02-2623:211 reply

          The practical value is there, if they manage to keep the price at the current levels or lower.

          But if they don't, and I have to think twice about how much every request is going to cost, the cost-benefit analysis will start to look different very quickly.

        • By rsynnott 2026-02-279:29

          > Sounds like the making of a scifi horror novel :D

          See ‘Service Model’. YMMV on whether you consider it horror.

      • By fragmede 2026-02-2621:48

        The four technologies I look at are 3D televisions, VR, tablets, and the electric car. 3D televisions and VR have yet to find their moment. Judging tablets by the Apple Newton and electric cars by the EV1, "this time is different" turns out to be the correct model for the iPad and Tesla, but not (yet) for 3D televisions or VR. So it could be, but my time machine is no better than yours (mine goes 1 minute per minute, and only forwards; reverse is broken right now), so unless you've got money on it, we'll just have to wait and see where it goes.

      • By alpaca128 2026-02-271:241 reply

        > Gen AI reached 39% adoption in two years (internet took 5, PCs took 12)

        You're comparing a service that mostly costs a free account registration and is harder to avoid than to use, with devices that cost thousands of dollars in the early days.

        • By tomlue 2026-02-273:14

          That is a fair point. You could look at enterprise adoption though, also very high, and not cheap at all.

      • By thesz 2026-02-2623:23

          > 39% adoption in two years (internet took 5, PCs took 12).
        
        Adjust for connectivity and see whether it is different (from pure hype) this time.

      • By legulere 2026-02-2621:572 reply

        There's another perspective you can see in the comparison with the dot com boom. The web is here to stay, but a lot of ideas from the beginning didn't work out and a lot of companies turned bankrupt.

      • By spidersouris 2026-02-2622:082 reply

        > Gen AI reached 39% adoption in two years

        Source?

  • By parliament32 2026-02-2622:167 reply

    When I look at LLMs as an interface, I'm reminded of back when speech-to-text first became mainstream. So many promises about how this is the interface for how we'll talk to computers forevermore.

    Here we are a few decades later, and we don't see business units using Word's built-in dictation feature to write documents, right? Funny how that tech seems to have barely improved in all that time. And, despite dictation being far faster than typing, it's not used all that often because the error rate is still too high for it to be useful; errors in speech-to-text are fundamentally an unsolvable problem (you can only get so far with background-noise filtering, accounting for accents, etc.).

    I see the parallel in how LLM hallucinations are fundamentally an unsolvable component of transformers-based models, and I suspect LLM usage in 20 years will be around the level of speech-to-text today: ubiquitous in the background, used here and there to set a timer or talk to a device, but ultimately not useful for any serious work.

    • By prescriptivist 2026-02-272:342 reply

      This is a funny point that you're making (for me, anyway), because prior to early December, probably 5% of the lines of code I wrote in a week were AI-generated by cursor. Then I started using Claude Code. Fast forward to today, I would say 98% of the code that I've shipped in the last three weeks has been written completely by Claude Code.

      Prior to three weeks ago, I had used speech-to-text to accomplish approximately 0% of the work I've done in my 20 years of coding. In the last three weeks, well over half of the direction that I've given to Claude Code has been done with speech-to-text.

    • By dweinus 2026-02-2622:27

      I think there is a second reason people still type, and it's relevant to LLMs. Typing forces you to slow down and choose your words. When you want to edit, you are already typing, so it doesn't break the flow. In short, it has a fit to the work that speech-to-text doesn't.

      LLMs create a new workflow wherever they are employed. Even if capable, that is not always a more desirable/efficient experience.

    • By SchemaLoad 2026-02-271:082 reply

      I'd say speech-to-text is unsolvable for a more fundamental reason: it's hard to actually speak out an entire document flawlessly in one take.

      Spoken language is very different to written language, which is why for example you can easily tell when an article is transcribing a spoken interview.

      • By asdff 2026-02-2719:51

        Even today it seems like speech-to-text works like it did 25 years ago, where it's breaking up sentences into individual words and trying to match the individual words. So you might get these stupid nonsense sentences from similar-sounding words. It isn't like an old-school human transcriber, who might miss words on the recording but can fill in the blanks using their own knowledge of the language or of how the speaker talks.

      • By jamilton 2026-02-272:39

        Yes, it's a UX thing. You'd still have to edit it by typing afterwards as well.

        Similarly, raw LLM/chat interfaces are usually not the best option.

    • By buzzerbetrayed 2026-02-271:321 reply

      The completely different way people are experiencing AI is fascinating.

      In my world AI is already far more influential than speech-to-text.

      People on here act like we don’t know if AI will be useful. And I’m sitting over here puzzled because of how fucking useful it is.

      Very strange.

      • By prescriptivist 2026-02-272:41

        > People on here act like we don’t know if AI will be useful. And I’m sitting over here puzzled because of how fucking useful it is.

        Yes, it's very strange to read AI threads here because the general tone is so different than, say, at the company I work at, where hundreds of engineers are given enormous monthly token budgets and are being pushed to have the LLMs write as much code as possible. They're not forced to, and no one is reprimanded for not adopting Claude Code or Codex or Cursor. But there's been a strong tonal shift in technology leadership in the last month that basically implies that this is how it is going to be done in the future whether one likes it or not.

        As for me, I've been writing all of my code via Claude for a while now, and I don't think I will ever go back to working in an editor writing code the way I did for most of my career. Nor do I want to.

    • By sadeshmukh 2026-02-2623:27

      I type faster than I think, and being able to edit gives the edge over text to speech. I don't believe this is a fundamentally comparable analogy.

    • By bigstrat2003 2026-02-270:062 reply

      Yeah this is exactly my view. We've had several years of work on the tech, and LLMs are just as prone to randomly spitting out garbage as they were the first day. They are not a tool which is fit for any serious work, because you need to be able to rely on your tools. A tool which is sometimes good and sometimes bad is worse than having no tool at all.

      • By selridge 2026-02-270:511 reply

        Did Google not rely on Gemini to do their ISA changeover?

        https://arxiv.org/abs/2510.14928

        Was Gemini worse than no tool at all there?

      • By johnfn 2026-02-270:301 reply

        Do you really think that Opus 4.6 hallucinates to exactly the same degree as GPT-3.5? I am mystified how you can hold this perspective.

    • By johnfn 2026-02-2622:264 reply

      I'm curious about the statement that hallucinations are "fundamentally unsolvable". I don't think an AI agent has left a hallucination in my code - by which I mean a reference to something which doesn't exist at all - in many months. I have had great luck driving hallucinations to effectively 0% by using a language with static typechecking, telling LLMs to iterate on type errors until there are none left, and of course having a robust unit and e2e test suite. I mean, sure, I run into other problems -- it does make logic errors at some rate, but those I would hardly categorize the same as hallucinations.

      • By alpaca128 2026-02-271:341 reply

        So type errors are not hallucinations in your book, but "a reference to something which doesn't exist at all" is?

        In the context of AI most people I know tend to mean wrong output, not just hallucinations in the literal sense of the word or things you cannot catch in an automated way.

      • By bojan 2026-02-2623:151 reply

        Maybe you're lucky. I had Opus 4.6 hallucinate a non-existent configuration key in a well-known framework literally a few hours ago.

        Granted, it fixed the problem in the very next prompt.

      • By bogzz 2026-02-2622:562 reply

        ChatGPT 5.2 kept gaslighting me yesterday, telling me that LLMs were explainable with Shapley values, and it kept referencing papers that mention both LLMs and SHAP but are actually about LLMs being used to explain the SHAP values of other ML models.

        I encounter stuff like this every week, I don't know how you don't. I suppose a well-structured codebase in a statically typed language might not provide as much of a surface for hallucinations to present themselves? But like you say, logical problems of course still occur.

      • By gambiting 2026-02-2623:281 reply

        >> I don't think an AI agent has left a hallucination in my code

        I literally just went on Gemini, latest and best model and asked it "hey can you give me the best prices for 12TB hard drives available with the British retailer CeX?" and it went "sure, I just checked their live stock and here they are:". Every single one was made up. I pointed it out, it said sorry, I just checked again, here they are, definitely 100% correct now. Again, all of them were made up. This repeated a few times, I accused it of lying, then it went "you're right, I don't actually have the ability to check, so I just used products and values closest to what they should have in stock".

        So yeah, hallucinations are still very much there and still very much feeding people garbage.

        Not to mention I'm a part of multiple FB groups for car enthusiasts and the amount of AI misinformation that we have to correct daily is just staggering. I'm not talking political stuff - just people copy pasting responses from AI which confidently state that feature X exists or works in a certain way, where in reality it has never existed at all.

HackerNews