Comments

  • By throwaway132448 2026-03-09 12:16 · 3 replies

    I found the article confusing. Its premise seems to be that alternative methods to deep learning “work”, and only faded out due to other factors, yet it keeps referencing scenarios in which they demonstrably failed to “work”. Such as:

    > In 2012, Alex Krizhevsky submitted a deep convolutional neural network to the ImageNet Large Scale Visual Recognition Challenge. It won by 9.8 percentage points over the nearest competitor.

    Maybe there’s another definition of “works” that’s implicit and I’m not getting, but I’m struggling to picture a definition relevant to the history-of-deep-learning narrative they are trying to explain.

    • By deckar01 2026-03-09 12:53 · 1 reply

      It seems to be an indirect attempt to promote their GitHub project. They had Claude make them an “agent” using Bayesian modeling and Thompson sampling and now they are convinced they have heralded a new era of AI.

      • By canjobear 2026-03-09 14:02

        It reads to me like Claude wrote the article too.

    • By PaulHoule 2026-03-09 13:27 · 1 reply

      I think the worst thing about the golden age of symbolic AI was that there was never a systematic approach to reasoning about uncertainty.

      The MYCIN system [1] was rather good at medical diagnostics and, like other systems of the time, had an ad-hoc procedure for dealing with uncertainty, which is essential in medical diagnosis.

      The problem is that it is not enough to say "predicate A has an 80% chance of being true"; rather, if you have predicates A and B you have to consider the probabilities of all four of (A B, (not A) B, A (not B), (not A) (not B)), and with N predicates you have to consider joint probabilities over 2^N possible situations, which is a lot.

      For any particular situation the values are correlated and you don't really need to consider all those contingencies, but a general-purpose reasoning system with logic has to be able to handle the worst case. It seems that deep learning systems take shortcuts that work much of the time but may well hit a wall on how accurate they can be because of that.
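
      A minimal sketch of that blowup (hypothetical Python, nothing taken from MYCIN itself): enumerate the joint states of N boolean predicates and count them.

          from itertools import product

          # A full joint distribution over N boolean predicates needs a probability
          # for every one of the 2^N combinations.
          predicates = ["A", "B", "C"]  # N = 3
          states = list(product([True, False], repeat=len(predicates)))
          print(len(states))  # 2^3 = 8 joint states to assign probabilities to

          # Knowing only a marginal such as P(A) = 0.8 does not pin these numbers down;
          # without independence assumptions you have to specify all of them, and the
          # table doubles with every predicate you add.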

      [1] https://en.wikipedia.org/wiki/Mycin

      • By zozbot234 2026-03-09 13:46 · 1 reply

        Symbolic AI à la MYCIN and other expert systems didn't do anything that a modern database query engine can't do with far greater performance. The bottleneck is coming up with the set of rules the system is to follow.

        • By PaulHoule 2026-03-09 14:39

          Early production rule engines really sucked: a lot of the time they didn't have any kind of indexes and did full scans constantly. Good RETE engines with indexes hadn't gone mainstream by the end of the 1980s, and by then the industry was already losing interest. In a lot of ways

          https://en.wikipedia.org/wiki/Drools

          is pretty good, as is the Jena rules engine, but none of these have ways of dealing with uncertainty, which you need if you're going to be working with language and deciding which of 10,000 possible parses is right for a sentence. People used to talk as if 10,000 rules was a lot, but handling 2 million well-organized rules with Drools is no problem at all today.

          I think the problems of knowledge base construction are overstated and that a lack of tools is the real problem. Or rather, the Cyc experience shows that rules are not enough: after Lenat died it got out that Cyc didn't just have a big pile of facts and rules and a general reasoning procedure, it also had a large database of algorithms for solving specific problems. That is, in principle you can solve anything with an SMT solver, but if you actually try it you'll find you can code up a special-purpose algorithm for common tasks before the SMT solver really gets warmed up.
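
          A minimal sketch of that point (z3 is only a stand-in here; nothing above names a particular solver): encode a trivial problem for a general SMT solver versus just computing the answer directly.

              # pip install z3-solver
              from z3 import Int, Solver, sat

              x = Int("x")
              solver = Solver()
              solver.add(2 * x + 3 == 11)   # declare the problem, let the solver search
              if solver.check() == sat:
                  print(solver.model()[x])  # 4

              # The special-purpose route: for this class of problem a one-liner is done
              # long before the general solver has warmed up.
              print((11 - 3) // 2)          # 4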

          Part of the production rules puzzle is that there never was a COBOL of business rules; rather, you got different systems that gave different answers to various tricky problems, like how to control the order of execution when it matters, how to represent negation, etc.

    • By LoganDark 2026-03-09 12:48 · 1 reply

      I think what they're saying is the methods used today are faster but have a lower ceiling, and that that's why they quickly took over but can only go so far.

      • By jerf 2026-03-09 13:32 · 1 reply

        That would be a hypothesis, not a fact.

        I'm not closed to it. You can check my comment history for frequent references to next-generation AIs that aren't architected like LLMs. But they're going to have to produce an AI of some sort that is better than the current ones, not hypothesize that it may be possible. We've got about 50 years of hypothesizing about how wonderful such techniques might be and, by the new standards of 2026, precious few demonstrations of it.

        Quoting from the article:

        "Within five years, deep learning had consumed machine learning almost entirely. Not because the methods it displaced had stopped working, but because the money, the talent, and the prestige had moved elsewhere."

        That one jumped right out at me because there's a sleight of hand there. A more correct quote would be "Not because the methods it displaced had stopped working as well as they ever have, ..." Without that phrase, the implication that other techniques were doing just as well as our transformer-based LLMs is slipped in there, but it's manifestly false when brought up to conscious examination. Of course they haven't, unless they're in the form of some probably-beyond-top-secret AI in some government lab somewhere. Decades have been poured into them and they have not produced high-quality AIs.

        Anyone who wants to produce that next-gen leap had probably better have some clear eyes about what the competition is.

        • By LoganDark 2026-03-09 13:43

          > That would be a hypothesis, not a fact.

          I agree.

  • By pron 2026-03-09 14:08 · 2 replies

    > This is the VHS-versus-Betamax dynamic, or TCP/IP versus the OSI model, or QWERTY versus every ergonomic alternative proposed since 1936. The technically superior solution loses to the solution that’s easier to deploy, easier to hire for, and good enough for the use cases that pay the bills.

    Without commenting on the merit of the claims, the problem with this statement is that in many cases there is no universal "technical superiority", only tradeoffs. E.g. Betamax was technically superior in picture quality while VHS was technically superior in recording time, and more people preferred the latter technical superiority. When people say that the technically superior approach lost in favour of convenience, what really happened is that their own personal technical preferences were in the minority. More people preferred an alternative that wasn't just "good enough" but technically better, only on a different axis.

    Even if we suppose the author is right that his preferred approach yields better outputs, he acknowledges that constructing good inputs is harder. That's not technical superiority; it's a different tradeoff.

    • By Tomte 2026-03-09 15:18

      And there was a real cliff in recording time, not a marginal difference: a normal VHS tape could record a typical TV show; a normal Betamax tape could not. The utility function is a step function here.

      (Both got more recording time later through Long Play techniques, a.k.a. quality degradation, and through physically longer magnetic tape in the cassette, but at least in the beginning the difference was clear-cut.)

    • By kjshsh123 2026-03-09 14:29

      Pretty much, although it's less about majority/minority preference and more about utilitarianism and economics.

      It's possible a majority of people would have been marginally happier with Betamax than VHS. Even in that case, VHS can still win because a minority of people had a strong, stubborn preference for it while the majority's preference for Betamax was weak.

      If 1,000,000 people are willing to pay $5 more for video quality but 800,000 people are willing to pay $8 more for longer recording, which wins out? The smaller group represents $6.4 million of willingness to pay against $5 million, so longer recording does.

      Not to mention savings on the producer side are relevant too, not just the consumer side.

      I'm not saying the above is necessarily the case. Just pointing out that markets aren't majoritarian, they're utilitarian.

  • By bArray 2026-03-09 13:34

    > A Bayesian decision-theoretic agent needs explicit utility functions, cost models, prior distributions, and a formal description of the action space. Every assumption must be stated. Every trade-off must be quantified. This is intellectually honest and practically gruelling. Getting the utility function wrong doesn’t just give you a bad answer; it gives you a confidently optimal answer to the wrong question.

    I was talking somebody through Bayesian updates the other day. The problem is that if you mess up any part of it, in any way, the result can be complete garbage. Meanwhile, if you throw some neural network at the problem, it handles noise much better.
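
    A minimal sketch of what I mean (toy numbers of my own, not from any real case): one Bayes update, where a mis-specified prior silently changes the conclusion.

        def posterior(prior, sensitivity, false_positive_rate):
            """P(condition | positive test) via Bayes' rule."""
            p_positive = sensitivity * prior + false_positive_rate * (1 - prior)
            return sensitivity * prior / p_positive

        # Same test, two priors that differ by a factor of ten:
        print(posterior(prior=0.01, sensitivity=0.9, false_positive_rate=0.05))   # ~0.154
        print(posterior(prior=0.001, sensitivity=0.9, false_positive_rate=0.05))  # ~0.018

        # Every input feeds straight into the result, so a wrong prior or likelihood
        # gives a confidently wrong posterior; nothing in the machinery flags it.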

    > Deep learning’s convenience advantage is the same phenomenon at larger scale. Why specify a prior when you can train on a million examples? Why model uncertainty when you can just make the network bigger? The answers to these questions are good answers, but they require you to care about things the market doesn’t always reward.

    The answer seems simple to me - sometimes getting an answer is not enough, and you need to understand how an answer was reached. In the age of hallucinations, one can appreciate approaches where hallucinations are impossible.
