Backprop kept producing wins. That bought it time.
“Wait longer” is not a blank check. In 2026, with Meta-scale talent, data, and compute, serious ideas should show strong intermediate results, not just theory.
Time is necessary, but it is not evidence. More compute does not replace insight, but it does speed up falsification.
So no, skepticism is not naive. If a research program still cannot point to a clear empirical advantage after years, “it just needs more time” stops sounding like science and starts sounding like insulation from the scoreboard.
LeCun has had every advantage imaginable — and the scoreboard remains empty.
He joined Facebook (now Meta) in December 2013. That's over 12 years of access to one of the largest AI labs in the world, near-unlimited compute, and some of the best researchers money can buy.
He introduced I-JEPA in 2023, nearly 3 years ago. It was supposed to represent a fundamental shift in how machines learn — moving beyond generative models toward a deeper, more structured world understanding.
And yet: I-JEPA hasn't decisively beaten existing models on any major benchmark. No Meta product uses JEPA as a core approach. The research community hasn't adopted it — the field keeps pushing on LLMs and diffusion models. There's been no "GPT moment" for JEPA, no single result that made its value obvious to everyone.
So the question becomes simple: how many years, how many resources, and how many failed proof-of-concepts does it take before we're allowed to judge whether an idea actually works?