That's a super nice story. Sometimes we tend to forget the contributions of AI for the visually impaired or hearing-impaired people—for example, subtitles on Meta glasses or audio descriptions and such.
LeCun has had every advantage imaginable — and the scoreboard remains empty.
He joined Facebook (now Meta) in December 2013. That's over 12 years of access to one of the largest AI labs in the world, near-unlimited compute, and some of the best researchers money can buy.
He introduced I-JEPA in 2023, nearly 3 years ago. It was supposed to represent a fundamental shift in how machines learn — moving beyond generative models toward a deeper, more structured world understanding.
And yet: I-JEPA hasn't decisively beaten existing models on any major benchmark. No Meta product uses JEPA as a core approach. The research community hasn't adopted it — the field keeps pushing on LLMs and diffusion models. There's been no "GPT moment" for JEPA, no single result that made its value obvious to everyone.
So the question becomes simple: how many years, how many resources, and how many failed proof-of-concepts does it take before we're allowed to judge whether an idea actually works?
First, believe it or not, three years is not that long. It's also far from certain that LeCun got the resources he needed to work on this tech at Meta; Zuck wanted another Llama.
Second, AMI Labs just secured a billion dollars in funding, and while that's a lot of money, it's a fraction of what Meta is reportedly paying Wang per year. Big tech companies are throwing tens of billions at doing the same thing, just at a bigger scale. Why not try something else once in a while?
The giant seed round proves investors were willing to fund Mira Murati, not that the company had built anything durable.
Within months, it had already lost cofounder Andrew Tulloch to Meta, then cofounders Barret Zoph and Luke Metz plus researcher Sam Schoenholz to OpenAI; WIRED also reported that at least three other researchers left. At that point, citing it as evidence of real competitive momentum feels weak.
I can't reconcile this tension: most of the landmark deep learning papers, from Transformers to dropout, were developed with what by today's standards were almost ridiculously small training budgets.
So I keep wondering: if his idea is really that good — and I genuinely hope it is — why hasn’t it led to anything truly groundbreaking yet? It can’t just be a matter of needing more data or more researchers. You tell me :-D
It's a matter of needing more time, which is a resource even SV VCs are scared to throw around. Look at the timeline of these advancements and how long each took:
LeCun applied backprop to deep networks (handwritten digit recognition) back in 1989
Hinton published contrastive divergence (for training energy-based models) in 2002
AlexNet was 2012
Word2vec was 2013
Seq2seq was 2014
AIAYN ("Attention Is All You Need") was 2017
GPT-2 (the "unicorn" demo) was 2019
InstructGPT was 2022
This makes a lot of people think that things are just accelerating and they can be along for the ride. But it's the years and years of foundational research that allow this to happen. That toll has to be paid before the successors of LLMs can reason properly and operate in the world the way humans do. The sowing won't happen as fast as the reaping did. LeCun wants to plant those seeds; the others, who only want to eat the fruit, don't get that they have to wait.
If his ideas had real substance, we would have seen substantial results by now.
He introduced I-JEPA in 2023, so almost three years ago at this point.
If he still hasn’t produced anything truly meaningful after all these years at Meta, when is that supposed to happen? Yann LeCun has been at Facebook/Meta since December 2013.
Your chronological sequence is interesting, but it refers to a time when the number of researchers and the amount of compute available were a tiny fraction of what they are today.
> If his ideas had real substance, we would have seen substantial results by now
This is naive. It's like saying that if backprop had any real substance, it would have had results within 10 years of its publication in 1989.
> Your chronological sequence is interesting, but it refers to a time when the number of researchers and the amount of compute available were a tiny fraction of what they are today.
Again: those resources are important, but one resource being ignored is time. Try baking a turkey at 300°F for 4 hours versus 900°F for 1 hour and see how edible each one is.
Backprop kept producing wins. That bought it time.
“Wait longer” is not a blank check. In 2026, with Meta-scale talent, data, and compute, serious ideas should show strong intermediate results, not just theory.
Time is necessary, but it is not evidence. More compute does not replace insight, but it does speed up falsification.
So no, skepticism is not naive. If a research program still cannot point to a clear empirical advantage after years, “it just needs more time” stops sounding like science and starts sounding like insulation from the scoreboard.
"They're moving the goalposts" is increasingly the autistic shrieking of someone with no serious argument or connection to reality whatsoever.
No one cares how "AGI" or whatever the fuck term or internet-argument goalpost you cared about X months ago was defined. Everyone cares about what current tech can do NOW, under what conditions, and when it fails catastrophically. That is all that matters.
So, refining the conditions of an LLM win (or loss) is all that matters, not who wins or loses under some particular historical refinement. Complaining that some people see a recent result as a loss (or win) is just failing to understand the actual game being played and what really matters here.
I'm just saying that AI critics like to say that they don't like AI, and to prove their point they keep raising their definition of "good enough"; whenever an AI reaches that objective, they change the definition again.
Wish you the best.