" …
What exactly is AMI building? The short answer is world models, a category of AI system that LeCun has been arguing for, and working on, for years. The longer answer requires understanding why he thinks the industry has taken a wrong turn.
Large language models learn by predicting which word comes next in a sequence. Trained on vast quantities of human-generated text, they have produced remarkable results: ChatGPT, Claude, and Gemini can generate fluent, plausible language across an enormous range of subjects. But LeCun has spent years arguing, loudly and repeatedly, that this approach has fundamental limits.
His alternative is JEPA: the Joint Embedding Predictive Architecture, a framework he first proposed in 2022. Rather than predicting the future state of the world in pixel-perfect or word-by-word detail (the approach that makes generative AI both powerful and prone to hallucination), JEPA learns abstract representations of how the world works, ignoring unpredictable surface detail. The idea is to build systems that understand physical reality the way humans and animals do: not through language, but through embodied experience."
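The distinction the article draws can be sketched in a toy numpy example. This is not the actual I-JEPA architecture (which uses Vision Transformers and an EMA target encoder); it is a hypothetical minimal illustration of the difference between the two objectives: a generative-style loss that predicts a hidden target's raw values, versus a JEPA-style loss that predicts the target's embedding. All weights and shapes here are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "world": a signal split into a visible context part and a hidden target part.
signal = rng.normal(size=16)
context, target = signal[:8], signal[8:]

def encode(x, W):
    # Tiny stand-in encoder: linear map + tanh, in place of a deep network.
    return np.tanh(W @ x)

W_ctx = rng.normal(size=(4, 8)) * 0.5   # context encoder weights (hypothetical)
W_tgt = rng.normal(size=(4, 8)) * 0.5   # target encoder weights (hypothetical)
W_pred = rng.normal(size=(4, 4)) * 0.5  # latent predictor weights (hypothetical)

# Generative-style objective: reconstruct the target's raw values
# (the pixel-perfect / word-by-word regime the article describes).
pixel_pred = (rng.normal(size=(8, 8)) * 0.5) @ context
pixel_loss = np.mean((pixel_pred - target) ** 2)

# JEPA-style objective: predict the target's *embedding*, so unpredictable
# surface detail is discarded by the encoder and never has to be modeled.
z_ctx = encode(context, W_ctx)
z_tgt = encode(target, W_tgt)
latent_pred = W_pred @ z_ctx
latent_loss = np.mean((latent_pred - z_tgt) ** 2)

print(f"pixel-space loss:  {pixel_loss:.3f}")
print(f"latent-space loss: {latent_loss:.3f}")
```

The design point is only that the JEPA loss lives in representation space: both encoders map into a low-dimensional embedding, and the predictor is judged there rather than against raw observations.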



Good luck getting your model to learn how to code through physical experience instead of through text.
I’m skeptical, but it makes a lot more sense. You don’t just “learn to code.” Writing the text is the easy part. It’s about solving problems. This is next to impossible to do reasonably without actually understanding what the solution needs to do and what capabilities you have to do it. That’s why the LLM method has produced such shit code. It’s just reproducing text. It doesn’t actually understand the problem or what it can use to get it done.
Tell it to LeCun. He won the Turing Award. I figure he knows what he’s doing. Let him cook, I sez.
PS: I didn’t downvote you. It’s good to be skeptical.
I dunno, the I-JEPA paper only dealt with image classification, and it looks like it isn’t scaling with larger model sizes the way the other techniques do.
Besides, Meta was one of the biggest failures in AI model building while he was there. Not exactly a confidence booster.
I’m extremely skeptical if he’s truly raising money off name recognition alone rather than a real demo of a frontier model that just needs scaling.
Yep. And per the article’s conclusion -
“…The question is whether being right about the problem is the same as being right about the solution.”