AI pioneer Yann LeCun kicked off an animated discussion today after telling the next generation of developers not to work on large language models (LLMs).
“This is in the hands of large companies, there’s nothing you can bring to the table,” LeCun said at VivaTech today in Paris. “You should work on next-gen AI systems that lift the limitations of LLMs.”
The comments from Meta’s chief AI scientist and NYU professor quickly drew a flurry of questions and sparked a conversation about the limitations of today’s LLMs.
When met with question marks and head-scratching, LeCun (sort of) elaborated on X (formerly Twitter): “I’m working on the next generation AI systems myself, not on LLMs. So technically, I’m telling you ‘compete with me,’ or rather, ‘work on the same thing as me, because that’s the way to go, and the [m]ore the merrier!’”
With no more specific examples offered, many X users questioned what “next-gen AI” means and what might be an alternative to LLMs.
Developers, data scientists and AI experts offered up a multitude of options on X threads and sub-threads: boundary-driven or discriminative AI, multi-tasking and multi-modality, categorical deep learning, energy-based models, more purposive small language models, niche use cases, custom fine-tuning and training, state-space models and hardware for embodied AI. Some also suggested exploring Kolmogorov-Arnold Networks (KANs), a new breakthrough in neural networking.
One user bullet-pointed five next-gen AI systems:
- Multimodal AI.
- Reasoning and general intelligence.
- Embodied AI and robotics.
- Unsupervised and self-supervised learning.
- Artificial general intelligence (AGI).
Another said that “any student should start with the basics,” including:
- Statistics and probability.
- Data wrangling, cleaning and transformation.
- Classical pattern recognition such as naive Bayes, decision trees, random forest and bagging.
- Artificial neural networks.
- Convolutional neural networks.
- Recurrent neural networks.
- Generative AI.
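To make the "classical pattern recognition" entry on that list concrete, here is a minimal sketch comparing two of the named techniques, naive Bayes and a random forest (a bagging ensemble), on a toy dataset. The dataset and model settings are illustrative choices, not anything prescribed by the commenter; it assumes scikit-learn is installed.

```python
# Illustrative only: compares naive Bayes with a random-forest (bagging)
# ensemble, two of the classical techniques named in the list above.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

# A small, well-known benchmark dataset (150 flower samples, 3 classes).
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

# Fit each classifier and report held-out accuracy.
for model in (GaussianNB(), RandomForestClassifier(n_estimators=100, random_state=0)):
    model.fit(X_train, y_train)
    print(type(model).__name__, round(model.score(X_test, y_test), 3))
```

Both models score well on a dataset this simple; the point is the shared fit/score workflow that underpins the rest of the list, from neural networks onward.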
Dissenters, on the other hand, pointed out that now is a perfect time for students and others to work on LLMs because the applications are still “barely tapped.” For instance, there’s still much to be learned when it comes to prompting, jailbreaking and accessibility.
Others, naturally, pointed to Meta’s own prolific LLM building and suggested that LeCun was subversively trying to stifle competition.
“When the head of AI at a big company says ‘don’t try and compete, there’s nothing you can bring to the table,’ it makes me want to compete,” another user drolly commented.
LLMs will never reach human-level intelligence
A champion of objective-driven AI and open-source systems, LeCun also told the Financial Times this week that LLMs have a limited grasp of logic and will not reach human-level intelligence.
They “do not understand the physical world, do not have persistent memory, cannot reason in any reasonable definition of the term and cannot plan . . . hierarchically,” he said.
Meta recently unveiled its Video Joint Embedding Predictive Architecture (V-JEPA), which can detect and understand highly detailed object interactions. The architecture is what the company calls the “next step toward Yann LeCun’s vision of advanced machine intelligence (AMI).”
Many share LeCun’s feelings about LLMs’ setbacks. The X account for AI chat app Faune called LeCun’s comments today an “awesome take,” as closed-loop systems have “massive limitations” when it comes to flexibility. “Whoever creates an AI with a prefrontal cortex and an ability to create information absorption through open-ended self-training will probably win a Nobel prize,” they asserted.
Others described the industry’s “overt fixation” on LLMs and called them “a dead end in achieving true progress.” Still more noted that LLMs are nothing more than a “connective tissue that groups systems together” quickly and efficiently like telephone switch operators, before passing off to the right AI.
Calling out old rivalries
LeCun has never been one to shrink from debate, of course. Many may remember the extensive, heated back-and-forths between him and fellow AI godfathers Geoffrey Hinton, Andrew Ng and Yoshua Bengio over AI’s existential risks (LeCun is in the “it’s overblown” camp).
At least one industry watcher called back to this drastic clash of opinions, pointing to a recent Geoffrey Hinton interview in which the British computer scientist advised going all-in on LLMs. Hinton has also argued that the AI brain is very close to the human brain.
“It’s interesting to see the fundamental disagreement here,” the user commented.
One that’s not likely to reconcile anytime soon.