Research
Meta’s Yann LeCun Asks How AIs Will Match — and Exceed — Human-Level Intelligence
In the new Lecture Series in AI, the pioneer explained his vision for the future of this revolutionary technology.
More than 1,000 students, researchers, and members of the community gathered in Lerner Hall on Oct. 18 to hear Yann LeCun deliver the second talk in Columbia Engineering’s Lecture Series in AI.
LeCun, who is chief AI scientist at Meta and a professor at New York University, delivered a presentation titled “How Could Machines Reach Human-level Intelligence: Towards AI systems that can learn, remember, reason, plan, have common sense, yet are steerable and safe.”
In his welcoming remarks, Dean Shih-Fu Chang noted that LeCun’s “particularly intriguing” talk had sold out just three minutes after tickets became available.
In introducing LeCun, Vice Dean for Computing and AI Vishal Misra told the crowd how the computer scientist had “stunned the world” in 1989 when he demonstrated a revolutionary system that could automatically recognize handwritten digits. LeCun is now regarded as one of the “godfathers” of AI for his pioneering work in deep learning. His advancements in convolutional neural networks and his contributions to neural network training methods underlie many modern AI systems. LeCun (along with Geoffrey Hinton and Yoshua Bengio) was awarded the 2018 Turing Award for contributions to deep learning.
During his talk, LeCun offered a candid perspective on the state of AI and future directions, expanding on and updating a 2022 position paper.
Moving beyond LLMs
LeCun began his talk by expressing skepticism about the term “Artificial General Intelligence,” which he described as misleading.
“I hate that term,” LeCun remarked. “Human intelligence is not generalized at all. Humans are highly specialized. All the problems we can fathom or imagine are problems we can fathom or imagine.” Instead, he suggested using the term “Advanced Machine Intelligence,” which he noted has been adopted within Meta.
LeCun believes that achieving human-level intelligence in machines is possible. He underscored that the goal is not only desirable but will eventually lead to highly practical applications, such as AI-powered wearable devices that can identify plants, translate languages, and help navigate the world seamlessly.
LeCun also critiqued the current focus on large language models (LLMs), referring to them as "autoregressive LLMs." He acknowledged their ability to produce coherent text but expressed doubt about their capacity to reach human-level intelligence.
“Existing systems don’t understand the world as well as a housecat,” he said, emphasizing that these models are trained to predict the next word based on prior text — an approach he referred to as “kind of a hack.” According to LeCun, current LLMs lack the capability to plan and interact meaningfully with the physical world.
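The next-word objective LeCun describes can be sketched in a few lines. In this toy illustration, a hard-coded bigram table stands in for the learned probability distribution of a real LLM; the table and its contents are invented for the example, not anything from Meta:

```python
# Toy sketch of autoregressive generation: at each step, pick the most
# likely next token given the tokens so far. A real LLM replaces this
# hand-made bigram table with a neural network trained on vast text.
bigram_probs = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "sat": {"down": 1.0},
}

def generate(prompt, max_new_tokens=3):
    """Greedy autoregressive decoding: repeatedly append the argmax next token."""
    tokens = prompt.split()
    for _ in range(max_new_tokens):
        dist = bigram_probs.get(tokens[-1])
        if dist is None:  # no known continuation: stop generating
            break
        tokens.append(max(dist, key=dist.get))
    return " ".join(tokens)

print(generate("the"))  # prints "the cat sat down"
```

The point of the toy: the model only ever conditions on the text so far, which is why LeCun argues the objective says nothing about modeling the physical world.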
Toward AI with a model of the world
In describing the types of systems that he is developing with students and colleagues, LeCun emphasized that a system must understand its context and the rules that govern it, which he referred to as a “world model.”
“The role of a world model is to predict what the outcome of a series of actions is going to be,” he said. “Predicting the outcome of a series of actions is what allows us to reason and plan.” He elaborated on his vision for objective-driven AI systems, describing them as systems capable of creating detailed models of the world that go beyond text-based abstractions.
LeCun explained that the goal is to develop systems that can plan down to the most basic levels — something many non-human animals can do, but which LLMs cannot. He pointed to advances in joint embedding predictive architecture as a promising pathway and provided updates on progress toward creating more robust, world-model-driven AI systems.
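The planning loop LeCun describes can be illustrated with a toy one-dimensional world: roll each candidate action sequence through the world model to predict its outcome, then pick the sequence whose predicted final state best satisfies the objective. This is my own sketch with invented dynamics and cost, not a JEPA model:

```python
import itertools

def world_model(state, action):
    """Toy dynamics: state is a 1-D position; actions nudge it left or right."""
    return state + {"left": -1, "right": +1, "stay": 0}[action]

def cost(state, goal):
    """Objective: distance of the predicted final state from the goal."""
    return abs(state - goal)

def plan(start, goal, horizon=3, actions=("left", "right", "stay")):
    """Exhaustively roll out every action sequence through the world model
    (predicting, not acting) and return the lowest-cost sequence."""
    best_seq, best_cost = None, float("inf")
    for seq in itertools.product(actions, repeat=horizon):
        state = start
        for a in seq:
            state = world_model(state, a)  # predict the next state
        c = cost(state, goal)
        if c < best_cost:
            best_seq, best_cost = seq, c
    return best_seq

print(plan(start=0, goal=3))  # prints ('right', 'right', 'right')
```

The exhaustive search is only workable in a toy world; the research question LeCun poses is how to learn the `world_model` function itself from observation so that planning scales to the real world.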
He contrasted human learning with the limitations of today’s AI, noting that while LLMs can pass complex exams like the bar, they still can’t perform simple, practical tasks that a child could accomplish. “Any 17-year-old can learn to drive a car with 20 hours of practice, but we still don’t have level-5 automation,” he said, adding that existing autonomous vehicles rely on approaches that, in his words, “cheat.”
LeCun concluded his talk with a call for open-source AI development.
“I think the main risk of AI in the future is if AI is controlled by a handful of companies,” he said.
Lead Photo Credit: Brandon Vallejo/Columbia Engineering