
For Eric Xing, Purpose Should Drive Intelligence
The president of Mohamed bin Zayed University of Artificial Intelligence offered a fresh take on the quest for artificial general intelligence.
Eric Xing, president and University Professor at Mohamed bin Zayed University of Artificial Intelligence, delivered the most recent installment of Columbia Engineering’s Lecture Series in AI. Xing’s talk was titled “Toward General and Purposeful Reasoning in the Real World Beyond Lingual Intelligence.”
“This is the last lecture of the academic year, and we’re ending with a bang,” said Columbia Engineering Vice Dean for Computing and AI Vishal Misra.
In a personal introduction, Executive Vice President for Research at Columbia University and Professor of Computer Science Jeannette Wing, who recruited Xing to his first faculty role at Carnegie Mellon University, called him “a phenomenal researcher in AI who’s moved the field in many ways.”
Professor and Chair of the Department of Statistics at Columbia, Tian Zheng, characterized Xing as “a strong voice in bringing fields together.”
“Under Eric’s leadership, MBZUAI has been a fast-rising global leader in AI research and education,” she said.

Reframing the Challenge
Xing opened his talk by reminding the audience of a familiar conflict in AI research: Do large language models (LLMs) contain the makings of artificial general intelligence (AGI), or does that lofty goal require approaches that move beyond technologies that enable today’s leading AI systems?
In a prior installment of the Lecture Series in AI, Yann LeCun forcefully rejected the possibility that LLMs would ever power systems that truly understand the world. For Xing, this isn’t the right question.
“This conversation is interesting but misguided because it’s overly focused on intelligence, which is a subjective state rather than a practical measurement,” he explained. “Consider someone who excels as a musician but struggles with mathematics. Are they not intelligent?”
Instead, researchers should focus on the capabilities of a system.
Xing highlighted that while LLMs can effectively read and write, language alone often falls short in describing complex scenarios. For example, human language is poorly suited to describing the details of a streetscape for the purposes of an autonomous vehicle, or the viscosity of water.
“I predict that LLMs will eventually be able to provide satisfactory answers to anything describable through language,” Xing said. For situations that don’t fall into that category, state-of-the-art systems are still far from approaching AGI. “Your cat is probably smarter than an LLM simply because it has some understanding of space and time.”

A New Approach to AGI
Xing dedicated the bulk of his talk to proposing a new architecture for AI systems tuned for general and purposeful reasoning. Based on what he calls a Physical, Agentic, Nested (PAN) world model and relying heavily on simulation, Xing's approach is designed to achieve complex and practical objectives.
Drawing inspiration from Frank Herbert’s science fiction series "Dune," Xing invoked the concept of Kwisatz Haderach, a being who inherits the cumulative experiences of all its ancestors and who can simulate the outcomes of all possible scenarios.
“Our aim should be goal-oriented agents capable of operating within environments to achieve objectives in a ‘Dune-like’ manner,” Xing argued. He emphasized the importance of constructing robust world models that facilitate extensive simulations before trying to implement action in the real world.
“You don’t want to travel all the way to Mars and then start planning,” he said. “You want to rely on your simulation first.”
Lead Photo Caption: Guest speaker Eric Xing, president at Mohamed bin Zayed University of Artificial Intelligence
Lead Photo Credit: April Renae/Columbia Engineering