
Exploring the Past — and Future — of AI

OpenAI’s Shuchao Bi sees a bright future for machine learning.

June 26, 2025
Grant Currin

How did we arrive at the present moment in AI? What are today’s biggest challenges? What does the future hold?

These are the questions Shuchao Bi considered in his talk, “Advancing the Frontier of Silicon Intelligence: the Past, Open Problems, and the Future,” delivered to a packed auditorium June 12 at Columbia Engineering. Bi, who worked as an engineering director at Google and co-founded YouTube Shorts, is currently a researcher at OpenAI.

Zhou Yu, an associate professor of computer science at Columbia Engineering, introduced Bi, who was her classmate at Zhejiang University. She noted that Bi was “the embodiment” of their alma mater’s ethos of blending practical utility with genuine innovation.

 

Shuchao Bi pictured with Zhou Yu (left), associate professor of computer science at Columbia Engineering, who co-hosted Bi’s distinguished lecture. Credit: David Dini/Columbia Engineering

How we got to now

Bi opened with a brisk tour of AI’s intellectual roots, beginning with Alan Turing’s vision of simulating an infant brain and educating it into intelligence. 

“That’s exactly what machine learning is,” Bi told the audience. “You build a system with minimal human priors and let it learn from the data.”

He traced the rise of self-supervised learning, deep networks, and landmark architectures like transformers. Along the way, Bi illustrated how scaling compute and data (not hand-coded knowledge) unlocked leaps in performance. 

“With sufficient data, neural networks surpassed human-engineered algorithms, wiping out decades of manual feature design,” he said, before surveying several of the most influential recent papers in AI, on topics like generative adversarial networks and deep residual networks.

Bi celebrated transformers as a turning point that elegantly resolved prior limitations. “This is the most important architecture of the last decade,” he said. “It’s massively parallelizable, data-efficient, and it scales beautifully.”

Looking ahead

Bi was equally candid about the open problems. Despite dazzling capabilities, today's models still fall short of Artificial General Intelligence (AGI). 

“AGI is not just about solving math problems; it’s about generalizing across domains, adapting to new tasks, and interacting with environments,” he said. “That’s where reinforcement learning and curiosity-driven exploration come in.”

To reach AGI, Bi argued, we’ll need more than scale. “The scaling laws aren’t failing; the data is,” he said. “We need fundamentally better data, especially utility-aligned data, and more efficient learning algorithms.”

He emphasized the role of exploration in discovery. “Human science is built on inspiration and iteration,” he said. “Models that can search, explore, and generate new hypotheses will be key.”

Bi concluded on a characteristically thoughtful note. “Every few months, we see something once thought impossible become reality,” he said. “That should make us question all the things we still think are impossible.”


Lead Photo Caption: OpenAI’s Shuchao Bi delivered a distinguished lecture at Columbia Engineering on June 12.
Lead Photo Credit: David Dini/Columbia Engineering