Drafting the Blueprint for AI at Columbia Engineering

In this conversation, Dean Shih-Fu Chang and Vice Dean Vishal Misra discuss how the School is driving and responding to this exciting moment in the development of artificial intelligence.

By Grant Currin


Artificial intelligence isn’t new at Columbia Engineering. Our researchers have been probing the foundations of AI and implementing machine learning algorithms for decades. Today, faculty in our computer science and related departments are world leaders in understanding AI systems, and researchers across the School are applying models to domain areas that range from simulating the behavior of atoms to predicting Earth’s climate.

In this interview, Dean Shih-Fu Chang and Vishal Misra, vice dean for computing and AI, reflect on the recent explosion in interest and investment in AI systems as well as concerns about how they will impact society.

It’s been two years since ChatGPT became widely available. As leading AI researchers, have you been surprised by its impact?

Vishal Misra: Not really. From the moment it came out, people were asking it to write poetry, code, or work with data. It could do anything that people wanted, and the fact that it was free meant everybody was trying it. It’s become the fastest-growing software application in history, whether enterprise or consumer. Since I had been working with these models for some time, I believed in their power. The form factor that OpenAI created — a chatbot based on a large language model — appealed to people and felt very natural.

Shih-Fu Chang: I’m not surprised by the impact either. What did surprise me was the speed of progress. In the past, a new generation of software took years to develop. Now, there’s a new generation with exponentially greater capabilities every few months. It’s amazing to see how quickly things have changed in such a short time. The idea that AI could not only answer questions one by one but remember context, engage in longer conversations, and mix different modalities — text, image, audio — is incredible. And it’s doing all of this faster, cheaper, and with smaller models, which is the dream of AI researchers.

Training a foundation model is so expensive that only a handful of companies can afford to do it. How do universities fit into this resource-intensive landscape?

SC: We can’t compete with companies on the sheer scale of training resources, but that’s not where our strength lies. What universities can do is lead in other aspects. We go deeper into understanding the foundation of AI, explaining why it works (or why it doesn’t) and why it behaves in certain ways. This concept of explainable AI has been around for a while, but it is particularly crucial now because of how fast AI is advancing. Universities should also focus on questions that industry isn’t currently prioritizing. For example: How do you develop machine learning that learns with fewer examples? How do you ensure that AI behaves ethically and responsibly?

[Image: Vishal Misra]
Vishal Misra is vice dean of computing and artificial intelligence and professor of computer science.

VM: What we can do is really understand what makes these models tick and come up with better architectures. Many current models are trained using brute force: more data, bigger models. But do you need a model that understands both quantum gravity and Python code? Probably not. In academia, we’re in a position to create more efficient architectures that are tailored to specific tasks. The challenge is finding the right set of training data and the right model size to solve specific problems effectively. The idea is to ensure that AI is targeted and not just an all-encompassing giant.

How is Columbia Engineering investing in AI infrastructure to support this kind of work?

SC: We’re investing heavily in high-performance computing infrastructure for a group of faculty to use in their work on foundational AI research, experimenting with new architectures, algorithms, and data ingestion techniques. This GPU cluster will be located on the Morningside Heights campus and represents an investment of about $10 million. In addition, our faculty are playing an active role in developing the statewide Empire AI GPU supercluster, a $400 million multi-institution public-private initiative launched in April 2024 by New York Gov. Kathy Hochul, in support of large-scale academic AI research.

How does Columbia Engineering contribute to interdisciplinary AI applications, such as in finance, health, or climate science?

SC: Universities lead in foundational technology and theory. But to have a real impact, you need to work with experts in different disciplines. Columbia Engineering focuses on combining our computational strengths with practical applications. In finance, for example, we collaborate with Columbia Business School on risk assessment, analytics, and logistics optimization. That is, how do we use AI to analyze financial information, summarize reports, and make predictions? In health care, we work closely with the medical school and New York’s health care industry to develop AI-based diagnosis and treatment technologies. Climate prediction is another area where Columbia is particularly strong. We have one of the best institutions in the country for combining climate science and artificial intelligence.

[Image: Shih-Fu Chang]
Shih-Fu Chang is dean of Columbia Engineering, the Morris A. and Alma Schapiro Professor of Engineering, and professor of electrical engineering and computer science.

VM: Sports is another example. A few years ago, we realized that many faculty members were independently working on sports-related AI projects, whether in mechanical engineering, computer science, or biomedical fields. With funding from our industry partner, Dream Sports, we recently launched the Columbia-Dream Sports AI Innovation Center. In September, we held a symposium featuring leading academics as well as AI experts from the NBA and New York Yankees. Bringing people from all over the country together in one place made a huge difference and is sure to spark many new collaborations.

Much of the discussion around AI in education centers on cheating. What role should this technology play in teaching and learning?

VM: One of the first things I do in my undergrad class is ask ChatGPT to solve an old homework problem. I ask students to find and correct the subtle mistakes in the answers generated by ChatGPT. This not only helps them understand the material — it teaches them not to blindly trust these models. Since these models are here to stay, the question becomes, how do we use them to improve learning rather than hinder it? One example is a course assistant chatbot that we’re developing. It’s designed to guide students toward the answer rather than giving it outright, simulating how a human teaching assistant would respond. It will offer hints, point out mistakes, and direct students to course material. We’re working with the Center for Teaching and Learning to test it, but early indications show that it’s helping students learn more.
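The interview doesn’t detail how the course assistant is built, but the guide-rather-than-answer behavior can be illustrated with a short sketch: a hint-first system prompt layered on a general-purpose chat model. The code below assumes the OpenAI Python client; the model name, prompt wording, and course-notes placeholder are illustrative only, not the actual Columbia tool.

```python
# Minimal sketch of a hint-first course assistant, assuming the OpenAI Python
# client (pip install openai) and an API key in the OPENAI_API_KEY environment
# variable. The prompt, model choice, and course_notes placeholder are
# illustrative, not the implementation described in the interview.
from openai import OpenAI

client = OpenAI()

TUTOR_PROMPT = """You are a course assistant. Never give the final answer.
Instead: point out mistakes in the student's reasoning, offer one hint at a
time, and refer the student to the relevant part of the course notes below.

Course notes:
{course_notes}"""

def ask_tutor(question: str, course_notes: str) -> str:
    """Return a hint-style response rather than a worked solution."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat-completion model would work here
        messages=[
            {"role": "system", "content": TUTOR_PROMPT.format(course_notes=course_notes)},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    notes = "Dijkstra's algorithm assumes non-negative edge weights."
    print(ask_tutor("Why does my shortest-path code fail on negative edges?", notes))
```

Grounding the hints in the actual course notes, rather than the model’s general knowledge, is one simple way to approximate how a human teaching assistant would point students back to the material.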

SC: Another critical challenge is for educators to adapt. If we continue with the status quo — assigning the same homework or asking students to simply repeat facts — then, yes, students will turn to AI for answers. But if we ask students to think deeper, to engage in discussions, and to critique each other’s work, then AI becomes a tool. It’s necessary to create assignments and classroom activities that require a level of interaction and dynamic thinking that AI can’t replicate.

What new skills are you trying to impart to students now that AI tools are more prevalent?

SC: When Nvidia CEO Jensen Huang was here, someone asked if he was worried that AI models would take away human jobs in the next few years. He said you don’t have to worry about AI taking away your job. You should worry about whether the person sitting next to you is going to learn how to use AI and take away your job. Students need to learn to embrace AI, learn how it works, and understand when it makes mistakes. They should learn to integrate AI into their learning process and workflows. The human’s role is to guide thinking. What if boundary conditions change? What if the initial prompt is changed? We teach students critical thinking — to consider different hypotheses, to challenge assumptions, to investigate outcomes.

[Image: Dean Chang and Vice Dean Misra]

VM: Critical thinking is the key skill we hope to reinforce. AI will make certain tasks easier, but it will push us to explore new directions and avenues we haven’t thought of before. When the camera was invented, it freed up a lot of artists who were doing portraits or landscapes. Their minds started going into Cubism and Impressionism — they started to explore abstract art. You’re not going to lose your programming job because of AI, but you’ll be able to use AI to do your programming job better or do new kinds of things. AI will lead us in new directions, but it’s very hard to predict where we’ll go. We just have to make sure we go in that direction thoughtfully and ethically.

How do you keep up with the rapid advancements in AI, and how do you advise others to stay informed?

VM: You have to read a lot. There are thought leaders who publish blogs and insights online. Platforms like X (I still call it Twitter) are good for keeping up with conversations. Like Dean Chang said, new models are coming out rapidly, so you need to keep up with what all of the major AI companies are doing. If you’re really technical, you can follow conferences like ICML or NeurIPS.

SC: We are privileged to be in New York City and in a university setting. Not only do we have one of the best AI research groups in the world — we’re also exposed to AI’s intersection with experts in every academic discipline and every industry, including finance, health care, media, and fashion. Attending events and listening to podcasts with thought leaders are very helpful.

We’ve just launched the Lecture Series in AI. Keeping up with those talks will be a wonderful way for our students, alumni, and community to stay up to date. Above all, it’s important to learn how to use these systems to augment human ability. AI and machine learning are very well suited to fast thinking. When students ask me questions like, “How do I compete as a programmer?”, I tell them to compete in slower thinking, not faster thinking. That ability is a hallmark of the unique engineering education we provide at Columbia Engineering and the broad-based liberal arts education that our students receive in their curriculum across the University.


Photos by Chris Taggart

Lecture Series in AI

Meta’s Yann LeCun Asks How AIs will Match — and Exceed — Human-level Intelligence
In the new Lecture Series in AI, the pioneer explained his vision for the future of this revolutionary technology.

The Power of AI for Climate Modeling
A talk by Pierre Gentine kicks off a new initiative from Columbia Engineering, the Lecture Series in AI.