
Brains Behind the Bots: Neuroscience’s Big Role in the Future of AI

Experts came together at Columbia to explore how brain science can shape the next generation of AI.

June 04, 2025
Bernadette Young

The Emerging Trends in AI workshop opened with an ambitious goal: to explore the evolving relationship between artificial intelligence and the fields of neuroscience, cognitive science, and the social sciences. 

Co-hosted May 5 to 6 by Columbia Engineering, the Simons Institute for the Theory of Computing, and the NSF Institute for Artificial and Natural Intelligence (ARNI), the two-day event marked a significant moment of cross-disciplinary exchange in AI research and application.

In his welcome address, Columbia Engineering Dean Shih-Fu Chang highlighted the opportunity for academia to lead in foundational AI innovation. 

"We see rapid progress in AI across industry," Chang said, "but what are the emerging areas where academia and the research community can make long-term, fundamental contributions — especially in resilience, trust, neuroscience, and data?"

Image: Columbia Engineering Dean Shih-Fu Chang. Credit: April Renae/Columbia Engineering

Richard Zemel, the Trianthe Dakolias Professor of Engineering and Applied Science in the Department of Computer Science and director of ARNI, expanded on that vision by explaining the unique mission of the institute, one of 25 NSF-funded AI institutes across the country, each pairing AI with a distinct field of study, from physics to agriculture. ARNI, he noted, is focused on the intersection of AI with neuroscience and cognitive science.

“Neuroscience and AI have developed in parallel for years, but in the last decade, both fields have exploded,” Zemel said. “Now is the perfect time to bring them together — to allow for real bidirectional collaboration where neuroscience can inform AI, and AI can push the boundaries of neuroscience.”

Zemel described how new brain-recording technologies have unlocked unprecedented views into neural activity, while advances in AI have produced large models capable of tasks once thought to be out of reach. ARNI aims to harness these parallel advances so that each field benefits the other. The workshop’s themes — the shared challenges of resilience and robustness in AI and neuroscience, and the use of synthetic data to accelerate discovery across scientific and social domains — reflect this integrative approach.

Over two days, experts explored how techniques from machine learning can illuminate our understanding of the brain and how neuroscience can inspire more robust, interpretable AI systems. They examined the practical and ethical dimensions of synthetic data alongside its potential to enhance research in fields like public policy and social science.

Bringing together voices from computer science, neuroscience, public health, and more, the workshop reflected a growing recognition: the future of AI will not be built in silos.

Image: Panelists for “Synthetic Data: Ethical and Practical Impacts on Research and Policy,” from left to right: Shafi Goldwasser, Costis Daskalakis, Tatsunori Hashimoto, Daniela Kaufer, Adam Klivans, Nikolaus Kriegeskorte, Yael Niv. Credit: April Renae/Columbia Engineering

What it takes to get synthetic data right

At the “Synthetic Data: Ethical and Practical Impacts on Research and Policy” panel, researchers discussed the promise and pitfalls of synthetic data, emphasizing its growing role in research when access to original datasets is limited by cost, privacy, or policy. 

Tijana Zrnic, Ram and Vijay Shriram Postdoctoral Fellow at Stanford University, questioned the value of synthetic datasets, arguing that they cannot be truly representative of real data: “In general, I really don’t believe in creating information out of thin air.” Zemel countered that curated synthetic data carries embedded expertise and can be a powerful tool when a dataset is large enough to support analysis at a granular level.

The panelists also cited differential privacy as a technique for producing safe and useful synthetic data. Ran Canetti, professor of computer science at Boston University, noted that using differentially private data “is good against overfitting by design.” The approach can help prevent problems like p-hacking while preserving privacy.
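The panel discussed differential privacy only at a high level; for readers who want the mechanics, here is a minimal illustrative sketch in Python of the classic Laplace mechanism. The function name, the epsilon value, and the toy dataset are hypothetical choices for illustration, not anything presented at the event. The idea is that noise calibrated to a query’s sensitivity bounds how much any single record can sway the released answer, which is the design property behind Canetti’s remark about overfitting.

    import numpy as np

    def laplace_mechanism(true_value, sensitivity, epsilon):
        # Release a numeric query answer with Laplace noise calibrated so
        # that no single record can shift the output much in expectation.
        scale = sensitivity / epsilon
        return true_value + np.random.laplace(loc=0.0, scale=scale)

    # Toy example: privately release the mean of 1,000 values in [0, 1].
    data = np.random.rand(1000)
    # The mean of n values bounded in [0, 1] has sensitivity 1/n.
    private_mean = laplace_mechanism(data.mean(), sensitivity=1.0 / len(data), epsilon=0.5)
    print(f"true mean: {data.mean():.4f}, private mean: {private_mean:.4f}")

Smaller epsilon means more noise and stronger privacy; the same accounting is what lets differentially private synthetic data double as a guard against overfitting and repeated-querying abuses like p-hacking.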

The conversation also turned to long-term challenges, particularly the risks of bias and manipulation when AI systems adapt to user preferences over time. The panel warned that synthetic data could be misused to reinforce misleading narratives if not properly benchmarked. 

Roberto Rigobon, professor of applied economics at MIT, compared the potential influence of synthetic data to the rise of social platforms whose business models are built on clicks, pointing to how system design can prioritize engagement over accuracy.

“Now [users] don’t want to learn the truth — [they] just want to be satisfied,” he said.

The group stressed that synthetic data, while powerful, must be used ethically, transparently, and with a deep understanding of its limitations. The panelists agreed that common standards should be established, that data and results should be easily replicable, and that models used to interpret the data should be open source.

Image: Shafi Goldwasser, research director of the Resilience Research Pod at the Simons Institute for the Theory of Computing and the C. Lester Hogan Professor in Electrical Engineering and Computer Sciences at UC Berkeley. Credit: April Renae/Columbia Engineering

Turning to neuroscience to strengthen machine learning

Several experts drew attention to the deep divide between neuroscience and artificial intelligence, noting how one aims to heal and understand the brain while the other prioritizes performance, utility, and commercial success. This tension shaped the panel discussion, “ML & Neuroscience: Toward More Resilient & Robust Systems.”

The panelists emphasized that experimental alignment across the two fields is most valuable when the scientific questions are shared, rather than when one field simply mimics the other’s tasks. They also discussed the danger of assuming there is only one “correct” way to function, whether for humans or machines, cautioning against a narrow definition of optimality.

Nikolaus Kriegeskorte, professor of psychology and neuroscience, director of Cognitive Imaging at the Mortimer B. Zuckerman Mind Brain Behavior Institute, and an affiliated member of electrical engineering, made the point that AI models can act as “existence proofs” that push neuroscientific understanding forward — even when those models aren’t biologically plausible.

The conversation turned to pressing ethical concerns, particularly around the misuse of AI in mental health and personal prediction. 

Adam Klivans, professor of computer science at the University of Texas at Austin, warned that companies’ financial incentives can drive careless or exploitative applications. The panelists expressed concern about discrimination, especially in hiring and health care, and urged caution as AI systems inherit biases from the data they’re trained on. Some noted that traits seen as disorders might be adaptations to past environments, and that reducing human variability to neat categories could be both scientifically flawed and socially harmful. The session ultimately called for diverse, ethical approaches that respect both individual differences and societal impact.

Academia’s role in AI’s next chapter

The workshop made clear that if we want to build AI that’s resilient, interpretable, and socially responsible, we need to look beyond engineering alone. As models grow in complexity and influence, breakthroughs will increasingly come from the spaces between disciplines — from collaborations that draw equally on neuroscience, public policy, and computer science.

What emerged over the course of the two days wasn’t just a list of technical challenges or novel research ideas. It was a shared conviction that the future of AI will be shaped by how well we understand ourselves. With academia as a uniquely fertile ground for this kind of cross-pollination, initiatives like ARNI are paving the way for deeper insight into both minds and machines.

“Academia still has a real edge when it comes to fostering collaboration between computer scientists and neuroscientists,” said Shafi Goldwasser, event co-organizer and research director for the Resilience Research Pod at the Simons Institute for the Theory of Computing and professor of electrical engineering and of computer sciences at UC Berkeley. “I doubt half of OpenAI is going to be neuroscientists, but in universities, that kind of interdisciplinary work is not only possible — it’s something we should be actively capitalizing on.”


Lead Photo Caption: Richard Zemel, Trianthe Dakolias Professor of Engineering and Applied Science at Columbia Engineering; director of ARNI
Lead Photo Credit: David Dini/Columbia Engineering