
Team Led by Elias Bareinboim Wins $5M NSF Grant to Transform AI Decision-making

Multi-institutional team will use causal modeling techniques to build AI systems that better communicate with people and react to unforeseen circumstances.

November 27, 2023

As our world moves closer to an AI-based economy, a growing number of decisions once made by people are being delegated to automated systems, a trend that will likely accelerate in the coming years. But these systems often have problems: they may learn societal biases we don’t want them to have and discriminate on the basis of gender, race, religion, or other sensitive attributes. Algorithmic fairness has become an increasingly critical topic of discussion throughout the AI community and society more broadly.

A multi-institutional team led by Columbia Engineering has won a $5 million National Science Foundation (NSF) grant to address these challenges, transforming AI decision-making by building more efficient, explainable, and transparent decision-support systems. Decision-making has been studied in AI almost since the field’s inception, and today’s systems can scale up to millions of variables. Still, these systems haven’t been understood through the lens of causal inference, which accounts for other dimensions of the process, such as unobserved confounding and other deviations from idealized data collection. The goal of this grant is to integrate traditional AI decision-making with causal modeling techniques, creating AI systems that can better communicate with people and react to unforeseen circumstances.

“We will marry the framework of structural causal models with the leading approaches for decision-making in AI, including model-based planning with Markov Decision Processes and their extensions, reinforcement learning, and graphical models,” said the team’s leader, Elias Bareinboim, an associate professor in the Department of Computer Science and a leading expert in causal inference. He continued: “We expect to connect principles of causality and the explanatory power of scientific methodologies with the scalability of modern AI methods, moving towards more robust AI systems that take causality into account.”

Current AI systems are driven by data, often combined with probabilistic and statistical algorithms and other tools. But statistical associations can’t always predict what will happen when the environment changes or external interventions occur. Systems need to understand the often complex, dynamic, and unknown collection of causal mechanisms underlying their environment. Bareinboim’s team includes researchers from the University of Massachusetts Amherst; the University of Southern California; the University of California, Irvine; the University of California, Los Angeles; and Iowa State University.
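
The gap between seeing and doing can be made concrete with a small simulation. The following is a minimal, hypothetical sketch, not the team’s actual framework: the model, coefficients, and variable names are illustrative assumptions, chosen only to show how an unobserved confounder can make the association recorded in data diverge from the effect of actually intervening.

```python
# Toy structural causal model: an unobserved confounder U influences
# both the treatment X and the outcome Y. All quantities are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

u = rng.normal(size=n)                            # unobserved confounder
x = (u + rng.normal(size=n) > 0).astype(float)    # "natural" treatment assignment, driven by U
y = 2.0 * x - 1.5 * u + rng.normal(size=n)        # true causal effect of X on Y is +2.0

# Observational contrast: what the recorded data alone suggest.
obs = y[x == 1].mean() - y[x == 0].mean()

# Interventional contrast: simulate do(X=1) and do(X=0), which severs the U -> X arrow.
y_do1 = 2.0 * 1.0 - 1.5 * u + rng.normal(size=n)
y_do0 = 2.0 * 0.0 - 1.5 * u + rng.normal(size=n)
do = y_do1.mean() - y_do0.mean()

print(f"observational difference: {obs:.2f}")     # biased by U, roughly 0.3
print(f"interventional difference: {do:.2f}")     # recovers the true effect, 2.0
```

Under this toy model, the observed difference comes out near 0.3, while the simulated intervention recovers the true effect of 2.0; closing exactly this kind of gap between statistical association and causal effect is what the project’s causal decision-making agenda is about.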

The project aims to develop the foundations for the next generation of AI systems and then to apply these foundational results in practice. Specifically, the project focuses on two real-world applications, one in public health and another in robotics. The public health thrust, in collaboration with the Mailman School of Public Health, aims to address the challenge of providing care for people with mental illness through more personalized and precise interventions. The robotics thrust will apply causal decision-making to the problem of robot navigation in complex, multi-agent environments, with applications such as self-driving vehicles, urban drone flight, and mobile service robots in warehouses, hospitals, airports, and offices. For this project, the team plans to build a mobile object manipulator to study the challenges robots face in “walking among” humans in tasks where robot autonomy needs to be safe, enabling, and helpful.

Growing societal concern about automation and AI’s potential influence on the world is at the heart of this project on AI decision-making. Bareinboim shared his take on the issue: “On the one hand, based on the current generation of AI systems and the underlying theories, even though I am usually optimistic, I don’t think the concerns are far-fetched. Most systems are black boxes, and not even the AI engineers who build them have any clue why the AI acts the way it does. In other words, it’s reasonable to be concerned about the implications and the potential loss of control.”

This lack of understanding is undesirable because allowing AI to make decisions and influence society without comprehending the principles behind its choices is unscientific. No one puts 200 people inside an airplane and tries, by trial and error, to make it fly. Researchers needed a solid understanding of physics and aeronautical engineering to build airplanes that work in the first place, and then to make them safer and more efficient over time. Bareinboim noted that “this is a great example of a beautiful interplay between serious scientists and engineers, working together over long periods of time to build something highly complex. I believe the science of intelligence design is even a bit more complicated than that, and it shouldn’t, therefore, be done purely by trial and error.”

The goal of this NSF-funded initiative is to advance the science of causal artificial intelligence, the branch of AI that brings causal principles found throughout the sciences into AI systems. “On the other hand,” Bareinboim explained, “we humans understand the world through causal lenses. We build scientific theories using our intelligence and correct them through data. If we can create AI systems aligned with causal, or scientific, principles, then we will be making a major advance in building a new generation of powerful AI tools for developing autonomous agents and decision-support systems that will communicate with humans and be safer and more trustworthy.”
