Perspective Shift

Artificial empathy could open a new frontier in AI

Jun 16, 2021 | By Mindy Farabee | Photo Credit: Courtesy of Hod Lipson

EVA, a soft robot learning to mimic facial expressions, is one of many novel bio-inspired autonomous systems in the Creative Machines Lab headed by Lipson, the James and Sally Scapa Professor of Innovation in the Department of Mechanical Engineering.

Artificial intelligence is one thing; social intelligence is a very particular kind of thing. For people and robots to collaborate most effectively, collectively switching gears on the fly, intuiting next steps with no specific training, and obeying the laws of human nature, robots are going to have to pick up some new tricks.

Mechanical engineering professor Hod Lipson, who studies novel ways of teaching machines to learn, believes his team has hit on one promising approach: first, teach robots to empathize with each other.

Recently, his Creative Machines Lab (collaborating with Professor Carl M. Vondrick and their joint PhD student Boyuan Chen) conducted an experiment in which a robot was placed in a small pen after being programmed to move toward any green circle it spotted. What the robot didn't know was that sometimes there were actually two green circles present: one its camera could see and another concealed by a cardboard box. In each case, the robot behaved as programmed, moving toward the only circle within its line of sight.
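The actor's policy is simple enough to sketch. Below is a minimal, hypothetical rendition in Python with OpenCV, not the lab's actual code: threshold the camera frame for green, find the largest blob, and step toward its center. The color bounds, helper names, and step size are illustrative assumptions; the key point is that a circle occluded by the box never shows up in the camera's mask, so the actor cannot pursue it.

```python
import cv2
import numpy as np

def find_green_circle(frame_bgr):
    """Return the pixel center of the largest green region in view, or None.
    A circle hidden behind the cardboard box never appears in the frame,
    so the actor simply cannot react to it."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (40, 80, 80), (80, 255, 255))  # rough green band (assumed)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    m = cv2.moments(max(contours, key=cv2.contourArea))
    if m["m00"] == 0:
        return None
    return np.array([m["m10"] / m["m00"], m["m01"] / m["m00"]])

def step_toward(position, target, step_size=1.0):
    """Advance a fixed distance along the straight line to the target."""
    direction = target - position
    dist = np.linalg.norm(direction)
    if dist <= step_size:
        return target
    return position + step_size * direction / dist
```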

Meanwhile, a second, observing robot sat above, enjoying an unobstructed view. Without prior training or explicit knowledge of the first robot's handicap, the observer simply watched its partner putter around for two hours. By the end, the observing robot was able to predict the ambulating one's goal and path 98 out of 100 times across varying situations.
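The article doesn't spell out the observer's machinery, but one plausible construction is an image-to-image network: feed in the overhead frame, output a map of where the actor will go. The PyTorch sketch below is a hypothetical illustration of that idea; the layer sizes and the heatmap output are assumptions, not the published architecture.

```python
import torch
import torch.nn as nn

class ObserverNet(nn.Module):
    """Hypothetical observer: encode the bird's-eye frame, decode a
    heatmap of where the actor is likely to travel next. Layer sizes
    are illustrative; this is not the authors' published architecture."""

    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=4, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, kernel_size=4, stride=2, padding=1),
        )

    def forward(self, overhead_frame):
        # overhead_frame: (batch, 3, H, W) bird's-eye view of the pen.
        # Output: (batch, 1, H, W) logits over future actor positions,
        # in the same coordinates as the input, so goal and path can
        # be read straight off the predicted map.
        return self.decoder(self.encoder(overhead_frame))
```

Two hours of watching would supply the training pairs: an overhead frame as input, the actor's subsequently observed path rendered as a target map. Crucially, nothing tells such a network about the cardboard box; to predict accurately, it would have to learn that only circles visible from the actor's own vantage point influence its behavior.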

Empathy begins with being able to see the world from another person’s point of view, and that’s what our robot did.

Hod Lipson
James and Sally Scapa Professor of Innovation in the Department of Mechanical Engineering

The researchers concluded that for the observer to predict so accurately, it must have been able to understand, to some extent, what the world looked like from its partner’s perspective. That’s not trivial, says Lipson: “Empathy begins with being able to see the world from another person’s point of view, and that’s what our robot did.”

Green circles are a long way from complex human behaviors, but the researchers see in their results a glimmer of where we might be headed.

Social cooperation in humans evolved over millennia, side by side with the development of our gray matter. In fact, many believe our social nature, the drive to cooperate toward common goals, is a primary reason human brains acquired such enormous complexity. Of course, embedded there is the need for common values to steer group action toward the widest benefit. If computers are on the brink of understanding each other, that's one more reason for our big brains to work on designing the rules by which they'll engage with us.
