Faculty Tech Talk: Smart Machines and the Road to Driverless Cars

Apr 17 2018 | By Jesse Adams | Video: Columbia Video Network

Have recent fatal accidents among self-driving cars hit the brakes on the rise of autonomous vehicles? Or will machine learning quickly transcend current limitations to pave the way for an imminently driverless future?

Two experts at the forefront of smart machines, Professors Hod Lipson and Matei Ciocarlie, sat down with Dean Mary C. Boyce and a crowd of students at Carleton Commons on April 2 to explore the profound challenges of creating machines and artificial intelligence able to perform complex activities far beyond controlled laboratory settings. It was the third in Columbia Engineering’s new series of faculty tech talks connecting students with campus researchers tackling extraordinary problems.

Lipson, a professor of mechanical engineering and data science, directs the Creative Machines Lab, which pioneers new ways for machines to create and strives to bring biologically inspired approaches to computers and robots; he is also co-author of the 2016 book Driverless: Intelligent Cars and the Road Ahead. Ciocarlie, an assistant professor of mechanical engineering with affiliated appointments in computer science and the Data Science Institute, heads the Robotic Manipulation and Mobility Lab, where he helps robots acquire the fine motor skills to one day interact with the world as skillfully as biological organisms.

Roboticists Hod Lipson and Matei Ciocarlie joined Dean Mary C. Boyce to discuss how to move driverless cars and other smart machines beyond controlled laboratory settings.

The professors agreed that daunting challenges stand in the way of implementing autonomous vehicles on the nation’s roadways, while sharing different perspectives on how readily artificial intelligence will be able to match people’s street smarts. We’ve excerpted a few edited highlights of the conversation below:

Q: In the realm of intelligent machines, things have begun moving very quickly, whether in regards to embodied intelligence, artificial intelligence, [or] augmented intelligence…what are the biggest challenges to building even smarter machines right now?

Matei Ciocarlie: Abstract intelligence is what gets a lot of press in terms of artificial intelligence—playing Go, playing chess—but those settings are well-suited for computers. They are very orderly, very well-defined, but our real world is a mess. It’s very difficult for a robot to deal with the sheer number of situations we might encounter… The amount of information that we process via touch and perception, how do we replicate that in robotics? Physical interaction with this complicated world of ours, like dexterous manipulation, is an incredibly difficult problem… Researchers are developing tactile sensors that gather orders of magnitude more data than anything else that exists right now.

Hod Lipson: Artificial intelligence has made a lot of progress virtually but not so much physically—robots are still pretty incapable compared to humans, animals, squirrels, or however you measure it. AI has not earned its place in the physical world. Humans and animals have crawled in the rain and the sand and the mud; AI has not done that yet… But computers are getting faster exponentially. Ten years from now, today's computing will look the way 1940s computing does to us. Technologies like driverless cars and robotics are riding this curve… It's coming faster than even the experts think it's coming.

Q: Autonomous vehicles are in the news a lot right now, because numerous elements have advanced simultaneously to suddenly enable a very rapid pace of development… what final challenges—across hardware, software, and data and imaging—need to be addressed?

Ciocarlie: With recent progress, it's easy to forget how many decades of technological progress got us to this point. For example, GPS is crucial to self-driving cars, but it is accurate only to within a couple of meters. You will drive off the road if you localize yourself based on GPS alone… When building new methods to account for that, we have to think about a driving algorithm that works not 99.5% or 99.9% of the time but 100% of the time… My sense is that a fully autonomous car capable of navigating every type of road is coming slower than we would tend to believe, because there are still a lot of difficulties in two big problems: the "last mile" and the "long tail." The "last mile" is the ability to take you down those tiny side streets…the highway is the easiest and the large boulevard can be done, but as you get deeper into a neighborhood things become more difficult. The "long tail" is things that happen very rarely—the oddest of things that have a small but non-zero chance of happening and eventually do… Humans have this very elusive thing called common sense, which we can't quite define, that helps us deal with these situations, and that's been difficult to replicate.
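Ciocarlie's point about 99.9% versus 100% can be made concrete with a bit of arithmetic. The sketch below is illustrative only: the decision rate (one safety-critical decision per second) and the 30-minute trip length are assumed numbers, not figures from the talk, and decisions are treated as independent for simplicity.

```python
# Illustrative sketch: why per-decision reliability must be
# vastly closer to 100% than 99.9%. All numbers are assumptions.

DECISIONS_PER_TRIP = 30 * 60  # assume 1 decision/second, 30-minute trip

def p_flawless_trip(per_decision_reliability, n=DECISIONS_PER_TRIP):
    """Probability that every decision on a trip succeeds,
    assuming decisions are independent."""
    return per_decision_reliability ** n

for r in (0.995, 0.999, 0.999999):
    print(f"{r:.6f} per decision -> "
          f"{p_flawless_trip(r):.4f} chance of a flawless trip")
```

Under these assumptions, even 99.9% per-decision reliability yields only about a one-in-six chance of a flawless half-hour trip, which is why "works 99.9% of the time" is nowhere near good enough.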

Lipson: People don't understand what's difficult about telling the difference between a child and a fire hydrant; it seems so obvious. We humans are so good at understanding what we're seeing that we don't even grasp what's hard about it, but it's really hard for computers. That's finally being solved… through the cloud effect of AI systems teaching other AI systems, robots teaching other robots. That's an alien concept to humans because, for example, we can only have one lifetime of experience driving, but a driverless car can have many lifetimes of experience because it can learn from all other cars. So, in a strange way, the more cars that are on the road, the better each one of them gets... We can save lives even before we solve the last mile or the long tail. It's good to have these cars on the road sooner rather than later, the moment they are as good as an average driver, which, by the way, is a pretty low bar.
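Lipson's "many lifetimes of experience" claim can be illustrated with rough fleet arithmetic. The figures below (annual mileage, driving-career length, fleet size) are assumptions chosen for illustration, not data from the talk.

```python
# Illustrative fleet-learning arithmetic; all numbers are assumptions.
MILES_PER_CAR_PER_YEAR = 15_000   # assumed typical annual mileage
DRIVING_CAREER_YEARS = 50         # assumed length of one human "driving lifetime"
FLEET_SIZE = 100_000              # assumed size of a connected fleet

human_lifetime_miles = MILES_PER_CAR_PER_YEAR * DRIVING_CAREER_YEARS
fleet_miles_per_year = MILES_PER_CAR_PER_YEAR * FLEET_SIZE

# If every car learns from every other car's data, the fleet
# accumulates this many human driving lifetimes of experience per year:
lifetimes_per_year = fleet_miles_per_year / human_lifetime_miles
print(f"{lifetimes_per_year:,.0f} human driving lifetimes per year")
```

With these assumed numbers, a 100,000-car fleet logs 2,000 human driving lifetimes of experience every year, which is the sense in which each additional car on the road makes every other car better.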
