
Your (future) car’s moral compass

By Edmond Awad

Picture a driverless car cruising down the street. Suddenly, three pedestrians run in front of it. The brakes fail, and the car is about to hit and kill all three. The only way out is for the car to swerve into the other lane and crash into a barrier. But that would kill the passenger it's carrying. What should the self-driving car do?

Would you change your answer if you knew that the three pedestrians were a male doctor, a female doctor, and their dog, while the passenger was a female athlete? Does it matter if the three pedestrians are jaywalking?

Millions of similar scenarios were generated by an experimental website my fellow researchers and I created and named “Moral Machine.”

After the website received substantial media attention, more than four million people from 233 countries and territories visited it between June 2016 and December 2017. They rated scenarios like the one described above, all inspired by the famous philosophical conundrum known as the trolley problem. Though such scenarios are unlikely in real life, what we learned from visitors' appraisals of them could help inform the regulation and programming of autonomous vehicles (AVs), and it may also have implications for machine ethics more generally. We wanted to answer two main questions: How does the public think autonomous vehicles should resolve moral trade-offs? And could we use their responses to build a new kind of moral compass?
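To make the structure of these trade-offs concrete, here is a minimal Python sketch of how one such dilemma could be assembled at random. The character list, attribute mix, and generate_scenario function are hypothetical illustrations of the idea, not the Moral Machine's actual generation code.

    import random

    # Hypothetical character types, loosely mirroring the attributes the
    # article mentions (profession, gender, species).
    CHARACTERS = ["male doctor", "female doctor", "female athlete",
                  "elderly man", "child", "dog", "cat"]

    def generate_scenario(rng):
        """Assemble one random stay-vs-swerve dilemma (illustrative only)."""
        return {
            "pedestrians": rng.choices(CHARACTERS, k=rng.randint(1, 5)),
            "passengers": rng.choices(CHARACTERS, k=rng.randint(1, 5)),
            # Legality of the crossing is one factor visitors weigh.
            "pedestrians_jaywalking": rng.random() < 0.5,
            # "stay" kills the pedestrians; "swerve" kills the passengers.
            "options": ("stay", "swerve"),
        }

    if __name__ == "__main__":
        print(generate_scenario(random.Random(42)))

For each generated scenario, a visitor's task reduces to a binary choice between the two options, which is what makes responses from millions of people easy to aggregate.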
