
Researchers want to use this tool to learn how machine intelligence can make moral decisions

What would you do?

The brakes on a self-driving car suddenly fail. It is speeding straight towards a zebra crossing where – at that very moment – two women and two older men are making their way across. The car could swerve to avoid them and hit a concrete barrier instead; if it did, three children and a man would die. How should the car react? What would you do?


Image: Adobe Stock 224332680

The “Moral Machine” website confronts its users with a total of 13 such extreme scenarios, varying the situations and the road users involved. Users must weigh up the harm the accident will cause and choose between two destructive outcomes.

Try it out for yourself and make 13 decisions of your own: https://www.moralmachine.net/h...

In a large-scale study, the Moral Machine’s creators at the Massachusetts Institute of Technology (MIT) used the website to collect data from 2.3 million people. The results of the study were published in the journal Nature.

Results of the study

The study showed that the majority of respondents tended to spare children rather than elderly people, and that most would rather run over animals than people. A closer look, however, revealed major cultural differences between participants in different parts of the world. The researchers found that people from countries with stable state institutions, such as Finland and Japan, were more likely to choose to run over pedestrians crossing the road illegally than were respondents in nations with weaker institutions, such as Nigeria or Pakistan.

When evaluating participants by country, the researchers divided them into three groups: a western, an eastern and a southern cluster. The choices made by participants in many Asian countries – the eastern cluster – differed from those of the other groups in that they were less inclined to spare younger people; in these countries, people tend to have greater respect for the older members of the community. The southern cluster (Central and South America) differed from the western cluster (Europe, North America) in that – to give just one example – Central and South Americans would far more often grab the wheel themselves rather than wait for the self-driving car to change course.

In scenarios that involved saving a homeless person on one side of the road or a manager on the other, respondents’ decisions were often related to the level of economic inequality in their culture: people from Finland – where the gap between rich and poor is relatively small – rarely made the self-driving car leave its lane and steer to the other side; to them, it hardly mattered who was in the car’s path. Participants from Colombia – a country with significant economic inequality – mostly chose to let the lower-status person die. Why make such distinctions at all? “Because self-driving cars could, at least in theory, assign a certain societal significance to every pedestrian and base decision-making processes on it. After all, humans also let such considerations influence their actions,” says Iyad Rahwan, co-author of the study, in an interview with Der Spiegel.

The study therefore suggests there is no such thing as a universal moral code. Nevertheless, the researchers hope the survey will give them a better understanding of how people make these kinds of decisions, and that the results may help in the future programming of algorithms for self-driving cars. The scientists put it like this: “Before we allow our cars to make ethical decisions, we need to have a global conversation to express our preferences to the companies that will design moral algorithms, and to the policymakers that will regulate them.”

Criticism of the study

The results of the study deviate in part from the rules proposed in June 2017 by the German Ethics Commission on Automated and Connected Driving. These rules provide that “in the event of unavoidable accident situations, any influence on the decision according to personal features (age, sex, physical or mental condition) shall be strictly forbidden.” The artificial intelligence of self-driving cars should therefore ignore such external or internal characteristics in order to prevent discrimination. Furthermore, the rules stipulate that those who have chosen to get into a self-driving car must not save themselves at the expense of bystanders. In particular, they state: “It may be acceptable to generally programme [the software] in such a way as to mitigate personal injury. Those involved in creating mobility risks must not sacrifice bystanders.”

Professor Armin Grunwald was a member of the Ethics Commission that drafted the rules. He criticises the study, saying that online games may give rise to certain assumptions about human behaviour, but that whether these then hold in reality is difficult to say. “After all, a game isn’t real life. At any rate, we need to refrain from simply transferring the results of a game to reality.” The head of the Institute for Technology Assessment and Systems Analysis (ITAS) in Karlsruhe warns: “Surveys will never give us information about what’s legitimately required and allowed from an ethical standpoint, but rather what people think about it. But this doesn’t mean it’s ethically right.”

Iyad Rahwan, founder and, since late 2018, Director of the new Center for Humans and Machines at the Max Planck Institute for Human Development, sees an opportunity in the Moral Machine: ethical issues can’t, of course, be resolved by letting people vote on them in a survey or an online experiment. “But policy-makers should at least know how ordinary people feel about these issues – not least because they must be prepared to justify their decisions to a public that may disagree with them.”

Author

Ludmilla Ostermann