Yes, self-driving cars will kill people. Here’s how they’ll decide who to save.


We’re on the brink of a new era of transportation, with self-driving cars making decisions as we take the back seat. One analyst recently made a “conservative” estimate that 10 million autonomous vehicles would be on the road by 2020. But this future also forces us to confront an ugly question: If an autonomous vehicle suddenly has to kill one of two people, whom should it choose?

What would a human do?

A recent study from the University of Osnabrück in Germany examined how human drivers react when forced into a split-second choice. Scientists put study participants in virtual reality driving scenarios and made them choose between hitting one virtual obstacle or another. While “driving,” the participants would suddenly be confronted with a moral decision: if they swerved left, they might hit and kill a virtual man walking his dog, for example; if they swerved right, they might kill a virtual woman walking alone.

“The truth is, humans put a price tag on each and every thing. There’s a price tag on the left lane and a price tag on the right lane,” Peter König, a professor at the University of Osnabrück’s Institute of Cognitive Science, said in a phone call. “Autonomous [vehicles] will decide as a consequence of their construction.”

Repeated trials tested a variety of combinations — goat versus trash can, man versus goat, child versus adult — and allowed scientists to create a hierarchy of value. König argues that car manufacturers will have to consider this data on human morality when designing their own vehicles, or at least know what choices a self-driving car may make in a dire situation.
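The paper itself fits a more elaborate statistical model to these choices, but purely as an illustration of how repeated forced choices can yield a value hierarchy, here is a minimal sketch using a Bradley-Terry model fit with the classic MM (Zermelo) update. Every trial and number below is invented; nothing here reproduces the study’s data or method.

```python
# Sketch: derive a value ranking from invented forced-choice trials.
# Each trial records (obstacle spared, obstacle hit) in a simulated swerve.
from collections import defaultdict

trials = [
    ("child", "adult"), ("child", "adult"), ("adult", "child"),
    ("adult", "goat"), ("adult", "goat"),
    ("goat", "trash_can"), ("goat", "trash_can"), ("goat", "trash_can"),
]

# Bradley-Terry model: P(spare a over b) = v_a / (v_a + v_b).
items = sorted({x for pair in trials for x in pair})
wins = defaultdict(int)        # how often each obstacle was spared
pair_count = defaultdict(int)  # how often each unordered pair was compared
for spared, hit in trials:
    wins[spared] += 1
    pair_count[frozenset((spared, hit))] += 1

# MM iteration: v_i = wins_i / sum_j [ n_ij / (v_i + v_j) ].
value = {i: 1.0 for i in items}
for _ in range(200):
    for i in items:
        denom = sum(
            pair_count[frozenset((i, j))] / (value[i] + value[j])
            for j in items
            if j != i and pair_count[frozenset((i, j))]
        )
        if denom:
            value[i] = wins[i] / denom
    total = sum(value.values())
    value = {i: v / total for i, v in value.items()}  # normalize to sum to 1

# Prints a hierarchy: child above adult, adult above goat, goat above trash can.
for item, v in sorted(value.items(), key=lambda kv: -kv[1]):
    print(f"{item:10s} {v:.3f}")
```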

Would you be spared?

When the only options are swerving into one obstacle or another, no outcome is ideal. Even so, trends did emerge from the experiment: Participants generally spared children over adults, and they tried to save two lives on one side of the road instead of one life on the other side.

“A human and a dog on one side are more likely to be spared than a single human on the other side,” König said. “Humans also contemplate the number of lives lost … and therefore children are more valuable than adults because they have longer to live.”
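König describes an essentially additive logic: more lives, and higher-valued lives, are spared. As a loudly hypothetical illustration (the value table and function below are invented for this sketch, not taken from the study or from any manufacturer’s software), such an additive rule can fit in a few lines of Python:

```python
# Hypothetical additive swerve rule: sum the assumed values on each side
# and steer into the side whose total loss is smaller. Numbers are made up.
VALUE = {"child": 1.0, "adult": 0.8, "dog": 0.3, "goat": 0.2, "trash_can": 0.0}

def choose_side(left: list[str], right: list[str]) -> str:
    """Return the side ('left' or 'right') to swerve INTO, i.e. the lesser loss."""
    loss_left = sum(VALUE[o] for o in left)
    loss_right = sum(VALUE[o] for o in right)
    return "left" if loss_left < loss_right else "right"

# A human and a dog on one side outweigh a single human on the other,
# so the rule swerves into the lone adult, matching the trend König reports.
print(choose_side(left=["adult", "dog"], right=["adult"]))  # -> "right"
```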

Those results are fairly intuitive, but gender gets particularly sticky. Participants showed a slight preference for saving men, though the numbers were so close that there “may be no real difference,” König said.

“In the overall model, there is a marginally higher value for males [over] females. However, males were sacrificed in more than half the number of times,” he said. “If we had ten times more data, maybe we could see the differences better.”

This could affect policy

Autonomous vehicle manufacturers generally haven’t gone public with how their cars may make these decisions, König said. But this study suggests that engineers could model their cars’ decision-making processes on human moral judgments, if that is indeed the right choice.

“The car will have to make a decision,” König said. “Whether the manufacturer knows what a car will actually do, I don’t know. These are big questions, and we need to answer them.”

In Germany, the Federal Ministry of Transport and Digital Infrastructure recently published 20 ethical principles for self-driving vehicles. It’s a first-draft example of how the world is trying to regulate autonomous cars (and minimize liabilities) before they actually hit the road.

According to Science Daily, the German guidelines suggest that a child who dashes into the street and is hit by a self-driving car is “less qualified to be saved” than an adult who is merely standing by, since the child technically contributed to the risk. But that seems to be the opposite of how study participants felt: they ranked children as more worthy of saving than adults.

The guidelines also state that “all classification of people based on their personal characteristics (age, gender, physical or mental condition) is prohibited,” which could render the hierarchy in König’s study unusable.
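To see why, consider what that prohibition does to a rule like the hypothetical one sketched above: every person collapses to a single uniform weight, so only the number of lives on each side can legally matter. Again, the code is an invented illustration, not anything published by the ministry:

```python
# Under the guideline, personal characteristics (age, gender, condition)
# may not enter the decision, so each person carries the same weight.
HUMAN_VALUE = 1.0  # identical for a child, an adult, a man, a woman

def choose_side_regulated(left_people: int, right_people: int) -> str:
    """Swerve into the side with fewer people; who they are is ignored."""
    return "left" if left_people * HUMAN_VALUE < right_people * HUMAN_VALUE else "right"

# One person vs. two is still decidable; child vs. adult is a tie by construction.
print(choose_side_regulated(left_people=1, right_people=2))  # -> "left"
```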

“Human decisions, if you would implement them in autonomous vehicles, might violate [these guidelines],” König said. “To say ‘it violates our constitution, end of discussion’ is perhaps too short. It’s only a question of a few years and [self-driving cars] will be available. Given the many thousands of deaths on our streets, I think it’s important to integrate morality [and human decisions] into autonomous vehicles.”