If an autonomous car is suddenly faced with an unavoidable collision, is it better for it to swerve and kill five pedestrians, or to crash and kill its two passengers?

Industry professionals warn that people may reject autonomous vehicles if moral dilemmas like this one aren’t resolved quickly.

Since every movement, action, and reaction of an autonomous car is a function of its programming, responses to ethical dilemmas must also be programmed into the vehicle’s software.

For instance, should your autonomous vehicle drive off a cliff to avoid a collision with a jaywalker, killing you in the process, or hit the pedestrian instead? In essence, a programmer or government regulator may determine your death or survival before the vehicle ever rolls off the assembly line.
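To make that point concrete, here is a minimal, purely hypothetical sketch of how such a choice could end up encoded as an explicit software policy. It is not drawn from any real vehicle's code, and every name in it (Outcome, choose_maneuver, protect_occupants) is invented for illustration:

```python
# Illustrative only: an "ethics setting" chosen by a programmer or regulator,
# expressed as a policy that picks between unavoidable-collision outcomes.

from dataclasses import dataclass

@dataclass
class Outcome:
    maneuver: str            # e.g. "swerve" or "stay_course"
    pedestrian_deaths: int   # estimated pedestrian casualties
    occupant_deaths: int     # estimated occupant casualties

def choose_maneuver(outcomes, protect_occupants: bool) -> Outcome:
    """Pick an outcome according to the configured policy.

    protect_occupants=True  -> weight occupant safety first (what buyers say
                               they want for their own car).
    protect_occupants=False -> minimize total casualties (what people say
                               cars in general should do).
    """
    if protect_occupants:
        key = lambda o: (o.occupant_deaths, o.pedestrian_deaths)
    else:
        key = lambda o: o.occupant_deaths + o.pedestrian_deaths
    return min(outcomes, key=key)

if __name__ == "__main__":
    options = [
        Outcome("swerve", pedestrian_deaths=5, occupant_deaths=0),
        Outcome("stay_course", pedestrian_deaths=0, occupant_deaths=2),
    ]
    print(choose_maneuver(options, protect_occupants=True).maneuver)   # swerve
    print(choose_maneuver(options, protect_occupants=False).maneuver)  # stay_course
```

Which branch of a policy like that ships in the finished car is exactly the decision that would be made on the buyer’s behalf, long before the dilemma ever arises on the road.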

Research published in June 2016 by Iyad Rahwan, an associate professor at MIT, showed that the public believes autonomous cars should sacrifice their occupants to save, for instance, a crowd of pedestrians, yet no one wants to ride in a car that is programmed to do so, the Associated Press reports.

“Most people want to live in a world where cars will minimize casualties,” said Rahwan. “But everybody wants their own car to protect them at all costs.”

[Do you agree? Rahwan’s fellow researchers at MIT have built a website called the Moral Machine, which asks visitors to weigh in on such moral dilemmas so their responses can be used to help teach autonomous vehicles.]

In a paper titled “The social dilemma of autonomous vehicles,” published in the journal Science in late June 2016, the researchers surveyed the public and found that potential buyers were far less likely to purchase an autonomous car unless its survivorship ethics were programmed to chiefly protect the occupant.

“I think the authors are definitely correct to describe this as a social dilemma,” said Joshua Greene, professor of psychology at Harvard University, of the paper’s findings.

“The critical feature of a social dilemma is a tension between self-interest and collective interest.”

Proponents of autonomous cars promise zero fatalities once the technology is perfected. Opponents counter that governments and engineers cannot even prevent trains from colliding, and that the near-infinite number of environmental variables affecting autonomous car safety makes zero fatalities, or even reduced fatalities, a major challenge to achieve.

Of course, autonomous cars could be programmed to put occupant and public safety above all else, but the trade-off could be unbearably slow commutes, with vehicles operating well below current speed limits to integrate more safely with traffic, pedestrians, and wildlife.

Public and expert skepticism about autonomous vehicles aside, the individual’s unwavering interest in self-preservation poses a significant challenge to those who want to put autonomous cars on the road in large numbers.

“There is a real risk that if we don’t understand those psychological barriers and address them through regulation and public outreach, we may undermine the entire enterprise,” concluded Rahwan.

(MIT and the Associated Press via CTV News)