
What Can the Trolley Problem Teach Self-Driving Car Engineers?

Reported by WIRED:

OK, tell me if you’ve heard this one before. A trolley, a diverging track, a fat man, a crowd, a broken brake. Let the trolley continue to speed the way it’s going, and it will smash into the crowd, obliterating the people in its way. Hit the switch, and the trolley will careen into the fat man, KOing him—permanently—on impact.

That is, of course, the classic trolley problem, devised in 1967 by the philosopher Philippa Foot. Almost 50 years later, though, researchers in the Scalable Cooperation group at the Massachusetts Institute of Technology Media Lab revived and revised the moral quandary. It was 2016, so the trolley was now a self-driving car, and the trolley “switch” was the car’s programming, designed by godlike engineers. MIT’s “Moral Machine” asked users to decide whether to, say, kill an old woman walker or an old man, or five dogs, or five slightly tubby male pedestrians. Here, the decision is no longer a split-second one, but something programmed into the car in advance—the sort of (theoretically) informed prejudgement that helps train all artificial intelligence.

Two years on, those researchers have collected a heck of a lot of data about people’s killing preferences: some 39.6 million judgement calls in 10 languages from millions of people in 233 different countries and territories, according to a paper published in Nature today. Encoded inside are different cultures’ various answers to the ethical knots of the trolley problem.

For example: participants from eastern countries like Japan, Taiwan, Saudi Arabia and Indonesia were more likely to be in favor of sparing the lawful, or those walking with a green light. Participants in western countries like the US, Canada, Norway, and Germany tended to prefer inaction, letting the car continue on its path. And participants in Latin American countries, like Nicaragua and Mexico, were more into the idea of sparing the fit, the young, and individuals of higher status. (You can play with a fun map version of the work here.)

Across the globe, some major trends do emerge. Moral Machine participants were more likely to say they would spare humans over animals, save more lives over fewer, and keep the young walking among us.

The point here, the researchers say, is to initiate a conversation about ethics in technology, and to guide those who will eventually make the big decisions about AV morality. As they see it, self-driving car crashes are inevitable, and so is programming them to make tradeoffs. “The main goal is to capture how the public reaction is going to be once those accidents happen,” says Edmond Awad, an MIT Media Lab postdoctoral associate who worked on the paper. “We think of this as a big forum, where experts can look and say, ‘This is how the public will react.’”

So what do the people actually building this technology think about the trolley problem? I’ve asked lots of AV developers this question over the years, and the response is generally: sigh.

“The bottom line is, from an engineering perspective, solving the trolley problem is not something that’s heavily focused on, for two reasons,” says Karl Iagnemma, the president and cofounder of the autonomous vehicle company nuTonomy. “First, because it’s not clear what the right solution is, or if a solution even exists. And second, because the incidence of events like this is vanishingly small, and driverless cars should make them even less likely without a human behind the wheel.”

Another frequent objection: Self-driving cars definitely don’t have the data or training today to make the kind of complex tradeoffs that people are considering in the Moral Machine experiment. It’s hard enough for their sensors to distinguish vehicle exhaust from a solid wall, let alone a billionaire from a homeless person. Right now, developers are focused on more elemental issues, like training the tech to distinguish a human on a bicycle from a parked car, or a car in motion.

It is, however, likely that engineers are training their tech to make certain tradeoffs. “The way people in autonomous driving taxonomize or organize the objects that they detect is that they have vulnerable objects and non-vulnerable ones,” says Forrest Iandola, the CEO of the company DeepScale, which builds perception systems for self-driving cars. “The most important vulnerable objects to detect are humans with no protection. But a parked car or a traffic cone tend to be non-vulnerable.” And thus: better to hit.
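To make that taxonomy concrete, here is a minimal, hypothetical sketch of how a perception or planning stack might tag detected objects as vulnerable or non-vulnerable and penalize hitting the former more heavily. The class names, penalty numbers, and structure are illustrative assumptions for this article, not DeepScale’s or any company’s actual code.

```python
from dataclasses import dataclass
from enum import Enum

class Vulnerability(Enum):
    VULNERABLE = "vulnerable"          # e.g. unprotected humans, cyclists
    NON_VULNERABLE = "non_vulnerable"  # e.g. parked cars, traffic cones

# Illustrative mapping from detected object class to a vulnerability tag.
VULNERABILITY_BY_CLASS = {
    "pedestrian": Vulnerability.VULNERABLE,
    "cyclist": Vulnerability.VULNERABLE,
    "parked_car": Vulnerability.NON_VULNERABLE,
    "traffic_cone": Vulnerability.NON_VULNERABLE,
}

@dataclass
class Detection:
    object_class: str
    distance_m: float

def collision_penalty(det: Detection) -> float:
    """Return a higher cost for striking vulnerable objects, so a planner
    would prefer a non-vulnerable obstacle if a collision were unavoidable.
    The numbers are made up purely for illustration."""
    tag = VULNERABILITY_BY_CLASS.get(det.object_class, Vulnerability.VULNERABLE)
    return 1000.0 if tag is Vulnerability.VULNERABLE else 10.0

# Example: a traffic cone is "cheaper" to hit than a pedestrian.
print(collision_penalty(Detection("traffic_cone", 5.0)))  # 10.0
print(collision_penalty(Detection("pedestrian", 5.0)))    # 1000.0
```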

And it’s also true that autonomous vehicles will have to grapple with different car cultures throughout the world. NuTonomy, for example, tests its autonomous technology in Boston and in Singapore, and its “rulebook” is different in each context. In Boston, you’ll be surprised to learn, drivers are much more aggressive, so the cars are trained to react differently there.
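As a loose illustration of what a per-geography “rulebook” could look like in practice, the sketch below parameterizes a few driving rules by city. The rule names and thresholds are invented for this example; they are not nuTonomy’s published formalism or real calibration values.

```python
# Hypothetical per-city driving parameters; all values invented for illustration.
RULEBOOKS = {
    "boston": {
        "min_gap_to_merge_s": 1.5,    # accept tighter gaps where traffic is aggressive
        "yield_to_jaywalkers": True,
        "max_comfort_decel_mps2": 3.0,
    },
    "singapore": {
        "min_gap_to_merge_s": 2.5,
        "yield_to_jaywalkers": True,
        "max_comfort_decel_mps2": 2.0,
    },
}

def load_rulebook(city: str) -> dict:
    """Return driving parameters for a city, falling back to conservative defaults."""
    default = {
        "min_gap_to_merge_s": 3.0,
        "yield_to_jaywalkers": True,
        "max_comfort_decel_mps2": 2.0,
    }
    return RULEBOOKS.get(city, default)

print(load_rulebook("boston")["min_gap_to_merge_s"])  # 1.5
```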

Still, it’s not as if companies like nuTonomy are grappling with the trolley problem on a regular basis. “We’re all focused on developing systems that are safe and rigorously well-designed,” says Iagnemma. “The second-wave systems will adapt to our driving preferences, and to culture and geography. It may also include these ethical questions.” So sure—maybe it’s nice to start the conversation now.


Source: WIRED
