
Morality and Self-Driving Cars

October 19, 2017

I just listened to the Radiolab episode Driverless Dilemma, and it brought back thoughts that Cory Doctorow has raised in recent articles such as The Problem with Self-Driving Cars: Who Controls the Code?.

The basic idea is this: if your self-driving car is traveling down a narrow highway at 65mph and suddenly comes upon a stopped van carrying a family of four, and a fatal crash is unavoidable, should your car choose to drive you off a cliff to save that family?

This is happening now. This is relevant now.

In the Radiolab episode, interviewees say that the car should absolutely kill the driver to save the others. But when those same people are asked whether they would buy a car that behaved that way, the answer, as you'd expect, is no.

At its simplest level, this is the Trolley Problem, an old philosophy thought experiment of which there are many permutations. In general, people tend to say that yes, they would pull a lever to divert a runaway trolley onto a side track, killing one person to save a group of five, but that they would not push a large man onto the tracks even if his body would stop the trolley and save the same group. The difference seems to be that when it's a button or some other distant action, it's easy to distance yourself and make the cold, logical decision. But when it's up-close and personal, it's difficult to remain rational - you make an emotional decision. Consider having to choose between pushing a button that sets off a bomb killing 200 people, and strangling a single child with your own hands. The latter is unthinkably horrible, whereas the explosion, which could easily kill dozens of children, feels less objectionable.

Now, the real question. Those who are designing software for self-driving cars must actually make these decisions. How much of that very dichotomous human response - emotional versus logical - should they incorporate? This goes way beyond Asimov's three laws. Someone in the Radiolab episode commented that programmers shouldn't be the ones deciding these things; it should be society having a larger discussion. But given the health (or lack thereof) of American politics at the moment, I can't help but wonder how likely that is. I fear that rather than society making these decisions, it will be corporations. Car companies. And their goals don't always line up with the greater good of society.
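To make that concrete, here's a minimal, purely illustrative sketch - not any manufacturer's actual code - of how a value judgment like this ends up buried inside an ordinary-looking crash-mitigation cost function. Every weight, field, and helper here is hypothetical; the point is simply that somebody has to pick these numbers.

# Illustrative only: a toy crash-mitigation planner showing where the
# ethical judgment hides inside an ordinary cost function.
# All weights and the Outcome structure are invented for this sketch.

from dataclasses import dataclass

@dataclass
class Outcome:
    expected_occupant_harm: float   # 0.0 (none) .. 1.0 (fatal)
    expected_bystander_harm: float  # 0.0 (none) .. 1.0 (fatal)
    property_damage: float          # normalized repair cost

# The "moral" part is nothing more than these constants.
# Whoever sets them is answering the trolley problem.
OCCUPANT_WEIGHT = 1.0
BYSTANDER_WEIGHT = 1.0   # equal to the occupant? higher? lower? who decides?
PROPERTY_WEIGHT = 0.01

def cost(outcome: Outcome) -> float:
    return (OCCUPANT_WEIGHT * outcome.expected_occupant_harm
            + BYSTANDER_WEIGHT * outcome.expected_bystander_harm
            + PROPERTY_WEIGHT * outcome.property_damage)

def choose_trajectory(candidates: dict[str, Outcome]) -> str:
    """Pick the candidate trajectory with the lowest total expected cost."""
    return min(candidates, key=lambda name: cost(candidates[name]))

if __name__ == "__main__":
    options = {
        "brake_in_lane": Outcome(0.2, 0.9, 0.5),    # likely hits the van
        "swerve_off_road": Outcome(0.9, 0.0, 1.0),  # likely harms the occupant
    }
    # With equal harm weights, this prints "swerve_off_road":
    # the car sacrifices its own occupant.
    print(choose_trajectory(options))

Tweak BYSTANDER_WEIGHT and the "ethics" of the whole car changes - which is exactly why it matters who gets to set it.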

This software is being written today. These decisions are already being made.

Closing note: another thing I always consider when I hear about these issues is V2V (vehicle-to-vehicle) communication, which will certainly be a component of future networked self-driving cars. I think of network security, and of the capacity of a would-be techno-terrorist to design software that effectively lies to other vehicles in order to create a lethal situation. The software that runs cars should also contain a bullshit filter - a minimal amount of skepticism - or it could fall prey to these unlikely but possible attacks.
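For what it's worth, here's a rough sketch of what that skepticism might look like: a toy plausibility check, with message fields and thresholds I've invented for illustration rather than taken from any real V2V standard, in which a claimed hazard is cross-checked against the car's own sensors before it's acted on.

# Illustrative only: a toy plausibility filter for V2V hazard messages.
# The message format, thresholds, and sensor interface are all invented
# for this sketch; real V2V stacks differ.

from dataclasses import dataclass

@dataclass
class V2VHazard:
    sender_id: str
    claimed_distance_m: float   # how far ahead the sender says the hazard is
    claimed_speed_mps: float    # speed the sender claims for itself

@dataclass
class LocalSensors:
    radar_range_m: float        # how far our own sensors can currently see
    nearest_object_m: float     # distance to the nearest object we detect

def is_plausible(msg: V2VHazard, sensors: LocalSensors,
                 tolerance_m: float = 15.0) -> bool:
    """Accept a hazard report only if it doesn't contradict what we can
    verify ourselves; unverifiable-but-consistent reports still pass."""
    # Reject physically absurd claims outright.
    if msg.claimed_distance_m < 0 or msg.claimed_speed_mps < 0:
        return False
    # If the claimed hazard sits within our own sensor range but we see
    # nothing anywhere near that distance, treat the message as suspect.
    if msg.claimed_distance_m <= sensors.radar_range_m:
        return abs(msg.claimed_distance_m - sensors.nearest_object_m) <= tolerance_m
    # Beyond our sensor horizon we can't confirm or deny; a real system
    # would lower its confidence rather than slam the brakes.
    return True

if __name__ == "__main__":
    msg = V2VHazard("veh-42", claimed_distance_m=30.0, claimed_speed_mps=0.0)
    sensors = LocalSensors(radar_range_m=120.0, nearest_object_m=110.0)
    print(is_plausible(msg, sensors))  # False: we should see something at ~30 m

It's crude, but the idea is that a message contradicting what the car can directly observe shouldn't be trusted enough to trigger an emergency maneuver.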


EDIT - Also relevant: To Survive the Streets, Robocars Must Learn to Think Like Humans