Public reaction to the Google Car as a kick-off for the machine ethics conversation

“If self-driving cars cut the roughly 40,000 annual US traffic fatalities in half, the car makers might get not 20,000 thank-you notes, but 20,000 lawsuits.” –A survey of research questions for robust and beneficial AI, Future of Life Institute. (But at what point does double jeopardy kick in, so that the algorithm is no longer subject to new lawsuits?)

After the equivalent of 75 human years of practice, during which they no doubt paid closer attention to learning than a 16-year-old human would, Google’s self-driving cars will now officially and openly hit public roads in California, the company announced today.

The key words in the announcement: “We’re looking forward to learning how the community perceives and interacts with the vehicles…” That reads to me like “we really hope people don’t react to these the way they did to Google Glass.”

I hope the backlash against robots isn’t too severe. I don’t want to treat them as a technological inevitability that humans have no ability to resist, but…

The public’s reaction to the Google Car will likely act as a general referendum on the future of artificial intelligence and robotics. So far, judging by the Twitter replies posted to major media outlets covering today’s news, sentiment seems very mixed.

[Image: linear vs. exponential growth]

In a big-picture sense, I think of my company Little Bird as an autonomous learning machine. Today it serves enterprise marketers doing research on market influencers, trends, and intent. Long term, I hope to work on self-improving systems for augmenting human learning in general, through discovery and filtering.

Thus I have an interest in how the public reacts to self-driving cars, both as a human myself and as a person who wants to keep building good machines. Maybe I shouldn’t associate what we’re doing with all of that, though.

Here’s an interesting one from that survey: “Machine ethics: How should an autonomous vehicle trade off, say, a small probability of injury to a human against the near-certainty of a large material cost? How should lawyers, ethicists, and policymakers engage the public on these issues? Should such trade-offs be the subject of national standards?”

It was convenient when we couldn’t blame an overwhelmed human for the choice they made under duress, but now it’s all going to be reasoned out by machines that aren’t overwhelmed.

That’s from the Future of Life Institute, linked above, an organization full of technologists who are building artificial intelligence while also working to make sure it doesn’t result in substantially adverse consequences for humanity. It’s a pretty awesome group; it’s the one Elon Musk made a big donation to earlier this year.
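That trade-off question is, underneath, an expected-value calculation with one unavoidable ethical input. Here’s a minimal sketch in Python of the naive arithmetic it implies; every probability and dollar figure below is an assumption I’ve invented for illustration, not anything proposed by Google or the Future of Life Institute.

```python
# Toy expected-cost comparison for the trade-off question above.
# All numbers are invented assumptions for illustration; this is not
# any real vehicle's decision logic.

def expected_cost(p_injury: float, injury_cost: float,
                  p_damage: float, damage_cost: float) -> float:
    """Expected cost of an action: the sum of probability-weighted harms."""
    return p_injury * injury_cost + p_damage * damage_cost

# The crux is INJURY_COST: putting a dollar figure on human injury is the
# ethical choice that the engineering cannot make on its own.
INJURY_COST = 10_000_000  # assumed dollar-equivalent of a serious injury
DAMAGE_COST = 50_000      # assumed cost of wrecking the vehicle

# Option A: swerve into a barrier (near-certain material cost, tiny injury risk).
swerve = expected_cost(p_injury=0.0001, injury_cost=INJURY_COST,
                       p_damage=0.95, damage_cost=DAMAGE_COST)
# Option B: brake in lane (small chance of injuring a pedestrian, little damage).
brake = expected_cost(p_injury=0.01, injury_cost=INJURY_COST,
                      p_damage=0.05, damage_cost=DAMAGE_COST)

print(f"swerve: ${swerve:,.0f} expected cost")  # $48,500
print(f"brake:  ${brake:,.0f} expected cost")   # $102,500
# With these assumptions the car swerves. Price injury below roughly $4.5M,
# though, and the answer flips.
```

The code is trivial on purpose: the hard part isn’t the arithmetic, it’s that someone has to choose INJURY_COST, and that choice is exactly what lawyers, ethicists, and policymakers would need to engage the public on.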

I don’t know how to wrap this all up thematically; these are just some thoughts thrown together. I think we need to think about autonomous vehicles from an ethical perspective, from a human evolution perspective, and with regard to city planning and ecology, and to class, race, and privilege. (Has anyone written about that yet? In science fiction, at length, I’m sure.) This is an entire field of study that’s coming on really fast. I just thought I’d wade into the conversation.