Public reaction to the Google Car as a kick-off for the machine ethics conversation

“If self-driving cars cut the roughly 40,000 annual US traffic fatalities in half, the car makers might get not 20,000 thank-you notes, but 20,000 lawsuits.” –A survey of research questions for robust and beneficial AI, Future of Life Institute (But at what point does the algorithm get the equivalent of double-jeopardy protection and stop being exposed to new lawsuits?)

After the equivalent of 75 human years of practice, during which they no doubt paid better attention to learning than a 16-year-old human would, Google’s self-driving cars will now officially and openly hit public roads in California, the company announced today.

The key words in the announcement: “We’re looking forward to learning how the community perceives and interacts with the vehicles…” That reads to me like “we really hope people don’t react to these the way they did to Google Glass.”

I hope that the backlash against robots isn’t too severe. I don’t want to treat them as a technological inevitability that humans have no ability to resist, but…

The public’s reaction to the Google Car will likely act as a general referendum on the future of artificial intelligence and robotics. So far, judging by the Twitter replies posted to major media outlets covering today’s news, sentiment seems very mixed.

[Image: linear vs. exponential growth]

In a big-picture sense, I think of my company Little Bird as an autonomous learning machine. Today it’s for enterprise marketers doing research about market influencers, trends and intent. Long term I hope to work on self-improving systems for augmenting human learning in general, through discovery and filtering.

Thus I have an interest in how the public reacts to self-driving cars, both as a human being and as someone who wants to continue building good machines. Maybe I shouldn’t associate what we’re doing with all of that, though.

Here’s another interesting question from that same survey. “Machine ethics: How should an autonomous vehicle trade off, say, a small probability of injury to a human against the near-certainty of a large material cost? How should lawyers, ethicists, and policymakers engage the public on these issues? Should such trade-offs be the subject of national standards?”

It was convenient when we couldn’t blame an overwhelmed human for whatever choice they made under duress; now those choices will be reasoned out, in advance, by machines that aren’t overwhelmed at all.

That’s from the Future of Life Institute linked to above, which is full of technologists building artificial intelligence while also working to make sure it doesn’t result in substantially adverse consequences for humanity. It’s a pretty awesome organization; it’s the one Elon Musk made a big donation to earlier this year.
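Just to make the trade-off in that question concrete, here’s a minimal sketch in Python of the kind of calculation that otherwise stays implicit. Everything in it is invented for illustration: the dollar-equivalent weight on injury, the probabilities, and the assumption that a crude utilitarian weighting is even the right framework are all exactly the choices ethicists and policymakers would need to debate.

```python
# A toy, invented illustration of the trade-off quoted above: a small
# probability of injuring a person vs. the near-certainty of expensive
# property damage. None of these numbers reflect any real system.

INJURY_WEIGHT = 5_000_000   # hypothetical dollar-equivalent assigned to a serious injury
P_PROPERTY_DAMAGE = 0.95    # "near-certainty" of damage for the maneuver that spares the person

def expected_costs(p_injury: float, property_cost: float) -> dict:
    """Expected cost of each maneuver under a crude utilitarian weighting."""
    return {
        "maneuver_risking_person": p_injury * INJURY_WEIGHT,
        "maneuver_hitting_property": P_PROPERTY_DAMAGE * property_cost,
    }

if __name__ == "__main__":
    costs = expected_costs(p_injury=0.002, property_cost=40_000)
    choice = min(costs, key=costs.get)
    print(costs, "->", choice)
    # With these made-up numbers the weighting actually favors risking the
    # person (expected cost $10,000 vs. $38,000), which collides with most
    # people's intuition that a person should outweigh property. Nudge
    # p_injury or INJURY_WEIGHT slightly and the answer flips.
```

Even with only two constants, the answer depends entirely on who gets to set them; the weighting is the ethical and policy question, not the arithmetic.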

I don’t know how to wrap this all up thematically; these are just some thoughts thrown together. I think we need to think about autonomous vehicles from an ethical perspective, from a human evolution perspective, regarding city planning and ecology, and regarding class, race and privilege. (Has anyone written about that yet? In science fiction, at length, I’m sure.) This is an entire field of study that’s coming on really fast. I just thought I’d wade into the conversation.

  • It’s interesting: the tradeoffs have been made by humans for so long that I think part of people’s apprehension is really an uncanny valley response and not necessarily a rational one.

    Not to say that I’m in the technology-worshiping camp; far from it, I tend to want measured progress and testing. But machine ethics are going to have to be looked at with the same complexity that our other systems of ethics require. Which I think is part of your point, and I agree…we need a framework rather than knee-jerk reactions. Anyone who’s built a technology product knows that when something goes wrong you need a framework for evaluating the problem based on duty of care (e.g. is someone’s physical safety threatened, vs. their emotional health, vs. mere inconvenience). Obviously self-driving / decision-making machines should be held to a stricter standard of duty of care than software that surfaces content or assists people in communicating, but both still need an agreed-upon framework before you jump in and start reacting to problems.

  • Thanks Joe, for that thoughtful comment. I wonder if other ethical or moral frameworks will be applicable – and when they will not. John Rawls’ Veil of Ignorance, for example, might suggest that the possible injury of a person should outweigh certain damage to property. Because if you were making the rule behind a veil of ignorance and you didn’t know whether you were going to be the person at risk or the property owner, you’d probably choose to prioritize the person’s safety.

  • Re: that last point, very true. It’s one of the reasons I think a lot about things like Uber and Airbnb: if we are to get to a place of responsible ethics in machine learning / AI, we’ll need to be willing to work through it first by looking at complex technological ecosystems (Zeynep Tufekci has some good reads in the area, if you haven’t caught them yet: https://medium.com/@zeynep )

    As an aside, if you have a list of possible writing topics, it would be cool to see something about the relationship between ethics & customers. I think the topic often gets buried in cust. svc and/or company culture, but there’s something deeper at work that few startups address. Or don’t! You write whatever intrigues you =P