Breakout Session #10

Ethical and Social Implications of Automated Vehicles

Tuesday, July 19, 2016

Room: Union Square 15 & 16

Organizers

  • Patrick Lin, California Polytechnic State University

Overview

We will examine ethical and social issues, beyond the legal and policy questions raised in other sessions. This includes: the case for why ethics is important in the first place; how developers may think about values and weights in programming decisions and managing risk; ethical issues in licensing and testing; and consumer perceptions, especially as gathered from surveys.

Agenda

Panel 1: Why Ethics?
On automated vehicles, ethics has been largely connected to bizarre crash dilemmas, which many critics dismiss as too rare to worry about and/or outweighed by the technology’s benefits. This panel will look at the case for why ethics matters and to what extent, not just for crash dilemmas but also for more ordinary events. For instance, even if an action is not illegal, it may still elicit negative reactions if the broader public believes it is unethical or handled poorly. This, in turn, can affect market share and invite attention from regulators. And where the law is unclear—as is often the case with emerging technologies—our moral compass often helps point the way forward; so ethical issues can have legal and policy implications.

Scheduled panelists include:

  • Emily Frascaroli, attorney, Ford Motor Company
  • Stephen Wu, attorney, Silicon Valley Law Group
  • Tom Powers, professor, University of Delaware, Center for Science, Ethics & Public Policy
  • Patrick Lin, professor, Cal Poly, Ethics + Emerging Sciences Group (moderator)

Panel 2: Values and Weights
In programming automated cars, ethics is often already implied in the design, even if we don’t recognize that we are making those assumptions. For instance, an automated car might place more weight on its occupants’ safety than on that of other road users; or it might be programmed to value human safety over property damage; or it might give a wider berth to a truck or bicyclist, even if it means edging closer to other vehicles, increasing risk to them. These are not unreasonable assumptions, but they can be a source of ethical liability if not carefully considered. They could be contested or deemed negligent if they lead to foreseeable problems, such as if a car were to crash into a home just to avoid very minor injury to a person. This panel will examine a range of object classes to account for, including the process of arriving at sensible values and weightings.

Scheduled panelists include:

  • Noah Goodall, research scientist, Virginia Transportation Research Council
  • Stephen Erlien, lead engineer for controls, Peloton Technology
  • Sarah Thornton, researcher, Stanford University, Dynamic Design Lab
  • Ryan Jenkins, professor, Cal Poly, Ethics + Emerging Sciences Group (moderator)

Panel 3: Licensing and Testing
Again, ethics is more than weird crash dilemmas. Right now, automated cars are being tested and fielded on our roads, and several important questions arise: Is it ethical to beta-test an automated car—two tons of steel and glass, moving at 60 mph—on public roadways, alongside unsuspecting families? To what extent does this beta-testing resemble human-subjects research, which would normally require an ethics committee or institutional review board? Even in conducting simulator tests, what are the human-subjects research considerations? In licensing automated cars for road operations, is passing a human driving test adequate, or should a higher standard be met, and why? Should they be re-licensed with software updates and sensor upgrades: do they become “new” cars?

Scheduled panelists include:

  • Wendy Ju, executive director for interaction design research, Stanford, Center for Design Research
  • Suzie Lee, project director, Virginia Tech Transportation Institute, Center for Data Reduction and Analysis Support
  • Shad Laws, innovation projects manager, Renault Innovation Silicon Valley
  • Keith Abney, professor, Cal Poly, Ethics + Emerging Sciences Group (moderator)

Panel 4: Consumer Perceptions
Inasmuch as ethics is not a science but a social discussion, it is important to keep a pulse on public attitudes toward new technologies. Consumers might care about some ethical issues, or they might not; and this may affect how industry develops and deploys a technology. In this panel, we will look at public attitudes as measured by surveys, as well as perceptions by industry and other stakeholders. This is not to say that the customer is always right; for instance, racist attitudes probably should not be encoded into automated machines. In cases where public attitudes diverge from ethical positions, understanding why an attitude is adopted may suggest a strategy for reconciling it with ethics.

Scheduled panelists include:

  • Sarah Hunter, head of public policy, X, formerly known as Google[x]
  • Joe Barkai, product and market strategist, formerly research vice president at IDC
  • Jason Millar, postdoctoral research fellow, University of Ottawa; chief ethics analyst, Open Roboethics Initiative
  • Patrick Lin, professor, Cal Poly, Ethics + Emerging Sciences Group (moderator)

Session Goals
As the ethical issues for automated vehicles—which may impact law and policy—have not been widely discussed beyond limited circles, the goal of this session is to continue raising awareness of them and to invite perspectives from a broader audience. As work on these issues moves forward, and again since ethics is a social discussion, it is vital to have a full picture of the perspectives of technology developers, consumers, insurers, regulators, and other stakeholders.