Ethics, Safety, And Software Behind Self-Driving Cars In The Aftermath Of First Pedestrian Killed
By Lori Cameron and Michael Martinez
Published 05/11/2018
The future of self-driving vehicles—the next great imaginative leap for technology, with fortunes to be made—suffered a catastrophic setback recently when the all-important software allowing the autos to “see” pedestrians was called into question: A driverless vehicle operated by Uber struck and killed a woman in Arizona, prompting the state to suspend the firm’s testing.
The March 2018 incident is said to be the first known case of a person killed by a self-driving vehicle, creating an uneasy public. The National Transportation Safety Board and Uber are both investigating the crash.
Still, the race continues among the world’s biggest companies—Uber, Google parent Alphabet’s Waymo, Tesla, GM—to develop a safe autonomous vehicle.
The pedestrian death has renewed attention on the ethical considerations, safety measures, and software that have gone into the development of self-driving cars—all of which have been steadily documented and studied by scientists and analysts at the IEEE Computer Society since the beginning.
Here’s a summary of that research from the Computer Society Digital Library, placed in front of the paywall for a limited time.
Tech meets ethics: How safe is safe enough when building autonomous vehicles
The ethical considerations of autonomous vehicles cannot be overstated, especially when a situation involves choosing between two potential victims: a pedestrian or a passenger. And if a crash is unavoidable, does the car choose to hit a small vehicle or an SUV?
If algorithms are generally designed to do the former, not the latter, isn’t this a form of discrimination against the owners of small vehicles who might not be able to afford larger ones?
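The authors describe the dilemma in prose, not code. Purely as an illustration of why such a bias can emerge, here is a hypothetical sketch (all names and harm figures are invented) of a crash-mitigation policy that minimizes expected harm to the autonomous car's own passengers, and as a side effect systematically steers toward smaller vehicles:

```python
# Hypothetical sketch, not any manufacturer's actual policy.
# Harm scores are illustrative placeholders.

def choose_crash_target(options):
    """Pick the option that minimizes estimated harm to the car's passengers."""
    return min(options, key=lambda o: o["expected_harm_to_passengers"])

options = [
    # Striking a small car transfers less energy back to our passengers...
    {"target": "small car", "expected_harm_to_passengers": 0.3},
    # ...while striking a heavier SUV harms our passengers more.
    {"target": "SUV", "expected_harm_to_passengers": 0.6},
]

choice = choose_crash_target(options)
# A policy tuned only to protect its own passengers would pick the small
# car every time -- exactly the discrimination the article describes.
```

The point of the sketch is that the bias need not be programmed explicitly; it falls out of an apparently neutral objective function.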
Dieter Birnbacher, a professor of philosophy at the University of Duesseldorf, and Wolfgang Birnbacher, an FPGA system designer at IBEO Automotive Systems GmbH, concede that, as with any safety-critical technology, the general public is usually willing to accept a certain level of risk in exchange for the benefits of said technology.
“Despite all security measures, a residual risk is unavoidable, which raises questions: How safe is safe enough? How safe is too safe?” they wrote recently in IEEE Intelligent Systems magazine.
Smart cars could reduce travel times, improve air quality, and eliminate virtually all accidents caused by human error. Ultimately, if public consensus determines an acceptable level of risk, the public becomes responsible for what happens in the face of that risk, the two experts said.
“It goes without saying that an egalitarian decision algorithm along these lines would lead to a radical shift of responsibility from the individual to the public. Neither the owner nor the passengers could be held responsible for the behavior of the vehicle any longer since risk preferences and conflict solving are determined in advance by societal consensus, leaving no room for individual intervention,” the authors say.
The top three causes of car accidents are distracted driving, speeding, and drunk driving, according to the National Highway Traffic Safety Administration. Human error—which accounts for over 90% of accidents—would be largely eliminated with self-driving cars, leaving any remaining likelihood of accidents to autonomous vehicle design quality.
And there’s the rub. How can car manufacturers make self-driving cars safer?
The answers are complex, but researchers from the University of São Paulo in Brazil propose one solution: an independent module, the Autonomous Vehicle Control (AVC).
The AVC is a safety system designed and built independently of the vehicle’s manufacturer-specific systems. Because it can be installed in any vehicle, it can be tested against industry safety standards across the board, no matter who the manufacturer is.
The idea is for the AVC to interact with the vehicle’s systems while forming a protection layer that is independent of how those systems were developed, ensuring that, however a manufacturer designs the car, it meets all safety standards.
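The paper's AVC internals aren't detailed in this article; a minimal sketch of the idea, under the assumption that the layer vets every actuation command against shared limits (class names and limit values below are invented), might look like this:

```python
from dataclasses import dataclass

@dataclass
class Command:
    """An actuation request from the manufacturer's own control stack."""
    speed_mps: float
    steering_deg: float

class SafetyLayer:
    """Manufacturer-independent protection layer: clamps every command
    to limits shared across all vehicles, regardless of who built them."""
    MAX_SPEED_MPS = 33.0      # ~120 km/h, illustrative limit
    MAX_STEERING_DEG = 35.0   # illustrative limit

    def vet(self, cmd: Command) -> Command:
        # Clamp rather than reject: the vehicle keeps operating, but
        # never outside the independently certified envelope.
        speed = min(cmd.speed_mps, self.MAX_SPEED_MPS)
        steer = max(-self.MAX_STEERING_DEG,
                    min(cmd.steering_deg, self.MAX_STEERING_DEG))
        return Command(speed_mps=speed, steering_deg=steer)
```

Because the layer depends only on the `Command` interface, the same module could in principle be installed and certified once, then reused across manufacturers, which is the paper's central claim.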
Building autonomous vehicles with military-grade safety features
Self-driving cars aren’t just for the civilian world. The military will certainly use them, too.
The stakes for security just got higher.
Answers developed by the military could be used broadly, as with other technologies developed by the armed forces (such as the Internet).
In the High-Assurance Cyber Military Systems project, researchers are investigating how to construct complex networked-vehicle software to secure all manner of military vehicles.
The technology, while ideal for military vehicles whose systems could be hijacked in wartime, also shows great promise for vehicles used by private citizens in peacetime.
Experiments demonstrate that careful attention to requirements and system architecture, along with verified approaches that remove known security weaknesses, can produce vehicles able to withstand attacks even from sophisticated attackers who are already quite familiar with the vehicle’s design.
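The HACMS project relies on formal verification, which is beyond the scope of a short sketch. As a simpler illustration of removing one known weakness, here is a hedged example (the key and frame contents are placeholders) of authenticating vehicle-bus messages, so that an attacker who knows the design but not the key cannot forge commands:

```python
import hmac
import hashlib
from typing import Optional

# Placeholder key; a real system would use securely provisioned keys.
KEY = b"shared-secret-key"
TAG_LEN = 8  # truncated tag, a common choice on bandwidth-limited buses

def sign(frame: bytes) -> bytes:
    """Append a truncated HMAC-SHA256 tag to a bus frame."""
    tag = hmac.new(KEY, frame, hashlib.sha256).digest()[:TAG_LEN]
    return frame + tag

def verify(signed: bytes) -> Optional[bytes]:
    """Return the frame if its tag checks out, else None."""
    frame, tag = signed[:-TAG_LEN], signed[-TAG_LEN:]
    expected = hmac.new(KEY, frame, hashlib.sha256).digest()[:TAG_LEN]
    # compare_digest avoids leaking the tag via timing differences
    return frame if hmac.compare_digest(tag, expected) else None
```

Knowing the frame format and even this code does not help an attacker without the key, which is the "familiar with the design" property the researchers describe.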
Using smartphone tech to build semi- or fully-autonomous cars
Automakers are now planning the next big thing for smart vehicles, and they are looking at the world’s biggest digital successes—Google, Apple, and Amazon—for new ideas, according to BMW researchers Matthias Traub, Alexander Maier, and Kai L. Barbehön.
For example, researchers are studying what you are likely holding in your hand right now—your iPhone. Specifically, they are looking at the smartphone’s architecture, including Apple’s iOS operating system, to help provide personalization for each driver.
The prototypes for these semi- or fully-autonomous vehicles are being developed by virtually all major auto manufacturers and are expected to reach the market within the next few years.
One example of personalization is the evolution from simple cruise control to active cruise control, in which the car automatically slows down if it detects a slower car ahead. The driver can also set the following distance in seconds, typically two to four.
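The article doesn't specify any manufacturer's control law; a minimal sketch of the time-gap idea (function name and the simple slow-to-lead-speed rule are assumptions, not BMW's implementation) might look like this:

```python
def acc_target_speed(own_speed_mps: float,
                     lead_distance_m: float,
                     lead_speed_mps: float,
                     gap_s: float = 2.0) -> float:
    """Active cruise control sketch: if the lead car is closer than the
    driver-selected time gap, slow toward the lead car's speed;
    otherwise hold the set speed."""
    desired_gap_m = own_speed_mps * gap_s  # the gap is a *time*, so the
                                           # distance scales with speed
    if lead_distance_m < desired_gap_m:
        return min(own_speed_mps, lead_speed_mps)
    return own_speed_mps
```

At 30 m/s with a two-second gap, the desired distance is 60 m, so a lead car 40 m ahead triggers a slowdown; production systems smooth this with proportional control rather than an instant speed change.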
Building smart cars with an artificial intelligence platform that detects hidden cars
Researchers from the National Tsing Hua University and the Industrial Technology Research Institute in Taiwan are developing a vehicle detector that draws a grid around each vehicle called a “bounding box,” defined by two longitudes and two latitudes. Within that grid, the vehicle detector spots all vehicles, whether hidden or in plain view, and allows the car to stop or maneuver around them. A library of vehicle training images, whose appearances are randomly truncated, is stored in the system so the detector can better spot obstructed vehicles.
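The article doesn't describe the truncation procedure itself; the sketch below illustrates one common way such an augmentation could work (the function, the side-selection scheme, and the 40% cap are assumptions for illustration, not the researchers' method), by shrinking a bounding box from a random side to mimic a partially occluded vehicle:

```python
import random

def random_truncate(box, max_frac=0.4, rng=random):
    """Simulate occlusion for a training image: shrink a bounding box
    (x1, y1, x2, y2) from one randomly chosen side by up to max_frac
    of its width or height. Illustrative augmentation only."""
    x1, y1, x2, y2 = box
    side = rng.choice(["left", "right", "top", "bottom"])
    if side in ("left", "right"):
        cut = (x2 - x1) * rng.uniform(0.0, max_frac)
        return (x1 + cut, y1, x2, y2) if side == "left" else (x1, y1, x2 - cut, y2)
    cut = (y2 - y1) * rng.uniform(0.0, max_frac)
    return (x1, y1 + cut, x2, y2) if side == "top" else (x1, y1, x2, y2 - cut)
```

Training on boxes altered this way teaches the detector that a vehicle showing only part of its outline is still a vehicle, which is the intuition behind spotting "hidden" cars.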
Compared to other classical object detectors, their work achieves very competitive results, with an average precision (AP) of 85.32 and computational speeds of 30 to 48 frames per second on the NVIDIA Titan X and GP106 (DrivePX2).
Lori Cameron is Senior Writer for IEEE Computer Society publications and digital media platforms with over 20 years of technical writing experience. She is a part-time English professor and winner of two 2018 LA Press Club Awards. Contact her at l.cameron@computer.org. Follow her on LinkedIn.
About Michael Martinez
Michael Martinez, the editor of the Computer Society’s Computer.Org website and its social media, has covered technology as well as global events while on the staff at CNN, Tribune Co. (based at the Los Angeles Times), and the Washington Post. He welcomes email feedback, and you can also follow him on LinkedIn.