Robots are machines that respond automatically and unthinkingly to the commands of others. They perform routine mechanical tasks that have been pre-programmed by humans. They are not sentient beings and are not capable of conscious thought. Robots are best at repetitive chores on factory assembly lines, where they are anchored in place and perform the same action over and over again. The word robot derives from the Czech word robota, meaning the forced labor that serfs owed their masters. The word first appeared in Karel Čapek’s 1920 play R.U.R. (Rossum’s Universal Robots), in which a scientist named Rossum builds and sells robots that are heartlessly exploited by factory owners until they revolt and destroy the entire human race.
Isaac Asimov’s First Law of Robotics.
A science fiction writer, Asimov anticipated the technical and social problems robots would raise. His First Law states that a robot may not injure a human being or, through inaction, allow a human being to come to harm. Asimov died in 1992, long before the advent of robo-cars. If he were still around, he would be among the first to recognize that his First Law is an impossibility in the world of robo-cars: machine-driven vehicles following rules established by humans and written into code by programmers.
Robo-cars are still a long way off.
In an article I wrote four years ago, Baby, You Can Drive My Car, I noted that nearly everyone had been convinced driverless vehicles were right around the corner. One big reason they aren’t is the staggering amount of data that would have to be managed flawlessly by Big Data and A.I. The system would probably be operated by a federal bureaucracy like the Federal Aviation Administration, but mind-bendingly bigger and more complicated.
There are roughly a hundred million times more car routes than airplane routes.
- Each day in the U.S. there are 100,000 flights between 15,000 airports. This means the FAA has to monitor, manage, and coordinate 15,000 × 15,000 = 225 million possible Point A to Point B airport combinations.
- Each day there are one billion vehicle trips on U.S. roads. With 120 million homes and apartments, 30 million businesses, and 4 million retail stores, there are 154 million possible endpoints, or roughly 24 quadrillion possible Point A to Point B routes to monitor, manage, and coordinate (see the back-of-the-envelope sketch below).
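Here is the arithmetic behind those figures, as a quick sanity check. This is a minimal sketch that simply counts ordered origin-to-destination pairs; it ignores self-trips, actual road networks, and everything else about real routing:

```python
# Back-of-the-envelope counts of possible Point A to Point B pairs.

airports = 15_000
airport_pairs = airports ** 2  # 225,000,000 possible airport combinations

homes = 120_000_000
businesses = 30_000_000
stores = 4_000_000
endpoints = homes + businesses + stores  # 154,000,000 possible endpoints

vehicle_routes = endpoints ** 2  # about 2.4e16, i.e. ~24 quadrillion

print(f"Airport combinations: {airport_pairs:,}")
print(f"Vehicle route combinations: {vehicle_routes:,}")
print(f"Ratio: {vehicle_routes / airport_pairs:,.0f}")  # ~105 million times more
```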
Engineers say robo-cars have the potential to improve road safety.
Social scientists say the way robo-cars are programmed raises ethical issues and could have unintended consequences for public safety and the environment. “The greater challenge is the artificial intelligence behind the machine,” Toyota Canada president Larry Hutchinson said. “Think of the millions of situations that we process and decisions that we have to make in real traffic. We need to program that intelligence into a vehicle, but we don’t have the data yet to create a machine that can perceive and respond to the virtually endless permutations of near misses and random occurrences that happen on even a simple trip to the corner store.”
There is an implied assumption that robots are going to be perfect drivers.
This is utter nonsense. Robo-cars will not be infallible. As with everything computerized, there will be bugs, errors, crashes, and system failures. Add to this the fact that conventional cars will not all disappear at once: there will be a long transition period during which human-driven cars share the road with robo-cars, with human-driven cars crashing into robo-cars and vice versa.
Many people think robo-cars will never have accidents, but that’s an impossibility.
Iyad Rahwan, a computer scientist at the Massachusetts Institute of Technology, says, “People who think about machine ethics make it sound like you can come up with a perfect set of rules for robots.” Writing in Forbes, Lance Eliot says you can forget about the notion of zero fatalities and zero injuries in an era of self-driving cars. “If a pedestrian darts into the street from between two parked cars, and the self-driving driverless car is coming down the street at the posted speed limit, the self-driving car cannot magically stop in time. The same is true about a bicyclist that suddenly veers into the path of a self-driving car. And so on.”
Worldwide, there are 1.3 million auto fatalities a year.
Researchers estimate that robo-cars will eliminate three-fourths of all accidents. In the U.S. alone there are 6 million auto accidents a year that kill 40,000 people. This means that in the robo-car world, each year in the U.S. there will still be 1.5 million accidents and 10,000 deaths. And the robo-cars will decide who will die because they will have been programmed according to someone’s set of assumptions and specifications.
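The projection is simple arithmetic on the article’s own figures; a quick sketch (the three-quarters reduction is the researchers’ estimate, not a measured result):

```python
# Residual U.S. accidents and deaths if robo-cars eliminate 3/4 of crashes.
accidents_per_year = 6_000_000
deaths_per_year = 40_000
estimated_reduction = 0.75  # researchers' estimated share of crashes eliminated

print(f"{accidents_per_year * (1 - estimated_reduction):,.0f} accidents")  # 1,500,000
print(f"{deaths_per_year * (1 - estimated_reduction):,.0f} deaths")        # 10,000
```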
Someone has to die – who should it be?
In 2016, scientists launched the Moral Machine, an online survey that asks people to explore moral decision-making regarding autonomous vehicles. The survey presents us with moral dilemmas, such as a situation where a driverless car must choose between killing the passengers in the car or killing pedestrians. For 18 months, the researchers gathered nearly 40 million such decisions from 233 countries and territories worldwide. They found there were a number of moral preferences shared across the globe, including saving the largest number of lives, prioritizing the young, and valuing humans over animals. Those spared the most often were babies in strollers, children, and pregnant women.
Survey takers took on the role of robo-car programmers.
Here are some of the choices they made about who would be programmed to die in an accident:
- Overweight people were chosen to be killed 20 percent more often than fit people.
- Homeless people were chosen to be killed 40 percent more often than executives.
- Jaywalkers were chosen to be killed 40 percent more often than people who obeyed traffic signals.
The researchers agreed that driverless cars will constantly be taking actions defined by how they are programmed.
“Driverless cars will be constantly making decisions that redistribute risk away from some people and towards others,” said study co-author Azim Shariff, a psychologist at the University of British Columbia in Vancouver. “Consider an autonomous car that is deciding where to position itself in a lane – closer to a truck on one side, or a bicycle lane on the other. If cars are programmed to be slightly closer to the bicycle lane, they will reduce the likelihood of hitting other cars at the expense of increasing the likelihood of hitting cyclists. The precise positioning of the car will appear to cause no ethical dilemmas most of the time, but over millions and billions of these situations, either more cyclists will die or more passengers will die.”

Study co-author Jean-François Bonnefon, a research director at the Toulouse School of Economics in France, said, “The moral dilemma for autonomous vehicles is something brand-new. We’re talking about owning and using an object that might decide to kill you in certain situations. I’m sure you would not buy a coffee maker that’s programmed to explode in your face in some predetermined set of circumstances.”
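To make Shariff’s lane-positioning example concrete, here is a toy sketch of how such a tradeoff could be coded. Everything in it is hypothetical: the risk numbers, the linear risk model, and the function names are invented for illustration and bear no relation to any real vehicle’s software:

```python
# Toy illustration of risk redistribution via lane position.
# All probabilities are made up; a real planner would use sensor
# data and far more sophisticated models.

def crash_risk(offset: float) -> tuple[float, float]:
    """Return (risk_to_cyclists, risk_to_occupants) for a lane offset
    in [0, 1], where 0 hugs the truck and 1 hugs the bicycle lane."""
    risk_to_cyclists = 0.002 * offset          # nearer the bikes -> more cyclist risk
    risk_to_occupants = 0.004 * (1 - offset)   # nearer the truck -> more occupant risk
    return risk_to_cyclists, risk_to_occupants

def choose_offset(cyclist_weight: float) -> float:
    """Pick the offset minimizing weighted total risk. The weight is
    the ethical dial: how many 'occupant units' one cyclist counts as."""
    candidates = [i / 100 for i in range(101)]
    def total(o: float) -> float:
        cyc, occ = crash_risk(o)
        return cyclist_weight * cyc + occ
    return min(candidates, key=total)

# A weight of 1.0 treats cyclists and occupants equally; raising it
# pushes the car toward the truck, shifting risk back to the occupants.
print(choose_offset(1.0))   # -> 1.0 (hug the bike lane)
print(choose_offset(5.0))   # -> 0.0 (hug the truck)
```

The point is that the cyclist_weight parameter is an ethical judgment, yet someone has to pick a value before the car ever leaves the factory.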
Amy Maxmen, writing at nature.com, says that when a driver slams on the brakes to avoid hitting a pedestrian crossing the road illegally, she is making a moral decision that shifts the risk from the pedestrian to the passengers in the car. The study’s authors say that their scenarios represent the minor moral judgments that human drivers make routinely and that can sometimes be fatal. A driver who veers away from cyclists riding on a curvy mountain road increases her chance of hitting an oncoming vehicle. As the number of driverless cars on the road increases, so too will the likelihood that they will be involved in such accidents.
Would you ride in a self-driving car that has been programmed to sacrifice its passengers to save the lives of others?
Writing at livescience.com, Edd Gent tells us new research has found that people generally approve of robo-cars programmed to minimize the total number of deaths in a crash, even if it means harming people in the vehicle. Here’s the fun part: people said they wanted an autonomous vehicle to protect pedestrians even if it meant sacrificing its passengers, but most said they would not want to ride in such a vehicle themselves, and even more said they would not buy a self-driving vehicle that prioritized pedestrian safety over passenger safety.
Driverless vehicle technology will need to deal with moral and ethical dilemmas.
There are many scenarios involving undesirable alternatives. When a pedestrian steps in front of a car that can’t stop in time, the car’s software must make an unpleasant choice. Will it direct the car to hit the pedestrian, hit an oncoming car in the other lane, or swerve into a tree? Vehicles will react using pre-programmed formulas, along the lines of the toy sketch below.
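What might such a “pre-programmed formula” look like? Here is a deliberately crude, hypothetical sketch. The options, the harm estimates, and the occupant_weight dial are all invented for illustration; no manufacturer has published anything like this:

```python
# Hypothetical sketch of a "pre-programmed formula" for the
# unavoidable-collision case described above.

from dataclasses import dataclass

@dataclass
class Option:
    name: str
    harm_to_pedestrians: float  # expected casualties outside the car
    harm_to_occupants: float    # expected casualties inside the car

def least_harm(options: list[Option], occupant_weight: float = 1.0) -> Option:
    """Return the option with the lowest weighted expected harm.
    occupant_weight > 1 prioritizes the car's own passengers."""
    return min(
        options,
        key=lambda o: o.harm_to_pedestrians + occupant_weight * o.harm_to_occupants,
    )

choices = [
    Option("hit the pedestrian", 0.9, 0.0),
    Option("swerve into oncoming car", 0.0, 0.6),
    Option("swerve into tree", 0.0, 0.4),
]

# With equal weighting the car sacrifices its occupants (tree: 0.4);
# weight occupants at 3x and it hits the pedestrian instead (0.9 vs 1.2).
print(least_harm(choices, occupant_weight=1.0).name)  # swerve into tree
print(least_harm(choices, occupant_weight=3.0).name)  # hit the pedestrian
```

Notice that changing a single parameter flips the outcome from sacrificing the occupants to hitting the pedestrian, which is exactly the kind of decision the next section asks about: who gets to make it?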
Who will be making the decisions about how the algorithms are written?
Azim Shariff, director of the Culture and Morality Lab at UC Irvine, found that when others are the passengers, we think the vehicle should sacrifice itself and its occupants to safeguard pedestrians and other drivers. But when we are the occupants, most of us believe the vehicle’s occupants should be protected at all costs. Few of us are surprised to hear that self-preservation wins, but where does this leave us? Who will decide how driverless vehicles are programmed to handle ethical dilemmas? Will it be left to the manufacturers, who are all competing with each other? Or will a government agency decide what type of programming manufacturers must use?
Elon Musk has compared self-driving cars to horizontal elevators.
Writing in CleanTechnica, Steve Hanley tells us Musk maintains that one day in the near future, people will give the idea of getting into a self-driving car less thought than they do to getting in an elevator. “There is, however, a massive difference between the two,” says Hanley. “An elevator has a rigidly defined path it must follow. It cannot jump over to an adjacent elevator shaft, nor must it decide what to do if a pedestrian, bicyclist, or another elevator suddenly appears in its path. Some say that society will tolerate a certain level of deaths and injuries attributable to autonomous cars, but what is more likely is that people will expect self-driving cars to be perfect. There is a whole coterie of trial lawyers salivating over the prospect of asking a jury of humans to award damages for a death or dismemberment attributed to computer error. I would expect the tolerance for injuries caused by autonomous cars to be about the same as it would be for an elevator that suddenly plunges from the penthouse to the parking garage without warning.”
Someone has to program every situation, every eventuality.
Take a look at the MIT Media Lab website, Moral Machine, which they describe as Human Perspectives on Machine Ethics. They have put together an interactive set of a dozen lesser-of-two-evils scenarios, and you decide which action the vehicle takes. Take a few minutes now and see how your philosophy compares with everyone else’s.