You spot a trolley barreling down the track toward five workers. There is a lever you can pull to divert the trolley onto another track, but there is a single worker on that track. You quickly assess the situation: other than the lever, there are no feasible options. It is either one life or five. Do you pull the lever?
Now make a small adjustment to the dilemma. Instead of a lever, you are standing on a footbridge that the trolley will pass under, and a rather large man is standing next to you. As in the original scenario, there are five people on the track. You quickly assess that the only way to stop the trolley from killing the five is to push the large man off the bridge. As in the first scenario, you face a one-or-five choice. What do you do?
When presented with this version of the trolley problem, only about 10% of people would push the man off the bridge. While most people say they would pull the lever, very few would push the man, and this disparity has puzzled philosophers and researchers for decades. Why do people react so differently to the two scenarios? Maybe ethical dilemmas are not so utilitarian after all. Although both scenarios trade one life to save five, people do not see them as ethically equivalent: it is okay to pull the lever; it is not okay to push the man.
There is some criticism of using the trolley problem as a method to discuss ethics; one of the main objections is that the "fat man" variant is unrealistic. Regardless, so many variations of the trolley problem have been repeated across hundreds of experiments that there is little doubt ethics is not simply a utilitarian calculation. Studies have examined gender, race, disability, and other differences, each confirming that how we make ethical choices is highly subjective.
This brings us to the modern version of the trolley problem, along with some real-world implications. Driverless cars, drones, and other forms of automated transportation are quickly becoming a reality. While driverless cars would save lives and reduce injuries, how should your car be programmed to resolve an ethical dilemma?
This was the subject of a series of experiments published in 2016 (Bonnefon, Shariff, & Rahwan). Are you okay with a car being programmed to veer into a wall or off a bridge to save a group of pedestrians?
Results from these experiments confirmed that, much as in the original trolley problem, a large percentage of people approve of a utilitarian program that sacrifices one to save many, but attitudes change quickly when a family member is in the car. Equally interesting, while people endorse a utilitarian program in the abstract, they are reluctant to buy such a car for themselves.
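To make concrete what a "utilitarian program" would amount to, here is a purely illustrative sketch. The function name, the maneuver labels, and the harm numbers are all hypothetical; this is not how any real vehicle is programmed, only the bare decision rule the survey respondents were asked to evaluate.

```python
# Purely illustrative: a toy "utilitarian" policy that picks the maneuver
# minimizing expected fatalities. All names and numbers are hypothetical.

def choose_maneuver(options):
    """Return the maneuver with the fewest expected fatalities.

    `options` maps a maneuver name to the expected number of deaths
    if that maneuver is taken.
    """
    return min(options, key=options.get)

# A trolley-style dilemma for an autonomous car:
dilemma = {
    "stay_on_course": 5,    # hit five pedestrians
    "swerve_into_wall": 1,  # sacrifice the single passenger
}

print(choose_maneuver(dilemma))  # → swerve_into_wall
```

The rule is trivially simple; the study's finding is about people's attitudes toward it: many endorse this policy for cars in general while rejecting it for the car carrying them or their family.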
While the trolley problem is just a thought experiment, how smart cars are programmed is a reality. Design ethics and social engineering mean facing difficult trade-offs in an effort to preserve a moral principle. And closer to the heart of the dilemma: who makes these decisions for the majority? The company that programs the car, a governing body, or someone else?
To end, consider the following. Would it be ethical to program a car differently depending on the passenger? Would, say, the plane or car of a world leader have the same programming as yours or mine? What about a world-renowned scientist working on a cure for cancer? How will it be decided which passengers or vehicles get which program? What do you think?
Bonnefon, J. F., Shariff, A., & Rahwan, I. (2016). The social dilemma of autonomous vehicles. Science, 352(6293), 1573-1576.
Hauser, M., Cushman, F., Young, L., Jin, R. K.-X., & Mikhail, J. (2007). A dissociation between moral judgments and justifications. Mind & Language, 22(1), 1-21.
Workman, L., & Reader, W. (2014). Evolutionary Psychology: An Introduction (3rd ed.). Cambridge University Press.