
A SAFETY ETHICAL DILEMMA

Mason Unger

MY CASE

My name is Mason Unger, and I am a software engineer at Google. I have worked on Google’s self-driving car project for the past four years, and I have been tasked with researching the safety of our vehicles. I often ride in our test vehicles and monitor how our cars make decisions; then, based on my observations, I modify our software to optimize safety. Over the past two years, our cars have driven over two million miles. Over that span, I have recorded 20 accidents in total. Most of these accidents were deemed not to be the fault of our cars: they occurred because other drivers were not paying attention (i.e., using handheld devices) or were so amazed by our car that they lost focus and crashed into it. One accident, however, left our test driver in the hospital.

Our car was driving down the road when a child ran out in front of it. To avoid the child, the car engaged its brakes, and the car following too closely behind it swerved into oncoming traffic. This caused an oncoming car to swerve into the Google car. Four other situations like this one occurred, leaving our test drivers injured but not hospitalized. In summary, our cars have experienced twenty crashes. Five of these crashes posed a serious threat to our drivers, while the other fifteen would be classified as minor crashes with little risk of injury.

After reporting this at our latest safety board meeting, I was called into the office of Chris Urmson, head of the self-driving program. There we talked about the upcoming Google unveiling, where we would be presenting our self-driving cars and all information associated with them. Urmson saw safety as the number one concern of the public. He feared that if the public deemed our car unsafe, we would lose their interest, and the program could be terminated. He then told me not to report the five major accidents.

The next day, I was having second thoughts about withholding information from the public, even if doing so benefited our project. I talked to Urmson again. I pointed out that even if we report all twenty accidents, that is still only one accident for every 100,000 miles (the arithmetic is shown below), which is approximately the average for American drivers; in that sense, our cars are just as safe as conventional drivers. Urmson countered that the five major accidents would scare the public. He thought that even one life-threatening accident would turn the public’s opinion against us. I responded that we only needed more time to optimize the system; after all, the major accidents occurred within the first year of test driving. Urmson replied that he had promised our self-driving cars would be consumer ready within the next year, and a setback like this would make that deadline impossible to meet. In order to make the deadline, I was to falsify our safety data. I had a choice to make: either falsify the data and endanger human lives, or present the true data and possibly lose my job. I decided to consult codes of ethics, similar case studies, and consumer opinions to see if they could give me any advice in this situation.
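For clarity, the accident rate I quoted to Urmson follows directly from the mileage and accident counts reported above:

\[
\frac{2{,}000{,}000\ \text{miles}}{20\ \text{accidents}} = 100{,}000\ \text{miles per accident}
\]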

CODES OF ETHICS

Codes of ethics are sets of rules that govern the disciplines of engineering. These codes are helpful to engineers when ethical decisions must be made. The National Society of Professional Engineers (NSPE) has a code of ethics that governs all engineering disciplines.

The first fundamental canon states that engineers must always hold the safety, health, and welfare of the public above all else [4]. In my situation, if I were to lie to the public and our cars were to go on the road, I would be putting the public’s safety at risk. A by-law of the first canon states that if an engineer’s judgment on the public’s safety is overruled, he or she should notify the proper authorities [4]. My judgment as an engineer was overruled when Urmson stated that the cars would be safe enough. This by-law reassures me that I am right and that I must find a higher authority. Because of this, I now have a third option: I could go to Google’s CEO, Sergey Brin, and tell him the situation. A second by-law states that engineers cannot reveal information without the consent of their employer, unless that information is required by the code [4]. This statement puts me in a tough spot. The only way I could reveal the information is if I were given consent to do so, or if withholding the information were in violation of the code.

The third canon in the NSPE code of ethics states that all public statements must be objective and truthful [4]. This shows that withholding information is against the code. If I were to give the public false information, I would be violating the third canon. This gives me the right to present the public with my true information, because doing otherwise would be a violation. The NSPE code of ethics shows that keeping the information from the public is immoral, but it does not help me analyze the consequences of doing so. If I present my true findings, the public will likely abandon the idea of self-driving cars. This would shut our project down, and my co-workers would all lose their jobs. This is just a theory of mine, so I will continue to analyze sources that will help me make the best decision.

The second code of ethics that I chose to consult was the Software Engineering (SE) code of ethics. This code directly governs my branch of engineering. One key point in this code states that software engineers must take full responsibility for their work [6]. This means that I am directly responsible for the safety of the Google cars. It also implies that I am responsible for the safety information and, by extension, responsible for telling the truth in matters regarding the safety of our cars.

The code also reads, “Software engineers shall approve software only if they have a well-founded belief that it is safe, meets specifications, passes appropriate tests, and does not diminish quality of life, diminish privacy or harm the environment. The ultimate effect of the work should be to the public good” [6]. Obviously, our cars need more testing. As a software developer and safety manager, I cannot let an unsafe product put innocent people’s lives in danger. If I were to present false information, our cars would become commercially available before they are safe.

The code states that I must disclose to appropriate persons any actual or potential danger to the public, and also that I must avoid deception when addressing the public [6]. I have already raised my concern with the leader of our self-driving project, Chris Urmson, but he is the person who told me to falsify my records. This means that the next person I would have to speak to is Google’s CEO, Sergey Brin. According to the SE code of ethics, it would be immoral for me to present falsified information to the public.

These codes have helped me determine the morality of one of my choices, but I must consult other sources before I make a decision. It is clear that the engineering codes of ethics hold that it is unethical to falsify information in a way that would harm the public. However, even if an action is immoral by the codes, that alone may not make it the wrong choice; I must also weigh the repercussions for me, my co-workers, and the company before I make a decision.

A SIMILAR CASE STUDY

A case study that closely mirrors my dilemma was recently written by Michael McFarland. In his case study, Wayne Davidson, a software engineer for an aerospace engineering firm, is in charge of determining the safety of a flight-tracking program [3]. He deems the program unsafe for commercial use because it often loses track of planes, which could result in crashes and put the public at risk, but his boss assures him that the company purchasing the software, the Federal Aviation Administration (FAA), will run many tests before the program is actually put into use [3].

This gives Wayne’s company time to fix the program and update the software for the FAA before the agency begins to use it at large scale. His boss claims that because the FAA will only be running tests and small-scale simulations for the next month, the program is safe [3]. Also, if Wayne’s company does not deliver the program, it will lose its contract with the FAA, and thousands of jobs will be lost [3]. Ultimately, Wayne chose to present the software as is and fix it later [3]. This caused no problems, and the software was fixed before it was used at large scale [3].

In my situation, I know that the software used to control our self-driving cars is unsafe. My boss has asked me to give false information that shows our cars to be safe. In my case, however, I may not have enough time to make the cars completely safe before Google’s promised launch date. We could continue to update the vehicles, but for the first year of production, the cars may still have bugs that cause serious accidents. Like in the case study, though, if we show that our cars are unsafe, we may lose public interest, funding, and our jobs. It would seem that if we could correct the problem before the car was made commercially available, this could turn out positively, just like the case study.

PUBLIC OPINION

In order to see this dilemma from a different perspective, I decided to talk to two people. The first person, Justin Shaffer, is an avid car lover [2]. He works as a mechanic and has been working with cars for the past thirty years [2]. I wanted to ask Justin what he thought about self-driving cars and their safety.

He stated that he would trust a computer to drive and avoid accidents more than he would trust himself [2]. This was very shocking to me. I asked him what he would think if these cars got into just as many accidents as human drivers. He stated, “The positives from not having to drive outweigh the negatives” [2]. As a car enthusiast, I expected Justin to be completely against self-driving cars. His only concern is that he would not buy a self-driving car unless it had a steering wheel, because being able to drive when he wants to is very important to him [2]. So if a conservative car lover would not be fazed by self-driving cars, what would the general public think?

To answer this question, I decided to talk to my friend Ashley Shaffer. Ashley has earned her master’s degree in social work [1]. She works as a social work researcher at Shippensburg University [1]. Her schooling and research have made her very good at understanding the public and its behavior.

I asked her what she thought the public would think if I presented all of the safety data. She stated, “The public would react poorly. Who would want to get in an accident that they had no control in? Even though it’s not true, most people think that if they are in control, they can prevent harm” [1]. I then asked her what she thought I should do. She said that I should go to a higher authority, and if Google’s CEO still wants me to present the false data, I should listen to him, because then I am no longer as liable for the information [1]. She also recommended that I look at other ethical sources that were not engineering related but instead analyzed ethical dilemmas holistically [1].

ANOTHER ETHICAL SYSTEM

To analyze this problem from a wider scope, I decided to consult an ethical system that is not associated with engineering. I chose to look at utilitarianism, and in particular the work of Jeremy Bentham. Bentham was a British philosopher in the late 1700s [7]. I chose to apply Bentham’s ethical system because it seems to fit my dilemma.

Bentham’s ethics are based on his greatest happiness principle [7]. This principle states that the best course of action is the one that causes the most happiness and the least pain [7]. The only way to determine the correct action is to weigh the happiness of everyone involved [5]. This process is called felicific calculus or hedonistic calculus [5]. In order to use this system, you must understand the seven areas that are analyzed for each person or group of people: intensity (how intense is the pleasure or pain?), duration (how long does it last?), certainty (how certain are you that it will occur?), propinquity (how soon will it occur?), fecundity (will it lead to further pleasures or pains?), purity (is the pleasure free of pain?), and extent (how many people are affected?) [7]. Each of these values can be assessed on a scale from -10 (pain) to 10 (pleasure) [5]. The action with the highest overall score is the action that should be pursued, according to Bentham [5].

I decided to analyze the three choices that I had: tell the public the truth, lie to the public, or go to Google’s CEO, Sergey Brin. After using Bentham’s felicific calculus, it is clear that talking to Sergey Brin should be my course of action. In the first scenario, more people were harmed by the truth, even though in telling the public the truth I was following the codes of ethics; the public also gains no happiness in this case. In the second scenario, the lies caused harm because of the number of consumers affected and the seriousness of the harm. Going to Sergey Brin causes neither pain nor pleasure, so in this case it is the right course of action according to Bentham.
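To make this comparison concrete, below is a minimal Python sketch of the simplified calculus described above: each option is scored across Bentham’s seven areas on the -10 (pain) to 10 (pleasure) scale, and the scores are summed. The individual numbers are hypothetical placeholders of my own choosing, meant only to illustrate how the calculus ranks the options.

```python
# A minimal sketch of the simplified felicific calculus described above.
# All numeric scores are hypothetical illustrations, not measured values.

# Bentham's seven areas, each scored from -10 (pain) to 10 (pleasure).
AREAS = ["intensity", "duration", "certainty", "propinquity",
         "fecundity", "purity", "extent"]

# Hypothetical scores for each of my three options.
options = {
    "tell the public the truth": {
        "intensity": -4, "duration": -6, "certainty": -7, "propinquity": -5,
        "fecundity": -6, "purity": -3, "extent": -8,  # project ends, jobs lost
    },
    "lie to the public": {
        "intensity": -8, "duration": -7, "certainty": -5, "propinquity": -2,
        "fecundity": -9, "purity": -6, "extent": -9,  # unsafe cars harm many consumers
    },
    "go to Sergey Brin": {
        "intensity": 0, "duration": 0, "certainty": 1, "propinquity": 1,
        "fecundity": 2, "purity": 1, "extent": 0,  # little pain or pleasure
    },
}

def felicific_score(scores: dict) -> int:
    """Sum an option's scores across all seven areas."""
    return sum(scores[area] for area in AREAS)

# Rank the options; the highest total is the action Bentham would recommend.
for name, scores in sorted(options.items(),
                           key=lambda kv: felicific_score(kv[1]),
                           reverse=True):
    print(f"{name}: {felicific_score(scores)}")
```

With these placeholder scores, going to Sergey Brin comes out well ahead of the other two options, which matches the reasoning above.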

CONCLUSION

After reviewing engineering codes of ethics, a case study, consumer opinions, and an ethical system outside of engineering, it is clear that I should seek out Sergey Brin. The codes of ethics support this decision because they both directly state that I must disclose potential danger to the public to the appropriate people (in this case, Brin) [4], [6]. In a similar case study, a software engineer was faced with a comparable dilemma: his program was deemed unsafe, but he had enough time to fix it, so he decided to sell the unsafe software in hopes of fixing it later [3]. This saved many jobs [3]. In my case, I may not have enough time to fix the safety issues associated with our car, so it is logical that I should go to someone in charge. One person I interviewed, Justin Shaffer, stated that he would still feel safe in a Google car [2]. The second person, Ashley Shaffer, stated that the public would react poorly to the safety report, and also that I should talk to Google’s CEO [1]. Finally, Jeremy Bentham’s felicific calculus shows from a different perspective that speaking to Sergey Brin about the safety data is the correct choice [5], [7]. After I met with Brin, he decided that we should postpone our release date and the date of our presentation until we can make our cars consumer ready.

RECOMMENDATION

Engineering is full of ethical decisions. When faced with a dilemma, engineers must consult as many sources as possible. Codes of ethics can be a great start, but just because an action is unethical according to a code does not make it the wrong choice. You must factor in the people affected and the effects of your actions, which could include the loss of funding, jobs, or even lives. Engineers must also consult a variety of sources. If you only consult engineering-based sources, you may come to a different conclusion than if you had considered the situation as a whole. I recommend that you look at public opinions; the reason we do engineering projects is to help the public, so it makes sense to consult your consumers. Finally, an engineer faced with an ethical dilemma should consult ethical systems that are not related to engineering. This can give a broader view of the whole situation. By consulting a variety of sources, one can make the correct decision, and as engineers, it is our duty to make the correct decision.

REFERENCES

[1] A. Shaffer. (2015, October 30). Interview.

[2] J. Shaffer. (2015, October 28). Interview.

[3] M. McFarland. (2012, June). “Occidental Engineering Case Study: Part 1.” Markkula Center for Applied Ethics. (Online article).

[4] “National Society of Professional Engineers Code of Ethics.” (N.D.). National Society of Professional Engineers. (Online article).

[5] “Philosophy 302: Ethics, The Hedonistic Calculus.” (N.D.). Lander Philosophy. (Online article).

[6] “Software Engineering Code of Ethics.” (N.D.). Computer Engineering Society.

[7] W. C. Mitchell. (1918, June). “Bentham’s Felicific Calculus.” Academy of Political Science.


ACKNOWLEDGEMENTS

I would like to thank Anjali Sachdeva, Ashley Shaffer, Ethan Henderson, and Nowa Broner for their encouragement and advice.
