Self-Driving Ethical Dilemmas Essay

With the rising presence of machines in society, it has become necessary to decide how to build a moral processing system into them. Self-driving cars long seemed like a dream, always appearing futuristic but unfathomable in the present world. As recently as 2014, however, Google and a few other companies received permission to test their self-driving cars in California. At this time, no such cars are available for purchase.

The integration of self-driving cars into society will pose moral dilemmas, legal dilemmas, and dilemmas in self-defense; resolving these dilemmas will call for a determination of the degree to which the cars should be implemented. The introduction of new technology, such as self-driving cars, into any previously established aspect of life inevitably invites misuse, which creates the dilemmas in self-defense. One important question is how the car should respond to an attempted act of vandalism.

Speeding away, alerting the police, remaining at the crime scene to preserve evidence, or defending itself could all be valid options (Linn). In drafting new laws, human self-defense law could be used to determine when the car should defend itself. Human self-defense laws are the rules that determine to what degree a person may legally fight back. Self-defense is allowed in the presence of an immediate threat to someone’s well-being, with the stipulation that the response be proportional to the perceived threat.
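To make the proportionality requirement concrete, one can picture the car selecting the mildest response that still covers the perceived threat. The Python sketch below is a hypothetical illustration only: the threat levels, thresholds, and function names are invented for this essay and do not reflect any real vehicle’s software.

    # Hypothetical sketch: choosing the mildest of the responses listed
    # above (Linn), in proportion to a perceived threat level. The levels
    # and thresholds are invented for illustration.
    RESPONSES = [
        (1, "remain at the scene to preserve evidence"),
        (2, "alert the police"),
        (3, "speed away"),
        (4, "defend itself"),  # reserved for the gravest immediate threats
    ]

    def respond(threat_level: int) -> str:
        # Proportionality: return the least forceful response whose
        # threshold covers the perceived threat.
        for threshold, action in RESPONSES:
            if threat_level <= threshold:
                return action
        return RESPONSES[-1][1]  # cap at the strongest response

Under this scheme, respond(2) yields “alert the police,” and only a threat rated above 3 triggers self-defense, mirroring the legal requirement that force be a last, proportional resort.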

Some states require a person to attempt to retreat before resorting to self-defense, while other states impose no such stipulation and allow the individual to stand his or her ground (“Self-Defense”). An individual may also use lethal force against someone unlawfully entering his or her home, a principle known as the castle doctrine (“Self-Defense”). The car might treat its interior as its “home” and disregard the safety of the aggressor, creating a disconnect over who is responsible for any injury sustained.

Nobody is perfect, and any new system inevitably invites exploitation, which is what creates these dilemmas in self-defense. Making laws for robots not operated by humans raises many possible legal dilemmas as well. Establishing liability laws before self-driving cars are rolled out for public use, in an effort to set standards, is a necessary burden. Lawmakers could, for example, exempt manufacturers from blame if any changes are made to a car’s programming after sale.

They could also exempt mechanics and repair shops from liability for repairs made to self-driving cars. This would effectively remove responsibility from the parties most likely to be blamed. Defining the person who turned the vehicle on as its operator, and therefore as the one in control of the car in a crash, is a critical step in establishing standards for liability (“Autonomous”). Such rules were used in different states when establishing guidelines for testing, so it can be assumed that similar laws would be implemented at the car’s release.

Volvo, a reputable car company, has stated that it will accept full liability when its cars are in autonomous mode. The company urged lawmakers to resolve the issue of liability in crash situations, stating, “If we made a mistake in designing the brakes or writing the software, it is not reasonable to put the liability on the customer … We say to the customer, you can spend time on something else, we take responsibility” (Harris). Volvo is the first company to make this promise.

Volvo will not take responsibility if the person causes the crash, nor will it take responsibility if someone else runs into the car; in those cases, the responsibility falls on the driver at fault (Harris). The line between human and machine must be walked delicately in order to write these laws properly. Humankind has an unwritten code of ethics, but the question of robot ethics raises moral dilemmas.

The ethical dilemma is best presented with the following situation: “If a person falls onto the road in front of a fast-moving self-driving car, and the car can either swerve into a traffic barrier, potentially killing the passenger, or go straight, potentially killing the pedestrian, what should it do?” (Lubin). If the car saves the passenger, that raises the question of whether the “correct life” was taken. If it saves the pedestrian, that creates a serious disconnect between the car and its owner; obviously, no car owner wants to interact daily with a car that might kill them.

The fact that a purchased vehicle might kill its owner in certain situations would also lead to a significant drop in sales (Gent). When people were surveyed about this situation, they responded in a utilitarian manner: do whatever saves the most people. In contrast, when the same group was asked about the situation involving their own relatives, they preferred to save themselves and their families (Gent). A set of well-established rules should take human choice in the moment out of the picture.
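The survey result can be restated as two competing objective functions. The following Python sketch is purely illustrative; the function names and the idea of scoring maneuvers by expected fatalities are assumptions made for this essay, not a description of any real software.

    # Two objectives implied by the survey responses (Gent). Hypothetical
    # illustration only; each maneuver is scored by expected fatalities.

    def utilitarian_choice(outcomes):
        # outcomes: dict mapping maneuver -> expected number of deaths.
        # "Whatever saves the most people" = minimize expected deaths.
        return min(outcomes, key=outcomes.get)

    def self_protective_choice(outcomes, endangers_my_family):
        # The same respondents, asked about their own relatives, preferred
        # maneuvers that spare themselves and their families first.
        safe = [m for m in outcomes if not endangers_my_family(m)]
        return min(safe or outcomes, key=outcomes.get)

The gap between the two functions is exactly the gap the surveys exposed: people endorse the first rule for everyone else and the second for themselves, which is why the rule must be fixed in advance rather than left to the person in the car.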

The Robot Laws of Morality would be a good starting point for a plan to program robot ethics. “Law 0: A robot may not injure humanity or, through inaction, allow humanity to come to harm” (Trappl 2). “Law 1: A robot may not injure a human being or, through inaction, allow a human being to come to harm, except where such actions would conflict with the Zeroth Law. Law 2: A robot must obey orders given to it by human beings except where such orders would conflict with the Zeroth or First Law. Law 3: A robot must protect its own existence as long as such protection does not conflict with the Zeroth, First, or Second Law” (Wallach and Allen 1).
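Read as a program, the four laws form an ordered list of constraints checked from highest priority (Law 0) to lowest (Law 3). The Python sketch below is a hypothetical rendering for this essay; the data structure and names are invented, though the ordering follows the laws as quoted above.

    # Hypothetical sketch of the laws as a strict priority ordering.
    # Each candidate action is flagged with the laws it would violate,
    # and a lower-priority law matters only to break ties among actions
    # that are equal on every higher-priority law.
    from dataclasses import dataclass

    @dataclass
    class Action:
        name: str
        harms_humanity: bool   # would violate Law 0
        harms_a_human: bool    # would violate Law 1
        disobeys_order: bool   # would violate Law 2
        destroys_self: bool    # would violate Law 3

    def choose(actions):
        # Sorting tuples of booleans (False < True) enforces the
        # lexicographic priority Law 0 > Law 1 > Law 2 > Law 3.
        return min(actions, key=lambda a: (a.harms_humanity, a.harms_a_human,
                                           a.disobeys_order, a.destroys_self))

For example, choose would prefer an action that sacrifices the car (violating only Law 3) over one that injures a bystander (violating Law 1), because the tuple comparison settles the higher laws first.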

The vague nature of Law 1, which forbids allowing “a human being to come to harm,” might create many problems if this ethical system were publicly known. If a car had to swerve into one of two motorcyclists, one wearing a helmet and one without, it would choose the helmeted rider because that collision would likely cause less harm. A swarm of questions arises if Law 1 is universally known. Would knowledge of the rule cause motorcyclists to stop wearing helmets? Would it reduce the sale of big cars? Would it reduce sales in general? Will manufacturers put a personalized option system on the dashboard so the user can decide how the car makes decisions?

Given the average American’s propensity for personalized options, decision making on the dashboard seems likely, and it also reduces the chance that any of the other situations becomes an issue (Trappl 6). Ironically, this unchanging facet of the American identity saves self-driving car users from most of the moral dilemmas. Working through these dilemmas has made one thing apparent: if already established American laws and cultural norms are followed meticulously, there is not much that can go wrong.
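A dashboard ethics setting could be as simple as a stored policy that the planner consults in an emergency. The sketch below is again hypothetical: the policy names, the scoring of maneuvers by expected fatalities, and the occupant-risk table are all assumptions made for illustration.

    # Hypothetical dashboard "ethics setting." The owner selects a policy
    # once; the planner dispatches to it when a crash is unavoidable.
    class EthicsSetting:
        def __init__(self, policy="save_the_most_people"):
            self.policy = policy  # the owner's choice from the dashboard

        def choose(self, outcomes, occupant_risk):
            # outcomes: maneuver -> expected fatalities overall
            # occupant_risk: maneuver -> expected harm to the car's occupants
            if self.policy == "protect_occupants_first":
                return min(outcomes,
                           key=lambda m: (occupant_risk[m], outcomes[m]))
            return min(outcomes, key=outcomes.get)  # default: utilitarian

An owner who selects protect_occupants_first gets a car that ranks maneuvers by danger to its own passengers before counting anyone else, while the default setting reproduces the utilitarian rule the surveys endorsed; either way, responsibility for the choice shifts visibly to the person who set the dial.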

Humans committing crimes is nothing new and is hard to prevent entirely, so the right laws need to be put in place to diminish abuse of the cars’ new technological systems. The line between human and machine may be blurred, but blurring that line will be an important step in establishing laws for objects not operated by humans. Last, the inevitable desire for personalization may finally quiet the long-lasting debate about robot ethics.