According to the New York Times (15 May 2015), Google is experimenting with a fleet of 100 electric self-driving cars designed to drive themselves with complete autonomy (now with 1 million miles travelled). The user rides as a passenger and cannot even act as a co-pilot; the only things they can control are the “START” button and a red button called “E-STOP”, to stop the car in extreme cases. There are no other controls: no steering wheel, brake pedal or accelerator. The car is operated through a smartphone application which, much as with an SMS, receives the destination chosen by the user. The author of the article notes that fully self-driving cars are still far from everyday reality, but that the project aims to address an as yet unresolved issue: “the car itself might need to be programmed with a basic code of ethics, which presents some interesting dilemmas”.

A study of public attitudes towards this type of car, conducted by the University of Michigan under its “Sustainable Worldwide Transportation” programme, indicates that 23% of Americans would not ride in such vehicles, and 36% would be so apprehensive that they would only watch the road. Furthermore, of the remaining 41%, around 8% would frequently experience some level of motion sickness (see HERE).

By way of example, the author cites an article from the magazine WIRED (29 June 2015), in which the writer, Jason Millar, raises some of the ethical issues presented by self-driving cars, given that these robot cars could be the cars of the future (owing to their ecological characteristics) and because, it seems, they could avoid accidents caused by carelessness or human error.

Self-driving cars: a possible ethical dilemma

Millar presents an example of a possible ethical dilemma, which he calls the “Tunnel Problem”, proposing the following: “You are travelling along a single-lane mountain road in an autonomous car that is fast approaching a narrow tunnel. Just before entering the tunnel a child errantly runs into the road and trips in the centre of the lane, effectively blocking the entrance to the tunnel. The car is unable to brake in time to avoid a crash. It has but two options: hit and kill the child, or swerve into the wall on either side of the tunnel, thus killing you. Now ask yourself, Who should decide whether the car goes straight or swerves? Manufacturers? Users? Legislators?”

In Millar’s opinion, in a standard car a person’s moral decision would determine the option taken; in this type of vehicle, however, everything will depend on its programming, since neither the driver nor the passengers can intervene.

It is logical to think that users, manufacturers and legislators should anticipate a situation like this and program the car’s “behaviour” so that it makes the choice based on a predetermined code of ethics.
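To make this concrete, the following is a minimal, purely illustrative sketch of what such a predetermined code of ethics might look like once written into software. All names here (CollisionPolicy, EthicsConfig, choose_maneuver) are hypothetical and correspond to no real manufacturer’s code; the point is only that the answer to the tunnel problem is fixed in a configuration value long before any crash, so whoever sets that value is the party actually answering Millar’s question.

```python
from dataclasses import dataclass
from enum import Enum


class CollisionPolicy(Enum):
    """Hypothetical, predetermined answers to an unavoidable collision."""
    PROTECT_PASSENGER = "protect passenger"    # stay on course, hit the obstacle
    PROTECT_PEDESTRIAN = "protect pedestrian"  # swerve into the tunnel wall


@dataclass
class EthicsConfig:
    collision_policy: CollisionPolicy


def choose_maneuver(config: EthicsConfig) -> str:
    """Return the manoeuvre mandated by the pre-programmed policy.

    In the tunnel problem neither driver nor passengers can intervene:
    the choice is made at programming time, not at the moment of the crash.
    """
    if config.collision_policy is CollisionPolicy.PROTECT_PEDESTRIAN:
        return "swerve"    # sacrifices the occupant
    return "continue"      # sacrifices the pedestrian


# Whoever writes this one line -- manufacturer, user or legislator --
# is the one who decides whether the car goes straight or swerves.
config = EthicsConfig(collision_policy=CollisionPolicy.PROTECT_PEDESTRIAN)
print(choose_maneuver(config))  # -> swerve
```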

The self-determination of the passengers is still to be resolved. Millar suggests a solution to this problem: adopting the same approach as medical professionals, namely informed consent. He says, “In healthcare, when choices that bear a heavy ethical or moral weight must be made, it is standard practice for nurses and doctors to inform patients of the treatment options, side effects, and other associated risks, and let patients make their own decision. This same approach of informed consent can be applied to the engineering of the driverless cars.” The idea is that designers and engineers would have to inform users of how the car has been programmed, and users, in turn, would have to give their consent, accepting the vehicle’s code of ethics in advance so that it applies even in an extreme situation.
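Again as a hedged illustration (all names are hypothetical, not any real vehicle’s API), informed consent could be modelled as a gate: the car discloses its pre-programmed policy in plain language and refuses to begin a trip until the passenger has explicitly accepted it.

```python
class ConsentNotGivenError(Exception):
    """Raised when a trip is requested before consent has been recorded."""


class Vehicle:
    """Hypothetical driverless car that gates every trip on informed consent."""

    def __init__(self, policy_description: str):
        self.policy_description = policy_description
        self.consent_given = False

    def disclose_policy(self) -> str:
        # Plain-language disclosure, analogous to a medical consent form.
        return f"In an unavoidable collision this car will {self.policy_description}."

    def record_consent(self, accepted: bool) -> None:
        self.consent_given = accepted

    def set_destination(self, destination: str) -> None:
        if not self.consent_given:
            raise ConsentNotGivenError("Passenger has not accepted the policy.")
        print(f"Driving to {destination}.")


car = Vehicle("swerve to protect pedestrians, even at the occupant's expense")
print(car.disclose_policy())
car.record_consent(accepted=True)   # the passenger's informed consent
car.set_destination("the mountain tunnel")
```

The flow mirrors the medical practice Millar describes: disclosure first, an explicit recorded decision second, and only then the intervention itself.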

The most recent news comes from Stanford University News (22 May 2017): “Jason Millar, an engineer and postdoctoral research fellow with the Center for Ethics in Society, continues working on the CARS ethical programming project. He is tackling how to translate knowledge developed in academic and philosophical circles into the daily design work of technology and artificial intelligence products” (see HERE).

In our opinion, the dilemma presented here can be extended to other technological advances in robotics and artificial intelligence, which may present objective ethical problems that are still difficult to predict.
