Can robots have significant moral status? This is an emerging question among roboticists and bioethicists. An article published in Science and Engineering Ethics (June 20, 2019) makes three contributions to the debate, the most important of which is a position called “behaviorism”, which holds that robots can have a moral status of their own; the article defends this position against seven specific objections.

Another article published in the journal AI & SOCIETY (June 15, 2020) also explores this question. Below, we excerpt some of the most interesting aspects to help better understand this controversial issue.

Only conscious beings can be objects of moral concern

The author analyzes different positions in the argument: “One of the more imminent concerns in the context of AI [artificial intelligence] is that of the moral rights and status of social robots, such as robotic caregivers and artificial companions, that are built to interact with human beings. In recent years, some approaches to moral consideration have been proposed that would include social robots as proper objects of moral concern, even though it seems unlikely that these machines are conscious beings. In the present paper, I argue against these approaches by advocating the ‘consciousness criterion’, which proposes phenomenal consciousness as a necessary condition for accrediting moral status.” From a bioethical point of view, this approach offers an objective criterion for attributing moral status.

The moral status of social robots

After explaining why consciousness is generally supposed to underlie the morally relevant properties (such as sentience), the author responds to some of the common objections against this view. He also examines three more inclusive alternative approaches to moral consideration that could accommodate social robots, and points out why they are ultimately implausible.

The well-argued and clearly structured article concludes “[…] that social robots should not be regarded as proper objects of moral concern unless and until they become capable of having conscious experience. While that does not entail that they should be excluded from our moral reasoning and decision-making altogether, it does suggest that humans do not owe direct moral duties to them.”

The ongoing debate on the moral status of social robots, spurred by new advances, should treat consciousness as the main factor to consider

As an epilogue, the author gives his opinion on this line of research and further debates: “It is, of course, greatly valuable that technological advances stimulate imagination and moral reasoning, which allows one, in Gunkel’s words, to ‘think otherwise’ about moral status and rights—think outside the box (Gunkel 2007; 2012).”


We encourage our readers to consult the study discussed above: Mosakas, K. On the moral status of social robots: considering the consciousness criterion. AI & Soc (2020).