Can robots have significant moral status? This is an emerging issue among roboticists and bioethicists. An article published in Science and Engineering Ethics (June 20, 2019) makes three contributions to this debate, the most important being "behaviorism," the view that robots can have their own moral status, which the article defends against seven specific objections.

A more recent large-scale review, published in the journal AI & SOCIETY (June 15, 2020), takes up the same debate; we excerpt here what seems most useful for understanding this controversial issue.

Only conscious beings can be objects of moral concern.

The author analyzes different positions: "One of the more imminent concerns in the context of AI is that of the moral rights and status of social robots, such as robotic caregivers and artificial companions built to interact with human beings. In recent years, some moral consideration approaches included social robots as proper moral concern objects, even though it seems unlikely that these machines are conscious beings. In the present paper, I argue against these approaches by advocating the “consciousness criterion,” which proposes phenomenal consciousness as a necessary condition for accrediting moral status." From a bioethical point of view, this approach offers an objective criterion for taking a well-grounded position on the matter.

The moral status of social robots

After explaining why consciousness is generally supposed to underlie the morally relevant properties (such as sentience), the article responds to some of the common objections against this view. The author also examines three more inclusive alternative approaches to moral consideration that could accommodate the moral status of social robots and points out why they are ultimately implausible.

The well-founded, well-structured article concludes "[…] that social robots should not be regarded as proper objects of moral concern unless and until they become capable of having conscious experience. While that does not entail that they should be excluded from our moral reasoning and decision-making altogether, it does suggest that humans do not owe direct moral duties to them."


The ongoing debate, driven by new advances in the field, should take consciousness as the main factor to consider


As an epilogue, the author offers an opinion on this line of research and on further debates: "It is, of course, greatly valuable that technological advances stimulate imagination and moral reasoning, which allows one, in Gunkel’s words, to “think otherwise” about moral status and rights—think outside the box (Gunkel 2007; 2012)."

We suggest our readers consult the aforementioned study: Mosakas, K., "On the moral status of social robots: considering the consciousness criterion," AI & Soc (2020). https://doi.org/10.1007/s00146-020-01002-1.