A serious ethical dilemma is how to regulate the actions of the new “intelligent machines”. A specific case is that of “driverless” or “self-driving” cars, whose use raises moral dilemmas, especially with respect to possible deaths in potential accidents. Researchers at the Massachusetts Institute of Technology have designed a free online platform called “Moral Machine”, which presents dilemmas such as whose life we would choose to save in a hypothetical accident with one of these self-driving cars: an elderly person or a young one, a woman rather than a man, and so on. This is a serious ethical problem that will have to be resolved before these “intelligent machines” become part of our society (see our assessment of this particular issue).
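To make the kind of dilemma the platform poses more concrete, here is a minimal sketch in Python of how such a scenario and the crowd's answers could be represented and tallied. All names here are hypothetical illustrations; this is not the Moral Machine's actual data model.

    from dataclasses import dataclass
    from collections import Counter

    @dataclass(frozen=True)
    class Character:
        """One potential victim in a hypothetical accident scenario."""
        age_group: str   # e.g. "child", "adult", "elderly"
        gender: str      # e.g. "female", "male"

    @dataclass(frozen=True)
    class Dilemma:
        """A forced choice: the car can spare group A or group B, not both."""
        group_a: tuple
        group_b: tuple

    def tally_choices(responses):
        """Count how often respondents chose to spare each group."""
        return Counter(responses)

    # A Moral Machine-style question: spare the elderly man or the young woman?
    dilemma = Dilemma(
        group_a=(Character("elderly", "male"),),
        group_b=(Character("adult", "female"),),
    )
    responses = ["B", "B", "A", "B"]  # illustrative answers only
    print(tally_choices(responses))   # Counter({'B': 3, 'A': 1})

Aggregating many such forced choices is what lets the researchers study which lives respondents tend to prioritize; the ethical question is whether any such aggregate should ever be encoded into a vehicle's behaviour.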

Current use of intelligent machines

The technologies that surround us take many shapes and differ in their level of maturity and in their impact on our lives. A coarse categorization could be the following:

• Industrial robots: these have existed for many years and have made a huge impact on manufacturing. They are mostly preprogrammed by a human instructor and consist of a robot arm with a number of degrees of freedom.

• Service robots: robots that operate semi- or fully autonomously to perform useful tasks for humans or equipment, excluding industrial automation applications. They are currently used in health care and in selected settings such as transportation, lawn mowing and vacuum cleaning.

• Artificial intelligence: software that enables technology to adapt through learning, with the goal of making systems able to sense, reason, and act in the best possible way (see the sketch after this list). There has in recent years been a large increase in the deployment of artificial intelligence in a number of business domains, including customer service, decision support, and even some tasks in hospital care (Source: Frontiers in Robotics and AI).

• Self-driving cars: different types are coming to the market.

• The growing use of autonomous systems by the military – for example, the increased use of personal drones, reconnaissance robots, and augmented reality systems by ground troops in the US, UK, and elsewhere.
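To illustrate the “sense, reason, and act” cycle mentioned in the artificial-intelligence item above, here is a minimal sketch in Python of an agent loop on a toy thermostat problem. All names and rules are hypothetical; in a real system the rule in reason() would be replaced by a learned model, and sense() and act() by physical sensors and actuators.

    import random

    def sense(environment):
        """Read the current state of the world (here, a simple dict)."""
        return environment["temperature"]

    def reason(temperature, setpoint=21.0):
        """Decide on an action; a learned policy would replace this rule."""
        if temperature < setpoint - 1.0:
            return "heat"
        if temperature > setpoint + 1.0:
            return "cool"
        return "idle"

    def act(environment, action):
        """Apply the chosen action back to the world."""
        if action == "heat":
            environment["temperature"] += 0.5
        elif action == "cool":
            environment["temperature"] -= 0.5

    # Run a few sense-reason-act cycles on a toy environment.
    env = {"temperature": 18.0 + random.random()}
    for step in range(5):
        action = reason(sense(env))
        act(env, action)
        print(f"step {step}: {env['temperature']:.1f} °C, action={action}")

The loop structure is the same whether the agent is a vacuum cleaner or a self-driving car; what changes, and what raises the ethical questions discussed below, is how consequential the chosen actions are.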

AI ethics comments

We ask: just because something is technically possible, does that make it morally justifiable? In this regard, we cite an article titled “Robot ethics – tough questions in system design”, which concluded with three open questions: how society assures and regulates robots in a world of human laws; what a framework for ethical governance might look like; and whether the state should regulate on behalf of the people most affected by technologists’ research: the public.

Yet the underlying problem is easy to state, if not so easy to solve: the more we look to machines to augment human decision-making in a messy, emotional, complex, and illogical world, the more we realise that centuries of law are designed to protect us from other people, not from machines. That demands our urgent consideration (July 1, 2019).