Last month, on February 17, The Washington Post published an article entitled “Can Computer Algorithms Learn to Fight Wars Ethically? Maybe the autonomous weapons being developed by the Pentagon will be better than humans at making moral decisions. Or maybe they’ll be a nightmare come to life” (read the full article HERE).
The issue poses an important bioethical dilemma: the arms race faces a new scenario with the development of AI-powered weapons. In this respect, the article presents a short update on the panoply of lethal autonomous weaponry under development today and the ethical problems that may arise from its use. The main argument of Western countries for developing this type of weaponry is the well-known principle that a balance of forces among the great powers would avert a global confrontation and maintain a relatively stable status quo.
To see the relevance of this issue, last March 1, BBC News published an article with the title “Biden urged to back AI weapons to counter China and Russia threats” (read the full article HERE). In this respect, a report of the European Parliament states: “The Russian Military Industrial Committee has already approved an aggressive plan whereby 30% of Russian combat power will consist of entirely remote-controlled and autonomous robotic platforms by 2030” (EPRS | European Parliamentary Research Service, Scientific Foresight Unit (STOA), March 2020, p. 68). For these reasons, this issue should be analyzed from an ethical approach to autonomous weapons.
According to the author of the aforementioned Washington Post article, “[…] the U.S. military is trying to come to grips with the likely loss of at least some control over the battlefield to smart machines. The future may well be shaped by computer algorithms dictating how weapons move and target enemies”. This “loss of at least some control” points to the heart of the ethical problem and raises the question: can human beings retain total control over lethal autonomous machines?
Current risks of the use of lethal autonomous weapons
Without entering into the debate on the feasibility of providing killing machines with ethical criteria by means of algorithms, the article reports on the latest known status of autonomous machines, citing the following case as an example: “[…] a self-driving car being tested by Uber struck and killed a woman in Arizona. A nearly two-year government investigation revealed that the car hadn’t malfunctioned; rather, it had been programmed to look only for pedestrians in crosswalks. Jaywalking, as the woman was doing, was beyond the system’s grasp, so the car barreled ahead.” It goes on to point out the risks currently involved in the use of lethal autonomous weapons: “AI researchers call that ‘brittleness’, and such an inability to adjust is common in systems used today. This makes decisions about how much battlefield risk to embrace with AI particularly challenging. What if a slight uniform variation — some oil soaked into a shirt or dirt obscuring a normal camouflage pattern — confuses a computer, and it no longer recognizes friendly troops?” It appears that extensive research is still needed before autonomous machines can apply sound logical criteria (read our article Could self-driving cars with no human intervention raise ethical difficulties?).
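The failure mode the article describes, a system that handles only the situations it was explicitly programmed for, can be sketched in a few lines. This is a deliberately simplified toy illustration, not the actual Uber perception code; the zone values and function name are hypothetical:

```python
# Toy illustration of "brittleness" (hypothetical, not any real system's code):
# a detector that only flags pedestrians inside predefined crosswalk zones
# is blind to anyone crossing elsewhere, as in the case the article cites.

# Hypothetical road positions (start, end) of known crosswalks.
CROSSWALK_ZONES = [(100.0, 120.0), (300.0, 320.0)]

def brittle_pedestrian_check(position: float) -> bool:
    """Return True only if a pedestrian is inside a known crosswalk zone."""
    return any(start <= position <= end for start, end in CROSSWALK_ZONES)

print(brittle_pedestrian_check(110.0))  # pedestrian in a crosswalk: True
print(brittle_pedestrian_check(200.0))  # jaywalker: False, the rule never anticipated this
```

The point of the sketch is that the rule is not wrong within its assumptions; the danger lies in everything outside those assumptions, which is precisely what “brittleness” names.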
An ethical approach to autonomous weapons is all the more relevant now that they are already being produced and sold
Nonetheless, autonomous weapons attract heavy investment in what has become a lucrative business. Moreover, the production and use of this kind of weapon on the battlefield is not a mere project: Pentagon officials have said that the US could deploy them as defensive weapons this year. According to the article, Britain and China have already sold to Saudi Arabia and other countries several lethal autonomous weapons that have been used on the battlefield.
From a bioethical point of view, these new lethal weapons should be regulated by an international treaty similar to the one that exists for nuclear weapons.