The Nuffield Council on Bioethics examines, in a recently published briefing note, current and potential applications of Artificial Intelligence (AI) in healthcare and research, and the ethical issues arising from its use.
Will the increasing use of AI lead to a loss of human contact in healthcare?
The use of AI in healthcare could make medical care more efficient for patients, speeding up diagnosis and reducing potential errors. It could also help patients manage symptoms or cope with chronic conditions, and its use could reduce bias and human error.
Nevertheless, there are important questions to consider: who is responsible for the decisions that AI systems make? Will the increasing use of AI lead to a loss of human contact in care? What would happen if AI systems were hacked?
The briefing note outlines the ethical issues that could be raised by the use of AI in healthcare. These include: the possibility of AI making erroneous decisions; the question of who is responsible for decisions when AI is used to support decision-making; difficulties in validating the outputs of AI systems; and the risk of bias inherent in the data used to train AI systems.
In addition, there is a need to guarantee the security and privacy of potentially sensitive data; to secure public confidence in the development and use of AI technology; to understand the effects on people's sense of dignity and on social isolation in care situations; to consider the changing roles and skill requirements of healthcare professionals; and to guard against the potential for AI to be used for malicious purposes.
Hugh Whittall, Director of the Nuffield Council on Bioethics, said: “The potential applications of AI in healthcare are being explored through a number of promising initiatives across different sectors, by industry, health sector organizations and through government investment. While their goals and interests may vary, there are some common ethical issues that arise from their work.

“Our briefing note outlines some of the key ethical issues that need to be considered in order to reap the benefits of AI technology and maintain public trust. The challenge will be to ensure that AI innovation is deployed transparently, that it addresses the needs of society, and that it is consistent with public values.”