Researchers from the Dana-Farber Cancer Institute have warned in a new article that healthcare mediated by Artificial Intelligence (AI) for cancer patients needs careful review. “To date, there has been little formal consideration of the impact of patient interactions with AI programs that haven’t been vetted by clinicians or regulatory organizations,” explains Amar Kelkar, the paper’s lead author.

The article, published in the journal JCO Oncology Practice, is addressed to medical societies, government leaders and healthcare personnel, who will be the first to confront the dilemmas that incorporating Artificial Intelligence into clinical practice and oncology research may present.

AI has certainly brought major advances to oncology: in detecting patterns, diagnosing and tracking the progression of disease, monitoring and predicting the response to cancer treatment, and delivering more efficient medical care that reduces both time and economic costs.

However, Kelkar, a specialist in stem cell transplantation in oncology, explains: “We wanted to explore the ethical challenges of patient-facing AI in cancer, with a particular concern for its potential implications for human dignity.”

The article defines three areas in which patients are likely to interact with Artificial Intelligence. The first is telehealth, a platform for conversations between doctors and patients, where AI can reduce wait times and collect patient data. The second is remote monitoring of patients’ health, which can be improved with AI systems that analyze patient information. The third is health coaching, where AI can provide personalized health advice, education and psychosocial support.

AI has great potential in these areas, but it also poses ethical challenges that have not yet been adequately addressed. Telehealth and remote health monitoring put patient confidentiality at risk when AI collects patient data. As for autonomous health coaching programs, the more human-like they become, the greater the risk that they will operate without human supervision, eliminating the personal contact between doctor and cancer patient.

All of these situations can lead to a depersonalization of healthcare. The authors therefore propose several guiding principles for the development and adoption of patient-facing AI: human dignity, patient autonomy, equity and justice, regulatory oversight, and collaboration.

“No matter how sophisticated, AI cannot achieve the empathy, compassion, and cultural comprehension possible with human caregivers,” Kelkar notes. “Overdependence on AI could lead to impersonal care and diminished human touch, potentially eroding patient dignity and therapeutic relationships. To ensure patient autonomy, patients need to understand the limits of AI-generated recommendations.”
