The emergence of artificial intelligence (AI) in medical care has previously been addressed by our Observatory (read HERE). Now, an evidence-based ethical analysis published in the European Archives of Psychiatry and Clinical Neuroscience reviews the use of e-mental health applications for the treatment of depression.

This issue is of particular interest from an ethical point of view, because depression is a growing challenge for global healthcare, with a high prevalence in developed countries (read our article The World in a mental health crisis classed as ‘monumental suffering’).

So-called e-mental health applications (apps) are increasingly being used to treat depression. In this respect, the aforementioned article reviews their advantages, risks, and issues that require further consideration. We excerpt the points we consider of most interest.

How do these apps work?

Applications used as depression treatments: privacy questioned

The paper begins by explaining how these applications are currently being used:

“E-mental health applications (apps) […] can be used in guided interventions, i.e. as an element of blended therapy, or as unguided interventions in the form of stand-alone applications. Most apps for depression are based on cognitive-behavioral therapy (CBT), which is also the most prevalent form of therapy in face-to-face treatment. Other therapeutic methods are acceptance and commitment therapy and mindfulness-based treatment. Apps for depression are usually structured in modules and may contain elements of psychoeducation, interactive tools for self-management, a diary-function to record thoughts and emotions, a schedule for daily activities, behavioral experiments, and enabled goal setting functions. The duration of treatment varies between 4 and 12 weeks.”

Evaluation of this type of treatment

The author continues with the positive findings of different studies on these apps: “These apps have several advantages that make them useful tools for depression treatment: They are easily accessible, which is an opportunity for people living in areas without an adequate provision of mental health services. Another important feature is anonymity, which reduces the stigma often associated with mental health treatment. Furthermore, apps have proven to be cost-effective, which is relevant for their implementation in standard treatment. Given these advantages and opportunities, it is to be expected that apps will become an increasingly important factor for the treatment of depression, a development that calls for an in-depth ethical analysis.”

FDA eases entry for psychiatry apps during COVID-19 crisis

In this respect, the current healthcare crisis has changed paradigms in many fields. News earlier this year reported that, “To address mental health needs during the coronavirus disease (COVID-19) public health emergency, the US Food and Drug Administration (FDA) today [16/04/2020] announced it would relax certain premarket requirements for computer programs and mobile apps designed to support treatment of conditions such as depression, anxiety, obsessive-compulsive disorder and insomnia.” (Regulatory Focus, April 16, 2020).

User reviews of mental health applications

An analysis of user reviews concluded that these apps “allow users to take a more active role in managing their mental health. However, there are several risk factors when it comes to autonomy”. In fact, a meta-analysis found that “some apps may be too demanding and overstrain patients’ self-management skills. This is especially the case when information on the intervention is given in an inadequate or insufficient manner”.

Studies have shown that apps for depression are suitable for use in primary care, finding that “these interventions are best suited for patients with mild-to-moderate depression. This is consistent with results from other meta-analyses as well as user reviews”.

Users have justified expectations about depression apps’ data privacy. Are they right?

As we have previously said, one of the current challenges of the introduction of AI in medical care is patient privacy (read Genetic data privacy in the U.S. questioned. Bioethical approach). In this respect, the study says that “Given the fact that both demand for apps and their overall acceptance is high, as shown in [many] reviews, users have justified expectations that their data are secure and that the quality of the app is guaranteed. However, many apps lack a transparent data and privacy policy, especially when it comes to the disclosure of data handling”. An assessment study of apps for depression found that, “those who do most state primary uses of data without any information on secondary uses. Only 20% of apps provide information on the jurisdiction in which data are processed. The vast majority of apps (92%) transfer data to third-party entities for commercial reasons, but only a few disclose this transmission explicitly. Some even transfer strong identifiers such as user names. Another crucial aspect of accountability is the often-varying quality of apps. As reviews […] show, the evidence-base for many apps, especially those freely available in app stores, is either unclear or lacking. This may explain why interest outpaces adoption when it comes to apps. [Many] surveys show that although patients are principally interested in apps and willing to use them, there are widespread concerns regarding their effectiveness due to unclear quality standards.”

Privacy of mental health applications strongly questioned

From a bioethical point of view, many of the studies addressed in depth the informed consent of the users of these apps. Given that these apps, as noted above, include a diary function to record thoughts and emotions, a schedule of daily activities, and behavioral experiments, any misuse or disclosure of such intimate data could amount to an unprecedented violation of privacy.