Image: A woman on a sofa chatting at a computer. Photo: AI

AI and Mental Health – How Can Chatbots Contribute to Psychiatric Care?

How can a digital chatbot support people experiencing mental health issues? In the CHAT-MH project, researchers are exploring how AI-based tools can complement traditional care and create new opportunities for people with anxiety and mild depression. Lilas Ali discusses the work behind the chatbot BETSY, its potential, and the ethical questions raised at the intersection of technology and psychiatric care.

Image: Portrait of Lilas Ali, Associate Professor and Senior Lecturer at the Institute of Health and Care Sciences.
Photo: Fredrik Hjerling

Lilas Ali, associate professor at the Institute of Health and Care Sciences, has recently been awarded ALF funding for her CHAT-MH project. The research focuses on the chatbot BETSY, which stands for Behaviour Emotion Therapy System and You, designed to offer support to individuals with anxiety and mild depression.

The project started in 2019 with a Virtual Reality initiative at Östra Hospital. This was at a time when interest in AI was rapidly growing, explains Lilas Ali. Researcher Almira Osmanovic Thunström and her team developed both VR and chatbot interventions to test their effects and safety within psychiatry.

So far, the chatbot BETSY has only been tested on healthy individuals. As the initial results indicated promising effects, the research team decided to move forward and, through the CHAT-MH project, examine how the chatbot could support individuals with milder mental health diagnoses.

A Digital Support to Complement Care for Anxiety and Mild Depression

The aim is to use the chatbot as a complement to regular care for individuals suffering from anxiety and mild depression, says Lilas Ali. Participants in previous studies have described the chatbot as empathetic and easy to use. It operates on a rule-based dialogue tree guided by keywords and context, referring only to trusted and validated sources, such as 1177. The chatbot is not intended to diagnose, treat, or offer clinical advice but rather to provide support and information, making it safe to use.
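To illustrate the difference between this kind of rule-based dialogue tree and a free-form language model, here is a minimal, hypothetical sketch of a keyword-driven dialogue step. The keywords, replies, and node names are invented for illustration and are not BETSY's actual rules or implementation; the fallback deliberately points to a trusted source such as 1177 rather than generating an answer.

```python
# Hypothetical sketch of one step in a keyword-driven, rule-based dialogue tree.
# The rules and wording below are illustrative assumptions, not BETSY's content.

RULES = [
    # (trigger keywords, reply text, id of the next node in the tree)
    ({"sleep", "tired", "insomnia"},
     "Poor sleep is common with anxiety. Would you like some tips on sleep routines?",
     "sleep_tips"),
    ({"stress", "work", "pressure"},
     "Work-related stress can feel overwhelming. Do you want to talk about what is causing it?",
     "stress_causes"),
]

# When no rule matches, refer to a trusted, validated source instead of improvising.
FALLBACK = ("I'm not sure I understood. For reliable health information, "
            "see 1177.se or contact your local care provider.")

def respond(user_text: str) -> tuple[str, str | None]:
    """Match the message against keyword rules; return a reply and the next node id."""
    words = set(user_text.lower().split())
    for keywords, reply, next_node in RULES:
        if words & keywords:  # any trigger keyword present in the message
            return reply, next_node
    return FALLBACK, None

print(respond("I feel so much pressure at work lately")[0])
```

Because every reply is authored in advance and every path through the tree is fixed, the same input always produces the same, reviewable response, which is what makes this design easier to validate for safety than a generative model.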

One of BETSY's interesting features, according to Lilas Ali, is its user-friendliness and accessibility.
A major advantage is that BETSY can be accessed 24/7, says Lilas. It is available anytime, offering continuity. Every interaction follows the same structure and approach, regardless of how often the user engages with the chatbot. There is also an option to save and visualise information for future needs.

A Tool for Support, Not Diagnosis or Treatment 

Unlike the large language models often used in AI applications, the prototype is designed to stay within non-clinical topics such as work-related stress, sleep, and general mental health, focusing on mild to moderate anxiety and depression. BETSY is not meant to diagnose or treat but to provide support and information, emphasises Lilas Ali.
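One way such a boundary can be enforced in a rule-based system is a simple scope guard that redirects clinical questions to healthcare professionals. The sketch below is a hypothetical illustration only; the terms and the referral text are assumptions, not part of the BETSY prototype.

```python
# Hypothetical scope guard: keep the dialogue within non-clinical topics and
# redirect anything clinical to professional care. Terms are illustrative.

OUT_OF_SCOPE = {"diagnosis", "medication", "dosage", "prescription"}

def guard(user_text: str) -> str | None:
    """Return a referral message for clinical questions, or None to let the
    normal dialogue tree handle the message."""
    words = set(user_text.lower().split())
    if words & OUT_OF_SCOPE:
        return ("I can't give clinical advice. Please contact your healthcare "
                "provider, or see 1177.se for guidance.")
    return None
```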

But how does a chatbot compare with human contact, especially when engaging with people experiencing mental health challenges who often need empathy and understanding?

Lilas Ali is clear that human contact is crucial and irreplaceable.
The chatbot is intended to complement traditional care, says Lilas. Interestingly, research shows that chatbots are sometimes perceived as more empathetic than human therapists, which raises important questions about what healthcare can learn from this to improve interactions.

Person-Centred Co-Creation and Ethical Boundaries

In this project, we applied a person-centred co-creation approach. A patient representative with personal experience was involved from the very beginning and has been an active co-creator of BETSY. The public was also invited to participate through anonymised surveys. The focus has been on mapping out preferences for design, functionality, and delivery, as well as addressing concerns—an essential step in creating safe systems that act in the best interests of the end user. Both positive and negative perspectives were welcome, and by engaging the public, we have gained access to a diverse range of ages, genders, and views on chatbots for mental health, explains Lilas Ali.

I strive to see the potential of new technology in healthcare, says Lilas Ali, but safety aspects and, above all, ethics are crucial. We will not get answers to our questions without testing the prototypes and carefully evaluating their effects under controlled conditions. This is why it is essential that professionals are present to monitor and intervene if anything deviates. Only then can we build trust in new tools within psychiatric care, concludes Lilas.


Researchers in the project
  • Lilas Ali
  • Steinn Steingrimsson
  • Almira Osmanovic Thunström
  • Sandra Wieneland
  • Rajna Knez
  • Harald Aiff
  • Linda Wesén