AI and mental health: “it could help revolutionise treatments”
Professor Miranda Wolpert is Director of Mental Health at Wellcome. Here, she gives her insights into the opportunities and risks of AI in mental health – and why we must approach the potential of AI with curiosity and not assumption.

Mental health problems affect one in two people and are projected to be the cause of the world’s biggest health burden by 2030. The scale of the challenge means we need a seismic shift in how we address these problems.
Artificial intelligence is already being used to speed up or improve many parts of the complex path that leads from identifying potential treatments to ensuring people benefit from them. In particular, generative AI (genAI) – artificial intelligence models that learn patterns from existing data and generate new content – is starting to transform many aspects of our lives. Now, people are asking: can genAI play a leading part in transforming mental health treatments?
For many in the mental health community, the answer is a hard no. There are concerns that interventions involving genAI are, at worst, harmful and, at best, inferior to current treatments. There are concerns that genAI could widen health inequalities by learning from skewed and biased datasets, and that general or generic models could be inappropriately deployed, leading to false information and unhelpful or even harmful interactions. There are also concerns about privacy, data protection and accountability.
While we need to remain alive to these important concerns, we should keep an open mind to exploring scientifically the role genAI might play as part of a wider revolution in mental health solutions that is taking place.
"Even the best human delivered therapy does not help everyone. Is it right to apply higher standards to AI?"
What are the challenges of human interventions in mental health?
Many non-pharmacological interventions for mental health problems are delivered through language: they rely on the capabilities of the human brain and on interviews and conversations between people.
As a former clinician, I know the training that therapists go through to deliver talking therapies. I also know that even the best-delivered therapy with the best-trained human does not help everyone. As a mental health researcher, I am also aware of the issues of limited cultural competence, bias, misunderstanding and fatigue that reduce the effectiveness of interventions delivered by humans. This is irrespective of how caring the humans involved may be.
Is it right to apply higher standards to AI than we apply to humans?
What is the potential of AI in mental health?
As a system grounded in language, genAI could potentially help revolutionise mental health treatments in several ways – from those that are low risk to those that may have higher risk. Below are some examples ranging from the least controversial to the most:
- Automating routine tasks and supporting human interactions – such as note-taking, creating reports tailored to different audiences and providing reminders between sessions – to improve efficiency and reduce waiting times.
- Providing scalable training resources for new therapists, for example AI-generated roleplay to help train those who need to speak to people in a mental health emergency.
- Helping individuals learn new skills that may have therapeutic benefits – for example, training people in cognitive reframing, a technique that helps individuals change negative thought patterns.
- Powering fully automated therapeutic chatbots which seek to coach or support individuals with their mental health challenges.
Could AI transform mental health treatments?
We need to consider genAI as part of a wider revolution happening in mental health therapy – from digital therapy that helps reduce the distress experienced by people who hear voices to singing therapy for postnatal depression.
These new treatments could transform the mental health landscape: they focus on specific symptoms that hold people back and are underpinned by cutting-edge science aimed at understanding their mechanisms of action. They address issues that are a priority for those with lived experience of mental health problems and, in many cases, are co-designed with lived experience input. They also offer new options for scalable solutions. I believe genAI innovations can exemplify these features while also helping many of these other new treatments reach even greater scale.
Why we’re choosing curiosity over fear of AI
At Wellcome, we are focused on solutions to the urgent health challenges facing everyone – including mental health. We believe science is crucial to achieve these solutions. Our vision is of a world where no one is held back by mental health problems, and we want to understand the role generative AI can play in making this possible.
That’s why we’re bringing together researchers, healthcare professionals, developers, ethicists and people with lived experience to explore the foundational aspects of genAI’s potential to aid mental health.
We must not neglect the dangers of using genAI. We are aware of the valid concerns about continuing or even widening health inequities through models trained on biased data, about limiting access to those with data resources, and about the potential harm from deploying inappropriate models. We may also need to guard against excessive automation of mental health care where it is not what best meets the needs of those seeking help.
But for Wellcome, whether AI could and will transform mental health outcomes is ultimately an empirical question. It must be answered by investing in science to explore its potential – including its possible risks and harms – rather than by relying solely on opinions or beliefs. By supporting research into efficacy that also engages with the perspectives of those with lived experience and with the complex ethical questions surrounding the use of AI, we can establish which tools are useful, in what context and for whom.