Publication 02 Dec 2025 · United Kingdom

AI and Mental Health - What happens when AI meets policy at the heart of UK governance?


On 17 November 2025, CMS representatives joined the All-Party Parliamentary Group on Artificial Intelligence to explore how government and industry advisors are shaping the future of AI. The discussion tackled critical questions around cultural nuances in mental health assessments, the rise of sovereign AI compute, and striking the right balance in regulation. Below, you will find a summary of the key insights from this thought-provoking session.

CMS continues to serve on the Advisory Board for the group. Please do reach out to your usual CMS contact with any questions or for additional insights.


Key points in light of queries from Members are bolded.

Can AI help, or harm, mental health? November’s meeting put this question front and centre, with AI and Mental Health as the primary focus. The topic couldn’t be more urgent: national mental health services are under immense strain, and recent reports reveal troubling cases where individuals disclosed suicidal thoughts to general-purpose language models, sometimes with devastating consequences. 

Claire Harrison, Chief Digital and Technology Officer at the MHRA, opened the evening with three key points: 

  • There are clear benefits of AI when applied in mental health contexts, such as detecting symptoms of depression via voice analysis, or providing platforms for use in regions with low concentrations of mental health professionals. However, there are also significant risks: models are predominantly trained on Western datasets, but expressions of distress manifest in very different ways across cultures.
  • There should always be a human involved to support the final decision, and transparency regarding reasoning and data is paramount. Patients should always be able to bypass the model, and must never be prevented from accessing care pathways. 
  • The MHRA does not take a ‘one-size-fits-all’ approach to the evidence required for regulatory approval. It will consider the opinions of subject-matter experts, QA testing, technology team analysis (including attempts to bypass guardrails), and the involvement of local community and cultural leaders.

Dame Til Wykes, Professor of Clinical Psychology at King's College London, reiterated these points, arguing that:

  • Whilst models may be capable of diagnosing symptoms of depression and PTSD from vignettes with greater accuracy than humans, they are weaker at diagnosing more complex disorders, especially based on moment-to-moment changes in patient behaviour. As recent news stories have shown, models can actually reinforce maladaptive behaviours, failing to provide the cognitive ‘friction’ that a therapist would offer.
  • If AI systems are to be used across regions and cultures, they must be built by relevant local professionals, not just translated into different languages.
  • A key regulatory regime applies if mental health-related AI technology is classified as a medical device or as medical software. However, this classification is currently unclear, so safety testing should be prioritised regardless.
  • Another key question is about liability. Large swathes of the population (up to 1 in 3) are using large language models. At the moment, there are guardrails that direct users to certain contacts, such as the Samaritans, but there are concerns that this system is not infallible. If something does go wrong, who will be responsible? Where is ‘agency’ present?

Industry experts Dr Lesa Wright and Mark Contreras, of Psychiatry UK, then provided a clear example of the safety-first deployment of various mental health-related AI tools:

  • Development of the tool began 6 months after the establishment of a clinical safety team. The tool saved clinicians 20% of their time by writing draft clinic letters for ADHD and ASD assessments, enabling a corresponding increase in time spent with patients.
  • Another AI tool filters through hundreds of patient messages to identify distressed patients and direct them to support. A proposed patient support chatbot would only be accessible after referral from a GP, who can confirm that the patient has mental capacity. There are also strong data storage and access controls, testing for model drift and biases, and hallucination-mitigation techniques.
  • However, the UK lacks significant sovereign AI computing power for commercial use. This means involvement from US-based providers is required, and talent pipelines should be a focus area. This would reduce instances of talent moving overseas, while encouraging the development of cross-functional skills, for example in the areas of health and technology.

Yauheniya Tyler, Founder and CEO of Uptitude, discussed the risks of overly onerous regulation:

  • Solutions should be tailored (in some cases being clinic-specific) rather than being too broad.
  • Regulatory approval cannot be too slow or too expensive. With the pace of technological development, an approval process taking 18 months could easily exceed the development time for fine-tuned, bespoke generative AI models for mental health, and tools are likely to be obsolete by the time approval is granted, since general-purpose generative AI is advancing rapidly. Regulatory hurdles encourage the use of free, general-purpose models, which are less effective and have fewer guardrails.
  • The government is currently taking a sector-based approach to AI regulation, but is considering a more comprehensive regime. Yauheniya suggested that while a sector-based approach makes the most sense, whether there is a human in the loop should be taken into account, with lower regulatory hurdles for tools with human oversight than for fully automated solutions.

Lastly, Dr Becky Inkster, Honorary Research Fellow at the University of Cambridge, rounded off the evening by noting that:

  • There are also important drawbacks to using AI models. The spoken word has proven positive effects on suicide prevention, and a recent BMJ paper[1] on an internal audit of ChatGPT indicated a potential 1.2 million suicide-related queries per week. This highlights the scale of the problem.
  • Good regulation could incentivise large model providers to ‘hand off’ queries to trusted AI providers. One example of this is the government’s implementation of trusted providers for the Digital ID scheme. This would be particularly helpful in cases where referring users to mental health resources would disadvantage them and continued engagement would be more beneficial.
  • Rewards for safety should be a focal point, including financial incentives or expedited approval pathways for technologies that have been demonstrated to be low-risk. 

[1] BMJ 2025;391:r2290
 
