Publication 22 Jan 2026 · United Kingdom

Deepfake ‘doctors’: AI and the spread of medical misinformation

AI is transforming the healthcare sector, accelerating drug discovery, and improving diagnostics. But what happens when AI simulates clinical endorsement or hallucinates health advice?

Fact-checking organisation Full Fact has discovered hundreds of videos featuring AI-generated “doctors” promoting fictional medical conditions and recommending supplements unsupported by medical evidence.

David Taylor-Robinson, a professor at the University of Liverpool, is one of many professionals whose images have been manipulated to endorse healthcare products without their consent. Although Taylor-Robinson specialises in children’s health, several videos shared on social media gave the impression that he was a women’s health expert, offering advice on “thermometer leg”, a fabricated side-effect of menopause.

Beyond reputational harm, the unauthorised use of a professional’s likeness to promote medical misinformation engages a number of legal rights, including privacy, false endorsement/passing off, and defamation. Misrepresenting healthcare professionals to promote products also raises concerns under consumer protection and advertising regulation. Both the creators of these deepfake videos and the social media platforms that host them could be held to account.

AI may also be a cause for concern in clinical settings. Whilst increasing numbers of medical professionals and patients are using generative AI for medical advice, a recent study from the Icahn School of Medicine found that widely used AI chatbots frequently hallucinate fictional medical information, including symptoms, diagnoses, and cures. When challenged, the chatbots often continued to elaborate on their fabricated outputs.

This convergence of plausible-sounding output and misinformation raises further concerns for consumer and patient protection, and could expose developers, advertisers, and healthcare professionals to liability risk.

When AI systems influence health decisions or imply advice is endorsed by clinicians, the importance of accuracy, transparency, and accountability increases. The consequences of falling short may include harm to individuals (whether featured in deepfakes or following medical misinformation), regulatory enforcement, litigation proceedings, and the erosion of trust in the medical industry.
