UK weighs up tighter rules for AI chatbots amid child safety concerns
The UK government is considering tougher regulation of AI chatbots amid concerns that they may expose children to harmful content or encourage self-harm. Technology Secretary Liz Kendall told MPs that she is “especially worried” about the risk of young people forming unhealthy relationships with generative AI tools.
Tamsin Blow of CMS notes: “Chatbots demonstrate the difficulty of the law keeping pace with technological change. The Online Safety Act – which was rooted in protecting children from illegal and harmful content – only extends to chatbots that can be classified as user-to-user services or search services. Chatbots that do not allow user-to-user interaction and do not enable the search of more than one website and/or database fall outside the Act and are not part of the current regulatory regime.”
Apps like Replika that allow users to engage with an AI companion, but not with other users, may not be covered by current UK legislation on online safety.
Kendall said that she will ask Ofcom, the UK’s communications regulator, to set out clear expectations for chatbots that are covered by the Act, and that she will launch a public information campaign in the new year. If new legislation is required to cover chatbots currently outside the Act’s scope, Kendall said, “that’s what we’ll do”. The move follows heightened scrutiny after a 14-year-old’s death was linked by his mother to his interactions with a character-based AI chatbot, which has intensified calls for stronger safeguards. Kendall signalled a preference for targeted interventions over a sweeping new bill. When asked whether the UK should follow Australia’s lead in banning social media for under-16s, she stressed the need to balance protection with digital literacy.