Grok in deep trouble over deepfakes? What Ofcom’s recent investigation means for online platforms
Last month, Ofcom announced a formal investigation into X’s artificial intelligence chatbot Grok, following concerning reports that the chatbot was being used to create and share undressed “deepfake” images of people and sexualised images of children.
For those who don’t know, a deepfake generally refers to a video, picture or audio clip made with artificial intelligence to look real. Deepfakes use AI to mimic a real person’s voice and facial expressions using videos or pictures of them. If the AI is sophisticated enough, it can be difficult to tell the difference between a real image and the deepfake, making deepfakes an easy tool for fraud or misinformation. In the case of Grok, bad actors may even use deepfakes to create inappropriate images of a person without their consent.
Grok was originally created by xAI as an AI-powered chatbot assistant intended for general-purpose interactions. However, X has also embedded a version of the Grok model within its social media platform, which expanded Grok’s reach (and its ability to generate deepfakes) to a much wider audience.
On 12 January 2026, Ofcom announced its formal investigation into X’s service to consider whether X failed to comply with its duties under the Online Safety Act 2023 (OSA) by offering a service capable of generating and sharing content that amounts to image abuse, pornography accessible to children, and child sexual abuse material (CSAM). Ofcom also noted that it was assessing whether to investigate xAI, but recently confirmed that it is unable to do so: xAI’s provision of Grok has no user-to-user or search element and therefore falls outside the scope of the OSA and Ofcom’s jurisdiction (although the UK government has confirmed that it will look at the regulation of chatbots separately, and we expect developments on this in future).
The significance of the Grok case is twofold. First, it demonstrates the real-world impact of AI on platforms: deepfakes and generative AI are a hot topic for regulators, and businesses using such technology in the user-generated content (UGC) context must prepare for heightened scrutiny. Second, the investigation shows that Ofcom is serious about enforcing the OSA, regardless of the scale or location of the relevant in-scope provider. The OSA provides broad duties and wide-reaching powers, enabling Ofcom to tackle issues that previously fell between the cracks of civil and criminal law. That said, as the position on xAI shows, the OSA does have its limits.
In the context of the X investigation, Ofcom is considering whether X has breached the following OSA requirements:
- Requirement for in-scope providers to have completed certain risk assessments regarding the availability of illegal and (in the case of children) harmful content (“Relevant Content”) on their platform;
- Duty for in-scope providers to use proportionate measures to prevent individuals from encountering certain priority Relevant Content (such as pornographic content accessible to children, intimate image abuse and CSAM), including, for example, putting in place highly effective age assurance to protect children from seeing pornography in line with Ofcom’s guidance; and
- Duty for in-scope providers to implement systems which allow users to easily report any Relevant Content and enable the provider to swiftly remove any content prohibited from its sites.
If Ofcom considers that X has breached any of these rules, it could – among other things – be hit with a penalty of up to £18m (or 10% of X’s qualifying worldwide revenue if greater).
Soon after Ofcom announced its investigation into Grok, similar inquiries were launched against X in California and by various other regulators globally. Under this mounting pressure, the platform quickly spoke out to refute the allegations, confirming that it has implemented measures to prevent users from editing pictures to create explicit images. However, this was not enough to quell Ofcom’s concerns: on 15 January, the regulator confirmed that – while X’s actions were a “welcome development” – its investigation remains ongoing. Other regulators have since launched further investigations, including the UK Information Commissioner’s Office.
Ofcom is moving quickly in response to the allegations against Grok and has already conducted an initial assessment of the issue. It recently confirmed that it has requested information from X and is analysing the evidence gathered in its investigation, and the next step is for it to issue a provisional decision to X. However, Ofcom has noted that this is likely to be a few months away.
Given the nature of the alleged conduct and the scale of X’s platform, there is political pressure on Ofcom to conduct its investigation expeditiously. And, as Ofcom was one of the first regulators in the world to open a formal investigation into X, many will be watching and waiting for the outcome.
While we wait for Ofcom’s provisional decision on this case, we recommend that all in-scope businesses ensure their risk assessments adequately address the heightened risks posed by generative content, AI technologies and chatbots. Those who fail to act now risk not only reputational damage but significant financial penalties.