Shadow AI: The governance gap that businesses can’t afford to ignore
Despite the increasing adoption of approved AI tools by businesses, many organisations continue to face the challenge of “shadow AI”.
What is shadow AI?
Shadow AI refers to the use of AI tools by employees for day-to-day tasks, such as drafting documents, analysing data, or summarising information, without their employer’s knowledge, approval, or oversight. A survey conducted by Microsoft in October 2025 revealed that 71% of UK employees have used unapproved consumer AI tools at work. According to a recent article, one of the reasons employees are increasingly “smuggling” AI tools into their daily work routines without organisational approval is that official systems are often considered too slow or restrictive. Shadow AI often, though not always, occurs in organisations that lack a dedicated AI policy.
Although using AI tools can improve employees’ efficiency and productivity, to the organisation’s benefit, using them without giving the organisation the opportunity to implement appropriate governance can expose it to legal and reputational risk.
What are the risks of shadow AI?
Data leakage
Free versions of AI platforms, which are the versions most likely to be used in a “shadow AI” context, may operate under terms of service that grant the provider rights to use inputs to improve the platform. In that scenario, any data entered by employees (including proprietary, confidential, client, personal or sensitive information) is shared with the AI platform provider and could be reproduced in responses to other users in the future.
Information entered by employees into AI tools may be processed in a manner inconsistent with an organisation’s contractual commitments, confidentiality obligations, or data protection responsibilities, leaving the organisation vulnerable to a breach or claim. For example, under UK and EU data protection laws, organisations remain responsible for how personal data is processed, even where an employee has used an unauthorised tool.
Intellectual property risks
Risks relating to intellectual property largely fall into two categories:
- Risk of infringing someone else’s intellectual property rights
For example, if employees include works protected by copyright in their inputs to AI tools, the organisation may be exposed to a copyright infringement claim from the relevant third party.
Numerous lawsuits worldwide concern AI and intellectual property, especially copyright. These lawsuits allege that using copyrighted works for training purposes and/or reproducing those works in AI-generated outputs infringes copyright. If employees use an AI tool that is found to infringe copyright (or another intellectual property right), their employer could face an infringement claim. If the terms of use for the AI tool do not include an adequate indemnity for infringement, the employer could be left out of pocket.
- Risk of undermining the organisation’s intellectual property strategy
Without sufficient human involvement, AI-generated content may not qualify for copyright protection. If employees use AI tools to create key assets, this could undermine the organisation’s approach to protecting and controlling the use of those assets.
Although the law in this area is far from settled, there is also a risk that AI-generated material could constitute prior art, potentially invalidating subsequent patent or registered design applications.
Operational concerns
Shadow AI involves using AI tools without the necessary training and guidance. As a result, the quality of work can suffer: employees may be unaware of the risk of hallucinations and the need to verify outputs from AI tools. Choosing the right AI tool for the task is also imperative; without guidance, employees may use tools unsuited to the task at hand, reducing rather than improving productivity.
How can the risk of shadow AI be mitigated?
Organisations need to set out their approach to the use of AI clearly, either in existing policies or in a dedicated AI policy. An organisation looking to ban or substantially limit the use of AI tools in the workplace should make its rationale clear, to mitigate the risk of driving AI activity further underground.
An AI policy alone is unlikely to reduce the level of shadow AI in a workplace. Organisations also need to implement processes for communicating and training on the policy, and consider how to identify and address policy breaches.
As we have previously discussed, there are some key issues to consider when preparing an AI policy, and importantly, when implementing and enforcing it. For example, organisations should aim to:
- Understand which AI tools employees are using and why;
- Vet AI tools before approving their use, ensuring there are appropriate contractual and security safeguards;
- Provide training on responsible use and any limitations (including in relation to the use of data and any IP implications);
- Align the approach to AI governance with existing data protection, cybersecurity and procurement frameworks;
- Adopt a clear and consistent approach to recording the prompts entered into AI tools and labelling AI-generated outputs (a minimal illustrative sketch follows this list).
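For organisations that channel AI use through an internal tool or gateway, the final point can be made concrete. The sketch below is a minimal, hypothetical Python illustration, not a prescribed standard: the `call_model` function stands in for whatever approved AI service the organisation has vetted, and the log format and label wording are assumptions made for the example.

```python
import datetime
import json
import uuid

# Hypothetical stand-in for a call to an approved AI service.
# In practice this would wrap the organisation's vetted tool.
def call_model(prompt: str) -> str:
    return f"[model response to: {prompt!r}]"

def logged_ai_call(user: str, prompt: str, log_path: str = "ai_usage.log") -> str:
    """Record the prompt, call the approved tool, and label the output."""
    record = {
        "id": str(uuid.uuid4()),
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user": user,
        "prompt": prompt,
    }
    output = call_model(prompt)
    record["output"] = output

    # Append-only audit trail of prompts and outputs (illustrative format).
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

    # Consistent labelling so AI-generated content stays identifiable downstream.
    return f"[AI-generated, ref {record['id']}]\n{output}"

if __name__ == "__main__":
    print(logged_ai_call("j.smith", "Summarise the attached meeting notes."))
```

Whatever form the record takes, the aim is the same: an auditable trail of who used AI for what, and outputs that remain identifiable as AI-generated when they are reused elsewhere in the organisation.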
Clear AI policies and effective AI governance can enable organisations to safely harness the benefits of AI, while maintaining compliance and protecting their reputation. Shining a light on shadow AI can also help organisations to identify opportunities to increase productivity and share lessons learned by employees across the organisation. This allows them to turn a potential liability into a strategic advantage by enabling employees to innovate openly and responsibly rather than secretly.