Artificial Intelligence in Policing: Emerging Opportunities and Escalating Risks
Artificial intelligence (“AI”) is rapidly reshaping the landscape of modern policing, and not without controversy. Last year, West Midlands Police were criticised for using AI to formulate the evidential basis for banning Maccabi Tel Aviv fans from the match against Aston Villa, having initially denied using AI for that purpose; the AI had invented an entirely fictitious previous match between Maccabi and West Ham.
Nonetheless, the UK government has signalled a major change of strategy in its January 2026 policing White Paper, which includes a commitment to develop Police.AI, a national framework designed to support consistent, coordinated and ethically governed deployment of AI across policing.
AI-Driven Misconduct Detection in UK Policing
In February 2026, reporting revealed that the Metropolitan Police Service had begun piloting AI tools to identify potential misconduct among officers. The system reportedly analyses internal datasets, such as sickness absence, overtime patterns and other behavioural indicators, to detect trends that may correlate with professional standards issues. The aim is to strengthen early intervention and rebuild public confidence following a period of intense scrutiny over vetting failures and cultural concerns.
However, the initiative has attracted criticism from the Police Federation, which has described the programme as a form of “automated suspicion”. Concerns include the opacity of the algorithms, the risk of false positives, and the danger that legitimate workload pressures or health-related absences could be misinterpreted as signs of wrongdoing. The pilot underscores a broader trend of using AI to monitor workforces in public services, raising complex legal questions relating to data protection, employment rights, algorithmic accountability and discrimination.
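To make the false-positive concern concrete, the sketch below is a minimal, purely hypothetical illustration of threshold-based flagging over workforce data; the records, field names and threshold are our own assumptions, not details of the Metropolitan Police system, whose design has not been published. It shows how an officer whose absences reflect a genuine health condition is flagged in exactly the same way as one whose pattern might reflect wrongdoing.

```python
# Hypothetical illustration only: a naive statistical flag over
# fabricated absence data. Field names, thresholds and records are
# invented; the Metropolitan Police pilot's actual design is not public.
from statistics import mean, stdev

# Monthly sickness-absence days per officer over six months (fabricated).
absences = {
    "officer_a": [0, 1, 0, 0, 1, 0],
    "officer_b": [0, 0, 1, 0, 0, 1],
    "officer_c": [4, 5, 6, 5, 4, 6],  # e.g. a long-term health condition
}

def flag_outliers(records: dict[str, list[int]], z_threshold: float = 1.0) -> list[str]:
    """Flag officers whose average absence is a cohort outlier.

    The threshold is deliberately low for this tiny cohort. The point is
    that a crude outlier test cannot distinguish misconduct from benign
    causes such as ill health: the 'automated suspicion' objection.
    """
    averages = {name: mean(days) for name, days in records.items()}
    cohort_mean = mean(averages.values())
    cohort_sd = stdev(averages.values())
    return [
        name
        for name, avg in averages.items()
        if cohort_sd > 0 and abs(avg - cohort_mean) / cohort_sd > z_threshold
    ]

print(flag_outliers(absences))  # ['officer_c'] flagged despite a benign cause
```

Even a toy example of this kind illustrates why human review and contextual inquiry before any intervention are essential safeguards.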
Lessons from the Tumbler Ridge Tragedy
A separate incident in Canada has highlighted the challenges of managing AI-generated risk signals in public safety contexts. Reports indicate that the perpetrator of the Tumbler Ridge school shooting had previously used ChatGPT in ways that raised red flags at OpenAI. The account was flagged and banned several months before the attack; however, the activity did not meet the threshold then in place at the company for referral to law enforcement. Following the incident, OpenAI shared information with the Royal Canadian Mounted Police regarding the user’s prior interactions, prompting political scrutiny and demands for clearer escalation and reporting protocols.
The case raises a complex regulatory question: at what point should AI companies be required to alert the authorities to potentially harmful user behaviour? As generative AI tools become embedded in everyday communication, balancing privacy, free expression and safety will only grow more difficult.
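By way of illustration only, a tiered escalation policy of the kind regulators might eventually mandate can be expressed as a simple rule. The tiers, scores and thresholds below are hypothetical assumptions, not OpenAI’s actual practice, which has not been published in detail; the sketch simply shows how a policy can ban an account without ever triggering a referral.

```python
# Hypothetical escalation policy: illustrative tiers only, not any
# provider's actual practice. Categories and thresholds are invented.
from enum import Enum

class Action(Enum):
    MONITOR = "monitor"
    BAN_ACCOUNT = "ban account"
    HUMAN_REVIEW = "human review"
    REFER_TO_LAW_ENFORCEMENT = "refer to law enforcement"

def escalate(risk_score: float, imminent_threat: bool) -> list[Action]:
    """Map a model-assigned risk score to escalating interventions.

    The hard regulatory question is where to place the final threshold:
    set it too high and a genuinely dangerous account may be banned but
    never referred (as reportedly happened before Tumbler Ridge); set it
    too low and privacy and free expression suffer.
    """
    actions = [Action.MONITOR]
    if risk_score > 0.7:
        actions.append(Action.BAN_ACCOUNT)
    if risk_score > 0.85:
        actions.append(Action.HUMAN_REVIEW)
    if imminent_threat:  # a statutory reporting duty might mandate this step
        actions.append(Action.REFER_TO_LAW_ENFORCEMENT)
    return actions

print([a.value for a in escalate(0.9, imminent_threat=False)])
# ['monitor', 'ban account', 'human review'] - banned, but never referred
```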
Important Factors for Clients Considering AI in the Public Sector
- Governance and transparency: recent developments, including the UK government’s Police.AI commitment, underline the need for clear governance frameworks around AI procurement, deployment and monitoring. This direction of travel points to future expectations of auditability, human oversight and transparency in algorithmic decision making.
- Data protection and employment considerations: AI systems analysing staff behaviour pose heightened risks under the UK GDPR, employment law and equality legislation. Public sector bodies must ensure lawful bases for processing, conduct robust Data Protection Impact Assessments and consider discrimination risks, particularly where algorithms rely on proxies linked to protected characteristics (see the sketch after this list).
- Escalation thresholds and safety duties: the Canadian case illustrates the uncertainty surrounding the point at which AI-generated signals impose duties to act. In the future, private companies offering AI tools may be subject to clearer statutory reporting obligations, particularly as governments develop national AI policing frameworks.
- Public trust and reputational risk: both the UK and Canadian examples demonstrate heightened public sensitivity to the role of AI in policing and safety. Missteps can quickly undermine trust, while well-designed, transparent systems can help rebuild confidence.
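As a purely illustrative aid to the proxy point above, the sketch below uses fabricated data and hypothetical field names to check whether an apparently neutral input, such as sickness absence, correlates with a protected characteristic such as disability. A real assessment would involve proper statistical and legal analysis within a DPIA and equality impact assessment, not a short script.

```python
# Hypothetical proxy-discrimination check over fabricated data.
# Requires Python 3.10+ for statistics.correlation (Pearson's r).
from statistics import correlation

# Fabricated records: disability flag is 1 where an officer has a
# disability, else 0. No real data is represented here.
sick_days  = [2, 14, 3, 16, 1, 12, 2, 15]
disability = [0, 1, 0, 1, 0, 1, 0, 1]

# A strong correlation means 'sickness absence' acts as a proxy for
# disability: a model trained on it may indirectly discriminate.
r = correlation(sick_days, disability)
print(f"correlation(sick_days, disability) = {r:.2f}")
if abs(r) > 0.5:  # illustrative threshold only
    print("Feature is a likely proxy for a protected characteristic.")
```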
Looking Ahead
AI will play an expanding role in policing and public governance. For clients, the key question is no longer whether to use AI, but how to deploy it responsibly. The introduction of Police.AI marks a decisive shift towards national coordination, shared standards and greater central oversight of AI use across policing.
As the UK moves towards a more technologically enabled policing model, ensuring robust human oversight, transparent processes, and strong legal and ethical safeguards will be essential to mitigating risk and ensuring accountability.