High Court upholds Metropolitan Police's live facial recognition policy: what this means for the public and private sector
On 21 April 2026, the High Court handed down its judgment in Thompson & Anor, R, dismissing a judicial review challenge to the Metropolitan Police Service's live facial recognition (“LFR”) policy (the “Policy”). The decision offers important guidance for any organisation that deploys biometric or AI-driven surveillance technology.
Background and decision
The claimants did not argue that the use of LFR was itself inherently unlawful or that the police lacked the power to use it. Rather, they argued that the Policy left too much discretion to individual officers as to where, why, and against whom LFR could be deployed, in violation of Articles 8, 10, and 11 of the European Convention on Human Rights (“ECHR”).
The Court held that the Policy imposed sufficient constraints, structured around why, who, and where LFR could be deployed, to meet the required “quality of law” standard. The key issue was foreseeability: applying the “relativist approach” to foreseeability approved by the Court of Appeal in Bridges, the Court concluded that the Policy was sufficiently clear and foreseeable to avoid arbitrariness.
Key takeaways
Although the challenge was brought in public law, the Court's reasoning has direct relevance for both the public and private sector in their use of AI for facial recognition. In particular, UK law recognises an actionable private civil law right of privacy, and individuals could bring claims against private entities in respect of the use of CCTV and equivalent image recognition systems. The Court's reasoning is likely to carry considerable read-across to any such private law actions.
For organisations deploying biometric or AI-driven surveillance, the judgment offers a practical roadmap: specificity is key.
Equality and discrimination considerations must also be handled with care. The Court noted that a properly evidenced discrimination claim could succeed in future, and the Equality and Human Rights Commission has highlighted that the human rights and equality impacts of digital services and AI are now a key priority.

The judgment sits within a broader, rapidly developing landscape of AI governance. Whilst the Court limited its analysis to this technology, public bodies and companies alike will need to remain rigorous in their governance, risk assessment, and due diligence, keeping human rights considerations under active review.
If you would like to read further analysis on the judgment and its significance, please see our Legal Update published here. We have also previously written about AI in policing as part of our AI Watch series and will continue to monitor how AI plays an expanding role in policing and the public sector more broadly.