High Court upholds Metropolitan Police's live facial recognition policy: Thompson & Anor, R (On the Application Of) v Commissioner of Police of the Metropolis
In a judicial review decision handed down on 21 April 2026, the High Court upheld the lawfulness of the 2024 policy of the Commissioner of Police of the Metropolis on live facial recognition (“LFR”) (the “Policy”).
The judgment provides a timely discussion of the interaction between privacy and technology, an area that is likely to continue to develop as biometric and AI‑enabled technologies become more widespread.
Background
The first claimant was Mr Shaun Thompson, a community worker who was wrongly identified as his brother (who was on a police watchlist) by LFR technology and stopped by police in central London. When Mr Thompson declined to allow his fingerprints to be taken, he was threatened with arrest.
The second claimant was Ms Silkie Carlo, the director of Big Brother Watch, a civil liberties organisation. Ms Carlo has campaigned against the use of LFR by the Metropolitan Police Service (the “MPS”).
The Policy
LFR is a CCTV tool used for the detection and prevention of crime by locating individuals on police watchlists, known as “Sought Persons”. The LFR identifies and analyses individuals’ facial features and outputs these as a set of numerical values called a “Biometric Template”. This Biometric Template is compared against the Biometric Templates of those on the list of Sought Persons.
If a similarity threshold is exceeded, the LFR software generates an alert of a possible match and an Engagement Officer will assess whether the match is “viable”. In the absence of an alert, any Biometric Template created by LFR is immediately and automatically deleted. Any images of individuals whose similarity scores fall below the threshold are blurred in the footage watched by officers.
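The matching process described above can be sketched in code. This is an illustrative sketch only: the function names, the use of cosine similarity as the comparison measure, and the 0.8 threshold are assumptions for the purpose of illustration, not details drawn from the judgment or the Policy.

```python
import math

def similarity(template_a, template_b):
    """Cosine similarity between two Biometric Templates (numeric vectors)."""
    dot = sum(a * b for a, b in zip(template_a, template_b))
    norm = (math.sqrt(sum(a * a for a in template_a))
            * math.sqrt(sum(b * b for b in template_b)))
    return dot / norm if norm else 0.0

def screen(template, watchlist, threshold=0.8):
    """Compare a template against the Sought Persons watchlist.

    Returns the best match above the threshold (an alert, to be assessed
    by an Engagement Officer), or None. Where None is returned, the
    Policy as described above requires the template to be deleted
    immediately and automatically.
    """
    best = max(watchlist,
               key=lambda entry: similarity(template, entry["template"]),
               default=None)
    if best is not None and similarity(template, best["template"]) >= threshold:
        return best   # alert: possible match for human assessment
    return None       # no alert: template deleted, footage blurred
```

The key design point mirrored here is that no biometric data survives a non-match: the system either raises an alert for human review or discards the template.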
The Policy constrains why, where and against whom LFR can be used, with the permitted circumstances referred to in the judgment as “Use Cases”.
In summary, LFR may be deployed only at:
- crime and missing person hotspots;
- protective security operations (for example, major events); and
- deployments based on specific intelligence which indicates that a person eligible for inclusion on an LFR watchlist is likely to be present at a particular place.
If none of the Use Cases applies, LFR cannot be used under the Policy.
The issues before the High Court
The claimants did not bring the judicial review on the basis that the use of LFR itself was unlawful, or that the police did not have the power to use it, but rather that LFR gives rise to “significant civil liberties concerns” which had heightened with the increased deployment of LFR in recent years.
They argued that the Policy left too much discretion to individual officers as to where, why and against whom LFR could be used, and in doing so it violated Articles 8, 10 and 11 of the European Convention on Human Rights (“ECHR”).
The claimants advanced two grounds of challenge:
- the Policy permitted interference with Article 8 of the ECHR, the right to respect for private life, which was not “in accordance with the law” under Article 8(2) (“Ground 1”); and
- the Policy permitted restrictions on the freedom of expression and freedom of assembly under Articles 10 and 11 of the ECHR which were not “prescribed by law” (“Ground 2”).
Ground 1 and Ground 2 raised the same issue: namely, whether the Policy had the required “quality of law”. Assessing established case law, the judges determined this to mean that the Policy must be: (i) accessible to the persons concerned, (ii) foreseeable as to its consequences and (iii) compatible with the rule of law.
Foreseeability
The key question in issue was foreseeability. To satisfy this limb, a measure must have “sufficient clarity and foreseeability” to avoid arbitrariness (i.e., decision-making based on “whim, caprice, malice or predilection”).
The court applied the “relativist approach” to foreseeability approved by the Court of Appeal in R (Bridges) v Chief Constable of South Wales [2020] EWCA Civ 1058, meaning the more intrusive the act complained of, the more precise and specific the law must be to justify it. In Bridges, the court found that the legal framework governing the relevant police force’s use of LFR lacked the quality of law needed to avoid arbitrariness, particularly because it was unclear who could be placed on a watchlist or what criteria governed where LFR could be deployed.
The court was therefore required to determine whether the Policy left so much discretion to individual officers that the use of LFR depended on their will rather than the law itself. The claimants did not pursue arguments on justification or proportionality, which would only have arisen had the Policy failed to meet the foreseeability requirement.
The High Court’s decision
The court dismissed the claim. It held that the Policy imposed sufficient constraints structured around why, where and against whom LFR could be deployed, and therefore met the quality of law requirements for Articles 8, 10 and 11 ECHR.
In explaining its decision, the court highlighted several themes:
- “Operational experience”: the court rejected the argument that the Policy’s definition of a crime hotspot, which relies on “operational experience as to future criminality”, was too opaque and subjective to be foreseeable.
- Repeat use of watchlists: the court determined that whilst repeated use of watchlists by the MPS may have occurred (which the claimants asserted was evidence of arbitrary deployment), it declined to reach a public law conclusion as the issue had not been properly pleaded.
- Intrusiveness: applying the relativist approach from Bridges, the court was not persuaded that the technology had changed significantly since that case, noting that LFR still did not involve intrusive techniques such as bugging or DNA sampling. Whilst AI posed risks of further intrusiveness, the court held that it was concerned only with the technology as it currently exists and the Policy under challenge.
- Discrimination: the claimants argued that the Policy could lead to disproportionate deployment in areas with higher ethnic minority communities. The court indicated that a properly evidenced discrimination challenge may succeed but it had not been made out in this case.
- Contextual considerations: the claimants argued that the volume of data processed by LFR was significant and could deter political protest. The court held that the Policy (i) gave individuals sufficient indication of when and where LFR would be deployed, enabling them to foresee consequences of travelling in an area where LFR is utilised, and (ii) expressly required decision-makers to consider its potential impact on protest.
- Foreseeability and discretion: the claimants’ key argument was that there was no restriction on LFR deployment in crime hotspots in terms of who it would target and where it would be deployed. The judges held that the Policy did impose constraints governing where LFR could be deployed.
Comment
This decision provides helpful judicial consideration of the legal framework for police use of LFR. It raises issues of both public law and privacy.
This is particularly important because although this was a judicial review, a form of action available only against public authorities, UK law also recognises an actionable private civil law right of privacy. Under this right, individuals could bring claims against private entities in respect of, for example, the use of CCTV and equivalent image recognition systems. There is likely to be considerable read-across of the reasoning here to any such private law actions.
Accordingly, for organisations deploying biometric and/or AI-driven surveillance, the judgment offers a practical roadmap for building a lawful policy. The judgment emphasises that specificity is key.
Organisations should take care that equality and discrimination issues are handled sensitively. The Equality and Human Rights Commission, which intervened in this case, stated in its evidence that the human rights and equality implications of digital services and AI have become a recent key priority for it.
The judgment sits within the broader and ever-developing context of AI and its use in surveillance technologies. Whilst the court limited its judgment to the specific existing technology in issue, public bodies and companies alike will need to remain rigorous in their governance, risk assessment and due diligence, and keep human rights considerations under review as these technologies continue to evolve.
We have previously written about AI in policing as part of our AI Watch series (article linked here) and will continue to monitor how AI plays an expanding role in policing and the public sector more broadly.
For further information, please email the authors or your usual CMS contact.