Who controls the AI police? – New Statesman


In April, the Home Office announced £115m of new funding to set up the Police.AI centre. Its aim is to enable the “rapid and responsive” deployment of AI tools across all 43 forces in England and Wales. The National Police Chiefs’ Council (NPCC) claims it will establish UK policing as “a global leader in applied and responsive AI”.

Currently, each police force in the UK largely determines for itself how and when to procure, test and deploy AI tools, leading to a lack of standardisation and to inefficient, more costly procurement. This has inevitably raised concerns about politics, ethics and accountability – and nowhere has the debate been more pronounced than around the use of live facial recognition (LFR).

This is because facial recognition is already being used at scale. More than 25,000 “retrospective” facial recognition (RFR) searches are carried out each month, with images from CCTV, mobile phones, video doorbells and social media compared against a national police database of custody photographs. In London, between January 2024 and September 2025, over 1,300 suspects were arrested with the help of LFR.

In February, the government completed a consultation on legislation to regulate police use of the technology. The Home Office clearly signalled its intentions: “The consultation will pave the way for new laws so that all police forces can use this new technology more confidently and more often.”

Until now, police lawyers have maintained that the existing patchwork of checks and balances, such as the Covenant for Using AI in Policing and the Surveillance Camera Code of Practice, is sufficient to regulate the use of LFR. Critics counter that the public is being asked to rely on police assurances in what remains a legal grey area. Questions are also being raised about the reliability of the platforms currently in use.


Mike Neville, a former detective in the Metropolitan Police, believes officers are being led by technology rather than doing the good old-fashioned police work the public expects. The belief within the force is that the technology can improve standards and save manpower. “The public is being failed by this arrogance,” he says.

A decade ago, Neville founded the world’s first “super recogniser” police unit, made up of officers from across the Met gifted with extraordinary facial recognition skills. The team spends its days cross-checking grainy CCTV from across London against images from previous arrests, looking, by eye alone, for a match.

These officers still exist, but Neville says their role has been sidelined by AI – and not to the benefit of the police. Through a freedom of information request seen by Spotlight, Neville revealed that between April 2024 and March 2025 the Met held 17,760 unidentified images of suspects in its database. Of these, 10,431 were identified – but only 2,126 of those by facial recognition software. The remaining 80 per cent were identified by humans. “So why are they pouring money into technology and not staff?” he asks.

Lindsey Chiswick, the Met’s head of facial recognition, is confident the force’s use of the technology is “lawful and follows the policy that is published online”.

Chiswick insists the public can be assured the technology is deployed with “rigorous safeguards”. “Our use of [live facial recognition] is subject to strict controls, independent oversight and transparent governance, with robust safeguards to protect people’s rights and privacy,” she said in a statement to Spotlight.

But a growing number of cases call into question the reliability of the technology. Alvi Choudhury, a 26-year-old, was arrested at his home in Southampton after Thames Valley Police’s facial recognition software matched him with CCTV footage of a burglary 100 miles away in Milton Keynes.

Choudhury, who says he has never visited the city, was left baffled by the arrest after the software mistakenly matched him with another person of South Asian heritage.

Thames Valley, Neville points out, has around 40 trained super recognisers on its books. None of them, he says, was consulted before the arrest. “I still train them,” he says. “So why weren’t they being used?”

The answer, says Neville, lies in institutional laziness and an unwillingness to question the technology. The Surveillance Camera Code of Practice notes that any use of facial recognition requires “human intervention” before any decision affecting an individual is made. But any officer, he argues, can complete a basic “decision-making” course and sign off a facial recognition match.

The Police.AI Centre, which will oversee how these tools are developed and deployed nationally, could help raise standards. However, there remains a lack of clarity about what its structure and leadership will look like.

“The worst appointment would be someone who is just an AI expert,” says Neville. “You need an operative detective who knows that the goal is to catch criminals.

“People think facial recognition is like something out of Batman — that the machine is infallible. It’s not. The machine will tell you what it thinks, and sometimes it will tell you what you want to hear.”

Dr Daragh Murray, a researcher at Queen Mary University of London, evaluated the technology’s accuracy at six of the ten test deployments carried out by the Metropolitan Police. He found that, of 42 matches, only eight people were verifiably correctly identified – an error rate of 81 per cent. And of those eight, four were never stopped, so the match could not be confirmed.

In the absence of legislation, the police effectively set their own standards. “What you end up with,” he says, “is basically a situation where the police are testing AI tools on the public. That can’t be right.”

Without a unifying legal framework, he argues, Police.AI’s promised safeguards amount to self-regulation. “If you’re cynical,” he says, “you might say the police are taking advantage of delays in the court system to get the technologies out before the courts catch up.”

According to the NPCC, proper use of AI will see the technology free up six million hours of police work a year, equivalent to releasing 3,000 officers. But how big are the benefits if automated results cannot be trusted without extensive human intervention? A single LFR deployment, Murray points out, involves around 15 officers. “Whether that in itself provides an efficiency saving,” he says, “is completely debatable.”

William Webster, the UK’s biometrics and surveillance camera commissioner, is not fundamentally opposed to the use of AI in policing. But facial recognition technology will itself require extensive policing. “What we need is an acknowledgement that [misidentification] is going to happen, and a system for how to deal with it,” he says. “That’s why we need very clear protocols on how it’s used.”

Webster fears the Crime and Policing Bill – which is expected to include provisions governing facial recognition – will not contain the detail he wants. The biggest focus in the bill has been the abolition of police and crime commissioners and the consolidation of forces. “My concern,” he says, “is that the pieces about meaningful oversight of the technology, meaningful regulation of biometrics, are going to fade away.”

The previous government had moved to abolish Webster’s office entirely. At present, his office has no control over the deployment or governance of facial recognition. The Surveillance Camera Code of Practice on which he advises is owned not by him but by the home secretary. And while forces operate their own live facial recognition cameras, much of the wider surveillance network, from council CCTV to transport systems, lies outside their direct control.

“The police may not own the cameras,” suggests Webster, “but they do own the consequences.”

In some cases, police have already shown they can respond to flaws in the technology. Essex Police recently halted its use of live facial recognition after identifying problems with the algorithm, a move Webster describes as “good practice”. “They found there was a problem and went back and looked at it,” he says.

But the system cannot rely entirely on self-regulation, and even a new framework is no guarantee of better standards. The danger, Webster argues, is that if the rules are written too narrowly around the police, they will miss the reality of how the technology is used.

“We must avoid a two-tiered scenario in which police systems are held to a high standard of proof, while footage from other sources, including vendors, is not.” Without that, he warns, the system risks collapsing into “a really complicated mess.”

For now, the government is pushing ahead with the rollout of facial recognition and other AI tools across police forces. But the legal and regulatory framework remains unresolved, fragmented and, in key respects, incomplete. The danger is not simply that the technology gets ahead of the law, but that it is rolled out before meaningful controls are in place. By then, these tools may already have reshaped policing to an extent that is difficult to reverse.

