London Metropolitan Police's Facial Recognition System Is Now Only Misidentifying People 81% Of The Time

from the any-year-now... dept

The London Metropolitan Police’s spectacular run of failure continues. Sky News reports the latest data shows the Met’s facial recognition tech is still better at fucking up than doing what it says on the tin.

Researchers found that the controversial system is 81% inaccurate – meaning that, in the vast majority of cases, it flagged up faces to police when they were not on a wanted list.

Needless to say, this has raised “significant concerns” among the sort of people most likely to be concerned about false positives. Needless to say, this does not include the London Metropolitan Police, which continues to deploy this tech despite its only marginally improved failure rate.

In 2018, it was reported the Metropolitan Police’s tech was misidentifying people at an astounding 100% rate. False positives were apparently the only thing the system was capable of. Things had improved by May 2019, bringing the Met’s false positive rate down to 96%. The sample size was still pretty small, meaning this had a negligible effect on the possibility of the Metropolitan Police rounding up the unusual suspects the system claimed were the usual suspects.

Perhaps this should be viewed as a positive development, but when a system has only managed to work its way up to being wrong 81% of the time, we should probably hold our applause until the end of the presentation.

As it stands now, the tech is better at being wrong than identifying criminals. But what’s just as concerning is the Met’s unshaken faith in its failing tech. It defends its facial recognition software with stats that are literally unbelievable.

The Met police insists its technology makes an error in only one in 1,000 instances, but it hasn’t shared its methodology for arriving at that statistic.

This much lower error rate springs from the Metropolitan Police’s generous accounting of its facial recognition program. Its method measures mistaken matches against the total number of faces processed, rather than against the number of faces the system actually flagged. That’s how it arrives at a failure rate that sounds much, much better than a system that is far more often wrong than right.
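To see how the choice of denominator flips the headline number, here’s a minimal sketch. The counts below are assumptions picked only to roughly match the reported proportions; the Met hasn’t published its raw figures.

```python
# Hypothetical counts, chosen only to match the reported proportions --
# the Met has not published its raw data.
faces_processed = 42_000   # total faces scanned by the cameras (assumed)
alerts = 42                # faces the system flagged as watch-list matches (assumed)
correct_alerts = 8         # flagged faces that really were on the list (assumed)
false_alerts = alerts - correct_alerts

# The researchers' measure: of the people the system flagged, how many were wrong?
print(f"wrong per alert: {false_alerts / alerts:.0%}")                   # ~81%

# The Met's measure: of everyone scanned, how many were wrongly flagged?
print(f"wrong per face scanned: {false_alerts / faces_processed:.2%}")   # ~0.08%, i.e. roughly 1 in 1,000
```

Same system, same mistakes; only the denominator changes.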

No matter which math is used, it’s not acceptable to deploy tech that’s wrong so often when the public is routinely stripped of its agency by secret discussions and quiet rollouts. Here in the US, two cities have banned this tech, citing its unreliability and the potential harms caused by its deployment. Out in London, law enforcement has never been told “No.” A city covered by cameras is witnessing surveillance mission creep utilizing notoriously unreliable tech.

The tech is being challenged in court by Big Brother Watch, which points out that every new report of the tech’s utter failure only strengthens its case. Government officials, however, aren’t so sure. And by “not so sure,” I mean, “mired in deep denial.”

The Home Office defended the Met, telling Sky News: “We support the police as they trial new technologies to protect the public, including facial recognition, which can help them identify criminals.”

But it clearly does not do that.

It misidentifies people as criminals, which isn’t even remotely close to “identifying criminals.” It’s the exact opposite and it’s going to harm London residents. And the government offers nothing but shrugs and empty assurances of public safety.



Comments on “London Metropolitan Police's Facial Recognition System Is Now Only Misidentifying People 81% Of The Time”

19 Comments
Robert Beckman (profile) says:

Safest place in London

According to the Met, 999 of 1000 are correctly identified, and if 19% of the 1/1000 are correctly identified as criminals then there are only 19 criminals per 100,000 people, which is easily the lowest rate in London, let alone in the world.

Great job, London Metropolitan Police, you’ve shown you have the safest jurisdiction and must not need this fancy tech.
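[For what it’s worth, a quick sketch of the arithmetic behind that jab, taking the commenter’s reading of the Met’s numbers at face value:]

```python
# The commenter's own figures, taken at face value for the joke.
flag_rate = 1 / 1000      # Met's claim read as: one face in a thousand gets flagged
hit_rate  = 0.19          # ~19% of flagged faces really are on the wanted list

criminals_per_100k = flag_rate * hit_rate * 100_000
print(criminals_per_100k)  # 19.0 -- nineteen wanted people per 100,000 scanned
```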

Anonymous Coward says:

"Misidentifying People 81% Of The Time" might be better than misidentifying 0.1% of the time. In principle, it means every cop should know they need to double-check each result closely, whereas a more accurate system could easily be seen as infallible. (Unless, as another person said, it’s just a pretext to hassle people.)

Pete Austin says:

Failure Rate = false positives and false negatives

It’s easy enough to vary the recognition threshold and decrease the percentage of false positives (mis-detecting fewer innocent people) at the expense of increasing the percentage of false negatives (detecting fewer criminals).

Unless we have both percentages, I strongly suspect this is what the police have done, and that would NOT be an improved failure rate.
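[A toy sketch of the trade-off that comment describes, with entirely made-up match scores and thresholds:]

```python
import random

# Toy model: every scanned face gets a similarity score against the watch
# list, and a match is declared above some threshold. Raising the threshold
# trims false positives but misses more genuine matches. All numbers invented.
random.seed(0)
innocent_scores = [random.gauss(0.40, 0.15) for _ in range(10_000)]  # not on the list
wanted_scores   = [random.gauss(0.70, 0.15) for _ in range(100)]     # on the list

for threshold in (0.5, 0.6, 0.7, 0.8):
    false_positives = sum(s >= threshold for s in innocent_scores)
    false_negatives = sum(s < threshold for s in wanted_scores)
    print(f"threshold {threshold}: {false_positives} false positives, "
          f"{false_negatives} false negatives")
```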

Anonymous Coward says:

and in true UK fashion, no one in the Police Force or the UK government gives a flyin’ fuck! it doesn’t matter who gets falsely arrested, it doesn’t matter who gets away instead of being caught, and it definitely doesn’t matter who is harmed because of the screwed up, useless software that won’t be usable until at least 2050! the UK used to be at the forefront of democracy, of freedom and of truth; now it’s a leader in how to fuck your own citizens, how to remove freedom and privacy and how to (try to) keep tabs on everyone except those it should (criminals and terrorists)! what a reversal!!

Anonymous Coward says:

Time for the Devil's advocate....

Seems to me that, depending on how they’re using this tech, their stats are being misinterpreted.

The FR tech is presumably being used to flag potential bad actors. As such, it is meant to eliminate 90% of the population as a first pass, so that the police then only have to concentrate on the final 10%, thus simplifying their detective work significantly.

As such, as long as their FP rate is below 100%, they are getting utility from the system. Without it, they’d have to use some other method to comb through the entire population to shrink the sample size, and the method they used to do that would most likely involve profiling, which has some nasty side effects.

The big question is: what’s their FN rate like? If it’s too high, that means they’re mostly flagging up innocent people while failing to find the people they’re actually looking for. That would be bad, and would mean they should just drop the program as useless for policing.

But if the FN rate is reasonable, they likely end up in a situation where they get 10 people flagged, 1 of which is the perp. This looks suspiciously like the trusted police lineup situation, but with a random sampling of similar looking people as the faces in the lineup — potentially a better lineup than the meatspace ones. Because at that point, they can track the CCTV footage for these 10 people, make some targeted inquiries around the crime that actually took place, and quickly narrow things down to 1 suspect.

In the police’s view, the worst thing they could have is a system that only flags people who are criminals — because then they’d have a MUCH harder time figuring out which criminally minded person actually committed the crime.

Yes, there are plenty of logical fallacies in what I just wrote, but this is the logic behind their arguments, and this is what you need to refute to make any sort of a difference. Just going on about an 81% FP rate does nothing but spread ignorance about how this data is actually processed and what its significance is.
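[One way to make that argument concrete: a small confusion-matrix sketch, with every rate assumed for illustration, shows how a tiny per-face error rate still produces mostly-wrong alerts when the watch list is rare in the crowd:]

```python
# All numbers assumed for illustration: a big crowd, a short watch list,
# and per-face error rates of the sort the Met quotes.
crowd_size   = 100_000
on_watchlist = 20
false_positive_rate = 0.0008   # chance an innocent face triggers an alert (assumed)
false_negative_rate = 0.30     # chance a listed face is missed (assumed)

true_alerts  = on_watchlist * (1 - false_negative_rate)
false_alerts = (crowd_size - on_watchlist) * false_positive_rate

precision = true_alerts / (true_alerts + false_alerts)
print(f"{true_alerts + false_alerts:.0f} alerts, of which genuine: {precision:.0%}")
# ~94 alerts, only ~15% genuine: with a rare watch list, even a tiny per-face
# error rate means most alerts point at innocent people. The "81% wrong" figure
# is about the alerts, not about every face scanned.
```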

Mason Wheeler (profile) says:

Re: Time for the Devil's advocate....

This is basically correct. Also, it went from 100% false positives to 81% false positives over the course of a year. Extrapolate out that rate of improvement, and in another 4 years the false positive rate is likely to be close to 0. (And that’s making the rather pessimistic assumption that the rate of growth will only be linear; in the world of computing, Moore’s Law applies to most things.)

In other words, this is just another typical Libertarian hit piece by our resident Libertarian nutjob.

Anonymous Coward says:

Re: Re: Time for the Devil's advocate....

I wouldn’t go this far… the rate of improvement for this tech is on a logarithmic curve; it’s not exponential or even linear.

But even following a logarithmic curve, it will only take about 4 years to get to, say, a 12% FP rate. For the purposes they’re applying it to, assuming a reasonable FN rate, that’s leaps and bounds beyond what they could achieve using any other method.

Using modern Machine Learning techniques, it is highly unlikely they can get their FP rate below around 34%. However, the maths are always improving in this space, so I’d say that 12% is plausible, even if not likely.

But even a 34% rate is amazing. Remember: this isn’t using facial recognition to say "is that person a criminal?" It’s using facial recognition to say "around the scene of the crime, whose face looks like someone we know commits these sorts of crimes?"

It’s an investigation technique that mimics what humans already do — it’s not replacing a judge and jury.

Personally, I’m happy with an 81% FP rate. If they got it close to 0, law enforcement might be tempted to assume guilt when they got a match. And THEN we’d be in trouble similar to that documented in https://www.techdirt.com/articles/20190716/19112442601/public-records-request-nets-users-manual-palantirs-souped-up-surveillance-software.shtml

That One Guy (profile) says:

Easy fix

Step one: Point the cameras at the police and any politicians who support the program.

Step two: Any and all hits will result in an arrest/stop/further investigation by a third party who will have the same incentives to secure a ‘hit’ as the police do.

If either step is not followed, the program is tossed entirely. I imagine a week or so of those pushing the program being subject to it themselves would be enough for them to care about how accurate it isn’t.
