NIST Study Confirms The Obvious: Face Masks Make Facial Recognition Tech Less Useful, More Inaccurate

from the for-now... dept

At the end of last year, the National Institute of Standards and Technology (NIST) released its review of 189 facial recognition algorithms submitted by 99 companies. The results were underwhelming. The tech that law enforcement and security agencies seem to feel is a game changer delivers just more of the same bias we’ve been subjected to for years without any AI assistance.

Asian and African American people were up to 100 times more likely to be misidentified than white men, depending on the particular algorithm and type of search. Native Americans had the highest false-positive rate of all ethnicities, according to the study, which found that systems varied widely in their accuracy.

The faces of African American women were falsely identified more often in the kinds of searches used by police investigators where an image is compared to thousands or millions of others in hopes of identifying a suspect.

Who were the winners in NIST’s facial recognition runoff? These guys:

Middle-aged white men generally benefited from the highest accuracy rates.

We have some good news and bad news to report from NIST’s latest facial recognition study [PDF]. And the good news is also kind of bad news. (The bad news contains no good news, though.)

The bad news is that the COVID-19 pandemic is still ongoing. This leads to the good news: face masks — now a necessity and/or requirement in many places — are capable of thwarting facial recognition systems.

Using unmasked images, the most accurate algorithms fail to authenticate a person about 0.3% of the time. Masked images raised even these top algorithms’ failure rate to about 5%, while many otherwise competent algorithms failed between 20% and 50% of the time.

But that’s also bad news. Higher failure rates mean more errors, chiefly false negatives, which is an unwelcome side effect of face coverings. The tiny bit of good news is that masking generates mostly unusable images for passive systems (like those installed in the UK) that collect photos of everyone who passes by their lenses. The other small bit of good news in this bad news sandwich is this: face masks reduce the risk of bogus arrests/detainments.

While false negatives increased, false positives remained stable or modestly declined.

NIST also noticed a couple of other quirks in its study. Mask coverage obviously matters: the more of the face that’s covered, the less likely it is that the software will draw the correct conclusion. But color also matters. Black masks produced more bad results than blue masks.

Companies producing facial recognition tech (89 algorithms were tested by NIST for this project) aren’t content to wait out the pandemic. Many are already working on algorithms that use fewer features to generate possible matches. This is also bad news. While the tech may be improving, working around masks by limiting the number of data points needed to make a match is just going to generate more false positives and negatives. But companies are already training their AI on face-masked photos, many of which are being harvested from public accounts on social media websites. Dystopia is here to stay. The pandemic has only accelerated its arrival.
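To see why paring down templates is a double-edged sword, here’s a toy sketch in Python. It is not how any vendor’s algorithm actually works; it simply treats a face template as a short list of feature values and shows that, with fewer features to compare, two unrelated faces "match" by chance far more often. All numbers here are assumptions for illustration.

import random

# Toy model, not a real face matcher: a "template" is a list of feature
# values in [0, 1], and two templates "match" if every feature agrees
# within a tolerance. Fewer features means more chance collisions.

random.seed(42)

def false_match_rate(num_features, tolerance=0.25, trials=100_000):
    """Estimate how often two unrelated templates match purely by chance."""
    matches = 0
    for _ in range(trials):
        a = [random.random() for _ in range(num_features)]
        b = [random.random() for _ in range(num_features)]
        if all(abs(x - y) < tolerance for x, y in zip(a, b)):
            matches += 1
    return matches / trials

# A fuller "whole face" template vs. a reduced "eyes only" template.
print("8-feature template, chance matches:", false_match_rate(8))
print("3-feature template, chance matches:", false_match_rate(3))

Under these made-up settings, the 8-feature template produces a chance match rate of roughly 0.1%, while the 3-feature template jumps to around 8%. That is the intuition behind the worry that mask-tolerant matching built on fewer data points will generate more false hits.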


Comments on “NIST Study Confirms The Obvious: Face Masks Make Facial Recognition Tech Less Useful, More Inaccurate”

Koby (profile) says:

Remembering

I seem to remember reports from a few years back in which people would deliberately wear makeup or other head coverings which severely lowered the accuracy of facial recognition systems. It’s good to see that not much has changed, and that they can still often be fooled. But now that people have a deliberate excuse to wear a mask, I can’t wait to see masks and deliberately confusing features combined.

Picture a mask with an image of someone else’s face on it.

This comment has been deemed insightful by the community.
PaulT (profile) says:

Re: Re: Re: Remembering

"I reckon that I agree with about 80% of the articles written here."

Can you point to one that wasn’t dumb contrarianism or completely missing the point of the article (even sometimes missing the reality of your own argument)? Because usually you just say stupid shit, reply once or twice then disappear from the thread forever when people start telling you how wrong you are.

Scary Devil Monastery (profile) says:

Re: Re: Re: Remembering

"Am I not welcome here unless I meet some minimum threshold? 90% 95%? "

It’s not the percentage of agreement which is the problem. It’s the fact that in the articles where you feel compelled to grind an axe the arguments you bring to the table are more often than not flat-out lies.

"disagreeing" is having an argument with multiple people where the same facts are hashed out and multiple view points clash over the interpretation or on where the line is drawn in a compromise.

What you often do, however, is that for a few topics, like free speech online, your entire line of argumentation boils down to the metaphor of saying "Behold! My argument!" after which you squat down and take a dump.

THAT is the main issue here.

This comment has been deemed insightful by the community.
arp2 (profile) says:

Perhaps a way to convince the anti-government types?

If you tell them that it’s harder for the government to track you if you wear a mask, perhaps a few more COVIDiots will wear one. Who am I kidding, they’re not anti-government or anti-surveillance state, they’re against those things for themselves; they encourage it for everyone else.

This comment has been deemed insightful by the community.
Upstream (profile) says:

Before anyone says "Whoa! 0.3% sounds pretty good! And 5% ain’t too shabby either," let’s remember the base rate fallacy involved in these numbers. Example 3 has the least math. A 0.3% failure rate is seriously bad, 5% is way worse, and the rest, with 20% to 50% failure rates, are in the "you can’t be serious" category.
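To put the base rate point in concrete terms, here is a minimal sketch in Python using entirely hypothetical numbers (not NIST’s figures): run a search against a large gallery and see what fraction of the resulting "hits" are actually the right person.

# Hypothetical illustration of the base rate fallacy in one-to-many searches.
# All of the rates and counts below are assumptions for the example.

gallery_size = 1_000_000      # faces the probe image is compared against
true_matches = 10             # entries that really are the person being sought
false_positive_rate = 0.003   # 0.3% chance of a spurious hit per non-matching face
true_positive_rate = 0.95     # 95% chance a genuine match gets flagged

expected_false_hits = (gallery_size - true_matches) * false_positive_rate
expected_true_hits = true_matches * true_positive_rate

# Probability that any given flagged face is actually the right person.
precision = expected_true_hits / (expected_true_hits + expected_false_hits)

print(f"Expected false hits: {expected_false_hits:,.0f}")
print(f"Expected true hits:  {expected_true_hits:.1f}")
print(f"Chance a flagged face is a real match: {precision:.2%}")

With these assumed numbers, the system flags roughly 3,000 innocent lookalikes for every nine or ten genuine matches, so any given hit is the right person well under 1% of the time. That is why a small-sounding error rate can still be seriously bad when the thing being searched is a huge gallery of mostly innocent people.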
