Harrisburg University Researchers Claim Their 'Unbiased' Facial Recognition Software Can Identify Potential Criminals

from the fresh-hells-delivered-daily dept

Given all we know about facial recognition tech, it is literally jaw-dropping that anyone could make this claim… especially when it hasn’t been independently vetted.

A group of Harrisburg University professors and a PhD student have developed an automated computer facial recognition software capable of predicting whether someone is likely to be a criminal.

The software is able to predict if someone is a criminal with 80% accuracy and with no racial bias. The prediction is calculated solely based on a picture of their face.

There’s a whole lot of “what even the fuck” in CBS 21’s reprint of a press release, but let’s start with the claim about “no racial bias.” That’s a lot to swallow when the underlying research hasn’t been released yet. Let’s see what the National Institute of Standards and Technology has to say on the subject. This is the result of the NIST’s examination of 189 facial recognition AI programs — all far more established than whatever it is Harrisburg researchers have cooked up.

Asian and African American people were up to 100 times more likely to be misidentified than white men, depending on the particular algorithm and type of search. Native Americans had the highest false-positive rate of all ethnicities, according to the study, which found that systems varied widely in their accuracy.

The faces of African American women were falsely identified more often in the kinds of searches used by police investigators where an image is compared to thousands or millions of others in hopes of identifying a suspect.

Why is this acceptable? The report inadvertently supplies the answer:

Middle-aged white men generally benefited from the highest accuracy rates.

Yep. And guess who’s making laws or running police departments or marketing AI to cops or telling people on Twitter not to break the law or etc. etc. etc.

To craft a terrible pun, the researchers’ claim of “no racial bias” is absurd on its face. Per se stupid af, to use legal terminology.

Moving on from that, there’s the 80% accuracy, which is apparently good enough since it will only threaten the life and liberty of 20% of the people it’s inflicted on. I guess if it’s the FBI’s gold standard, it’s good enough for everyone.
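Here’s a quick back-of-the-envelope sketch of why a bare “80% accuracy” number means very little on its own. The researchers haven’t published their methodology, so the population, base rate, and error breakdown below are purely illustrative assumptions, not anything from the Harrisburg work:

```python
# Illustrative arithmetic only: what "80% accuracy" can look like once base
# rates are taken into account. Every number here is an assumption.

population = 1_000_000   # people scanned
base_rate = 0.05         # assume 5% are "criminals" by whatever definition applies
sensitivity = 0.80       # assume 80% of actual "criminals" are flagged
specificity = 0.80       # assume 80% of everyone else is correctly cleared

criminals = population * base_rate
innocents = population - criminals

true_positives = criminals * sensitivity          # 40,000 flagged correctly
false_positives = innocents * (1 - specificity)   # 190,000 flagged wrongly

flagged = true_positives + false_positives
precision = true_positives / flagged

print(f"People flagged: {flagged:,.0f}")
print(f"Innocent people flagged: {false_positives:,.0f}")
print(f"Chance a flagged person is actually a 'criminal': {precision:.0%}")
```

With those assumed numbers, more than four out of five people the software flags would be innocent – the kind of detail a serious accuracy claim would have to address, and one the press release never gets near.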

Maybe this is just bad reporting. Maybe something got copy-pasted wrong from the spammed press release. Let’s go to the source… one that somehow still doesn’t include a link to any underlying research documents.

What does any of this mean? Are we ready to embrace a bit of pre-crime eugenics? Or is this just the most hamfisted phrasing Harrisburg researchers could come up with?

A group of Harrisburg University professors and a Ph.D. student have developed automated computer facial recognition software capable of predicting whether someone is likely going to be a criminal.

The most charitable interpretation of this statement is that the wrong-20%-of-the-time AI is going to be applied to the super-sketchy “predictive policing” field. Predictive policing — a theory that says it’s ok to treat people like criminals if they live and work in an area where criminals live — is its own biased mess, relying on garbage data generated by biased policing to turn racist policing into an AI-blessed “work smarter not harder” LEO equivalent.

The question about “likely” is answered in the next paragraph, somewhat assuring readers the AI won’t be applied to ultrasound images.

With 80 percent accuracy and with no racial bias, the software can predict if someone is a criminal based solely on a picture of their face. The software is intended to help law enforcement prevent crime.

There’s a big difference between “going to be” and “is,” and researchers using actual science should know better than to use both phrases to describe their AI efforts. One means scanning someone’s face to determine whether they might eventually engage in criminal acts. The other means matching faces to images of known criminals. They are far from interchangeable terms.

If you think the above quotes are, at best, disjointed, brace yourself for this jargon-fest which clarifies nothing and suggests the AI itself wrote the pullquote:

“We already know machine learning techniques can outperform humans on a variety of tasks related to facial recognition and emotion detection,” Sadeghian said. “This research indicates just how powerful these tools are by showing they can extract minute features in an image that are highly predictive of criminality.”

“Minute features in an image that are highly predictive of criminality.” And what, pray tell, are those “minute features?” Skin tone? “I AM A CRIMINAL IN THE MAKING” forehead tattoos? Bullshit on top of bullshit? Come on. This is word salad, but a salad pretending to be a law enforcement tool with actual utility. Nothing about this suggests Harrisburg has come up with anything better than the shitty “tools” already being inflicted on us by law enforcement’s early adopters.

I wish we could dig deeper into this, but we’ll all have to wait until this excitable group of clueless researchers decides to publish their findings. According to this site, the research is being sealed inside a “research book,” which means it will take a lot of money to actually prove this isn’t any better than anything that’s been offered before. This could be the next Clearview, but we won’t know if it is until the research is published. If we’re lucky, it will be before Harrisburg patents this awful product and starts selling it to all and sundry. Don’t hold your breath.

Companies: harrisburg university


Comments on “Harrisburg University Researchers Claim Their 'Unbiased' Facial Recognition Software Can Identify Potential Criminals”

ECA (profile) says:

Re: We lie, and here is how you know that.

“We already know machine learning techniques can outperform humans on a variety of tasks related to facial recognition and emotion detection,”

I do like the comment, and know some real ugly types that would NEVER be considered NOT a criminal.
Facial recog from computers is about 20% at the MAX, mostly because of lighting, angles, and other things that can change how a person looks.
Emotion detection? That’s only valid when you KNOW THAT PERSON. I know a lot of people who are MONOTONE, and you can’t tell squat about a joke until they tell you it was a JOKE. And facial emotions?? You have GOT to be kidding.

A friend and I thought about what to wear for Halloween: the ugly thing that you don’t want to get near, or that nice-looking spit-eating grin of a person in a suit with candy. WHICH would be more scary?

This comment has been deemed insightful by the community.
That One Guy (profile) says:

Let's test that shall we?

For some reason I have the sudden urge to take their miracle technology and feed photos of all the researchers involved, every politician, and every cop they can get their hands on through it.

After all, if it’s capable of pre-crime then it should be interesting to see who among that sampling is a criminal just waiting for their chance, and with an 80% accuracy rate, well, that’s a lot of potential criminals to sort through and find – criminals who will have no excuse if they are flagged by such an amazingly accurate piece of technology, since it couldn’t possibly be wrong.

Partial sarcasm aside I can but hope that this is a junk science PR stunt, with the hope that after a while people will forget the ‘junk’ half of that description and only remember those involved for making something really cool, because if they actually think that they’ve created a pre-crime facial recognition program then either they are running a scam that is likely to be all too successful given how eager cops are certain to be for yet another ‘violation of rights justification device’, or they are so delusional that they have bought their own hype.

That One Guy (profile) says:

Re: Re: 'First you said it was terrible, now it's great, which is it?'

That may be true; however, the point of running them through the system is that it becomes somewhat more difficult for them to support a system for ‘accurately spotting criminals’ if people can dig up quotes from not long before of them objecting that it flagged them as criminals and insisting that it simply must be a mistake.

OGquaker says:

Re: Tested that.

I would turn myself in right now, but I’m sequestered at home:(

I had some FTA warrants a few years ago, brought my toothbrush to 77th Street Police Station late on a Friday; cause jail is cheaper than "bail" (revenue enchantment) but the Cop at the computer lied and sent me back to the white suburbs. I had to wait the seven years to drive again:(

Another time i got picked up with an "expired license", the judge put me on probation! I kicked my court-appointed attorney and blurted out "Civil", the judge paused and said "civil" and dropped my probation. High white cheeks are a blessing.

Anonymous Coward says:

Re: Let's test that shall we?

After all, if it’s capable of pre-crime then it should be interesting to see who among that sampling is a criminal just waiting for their chance, and with an 80% accuracy rate, well, that’s a lot of potential criminals to sort through and find – criminals who will have no excuse if they are flagged by such an amazingly accurate piece of technology, since it couldn’t possibly be wrong.

Be careful what you wish for. There’s nothing authoritarians like more than pushing a button and claiming "mission accomplished."

So what if it flags innocent people as criminals? As long as it avoids flagging the right people, a.k.a. the party in power and their buddies, it’s a perfect system in their book. Your questioning of it may actually get you flagged by it as a pre-terrorist.

Partial sarcasm aside I can but hope that this is a junk science PR stunt

Or it’s just another appeal to power hoping to get some sweet funding and good graces.

This comment has been deemed insightful by the community.
Scary Devil Monastery (profile) says:

Re: Re:

"Minority Report now deserves that same distinction."

Worse still. As an AC stated below, this is essentially phrenology – the long-disproven theory that you could predict a person’s personality and moral fiber from the topology of their skull. To my knowledge, the last ones to even try to apply that as a “method” were a bunch of Third Reich quacks under Mengele, who wanted a way to find out whether someone who looked properly Teutonic might actually have Jewish or Romani blood… or, worse by far, be a homosexual.

That a few scientists are desperate for grants should not excuse them for trying to peddle pseudoscientific garbage whose only defenders in modern times were Hitler’s Quack Squad.

This comment has been deemed insightful by the community.
Anonymous Coward says:

I could do better...

80% accuracy is a weak number. I have almost no programming training or experience (QBasic and Turbo Pascal in high school), and I’m pretty sure I could write a program that could identify whether or not someone is a criminal with 99% or better accuracy. All it would need to do is be fed a picture and say “Yes”.

With the complexity of US and state laws, I’d say it’s a pretty sure bet that almost every person living in the US has committed, or will commit, a crime at least once in their lives.
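(For the curious, a minimal sketch of that “always say Yes” approach, assuming the premise above that nearly everyone has broken some law; the 99% base rate is the commenter’s guess, not a measured figure:)

```python
# Illustrative sketch of the "always say Yes" classifier described above.
# The base rate is an assumption taken from the comment, not real data.

def is_criminal(photo) -> bool:
    """Ignore the input entirely and flag everyone as a criminal."""
    return True

assumed_base_rate = 0.99  # assume 99% of people have committed some crime

# The classifier is right on every actual "criminal" and wrong on everyone else,
# so its accuracy is simply the assumed base rate.
accuracy = assumed_base_rate
print(f"Accuracy of the do-nothing classifier: {accuracy:.0%}")  # -> 99%
```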

This comment has been deemed insightful by the community.
That Anonymous Coward (profile) says:

Y’all missed something that was under the fold…

The PhD candidate is an NYPD veteran.

https://twitter.com/dancow/status/1257824523585536000

I of course wanted to see the results when we gave the system the pictures of the cops who anally violated a detainee, the serial killer CBP agents, oh and those TSA guys who ran drugs & weapons.

This comment has been deemed insightful by the community.
Agammamon says:

The software is able to predict if someone is a criminal with 80% accuracy and with no racial bias. The prediction is calculated solely based on a picture of their face.

Here’s the deal – what does ‘being a criminal’ even mean?

What about an adulterer? That’s illegal in some places. Would a picture of a person who cheated on their spouse in a place where that’s not illegal get a pass, while it would detect the ‘criminality’ of someone who did it in a jurisdiction where it’s illegal?

Or two pictures of the same person: an earlier picture of a cheater in a jurisdiction where it’s not illegal, and a later picture of the same person from a jurisdiction where it is illegal?

Scary Devil Monastery (profile) says:

Re: Re:

"This is going to help police prevent crime in what way, exactly?"

By pre-emptively locking up anyone identified by the system to be a future criminal, obviously.

Or fit them with ankle trackers, red-flag them in national police databases, kill their credit ratings, mandate they attend regular "parole" hearings, and blacklist them from any job having anything to do with government or security.

For a better example of how this might work – or not, as the case may be – google the wiki entry for “social credit score” as it’s being tested out in China.

This comment has been deemed insightful by the community.
Anonymous Coward says:

The New Phrenology

Back in the 1800s they claimed that you could tell if a person was a criminal by how they looked. Until recently, it was considered nonsense. (But there is money involved, so now we have phrenology 2.0.)

I can make a system that is 100% correct. Since everyone is guilty of something, you mark everyone as a criminal. Done. Now pay me.

Anonymous Coward says:

Re: The New Phrenology

The sad and crazy thing is they could have gotten way better numbers if they had just made it match faces against a mugshot or active-warrants database.

Even with the known limitations and issues of facial recognition making it dubious (issues with large data sets of examples exceeding their resolution), that would be a way better idea on so many levels. But it seems bias laundering is the main market for machine learning for law enforcement.

Anonymous Coward says:

We all break at least 3 laws a day, on average correct?

So going with the statements that have been made (that there are so many laws, good and bad, on the books), saying that everyone breaks at least 3 laws a day…

So by extrapolation, EVERYONE will be a criminal at some point in their life (by breaking some law they may not even be aware of, pulled a tag off a mattress… busted just kidding but you know what I mean).

My new AI can predict with 150% accuracy whether or not someone will be a criminal at some point in some country in their life… spoiler everyone will be a criminal at some point…

Ok, cops pay me all the money now, all your bases are belonging to us…

This comment has been deemed insightful by the community.
Anonymous Coward says:

Re: We all break at least 3 laws a day, on average correct?

everyone breaks at least 3 laws a day…

The quote was 3 felonies per day. If you count every instance of law-breaking in a day, including multiple violations of the same law, you’ll get a much bigger number. That would include misdemeanors, traffic law violations, etc.

Eldakka (profile) says:

The software is able to predict if someone is a criminal with 80% accuracy and with no racial bias.

We don’t need any sort of predictive facial recognition system, or any other system, to come up with 80% accuracy.

Based on the incredible number of laws, ranging from local to state to federal to international, it’s nearly impossible to not break some sort of law on a daily basis. I probably break several traffic laws on my daily commute to work. Daily internet activity probably breaks some law somewhere – most likely copyright.

Therefore, picking any random sampling of any group of people, the chances are that 80% of them have committed some sort of crime today – ranging from speeding, turning without indicating, reading an article online, or downloading (and listening to) an .mp3, to, hell, exceeding authorised access to a computer system under some readings of that law – let alone over their entire lifetimes.

This comment has been flagged by the community.

PaulT (profile) says:

Re: Re: Re:2 Re:

Exactly. The entire reason for using hashtags is so that when you’re on a platform like Instagram or Twitter that uses them, you click on to the tag to see other posts with that hashtag. If you use them on a platform that doesn’t support that functionality, it’s just noise and an indication you don’t understand what you’re typing.

PaulT (profile) says:

Re: Re: Re: Re:

"In fairness, you can use hashtags properly in Markdown"

No, you can’t, because the formatting isn’t the issue. It doesn’t matter how you format #JustMarkdownThings because it’s still just text. You don’t go to other posts that used the hashtag #JustMarkdownThings no matter how much you click on it, and since that’s the entire purpose of hashtags, you fail when trying to use them here.

Zane (profile) says:

Press release removed

Looks like they’ve removed the press release: https://harrisburgu.edu/hu-facial-recognition-software-identifies-potential-criminals/
From experience, people who write press releases sometimes exaggerate to make the story more interesting, and they don’t always wait to get the researchers’ approval before publicising. It’s terrifying how much Comms departments and journalists will stretch the facts. I’ve had my work inaccurately described in the past. The issue could be more about bad journalism than bad research. But I’m speculating, based on the removal of the press release, and on the outlandish claims. It will be interesting to see the research once published.

This comment has been deemed insightful by the community.
Stephen T. Stone (profile) says:

Re:

The news release outlining research titled “A Deep Neural Network Model to Predict Criminality Using Image Processing” was removed from the website at the request of the faculty involved in the research. The faculty are updating the paper to address concerns raised.

Translation:

We fucked up, and we know we fucked up, but we can’t say “we fucked up”. We also can’t say “oops, we did phrenology”. So accept this long-winded bullshit instead.

Scary Devil Monastery (profile) says:

Doesn't sound too hard...

…hell, I can identify a criminal just by reading a few facts about them.

For instance, by using my own methods of deduction I can state with at least 80% accuracy that the Harrisburg researchers mentioned in the OP are fraudulent con men bucking for easy money by selling snake oil and a miracle cure. And are probably Libras.

Amazing what you can tell just by a casual glance, if you know how. You guys think I should patent the method?

This comment has been deemed insightful by the community.
Anon says:

Surprised? Why?

This is the community that thinks polygraphs are 100% reliable – even more reliable than computers, obviously, since it’s a big electric thing with waving needle pens and moving paper, so it must be scientific. They are considered proof positive anywhere but in court. FBI, CIA, Secret Service, most prosecutors’ offices, police forces…

So should we be surprised the same bunch think there’s something reliable about facial recognition, a tech that well-placed makeup or facial hair or sunglasses can confuse?

Scary Devil Monastery (profile) says:

Re: 80% ?

"Any sensor that was only 80% accurate would go straight in the garbage."

Ah, but that’s science for scientific purposes.
For Law Enforcement all you need is "Yeah, he probably did it. Or will. Whatever, lock him up"*

The target demographic is regularly in the news for managing to shoot and kill people over not dropping smartphones or remote controls fast enough. I don’t think they’ll be bothered about a 20% inaccuracy rate. It’d probably be a great improvement over what they currently use to determine whether they should apply lethal force or not.

Scary Devil Monastery (profile) says:

Re: Re: 80% ?

"What happens when a machine is 80% accurate…"

You mean as in that machine then identifying 20% of everyone tested as a criminal? Yeah, rolled out across a larger demographic, that may result in some future politician trying to include “1 in 5 people are CRIMINALS. Time to stop getting soft on crime!” in their platform.

This is why facial recognition tech – or ANY sort of automated algorithm meant to decide “suspicion” – can’t be trusted. Even a 1% error margin becomes an incredible problem.
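(To put rough numbers on that last point – using the hypothetical 1% error margin from the comment above, not any measured figure:)

```python
# Rough scale arithmetic for a hypothetical 1% false-flag rate.
# Both numbers below are assumptions for illustration only.

city_population = 1_000_000      # people scanned in a large city
false_flag_rate = 0.01           # assume 1% of those scanned are wrongly flagged

wrongly_flagged = city_population * false_flag_rate
print(f"Innocent people flagged per million scanned: {wrongly_flagged:,.0f}")
# -> 10,000 people marked as "future criminals" before a single real crime is found
```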
