Detroit Skating Rink Horns In On Detroit PD's Facial Recognition Gaffe Racket, Denies Teen Girl Opportunity To Skate

from the even-if-our-software-is-wrong,-we-will-still-enforce-its-decision dept

It looks like Detroit, Michigan is trying to corner the market on bad facial recognition tech. The city’s police department is already associated with two false arrests based on bad matches by facial recognition software. This latest news, via Techdirt reader Jeffrey Nonken, shows mismatches aren’t just limited to the public sector.

A Black teenager in the US was barred from entering a roller rink after a facial-recognition system wrongly identified her as a person who had been previously banned for starting a fight there.

Lamya Robinson, 14, had been dropped off by her parents at Riverside Arena, an indoor rollerskating space in Livonia, Michigan, at the weekend to spend time with her pals. Facial-recognition cameras installed inside the premises matched her face to a photo of somebody else apparently barred following a skirmish with other skaters.

The teen told staff it couldn’t possibly be her since she had never visited the venue before. But it didn’t matter to the management of the rink, which is located in a Detroit suburb. She was asked to leave and now her parents are considering suing the rink over the false positive. Fortunately, no one at the rink felt compelled to call the police, which likely wouldn’t have helped anything considering local law enforcement’s track record with faulty facial recognition search results.

As for Riverside Arena, it’s apologetic but not exactly helpful. Management claims deploying facial recognition tech on patrons is part of the “usual” entry process. It also claimed, without further explanation, that it’s “hard to look into things when the system is running,” which I suppose means that’s why no one could double-check the match while Robinson was still there contesting the search results.

Being wrong some of the time is also good enough for non-government work, apparently.

“The software had her daughter at a 97 percent match. This is what we looked at, not the thumbnail photos Ms. Robinson took a picture of, if there was a mistake, we apologize for that.”

Obviously, there was a mistake. So, this should have just been an apology, not a half-hearted offer of an apology if, at some point in the future, someone other than the people directly affected by this automated decision steps forward to declare the mismatch a mismatch.

This is the other side of the facial recognition coin: private sector use. This is bound to result in just as many mismatches as government use does, only with software that’s perhaps undergone even less vetting and without any direct oversight outside of company management. False positives will continue to be a problem. How expensive a problem remains to be seen, but since private companies are free to choose who gets to use their services, lawsuits probably won’t be much of a deterrent to deploying and using unvetted software that will keep the wrong people out and, perhaps more disturbingly, let the wrong people in.

Companies: riverside arena


Comments on “Detroit Skating Rink Horns In On Detroit PD's Facial Recognition Gaffe Racket, Denies Teen Girl Opportunity To Skate”

27 Comments
David says:

"The software had her daughter at a 97 percent match."

There is no such thing as "at a 97 percent match". That is a meaningless expression. It doesn’t say what is being matched against what, or with what underlying statistic.

More likely than not it isn’t even based on an actual statistic, just a number the programmers made up to make it feel more tangible.

Facial recognition tends to fare considerably worse with black skin, particularly in less than optimal lighting: that’s just a matter of different contrast. If 3% of the patrons are black girls and the facial recognition doesn’t manage to narrow things down any further than that, you can claim a 97% match, for example.

And the software does not even need to be able to distinguish a black girl from a calico cat or a bike shed because the latter are not in the training and test sets.
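
To put a rough number on that point, here is a toy base-rate calculation (all figures are invented for illustration, and the error model is deliberately oversimplified), showing why a nominally "97% accurate" matcher can still be wrong almost every time it raises an alarm:

    # Toy illustration with made-up numbers: how often is a "match" alarm wrong
    # when the actually-banned person is almost never the one at the door?
    # Plain base-rate arithmetic, not a model of any vendor's scoring.

    def false_alarm_share(accuracy, banned_visit_rate, visits):
        """Of all alarms raised, what fraction point at the wrong person?"""
        error_rate = 1.0 - accuracy                 # assume symmetric errors for simplicity
        banned_visits = visits * banned_visit_rate
        innocent_visits = visits - banned_visits
        true_alarms = banned_visits * accuracy
        false_alarms = innocent_visits * error_rate
        return false_alarms / (true_alarms + false_alarms)

    # Assume 10,000 visits a year, with the banned person behind 1 in 1,000 of them.
    print(false_alarm_share(0.97, 0.001, 10_000))   # ~0.97 -> nearly every alarm is false

Under those made-up assumptions nearly every alarm points at an innocent patron, which is exactly why the number on its own should never be the end of the inquiry.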

Anonymous Coward says:

Re: Re: "The software had her daughter at a 97 percent matc

Data is not good, or bad. It is data.

The conclusions you draw from it may be good, or bad. The quality of the data compared to your needs may be good, or bad.

But data doesn’t go out saving lives, or shooting people.
Except on reruns of Star Trek, I guess.

Anonymous Coward says:

Re: Re: Re: "The software had her daughter at a 97 percent match."

Collecting data in an incomplete manner to bias the end results of the statistical analysis is very much a real thing. What would you call that kind of data if it’s not bad?

Police send officers to black neighborhood –> cops arrest black people –> department enters these arrests into something like COMPSTAT –> department now has "data" that shows the black neighborhoods have high crime rates –> deploy even more officers to black neighborhoods now that they have been deemed hotspots of crime.

It’s not very hard to launder racism through "data" and "technology".
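
The loop described above can be written out directly. This is a minimal sketch with invented numbers (the neighborhoods, offense rates, and patrol shares are hypothetical, not drawn from any real department’s data):

    # Two neighborhoods with the SAME underlying offense rate. Recorded arrests
    # scale with how many officers are sent, and next year's patrols follow the
    # arrest "data" -- so the initial bias keeps justifying itself.

    true_offense_rate = {"A": 0.05, "B": 0.05}   # identical by construction
    patrol_share = {"A": 0.7, "B": 0.3}          # A starts out over-policed

    for year in range(1, 6):
        arrests = {n: true_offense_rate[n] * patrol_share[n] * 1000
                   for n in patrol_share}
        total = sum(arrests.values())
        patrol_share = {n: arrests[n] / total for n in arrests}   # next year's allocation
        print(year, {n: round(a) for n, a in arrests.items()},
              {n: round(s, 2) for n, s in patrol_share.items()})

Both neighborhoods offend at the same rate by construction, yet the recorded numbers say A has more than twice the crime of B, and the allocation that produced that impression never corrects itself.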

Scary Devil Monastery (profile) says:

Re: Re: Re: "The software had her daughter at a 97 percent match."

"Data is not good, or bad. It is data."

Err…no. In the lab I could get data from a badly calibrated pH-meter. It’s what we, in the biz I was in back then, would definitely call "bad data". Context matters.

"The quality of the data compared to your needs may be good, or bad." I.e. Good or bad data. QED.

PaulT (profile) says:

This is the real problem with facial recognition – not the underlying tech, but the way in which credulous fools deal with its results.

It should be easy – a 97% match doesn’t really prove anything. It could be the same person in different lighting. It could be another 14 year old girl who happens to share some similar features (are these actually calibrated for minors, or are they likely to get more false positives, I wonder? We already know there’s unintended racial bias, but is ageism in there as well?).

This should be somewhat easy to resolve – the operator brings up the images against which the person in front of them is being matched, then they make a judgement call with human eyes. The problem is when they start offloading responsibility to the system with no human input – sorry, computer says no, Mr Buttle…

However accurate or otherwise the underlying tech is, the problem is always going to be when humans decide to defer to it to make decisions for them instead of merely advising human decisions.
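
A sketch of the workflow described above, where a high score only triggers a side-by-side check by a staff member and never a decision on its own (the names, threshold, and structure here are hypothetical, not any vendor’s actual API):

    from dataclasses import dataclass

    @dataclass
    class MatchResult:
        score: float          # whatever similarity number the vendor reports
        banned_photo: str     # the stored photo the system matched against
        live_photo: str       # the snapshot of the person at the door

    REVIEW_THRESHOLD = 0.90   # arbitrary; tune to how many reviews staff can handle

    def handle_entry(match: MatchResult) -> str:
        if match.score < REVIEW_THRESHOLD:
            return "admit"    # no alarm at all
        # A high score is a prompt for a human, not a verdict.
        return ("staff review: compare %s with %s before any decision"
                % (match.banned_photo, match.live_photo))

    print(handle_entry(MatchResult(0.97, "banned/2019-08-12.jpg", "door/cam1.jpg")))

The point of the threshold is only to decide when somebody looks at the two photos; the software never gets the last word.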

Scary Devil Monastery (profile) says:

Re: Re:

"who’s idea was it to use white facial recognition on blacks and other non-whites?"

A number of morons being told by "security experts" that the facial recognition tech was 97% accurate and didn’t read the fine print of "…except if it concerns non-caucasian individuals, particularly black, latino or asians where it will struggle to tell Prince from Oprah Winfrey".

PaulT (profile) says:

Re: Private policy

Ah, once again, you go for the rights of corporations to negatively impact the rights of others rather than take the most basic responsibility for making sure that doesn’t happen.

As I mentioned, the tech isn’t the problem; it’s the implementation, when the corporation decides it should replace nuance and thought – and we should always err on the side of the people whose lives are otherwise unnecessarily destroyed by convenience.

PaulT (profile) says:

Re: Re: Re: Private policy

"corporations->small business"

Well, corporation I used as an umbrella term, but it’s true that here it’s a small business rather than a massive chain. Which actually raises the further issue of why they pass control off to another company to make decisions for them – a company more likely to be a corporate entity than a local security guard.

I could be wrong, but this seems to be yet another cost cutting measure that doesn’t actually cut costs in the long term.

"By trying to keep the rink safe for the majority."

Except, they didn’t. These decisions didn’t protect anyone (assuming the story presented is correct), and a possible outcome of them making these decisions is that the rink isn’t around to serve everyone. There is the issue of accepting that some people’s lives need to be unnecessarily reduced in order to serve the majority, but it’s possible that this won’t even achieve that. The next time it might be a person with local connections and a competing venue to which they can drive customers.

There’s also the question of what would happen had the tech made a mistake in the other direction. What if, instead of a false positive that would have been easily remedied by human interaction, they had a false negative that allowed a person who had been banned to enter the premises? Given that they’re apparently blindly believing what an imperfect system tells them, they could have just as easily introduced more risk.

There may be more details to come, but I don’t see how the customers really benefit here. It seems to be a measure to ensure they don’t have to employ enough security guards to effectively run the place, passing decisions off to a third party instead. Which doesn’t exactly scream "safety" to me; rather, it suggests they won’t be sufficiently staffed to deal with a real problem later.

"They don’t have to serve anyone."

Customers also don’t have to go anywhere in particular.

Scary Devil Monastery (profile) says:

Re: Private policy

"Be it algorithms or cameras, they can use whatever they want."

No one has said differently.

We have, however, declared that this use of algorithms and cameras is foolish to varying degrees. It’s not a good look when it turns out your security system in practice has a racial bias and your use of it then becomes part of systemic racism.

Honestly, if you know from the start you’ve got an expensive security system which has a high chance of identifying any given black person as whichever other black person has been flagged "asshole" in the database, then using that system isn’t a great plan. In fact it’ll be hard to pass off casual disregard for bias against an entire demographic as anything other than negligent racism.

At some point any business owner needs to go over their planned purchases and think "Will any of these new toys make me look real bad within the week?". If the answer is "Yes" then don’t buy that piece of overpriced garbage.
