Content Moderation Case Study: Lyft Blocks Users From Using Their Real Names To Sign Up (2019)
from the scunthorpe-again? dept
Summary: Users attempting to sign up for a ride-sharing service ran into a problem that dates back to the earliest days of content moderation. The “Scunthorpe problem” takes its name from a 1996 incident in which AOL refused to let residents of Scunthorpe, England register accounts with the online service. The service’s blocklist of “offensive” words matched four of the first five letters of the town’s name and served up a blanket ban to residents.
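The mechanism behind the Scunthorpe problem is simple naive substring matching. A minimal sketch (not AOL's or Lyft's actual code; the blocklist entry stands in for a real filter list):

```python
# Hypothetical blocklist entry illustrating the kind of term such filters match.
BLOCKLIST = ["cunt"]

def is_blocked(name: str) -> bool:
    """Reject any name containing a blocklisted string as a raw substring."""
    lowered = name.lower()
    return any(term in lowered for term in BLOCKLIST)

print(is_blocked("Scunthorpe"))  # True: letters 2-5 of the town's name match
print(is_blocked("London"))      # False
```

Because the check ignores word boundaries, any name that happens to contain a blocklisted term anywhere inside it is rejected wholesale.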
Flash forward twenty-three years and services still aren’t much closer to solving this problem.
Users attempting to sign up for Lyft found themselves booted from the service for “violating community guidelines” simply for attempting to create accounts using their real names. Some of the users affected were Nicole Cumming, Cara Dick, Dick DeBartolo, and Candace Poon.
These users were asked to “update their names,” as though such a thing were even possible on a service that ties names to payment systems and to internal efforts to ensure driver and passenger safety.
Decisions to be made by Lyft:
- Should names triggering Community Guidelines violations be reviewed by human moderators, rather than automatically rejected?
- Is the cross-verification process enough to deter pranksters and trolls from activating accounts with actually offensive names?
Questions and policy implications to consider:
- Considering the identification system is backstopped by credit cards and payment services that require real names, does deploying a blocklist actually serve any useful purpose?
- Given that potential users are likely to abandon a service that generates too much friction at sign up, does a blocklist like this do damage to company growth?
- Does global growth create a larger problem by adding other languages and possible names that will trigger rejections of more potential users? Can this be mitigated by backstopping more automatic processes with human moderators?
Resolution: The users affected by Lyft’s blocklist were reinstated. Lyft apologized for the rejections, pointing a finger at automated moderation efforts designed to keep people from creating offensive content using nothing more than the First Name/Last Name fields.
Unfortunately, the problem still hasn’t been solved. Candace Poon — whose first attempt to sign up for Lyft was rejected — just ran into the same issue attempting to create an account on the new social media platform Clubhouse.
Originally posted to the Trust & Safety Foundation website.
Filed Under: content moderation, filtering, keywords, names, scunthorpe problem
Companies: lyft
Comments on “Content Moderation Case Study: Lyft Blocks Users From Using Their Real Names To Sign Up (2019)”
And in case anybody else recognized his name: yes, the Dick DeBartolo mentioned in this story is Dick DeBartolo the prolific writer for Mad Magazine.
People who could have been affected.
I have relatives whose last name is Weiner, and they could’ve been affected by this ban. It just shows you: never outsource naughty-word moderation to machines when people can actually have them as their real names.
A new example of the Scunthorpe problem: Facebook yanked down the page for the French city of Bitche. (The page has since been restored, but still.)
Relevant article about the issue from the perspective of a programmer:
Falsehoods Programmers Believe About Names
The only thing automatic profanity filters are good for is amusing "pranksters and trolls" intentionally playing with them. Everyone else finds them irrelevant at best, and disastrous at worst.
AirBNB also has silly filters
If you try to list any of the following words as your employer, or mention them in your bio, AirBNB won’t allow it: Google, Twitter, Facebook (there are others, including competitors like VRBO).
It’s not a well-constructed filter: adding Unicode zero-width spaces between the letters fools it.
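A minimal sketch (not Airbnb's actual code) of why a filter that compares raw strings is fooled by this trick, and how stripping invisible code points before matching closes the hole. The blocked set below just reuses the terms named in the comment above:

```python
# Hypothetical blocked-term set, per the comment above.
BLOCKED_EMPLOYERS = {"google", "twitter", "facebook"}

def naive_check(text: str) -> bool:
    """Matches only if the raw lowercased string equals a blocked term."""
    return text.lower() in BLOCKED_EMPLOYERS

def normalized_check(text: str) -> bool:
    """Strip zero-width characters (U+200B..U+200D, U+FEFF) before matching."""
    cleaned = "".join(ch for ch in text if ch not in "\u200b\u200c\u200d\ufeff")
    return cleaned.lower() in BLOCKED_EMPLOYERS

evasion = "Goo\u200bgle"          # "Google" with a zero-width space inserted
print(naive_check(evasion))       # False: the raw string no longer matches
print(normalized_check(evasion))  # True: normalization restores the match
```

Real systems would go further (Unicode normalization, confusable-character folding), but even this one-line cleanup defeats the zero-width-space evasion described above.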