Techdirt's think tank, the Copia Institute, is working with the Trust & Safety Professional Association and its sister organization, the Trust & Safety Foundation, to produce an ongoing series of case studies about content moderation decisions. These case studies are presented in a neutral fashion, not aiming to criticize or applaud any particular decision, but to highlight the many different challenges that content moderators face and the tradeoffs they result in. Find more case studies here on Techdirt and on the TSF website.

Content Moderation Case Study: Scammers Targeting Scrabble Chat (2020)

from the it's-always-scrabble dept

Summary: In the spring of 2020, Mattel and Hasbro announced that the official mobile version of Scrabble would no longer be the game produced by Electronic Arts, but rather a new game called Scrabble Go, created by a company called Scopely. The change drew the ire of fans (who even started a petition to bring the old game back) for taking what had been a fairly standard mobile version of the popular word game and replacing it with a new, flashier version that added "gamification" incentives and put the focus on playing against other people, rather than against the computer, as was typical in the previous game.

This also introduced a new feature: chat. Since players are now matched against other human beings, Scopely added a chat feature, but apparently did not consider how regularly such features are abused. In the months since Scrabble Go launched, there have been many reports of so-called "romance scammers" reaching out to people via Scrabble Go's chat feature.

Multiple reports of these kinds of approaches started appearing in various forums, with some examples of the scammers being quite persistent. In Australia, at least, consumer protection officials said they had received multiple complaints about romance scammers approaching players via Scrabble Go. One woman in the UK said she was approached by such scammers two to three times every week.

After three months of complaints, Scopely announced that it was rolling out an update that would allow players to "mute" the chat function.

Decisions to be made by Scopely:

  • Does a mobile Scrabble game need a chat feature?
  • If scammers are bothering players this often, is the game better off without it?
  • How will chat be monitored? Is there a program in place to catch and stop scammers?
  • Are there other tools to limit the abuse of the chat feature?
  • Should the default be that chat is open to all or should it be opt-in?

Questions and policy implications to consider:

  • Any system that allows for person-to-person communication can be abused. How should companies looking to add useful features take this into account?
  • How do you weigh the pros and cons of features like chat when comparing their usefulness for engagement against their trust-and-safety risks?

Resolution: A few months after launch, Scopely updated the app to allow players to mute the chat entirely. As complaints continued, it also added an option to connect only with friends the player already knows on Facebook or via their contacts (if they agree to upload those contacts to the service), effectively sandboxing the chat to users the player already has some connection with.

The company also added the ability to "report" a chat if the user feels it is inappropriate.
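Taken together, these changes amount to a layered gate on message delivery: a global mute, a connections-only restriction, and a reporting path for anything that still gets through. The sketch below is purely illustrative of that layering; every name and data structure in it (ChatSettings, can_deliver_chat, report_chat, and so on) is hypothetical and is not drawn from Scopely's actual implementation.

```python
# Purely illustrative sketch -- names, fields, and data shapes are hypothetical,
# not taken from Scopely's actual implementation.
from dataclasses import dataclass, field


@dataclass
class ChatSettings:
    muted: bool = False              # "mute chat" option added roughly three months after launch
    connections_only: bool = False   # later option: chat only with Facebook friends / contacts


@dataclass
class Player:
    player_id: str
    settings: ChatSettings = field(default_factory=ChatSettings)
    connections: set = field(default_factory=set)  # ids of synced friends/contacts


def can_deliver_chat(sender: Player, recipient: Player) -> bool:
    """Decide whether a chat message from sender should reach recipient."""
    if recipient.settings.muted:
        return False  # recipient has switched chat off entirely
    if recipient.settings.connections_only:
        return sender.player_id in recipient.connections  # sandboxed to known contacts
    return True  # launch-time default: chat open to any opponent


def report_chat(reporter: Player, reported: Player, message: str) -> dict:
    """Queue a chat report for review; the record format here is invented for illustration."""
    return {
        "reporter": reporter.player_id,
        "reported": reported.player_id,
        "message": message,
        "reason": "inappropriate",
    }
```

Note that in this sketch the launch-time defaults (muted=False, connections_only=False) leave chat open to strangers, which is precisely the open-by-default versus opt-in question raised above.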

Finally, to address the broader complaints about the game, Scopely introduced a "classic mode" to focus more on the traditional game, rather than all the bells and whistles of the full Scrabble Go.

Originally posted at the Trust & Safety Foundation website.


Comments on “Content Moderation Case Study: Scammers Targeting Scrabble Chat (2020)”


This comment has been flagged by the community.

This comment has been flagged by the community.

Anonymous Coward says:

Re: Say. WHO is "the community" here at TD controlling

Where are the rules "the community" uses to judge?

What sort of "voting system" is it that has no upvotes even possible?

DOES an Administrator or other human make any decision at all in the "hiding" process?

Try answering any of those relevant to your own site, Maz, before you presume to advise others.

sumgai (profile) says:

Re: Re: Say. WHO is "the community" here at TD control

AC,

You must be the new kid on the block, so I’ll clue you in:

a) There are no posted "Rules"; everyone here is expected to act like mature adults.

b) The system is easy – at the top right of each comment is a set of voting icons. The two left-most are to promote either Insightful or Funny, which in my opinion are pretty much the same as upvotes. Just to be complete, the right-most is to demote as abusive/trolling/spam.

c) No. Masnick has demonstrated for more than 20 years that anyone can post here as they please, and that every poster is subject to the community's judgment. The community is self-policing, and trust me, it will self-police... sometimes quite relentlessly.

d) He has, and he will, when necessary. But members like myself sometimes step in, just to shorten your waiting time. You’re welcome.
