Techdirt's think tank, the Copia Institute, is working with the Trust & Safety Professional Association and its sister organization, the Trust & Safety Foundation, to produce an ongoing series of case studies about content moderation decisions. These case studies are presented in a neutral fashion, not aiming to criticize or applaud any particular decision, but to highlight the many different challenges that content moderators face and the tradeoffs that result. Find more case studies here on Techdirt and on the TSF website.

Content Moderation Case Studies: The Challenges In Moderating Information Regarding Eating Disorders (2012)

from the not-as-simple-as-you-might-think dept

Summary: In 2012, the Huffington Post published an exposé on eating disorder blogs, mainly on the site Tumblr. It discussed the world of “thinspo” and “thinspiration” blogs, which focused on building communities around losing unhealthy amounts of weight. In response, Tumblr announced that it was banning “self-harm” blogs, and classified eating disorder blogs among those no longer allowed.

Three years later, a study by Munmun De Choudhury found that eating disorder information remained on Tumblr, but that it was mainly split into two categories: communities supportive of eating disorders such as anorexia (referred to as “proana”) and communities built around recovering from eating disorders. One interesting finding of the report was that the “recovery” groups often used the same keywords and messaging as the “proana” groups, in an attempt to reach those with eating disorders and encourage them to seek support, therapy, and help toward recovery.

That same year, Amanda Hess argued in Slate that the rush to ban content about eating disorders on social media (or, in the case of France, to outlaw it entirely) was the wrong approach.

“But while we know anorexia can kill, we're not quite sure what happens to people who read about it online. In an article published last month, Canadian criminologists Debra Langan and Nicole Schott could find ‘no scholarly evidence' that pro-ana blogs pose a threat to their audiences. If they do, there's no proof that any of our social remedies – censorship, PSAs, or prison time – do anything to help. These campaigns are most obviously effective at flattering the egos of the lawmakers and tech execs who champion them. When a girl searches Tumblr for a pro-ana-adjacent term like #thinspo or #thighgap now, Tumblr intercepts her request with bland concern (‘Everything OK?'), then advises her to check out the cutesy motivational messaging on the National Eating Disorders Association's Tumblr instead. However the girl responds, Tumblr can feel satisfied it's performed its civic responsibility. The strategy recalls the one favored by a 19th-century doctor who believed that reading novels caused hysteria in women: He counseled men to confiscate their wives' fiction and replace it with a book on ‘some practical subject,' like ‘beekeeping.'”

The following year, De Choudhury and other authors released another study detailing how pro-eating disorder groups would get around social media blocks on their content by changing words or slightly misspelling them, suggesting that outright blocking was likely to remain ineffective.
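To make that evasion dynamic concrete, here is a minimal sketch, not any platform's actual implementation: the term list, character substitutions, and similarity threshold are all invented for illustration. It shows why exact-match blocklists miss simple variants, and why even fuzzy matching only goes so far:

```python
# Minimal sketch: exact-match blocking vs. normalization plus fuzzy matching.
# Terms, substitutions, and threshold are illustrative, not a real blocklist.
from difflib import SequenceMatcher

BLOCKED_TERMS = {"thinspo", "thinspiration", "proana"}

def normalize(tag: str) -> str:
    """Lowercase and undo common character substitutions."""
    subs = str.maketrans({"0": "o", "1": "i", "3": "e", "@": "a", "$": "s"})
    return tag.lower().translate(subs).replace("_", "").replace("-", "")

def is_blocked(tag: str, threshold: float = 0.85) -> bool:
    candidate = normalize(tag)
    return any(
        SequenceMatcher(None, candidate, term).ratio() >= threshold
        for term in BLOCKED_TERMS
    )

# Exact matching catches none of these variants; fuzzy matching catches most,
# but a newly coined term like "bonespo" still slips through entirely.
for tag in ["th1nspo", "thynspo", "pro_ana", "thinspooo", "bonespo"]:
    print(tag, is_blocked(tag))
```

The last case illustrates the studies' point: communities can coin new vocabulary faster than filters can learn it, so smarter matching narrows the gap without closing it.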

Another article suggested that the blocks may actually have made it easier to find information about eating disorders, because dozens of new hashtags were created by the community, rather than the handful that existed before social media sites began banning such content.

A study published in 2018 in the peer-reviewed journal New Media & Society highlighted how easy it appeared to be for users to get around attempts to block content regarding certain eating disorders. The researcher, Ysabel Gerrard, looked mainly at Pinterest, Tumblr, and Instagram, finding that while all three had some policies in place regarding eating disorder information, it was not difficult to find groups or accounts dedicated to sharing it.

As summarized by Wired:

“She immediately found that Instagram's pro-ED hashtag ban has an easy workaround: You can search for people who have the keywords in their usernames, just not hashtagged in their posts. She identified 74 public accounts that had terms like ‘proana,' ‘proanorexia,' or ‘thighgap' in their names or bios and who also posted pro-ED content. Then, she analyzed 1,612 of their posts – only 561 of which had hashtags – by cataloguing the content of the image and its caption.

“On Tumblr, Gerrard followed a number of terms related to pro-ED content, like ‘thinspo,’ ‘proana,’ and ‘bulimic.’ Tumblr allows you to follow topics without needing to follow specific users. For example, you can simply follow ‘movies’ without following any specific user who posts about that topic. Through this method, she found 50 pro-ED blogs and analyzed 20 posts from each, or 1,000 posts total. Only 218 of the posts were tagged.”
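Gerrard's workaround boils down to a coverage gap: the ban applied to one text field (hashtags) but not to others (usernames and bios). A hypothetical sketch of that gap, with an invented data model and field names, might look like this:

```python
# Hypothetical sketch of a hashtag-only filter that ignores usernames and bios.
# The Account model and banned-term list are invented for illustration.
from dataclasses import dataclass, field

BANNED_HASHTAGS = {"proana", "proanorexia", "thighgap"}

@dataclass
class Account:
    username: str
    bio: str
    hashtags: list = field(default_factory=list)

def hashtag_filter_blocks(account: Account) -> bool:
    # Only the hashtag field is screened; names and bios pass untouched.
    return any(tag in BANNED_HASHTAGS for tag in account.hashtags)

acct = Account(username="proana_diary", bio="thighgap goals", hashtags=[])
print(hashtag_filter_blocks(acct))  # False: findable via username search, never filtered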

The report also found that recommendation algorithms often drove users toward more of this content. After she saved a few ‘proana' blogs, Gerrard found that Tumblr began recommending more. While it also recommended some recovery blogs, Gerrard found them easy to exclude.

“Once I had followed ED-related terms – anorexia, anorexic, bulimia, bulimic, thinspiration, thinspo, proana, purge, purging – the platform delivered this content to me through my dashboard and also via email. Tumblr showed me relevant posts and suggested a list of users whose accounts I should follow. As some of these terms are not straightforwardly pro-ED (unlike, for example, proana), I was presented with blogs identifying as “pro-recovery” in their biographies. But I excluded these blogs from the dataset as they were not the focus of my analysis. Tumblr recommended blogs that were, for example, “big in proana” or “like” other popular blogs. I identified fifty pro-ED users through this method.”
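One way to see why following a few tagged blogs snowballs into more recommendations is a toy content-based recommender that ranks blogs by tag overlap. The blog names and tags below are invented; real platform recommenders are far more complex and not public:

```python
# Toy tag-overlap recommender illustrating the feedback loop Gerrard describes.
# Blog names and tags are invented for illustration.
BLOGS = {
    "blog_a": {"thinspo", "proana"},
    "blog_b": {"thinspo", "purging"},
    "blog_c": {"recovery", "proana"},  # pro-recovery blog sharing the same tags
    "blog_d": {"movies", "reviews"},
}

def recommend(followed_tags: set, top_n: int = 3) -> list:
    # Score each blog by how many of the user's followed tags it shares.
    scores = {name: len(tags & followed_tags) for name, tags in BLOGS.items()}
    ranked = sorted(scores, key=scores.get, reverse=True)
    return [name for name in ranked if scores[name] > 0][:top_n]

# Following two ED-related tags surfaces pro-ED and recovery blogs alike,
# precisely because recovery communities deliberately reuse the vocabulary:
print(recommend({"thinspo", "proana"}))  # ['blog_a', 'blog_b', 'blog_c']
```

This also suggests why the recovery blogs were easy to exclude: they arrive clearly labeled in their biographies, while the pro-ED content they imitate does not.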

Another study, from 2014, argued that even “proana” content represented a “double-edged sword” that might help some of those with eating disorders, or at risk of developing them, recognize that what they were exploring was unhealthy.

Decisions to be made by Tumblr/Instagram/Pinterest:

  • How do you deal with information about eating disorders? Is it actually possible to ban it?

  • How do you distinguish between “proana” and “recovery” content?

  • Are there other interventions available, such as putting up warning labels or directing users toward “recovery” resources when they search for certain terms (see the sketch after this list)?

  • How should recommendation algorithms handle information about eating disorders? Do they need to be adjusted to avoid sending people toward content that glorifies such disorders?
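As a concrete version of the warning-label idea flagged in the list above, here is a minimal sketch of a search interceptor in the style of Tumblr's “Everything OK?” prompt described earlier. The term list, message, and resource URL are placeholders, not any platform's actual values:

```python
# Minimal sketch of a search interstitial: intercept flagged queries,
# show a check-in prompt, and surface recovery resources before results.
# Terms, message text, and the resource URL are placeholders.
FLAGGED_TERMS = {"thinspo", "thighgap", "proana"}
RESOURCE_URL = "https://www.nationaleatingdisorders.org"  # example resource

def run_search(query: str) -> list:
    return []  # stand-in for the real search backend

def search(query: str) -> dict:
    if query.lstrip("#").lower() in FLAGGED_TERMS:
        return {
            "interstitial": "Everything OK?",
            "resources": RESOURCE_URL,
            "results": [],  # withheld until the user clicks through
        }
    return {"interstitial": None, "resources": None, "results": run_search(query)}

print(search("#thinspo")["interstitial"])  # Everything OK?
```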

Questions and policy implications to consider:

  • Does banning content that promotes eating disorders actually help prevent such disorders?

  • Does reading about eating disorders function as a how-to guide for the vulnerable, or does it help those at risk recognize that risk? If both, how do you balance these two competing forces?

  • Can pointing people towards recovery content or other helpful resources lead to better outcomes?

Resolution: Various websites continue to struggle with how to deal with eating disorder information and communities. Attempts to ban it have continued to fail, as the various communities keep finding ways to route around any ban. The research on the impact of this content remains mixed, however, and there are concerns that banning such content on certain platforms only makes it move to others that are less well equipped to handle the issue.

The latest development is a report that teens who engaged in eating disorder discussions on Tumblr have now moved to TikTok. However, that same article also notes that, unlike Tumblr, TikTok appears to have a number of users who celebrate healthy eating and living, and that TikTok's algorithm may be mixing such videos in with those discussing more unhealthy eating behavior.
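The mixing behavior attributed to TikTok's algorithm can be illustrated with a toy feed assembler that interleaves counter-content at a fixed interval. The injection ratio and item labels below are invented; the actual ranking system is not public:

```python
# Toy illustration of interleaving counter-content into a feed.
# The injection interval and item labels are invented for illustration.
def mix_feed(primary: list, counter: list, inject_every: int = 3) -> list:
    """Insert one counter-content item after every `inject_every` primary items."""
    feed = []
    counter_iter = iter(counter)
    for i, item in enumerate(primary, start=1):
        feed.append(item)
        if i % inject_every == 0:
            nxt = next(counter_iter, None)
            if nxt is not None:
                feed.append(nxt)
    return feed

print(mix_feed(["ed1", "ed2", "ed3", "ed4"], ["healthy1", "healthy2"]))
# ['ed1', 'ed2', 'ed3', 'healthy1', 'ed4']
```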

Originally published on the Trust & Safety Foundation website.
