Can A Community Approach To Disinformation Help Twitter?

from the experiments dept

A few weeks ago Twitter announced Birdwatch, a new experimental approach to dealing with disinformation on its platform. Obviously, disinformation is a huge challenge online, and one that doesn't have any easy answers. Too many people seem to think that you can just "ban disinformation" without recognizing that everyone has a different definition of what is, and what is not, disinformation. It's easy to claim that you would know it when you see it, but it's much harder to put in place rules that can be applied consistently by a large team of people dealing with hundreds of millions of pieces of content every day.

Facebook has tried things like partnering with fact checkers, but most companies just put in place their own rules and try to stick with them. Birdwatch, on the other hand, is an attempt to use the community to help. In some ways it takes a page from (1) what Twitter does best (enabling lots of people to weigh in on any particular subject) and (2) Wikipedia, which has always relied on a community-as-moderators setup.

Birdwatch allows people to identify information in Tweets they believe is misleading and write notes that provide informative context. We believe this approach has the potential to respond quickly when misleading information spreads, adding context that people trust and find valuable. Eventually we aim to make notes visible directly on Tweets for the global Twitter audience, when there is consensus from a broad and diverse set of contributors.

In this first phase of the pilot, notes will only be visible on a separate Birdwatch site. On this site, pilot participants can also rate the helpfulness of notes added by other contributors. These notes are being intentionally kept separate from Twitter for now, while we build Birdwatch and gain confidence that it produces context people find helpful and appropriate. Additionally, notes will not have an effect on the way people see Tweets or our system recommendations.

Will this work? There are many, many reasons why it might not. Wikipedia itself has spent years dealing with these kinds of questions, and had to build a shared culture, along with informal and formal rules, about what kind of content belongs on the site. It's a lot harder to retrofit that kind of thinking onto a platform like Twitter, where pretty much anything goes. There is also, of course, the risk of brigading and mobs -- whereby a crew of people might attack a certain tweet or type of information with the goal of getting accurate information declared "fake news" or something along those lines.

Twitter, I'm sure, recognizes these challenges. The details of how Birdwatch is set up certainly suggest that the company plans to watch and iterate as it goes, and it clearly recognizes that if it can get this right, it could be quite useful. That's why, even if there's a high risk of failure, I still think it's an interesting and worthwhile experiment.

Some of the initial results, however... don't look great. A bunch of clueless Trumpists have been trying to minimize the traumatic experience that Alexandria Ocasio-Cortez recently described from the insurrection at the Capitol on January 6th. Because these foolish people don't understand that the Capitol complex is a set of interconnected buildings, they are arguing that AOC was "lying" when she talked about the fear she felt while initially hiding in her office during the raid -- since her office is in the connected Cannon Building, and not in the domed part of the Capitol complex. It turned out that some of the fear came from a Capitol police officer yelling "where is she?" and barging into the office. AOC, who did not realize at the time that it was a Capitol police officer, spoke movingly about her fear that it was an insurrectionist.

After they started trying to make this argument on social media, AOC responded, pointing out that the entire Capitol complex was under attack (and even if it hadn't been, being in a building across the street from a riotous mob that clearly wouldn't mind killing you is a perfectly good reason to be afraid). She also mentioned the two pipe bombs that were found near the Capitol, not far from the Congressional office buildings.

However, if you go to Birdwatch, it shows a bunch of disingenuous people trying to present AOC's statements as disinformation.

Of course, this just shows exactly the problem with trying to deal with "disinformation." The label is often used as a weapon against people you disagree with, nitpicking or arguing technicalities rather than engaging with the actual point.

I am hopeful that this experiment gets better at handling these situations, but I recognize the huge difficulty in doing this with any sort of consistency at scale, when you're always going to be dealing with disingenuous and dishonest actors trying to game the system to their own advantage.

Thank you for reading this Techdirt post. With so many things competing for everyone’s attention these days, we really appreciate you giving us your time. We work hard every day to put quality content out there for our community.

Techdirt is one of the few remaining truly independent media outlets. We do not have a giant corporation behind us, and we rely heavily on our community to support us, in an age when advertisers are increasingly uninterested in sponsoring small, independent sites — especially a site like ours that is unwilling to pull punches in its reporting and analysis.

While other websites have resorted to paywalls, registration requirements, and increasingly annoying/intrusive advertising, we have always kept Techdirt open and available to anyone. But in order to continue doing so, we need your support. We offer a variety of ways for our readers to support us, from direct donations to special subscriptions and cool merchandise — and every little bit helps. Thank you.

–The Techdirt Team

Filed Under: birdwatch, content moderation, crowdsourcing, disinformation, misinformation
Companies: twitter

Reader Comments


Blake C. Stacey (profile), 4 Feb 2021 @ 5:51pm

    Wikipedia has a policy of "NPOV" --- Neutral Point of View. That's not neutral in the sense that Republicans would want, i.e. false balance, but rather an ethos of sticking to what the sources say. They've also developed a lengthy guideline for what can qualify as a "reliable source". And there's a more specific guideline just for writing about fringe theories, and a guideline for the extra-careful standards for writing about medicine. The acronyms are thick on the ground, because Wikipedia editors are the sort of people who think that all the problems of education and epistemology can be solved by applying more acronyms. Mind you, this is just a slice through the rulebook --- we haven't even gotten to the guidelines for "notability", which say what topics deserve to have articles about them. Or the Manual of Style, or the rules for Conflict-of-Interest editing. In short, it ain't simple.

    The question raised by Birdwatch is, if you're trying to do community moderation to build a site that is fact-based and isn't a complete garbage fire, could the rules actually be any simpler? How do you write simple guidelines for a problem that is, itself, necessarily complicated?
