Can A Community Approach To Disinformation Help Twitter?

from the experiments dept

A few weeks ago Twitter announced Birdwatch as a new experimental approach to dealing with disinformation on its platform. Obviously, disinformation is a huge challenge online, and one that doesn't have any easy answers. Too many people seem to think that you can just "ban disinformation" without recognizing that everyone has a different definition of what is, and what is not disinformation. It's easy to claim that you would know, but it's much harder to put in place rules that can be applied consistently by a large team of people, dealing with hundreds of millions of pieces of content every day.

Facebook has tried things like partnering with fact checkers, but most companies just put in place their own rules and try to stick with them. Birdwatch, on the other hand, is an attempt to use the community to help. In some ways it's taking a page from (1) what Twitter does best (enabling lots of people to weigh in on any particular subject), and (2) Wikipedia, which has always had a community-as-moderators setup.

Birdwatch allows people to identify information in Tweets they believe is misleading and write notes that provide informative context. We believe this approach has the potential to respond quickly when misleading information spreads, adding context that people trust and find valuable. Eventually we aim to make notes visible directly on Tweets for the global Twitter audience, when there is consensus from a broad and diverse set of contributors.

In this first phase of the pilot, notes will only be visible on a separate Birdwatch site. On this site, pilot participants can also rate the helpfulness of notes added by other contributors. These notes are being intentionally kept separate from Twitter for now, while we build Birdwatch and gain confidence that it produces context people find helpful and appropriate. Additionally, notes will not have an effect on the way people see Tweets or our system recommendations.

Will this work? There are many, many reasons why it might not. Wikipedia itself has spent years dealing with these kinds of questions, and had to build a kind of shared culture, along with informal and formal rules about what kind of content belongs on the site. It's a lot harder to retrofit that kind of thinking onto a platform like Twitter, where pretty much anything goes. There is also, of course, the risk of brigading and mobs -- whereby a crew of people might attack a certain tweet or type of information with the goal of getting accurate information declared "fake news" or something along those lines.

Twitter, I'm sure, recognizes these challenges. The details of how Birdwatch is set up certainly suggest that it's going to watch and iterate as it goes, and the company recognizes that if it can get this right, it could be quite useful. That's why, even if there's a high risk of failure, I still think it's an interesting and worthwhile experiment.

Some of the initial results, however... don't look great. A bunch of clueless Trumpists have been trying to minimize the traumatic experience that Alexandria Ocasio-Cortez recently described from the insurrection at the Capitol on January 6th. Because these foolish people don't understand that the Capitol complex is a set of interconnected buildings, they are arguing that AOC was "lying" when she talked about the fear she felt while initially hiding in her office during the raid -- since her office is in the connected Cannon Building, and not in the domed part of the Capitol complex. It turned out that some of the fear came from a Capitol police officer yelling "where is she?" and barging into the office. AOC, not realizing at the time that it was a Capitol police officer, recently spoke movingly about how afraid she was that it was an insurrectionist.

After they started making this argument on social media, AOC responded, pointing out that the entire Capitol complex was under attack (and even if it wasn't, being in a building across the street from a riotous mob that clearly wouldn't mind killing you is a perfectly good reason to be afraid). She also mentioned the two pipe bombs that were found near the Capitol, which were not far from the Congressional office buildings.

However, if you go to Birdwatch, it shows a bunch of disingenuous people trying to present AOC's statements as disinformation.

Of course, this just shows exactly the problem of trying to deal with "disinformation." It is often used as a weapon against people you disagree with, where you might nitpick or argue technicalities, rather than the actual point.

I am hopeful that this experiment gets better at handling these situations, but I recognize the huge difficulty in doing this with any sort of consistency at scale, when you're always going to be dealing with disingenuous and dishonest actors trying to game the system to their own advantage.



Filed Under: birdwatch, content moderation, crowdsourcing, disinformation, misinformation
Companies: twitter

Reader Comments


    PaulT (profile), 9 Feb 2021 @ 11:50pm

    Re: Re: Re: Re: Re: Re: Re: Re: Re: Re:

    "Really the problem I see we have isn't "what" to filter out, it's how to identify it at scale"

    Well, no, I'd argue the first is the most important thing. If you're filtering based on certain criteria (reports from users, links to known false propaganda), it's actually relatively easy to filter.

    "Whether you are trying to fact check stuff, or just check things to ideologically match what your platform wants you face the same problem.. how can actually get it done?"

    These are very different things, though. If you're fact checking, then you do what Twitter do - label things with warnings that certain facts are disputed, and take further action if accounts seem to be requiring a lot of fact check warnings. Ideology is vastly more complicated, but what that should essentially boil down to is community guidelines.

    Let's use a non-political example. I'm a member of a number of movie buff groups, some focussing on mainstream stuff, some on the cult and horror stuff that's my preferred genre. Because I'm not an asshole, I keep to the general community guidelines, and don't try to force more divisive and controversial subjects on to the mainstream sites. If I were an asshole and I kept posting screenshots of my favourite scenes from Cannibal Holocaust in between people trying to discuss the latest Pixar and Marvel movies, I'd expect to get a lot of complaints, for my posts to be removed, and even to be banned if I kept doing it. That's not difficult to understand or for the community to enact. But then, I'm not the kind of asshole who insists that because I want something on a specific community, the community has to accept me.

    That's all this boils down to, really -- people with niche, unpopular, even offensive views are trying to force them into mainstream communities that don't want them. It's up to each community to work out what's acceptable, and you can't please everyone, so the larger, more mainstream sites will stick to what's least controversial. It's not their fault if certain types of people deliberately pretend that their opinions override those of the community they're trying to engage with.

    "Techdirt system works well at scale"

    I very much doubt that. It works here because it's a relatively small community, with the most popular threads getting a few hundred posts. But, already we have problems with threads being derailed by the deliberately ignorant, repeating oft-debunked lies. That won't scale to threads of tens of thousands of posts with only "more speech" in response to the lies. Something else needs to happen at that scale.

    "Wikipedia system also works pretty well at scale"

    Largely because of things like locking pages that keep getting edited with misinformation, and pages upon pages of angry discussion behind the scenes as to what's acceptable on a specific page.

    "From my viewpoint Trump was tolerated for years because he was the president and twitter felt like they didn't have the moral authority"

    LOL, no. He was tolerated because he generated a huge amount of traffic to the site, both from the tweets themselves and from the media breathlessly reporting on every stupid thing he said there. They ditched him once he became too controversial and the impending backlash threatened both a drop in traffic and legal issues (wrong-headed as it might be, you can guarantee that their role in the insurrection as a result of tolerating election misinformation is going to be used against them in attempts to both remove their legal protections and add new liabilities).

    Despite the whining from the Q types, this is all about business at the end of the day, not morality or politics.

    "I just think the decision on what is garbage should try to be based on what lines up with objective reality rather than a particular philosophy."

    The inability of some people to define objective reality or understand that their subjective understanding of it might not mesh with the needs of the communities they're trying to gatecrash is what got us here. I prefer the idea of each community being able to moderate as their community sees fit, and for people to go to a community that actually wants them there if they dislike what that community does.
