It's Not Personal: Content Moderation Always Involves Mistakes, Including Suspending Experts Sharing Knowledge
from the it happens dept
I keep pointing out that content moderation at scale is impossible to do well. There are always going to be mistakes. And lots of them. We’ve spent years highlighting the many obvious mistakes that websites inevitably make every day when they have to issue moderation decisions on thousands, hundreds of thousands, or even millions of pieces of content. It’s completely natural for those who are on the receiving end of obviously bogus suspensions to take it personally — though there does seem to be one group of people who have built an entire grievance complex on the false belief that the internet companies are targeting them specifically.
But if you look around, you can see examples of content moderation “mistakes” on a daily basis. Here’s a perfect example. Dr. Matthew Knight, a respiratory physician in the UK, last week tweeted out a fairly uncontroversial statement about making sure there was adequate ventilation in the hospitality industry in order to help restart the economy. At this point, the scientific consensus is very much that good ventilation is absolutely key in preventing COVID transmission, and that the largest vector of superspreader events is indoor gatherings with inadequate ventilation. As such, this tweet should be wholly uncontroversial:
To get hospitality working safely in the U.K. a significant investment is required in ventilation systems. The standards should be set ASAP and funding made available. Covid is airborne (as are other infections) and ventilation vital part of prevention. Good air quality vital
— DR MATTHEW KNIGHT MBE (@drmknight) May 19, 2021
And yet… despite this perfectly reasonable tweet from a clearly established expert, Twitter suspended his account for “spreading misleading and potentially harmful information related to COVID-19.” It then rejected Dr. Knight’s appeal.
There’s something wrong with Twitter’s censorship function. An Aerosol Scientist and a Respiratory Physician have both been blocked from accessing their accounts for “spreading misleading & potentially harmful information related to COVID-19”. How is this misleading/harmful? pic.twitter.com/wodknwKUKw
— Kristen K. Coleman (@drkristenkc) May 27, 2021
Thankfully, it appears that Twitter eventually realized its mistake and gave Dr. Knight his account back. Lots of people are (understandably) asking why Twitter is so bad at this, and it’s a fair enough question. But the simple fact is that the companies are all put in an impossible spot. When they weren’t removing blatant mis- and disinfo about COVID-19, they were getting slammed by plenty of people (also for good reason). So they ramped up their efforts, and that still involves a large group of (usually non-expert) moderators having to make a huge number of decisions very quickly.
There are always going to be mistakes. As Harvard’s Evelyn Douek likes to note, content moderation is all about error rates. Every choice you make is going to have an error rate. The biggest questions are what kinds of errors are preferable, and how many you are willing to tolerate. Should the focus be on minimizing false positives? Or false negatives? Or somehow trying to balance the two? The answers may vary given the circumstances and may change over time. But one thing that is clear is that no matter what choices are made, mistakes inevitably come with them, because content moderation at scale is simply impossible to do well.