from the the-fun-of-content-moderation dept
There’s been a lot of talk in the last few weeks about political ads online, kicked off by Facebook “clarifying” that its fact checking rules for regular advertisements don’t apply to political ads, after President Trump’s campaign ran some ads that were laughably inaccurate. That set off a series of political stunts, including Elizabeth Warren taking out her own misleading ads to call out Facebook (though, as we noted, that whole stunt seemed particularly silly since she had previously complained that Facebook shouldn’t be blocking political ads — when they were her own). The debate rages on with everyone insisting that their viewpoint is correct, and with few acknowledging that there is no good answer.
If you fact check political ads, you will undoubtedly be accused of bias against those whose ads get blocked. And a big part of the problem is not about whether or not something is “factual” but about nitpicking around the semantics of what is and what is not a fact, or in how it’s presented. This is why most fact checking operations constantly get called out, since so much is a judgment call. And, because of that, there is a reasonable position that Facebook has staked out that when it comes to politics, it doesn’t want to be in the business of judging the veracity of one side or another. Of course, that response is wholly unsatisfying and is easy to spin as “letting politicians lie.”
And, unsurprisingly, we’re now seeing stunts like the one attempted by political activist Adriel Hampton, who has registered to run for governor of California solely so that his ads would be exempt from Facebook’s fact-checking rules (or, more realistically, solely to make a protest-point about what he thinks of Facebook’s political ads policy). Facebook has already said that it won’t allow him to run false political ads on its platform, and Hampton says he’s “considering legal action.” Any such legal action would flop, thanks to CDA 230. Once again, content moderation at scale runs into lots of challenges and obstacles, no matter what you do — and it’s particularly fraught in the political advertising context.
Facebook execs have tried to make this point recently, though it’s doubtful that anyone is truly convinced:
Anyone who thinks Facebook should decide which claims by politicians are acceptable might ask themselves this question: Why do you want us to have so much power?
In our view, the only thing worse than Facebook not making these calls is for Facebook to make these calls.
Part of the issue is that everyone is conflating a few different issues — including the powerful position these companies have within the advertising ecosystem, the ability of politicians to target ads, the success of those advertising campaigns, and the nature of truth itself. Each of those are challenging issues, and not all solutions work the same for each — yet they all get lumped together. And, inevitably, that means a dissatisfying result for all.
But there’s another option: not playing at all.
Amusingly, the very next paragraph in Facebook’s attempted defense of its policy is to try to tie itself to other scrutinized platforms:
Our approach is consistent with companies like YouTube and Twitter. And broadcasters are required by federal law not to censor candidate ads.
To which Twitter has replied: “Nuh uh!” and officially announced it won’t allow any political ads on its platform at all:
Twitter is planning to ban political ads from its service globally, the company announced Wednesday via a series of tweets from its CEO Jack Dorsey. The ban will go into effect Nov. 22.
Dorsey said the ban will cover ads about specific candidates and issues — the broadest possible ban. Some ads will be allowed to remain, including those encouraging people to vote. According to a Twitter spokesperson, news organizations are currently exempt from its rules on political advertising, and the company will release full details on exemptions next month.
It will be interesting to see how this plays out in practice — and I can already predict that there will be judgment calls about what is and what is not a political ad in the coming weeks and months, and plenty of criticism will be leveled when groups think the company has decided incorrectly (one way or the other). Again, this remains something of a no-win situation in which lots of people will be unhappy.
However, this brought to mind a larger point that I thought was worth making. One of the trite arguments that people keep making about these companies is that every decision they make is driven entirely by what will increase revenue — and that these companies want to accept whatever ads they can to maximize that revenue at every opportunity. But, of course, revenue is only one side of the equation. How much that revenue “costs” is a big deal as well. And it seems pretty clear that Twitter (while watching what Facebook was going through) decided that the headache of dealing with this question was far too big a “cost,” even if it wasn’t directly a monetary cost.
For what it’s worth, Mark Zuckerberg himself has argued that the revenue from political ads is negligible in the grand scheme of things, so the company could ban them without a significant hit. It has just chosen not to, for whatever reasons (Facebook tries to suggest lofty ideals about “giving people a voice,” which seems like utter nonsense, because paid advertising has nothing to do with “giving people a voice”).
I raise all of this to go back to my recent paper on “Protocols, Not Platforms.” One of the most common criticisms I’ve heard of that paper is that none of the big social media companies would ever adopt such a system, because it would likely mean giving up control and some amount of revenue (advertising or otherwise). In the paper I try to argue that this is not necessarily true. For one, there are some possible new business models that could replace advertising. But, more importantly, the costs of continuing to deal with complaints about content moderation are likely to keep growing at an increasing rate — and some of those platforms may decide that it’s just not worth it any more. At that point, moving to a protocols-based solution, in which they shift power away from their own centralized control and out to the ends of the network, could become much more appealing.
And that’s why I find Twitter’s decision here quite interesting. It’s not going nearly as far as I hope these companies will go eventually — but it does show that the headaches created by setting themselves up as arbiters (or not) of truth might be so painful and costly that companies will look for ways to get out of the business altogether. That’s actually an encouraging sign. It’s a hell of a lot better than a company insisting that it can somehow magically deal with this mess and choose which political ads are okay and which are not without pissing everyone off.
Filed Under: advertisement, benefit, content moderation, content moderation at scale, cost, political ads, revenue, social media, truth
Companies: facebook, twitter