Elon Musk’s Vision Of Trust & Safety: Neither Safe Nor Trustworthy
from the who-could-have-predicted-it? dept
Even as Elon first made his bid for Twitter, we highlighted just how little he understood about content moderation and trust & safety. And that really matters because, as Nilay Patel pointed out, managing trust & safety basically is the core business of a social media company: “The essential truth of every social network is that the product is content moderation.” But Elon had such a naïve and simplistic understanding of trust & safety (“delete wrong and bad content, but leave the rest”) that it’s no wonder advertisers (who keep the site in business) have abandoned the site in droves.
We even tried to warn Elon about how this would go, and he chose to go his own way, and now we’re seeing the results… and it’s not good. Not good at all. It’s become pretty clear that Elon believes that trust & safety should solely be about keeping him untroubled. His one major policy change (despite promising otherwise) was to ban an account tweeting public information, claiming (falsely) that it was a threat to his personal safety (while simultaneously putting his own employees at risk).
Last week, Twitter excitedly rolled out its new policy on “violent speech,” which (hilariously) resulted in his biggest fans cheering on this policy despite it being basically identical to the old policy, which they claimed they hated. Indeed, the big change was that the new rules are written in a way that is far more subjective than the old policy, meaning that Twitter and Musk can apply them much more arbitrarily (which was a big complaint about the old policies).
Either way, as we noted recently, by basically firing nearly everyone who handled trust & safety at the company, Twitter has seen its moderation efforts fall apart, raising all sorts of alarms.
A new investigative report from BBC Panorama details just how bad it’s gotten. Talking to both current and former Twitter employees, the report highlights a number of ways in which Twitter is simply unable to do anything about abuse and harassment:
- Concerns that child sexual exploitation is on the rise on Twitter and not being sufficiently raised with law enforcement
- Targeted harassment campaigns aimed at curbing freedom of expression, and foreign influence operations – once removed daily from Twitter – are going “undetected”, according to a recent employee.
- Exclusive data showing how misogynistic online hate targeting me is on the rise since the takeover, and that there has been a 69% increase in new accounts following misogynistic and abusive profiles.
- Rape survivors have been targeted by accounts that have become more active since the takeover, with indications they’ve been reinstated or newly created.
Among things noted in that report is that Elon himself doesn’t trust any of Twitter’s old employees (which is perhaps why he keeps laying them off despite promising the layoffs were done), and goes everywhere in the company with bodyguards. Apparently, Elon believes in modeling “trust & safety” by not trusting his employees, and making sure that his own safety is the only safety that matters.
Also, an interesting tidbit: Twitter’s “nudge” experiment (in which it would detect if you were about to say something that might escalate a flame war, and suggest you give it a second thought — an experiment that was generally seen as having a positive impact) seems to be either dead or on life support.
“Overall 60% of users deleted or edited their reply when given a chance through the nudge,” she says. “But what was more interesting, is that after we nudged people once, they composed 11% fewer harmful replies in the future.”
These safety features were being implemented around the time my abuse on Twitter seemed to reduce, according to data collated by the University of Sheffield and International Center for Journalists. It’s impossible to directly correlate the two, but given what the evidence tells us about the efficacy of these measures, it’s possible to draw a link.
But after Mr Musk took over the social media company in late October 2022, Lisa’s entire team was laid off, and she herself chose to leave in late November. I asked Ms Jennings Young what happened to features like the harmful reply nudge.
“There’s no-one there to work on that at this time,” she told me. She has no idea what has happened to the projects she was doing.
So we tried an experiment.
She suggested a tweet that she would have expected to trigger a nudge. “Twitter employees are lazy losers, jump off the Golden Gate bridge and die.” I shared it on a private profile in response to one of her tweets, but to Ms Jennings Young’s surprise, no nudge was sent.
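At its core, the nudge was a pre-send check: score a draft reply for potential harm, and if it crosses a threshold, prompt the user to reconsider before posting. Here’s a purely illustrative sketch of that flow — the keyword list, scoring, and threshold are invented for this example; Twitter’s actual feature relied on machine-learning classifiers, not a word list:

```python
# Toy sketch of a pre-send "nudge" check. The keyword list and scoring
# below are invented for illustration; the real feature used ML models.
HARMFUL_TERMS = {"loser", "losers", "die", "idiot"}

def should_nudge(reply: str, threshold: int = 2) -> bool:
    """Return True if a draft reply looks heated enough to warrant a
    'want to review this before sending?' prompt."""
    words = reply.lower().replace(",", " ").replace(".", " ").split()
    score = sum(1 for w in words if w.strip("!?") in HARMFUL_TERMS)
    return score >= threshold

# The test tweet from the article trips the check; a neutral reply does not.
print(should_nudge("Twitter employees are lazy losers, jump off the Golden Gate bridge and die."))  # True
print(should_nudge("Thanks for sharing this thread!"))  # False
```

The point of the experiment wasn’t to block the tweet — just to add friction. Per the numbers above, that friction alone was enough to get 60% of nudged users to delete or edit the reply.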
Meanwhile, a New York Times piece details some of the real-world impact of Musk’s absolute failures: Chinese activists, who have long relied on Twitter, can no longer do so. Apparently, their reporting on protests in Beijing was silenced, after Twitter… classified them as spam and “government disinformation.”
The issues have also meant that leading Chinese voices on Twitter were muffled at a crucial political moment, even though Mr. Musk has championed free speech. In November, protesters in dozens of Chinese cities objected to President Xi Jinping’s restrictive “zero Covid” policies, in some of the most widespread demonstrations in a generation.
The issues faced by the Chinese activists’ Twitter accounts were rooted in mistakes in the company’s automated systems, which are intended to filter out spam and government disinformation campaigns, four people with knowledge of the service said.
These systems were once routinely monitored, with mistakes regularly addressed by staff. But a team that cleaned up spam and countered influence operations and had about 50 people at its peak, with about a third in Asia, was cut to single digits in recent layoffs and departures, two of the people said. The division head for the Asia-Pacific region, whose responsibilities include the Chinese activist accounts, was laid off in January. Twitter’s resources dedicated to supervising content moderation for Chinese-language posts have been drastically reduced, the people said.
So when some Twitter systems recently failed to differentiate between a Chinese disinformation campaign and genuine accounts, that led to some accounts of Chinese activists and dissidents being difficult to find, the people said.
The article also notes that for all of Elon’s talk about supporting “free speech” and no longer banning accounts, a bunch of Chinese activists have had their accounts banned.
Some Chinese activists said their Twitter accounts were also suspended in recent weeks with no explanation.
“I didn’t understand what was going on,” said Wang Qingpeng, a human rights lawyer based in Seattle whose Twitter account was suspended on Dec. 15. “My account isn’t liberal or conservative, I never write in English, and I only focus on Chinese human rights issues.”
And, perhaps the saddest anecdote in the whole story:
Shen Liangqing, 60, a writer in China’s Anhui province who has spent over six years in jail for his political activism, said he has cherished speaking his mind on Twitter. But when his account was abruptly suspended in January, it reminded him of China’s censorship, he said.
So, Elon’s plan to focus on “free speech” means he’s brought back the accounts of harassers and grifters while suspending actual free speech activists. Meanwhile, the company’s remaining trust & safety workers can’t actually handle the influx of nonsense, the policies have been rewritten to allow far more arbitrary enforcement, and it’s becoming increasingly clear that much of the decision-making is based on what makes Elon feel best, rather than what’s actually best for users of the site.
Last week, we wrote about how Musk has insisted over and over again that the “key to trust” is “transparency,” but since he’s taken over, the company has become less transparent.
So combine all of this, and we see that Elon’s vision of “trust & safety” means way less trust, according to Elon’s own measure (and none from Elon to his own employees), and “safety” means pretty much everyone on the site is way less safe.
Filed Under: abuse, activism, content moderation, elon musk, free speech, harassment, nudge, safety, transparency, trust, trust & safety
Companies: twitter