Research Shows Twitter Was Missing Known Child Sex Abuse Material
from the this-is-bad dept
Soon after Elon Musk took over Twitter, he insisted that stopping child sexual abuse material (CSAM) was his top priority. While some of his fans insisted he had magically done so, the fact is that he fired nearly the entire team handling that issue, meaning CSAM was running rampant on the site and the company seemed to be doing little about it.
I’m guessing that all of the stories about this prompted the folks at the Stanford Internet Observatory (SIO) to research how well Twitter was handling known CSAM images. As you may know, the “standard” for most big sites is to use a tool managed by Microsoft called PhotoDNA, which contains hashes of a large database of known CSAM images, as determined by the National Center for Missing & Exploited Children (NCMEC). PhotoDNA has its issues, but the one thing it’s generally pretty good at is catching and stopping attempts to reupload images in its database.
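To make the mechanism concrete: PhotoDNA is proprietary, but the core idea is to hash each uploaded image and check it against a set of hashes of known flagged images. A minimal sketch of that lookup flow, with the caveat that real PhotoDNA uses a robust perceptual hash that survives resizing and re-encoding (a cryptographic hash is used here purely to illustrate the flow, and all names and sample bytes are hypothetical):

```python
import hashlib

def image_hash(image_bytes: bytes) -> str:
    # Stand-in for PhotoDNA's perceptual hash; SHA-256 only matches
    # byte-identical files, which real PhotoDNA is designed to improve on.
    return hashlib.sha256(image_bytes).hexdigest()

# Hashes of known flagged images, as would be supplied from a database
# like NCMEC's. Hypothetical sample data for illustration.
known_hashes = {image_hash(b"previously-flagged-image-bytes")}

def should_block(upload: bytes) -> bool:
    """Return True if the upload matches a known hash and should be
    blocked and reported rather than published."""
    return image_hash(upload) in known_hashes
```

The point of the design is that the match is a cheap set lookup at upload time, which is why failing to catch known hashes is considered a basic failure rather than a hard problem.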
So, SIO ran an experiment in which they hooked a PhotoDNA system up to Twitter’s feed to see if it found any such known images. The team at SIO wasn’t viewing the images themselves; matches were sent directly to NCMEC.
Making sure you have no PhotoDNA matches on your site is basically table stakes for any decently large internet platform that hosts images or video. If you can’t stop images in PhotoDNA, you’re failing, badly. And Twitter failed badly.
In just over two months, from March 12 to May 20, the researchers’ system detected more than 40 images posted to Twitter that were previously flagged as child sexual abuse material, based on a data set of roughly 100,000 tweets, said David Thiel, chief technologist of the Stanford Internet Observatory and a co-author of the report.
The appearance of the images on Twitter was striking because they had been previously flagged as child sexual abuse material, or CSAM, and were part of databases companies can use to screen content posted to their platforms, the researchers said. “This is one of the most basic things you can do to prevent CSAM online, and it did not seem to be working,” Thiel said.
Dealing with CSAM beyond PhotoDNA is a much bigger challenge, but the fact that the company couldn’t even do the basics correctly is terrifying.
In a thread on Bluesky, Renee Diresta, who worked on the research, noted that they tried to reach out to Twitter to alert the company that its PhotoDNA setup was missing things, but initially couldn’t find anyone to talk to. That’s another strike against Elon’s trust & safety team, as basically every mid-sized to large internet company has at least someone who knows people at SIO. It’s a bad sign if a company doesn’t.
Eventually, the SIO team had to find a “third-party intermediary” to reintroduce them to Twitter, and somehow that finally got someone at the company to pay attention and fix the issue.
Having no remaining Trust and Safety contacts at Twitter, we approached a third-party intermediary to arrange a briefing. Twitter was informed of the problem, and the issue appears to have been resolved as of May 20.
Again, there are reasons why you have a strong trust & safety department, and that includes being able to deal with illegal content like CSAM. Yet, despite Musk claiming it was the company’s top priority, Twitter completely fell down on the job.
Filed Under: csam, ncmec, photodna, sio, stanford
Companies: twitter