Techdirt. Stories filed under "moral panic"
Easily digestible tech news...
https://beta.techdirt.com/

Thu, 25 Mar 2021 06:27:34 PDT
Utah Governor Signs New Porn Filter Law That's Just Pointless, Performative Nonsense
Karl Bode
https://beta.techdirt.com/articles/20210322/07595646464/utahs-latest-porn-filter-bill-is-pointless-performative-nonsense.shtml

For decades now, Utah legislators have repeatedly engaged in theater in their doomed bid to filter pornography from the internet. And repeatedly those lawmakers run face-first into the technical impossibility of such a feat (it's trivial for anybody who wants porn to bypass filters), the problematic collateral damage that inevitably occurs when you try to censor such content (filters almost always wind up banning legitimate content), and a pesky little thing known as the First Amendment. But annoying things like technical specifics or the Constitution aren't going to thwart people who just know better.

For months now, Utah has been contemplating yet another porn filtering law, this time HB 72. HB 72 pretends that it's going to purge the internet of its naughty bits by mandating active adult content filters on all smartphones and tablets sold in Utah. Phone makers would have to enable filters by default (purportedly because enabling such restrictions by choice is just too darn difficult), and mobile consumers in Utah would have to enter a passcode before disabling them. If these filters aren't enabled by default, the bill would hold device manufacturers liable for up to $10 per individual violation.

On Tuesday, Utah Governor Spencer Cox signed the bill into law, claiming its passage would send an “important message” about preventing children from accessing explicit online content:

"Rep. Susan Pulsipher, the bill’s sponsor, said she was “grateful” the governor signed the legislation, which she hopes will help parents keep their children from unintended exposure to pornography. She asserts that the measure passes constitutional muster because adults can deactivate the filters, but experts said it still raises several legal concerns."

The AP story takes the "view from nowhere" or "both sides" US journalism approach to the story, failing to note that it's effectively impossible to actually filter porn from the internet, largely because the filters (be they adult controls on a device or DNS blocklists) can usually be disabled by a toddler with a modicum of technical aptitude. Nor does it note that filters almost always cause unintended collateral damage to legitimate websites.
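That over-blocking problem is easy to demonstrate. Here's a minimal sketch (the blocklist and the sample strings are invented purely for illustration) of why naive substring filtering catches legitimate content, the classic "Scunthorpe problem":

```python
# A toy content filter: block anything containing a term from the blocklist.
# The blocklist here is illustrative, not taken from any real product.
BLOCKLIST = {"sex", "xxx"}

def naive_filter(text: str) -> bool:
    """Return True if the text would be blocked by substring matching."""
    lowered = text.lower()
    return any(term in lowered for term in BLOCKLIST)

# Perfectly legitimate content gets caught, because "Essex", "Middlesex",
# and "sextant" all contain "sex" as a substring.
for phrase in ["Essex County Council", "Middlesex University", "sextant repair guide"]:
    print(phrase, "->", "BLOCKED" if naive_filter(phrase) else "ok")
```

Real filters are more sophisticated than this, but the trade-off is the same: tighten the rules and you ban legitimate content; loosen them and the filter is trivially bypassed.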

The AP also kind of buries the fact that the bill is more about performative posturing than productive solutions. The law literally won't take effect unless five other states pass equivalent laws, something that's not going to happen in part because most states realize what a pointless, Sisyphean effort this is:

"Moreover, the rule includes a huge loophole: it doesn’t take effect until five other states pass equivalent laws. If none pass before 2031, the law will automatically sunset. And so far, Utah is the only place that’s even got one on the table. “We don’t know of any other states who are working on any plans right now,” says Electronic Frontier Foundation media relations director Rebecca Jeschke."

There's also, again, that whole First Amendment thing. There is apparently something in the water at the Utah legislature that makes state leaders incapable of learning from experience when it comes to technical specifics or protected speech.

Obviously, this will go about as well as all the previous efforts of this type, including the multi-state effort by the guy who tried to marry his computer to mandate porn filters in numerous states under the false guise of combatting "human trafficking." And it will fail because these are not serious people or serious bills; they're just folks engaged in performative nonsense for a select audience of the perpetually aggrieved. Folks who simply refuse to realize that the solution to this problem is better parenting and personal responsibility, not shitty, unworkable bills or, in this case, legislation that does nothing at all.

from the round-and-round-you-go dept.
Wed, 3 Feb 2021 13:44:46 PST
Federal Court Orders Destruction Of Illegally-Obtained Sex Trafficking Sting Recordings
Tim Cushing
https://beta.techdirt.com/articles/20210131/12195046158/federal-court-orders-destruction-illegally-obtained-sex-trafficking-sting-recordings.shtml

The expiring breaths of a sensationalistic failure are emanating from a Florida sex trafficking investigation's soon-to-be corpse. A massive sting operation -- built on surreptitious recordings of massage parlor employees and their customers -- ended with nothing more than a bunch of solicitation charges. The alleged massive sex trafficking operation was actually just a bunch of consensual activity, with massage parlor employees free to come and go as they pleased.

It still made headlines, mainly because New England Patriots owner Robert Kraft was one of those caught on camera. But nearly every attempted prosecution has been thwarted by the actions of law enforcement officers, whose recordings illegally intruded into private spaces, violating the Fourth Amendment. The Appeals Court of Florida tossed the allegedly incriminating recordings, finding them unconstitutional.

For some reason, the agencies that made the surreptitious, illegal recordings are still holding onto them. The state attorney's office has allowed the retention of the videos, claiming they might be useful to plaintiffs suing law enforcement officers and agencies over violated rights.

On the face of it, this seems like a reasonable assertion. There is at least one federal lawsuit involving this sting operation underway. But the state attorney -- David Aronberg -- thinks immunity (qualified or absolute) will allow him and several law enforcement agencies to escape unscathed. In the meantime, Aronberg wants the recordings to remain intact until this litigation concludes, claiming his office can't "legally or ethically" order the destruction of potential evidence against him.

But his arguments aren't working. As Elizabeth Nolan Brown reports for Reason, a federal judge has ruled against the state attorney.

In his January 22 order, Ruiz granted John Doe's motion to compel destruction of the massage room video. Ruiz ruled that the defendants "shall destroy the videos unlawfully obtained through the surveillance of the Orchids of Asia Day Spa […] from January 18, 2019 to January 22, 2019, including any body camera footage obtained during associated traffic stops as well as any copies thereof."

The motion to compel destruction was unopposed, and Ruiz noted that the destruction is "pursuant to the terms of the parties' settlement agreement."

So, let's sort this all out. The state attorney claimed the footage needed to be retained because these plaintiffs might want to use it as evidence in their lawsuit. But the plaintiffs actually wanted the footage destroyed and had to get the court to order the destruction the state attorney claimed wasn't "legal or ethical."

Retaining footage the plaintiffs wanted destroyed was, at the very least, unethical. And this order makes any further retention illegal. Given that the state appeals court ruled last year that the recordings were illegally obtained and could not be used as evidence in the state's prosecutions, destruction would have seemed the obvious course unless the plaintiffs requested otherwise.

This about wraps up this sordid little law enforcement escapade. And another sex trafficking sting resulting in the arrest of zero sex traffickers is par for the course for law enforcement agencies, which appear to be looking for any excuse to engage in titillating wastes of taxpayers' time and money.

from the no-sex-traffickers-were-harmed-during-the-course-of-this-investigation dept.
Thu, 21 Jan 2021 09:35:00 PST
New York Times Decides Kids Are Playing Too Many Video Games During The Pandemic
Timothy Geigner
https://beta.techdirt.com/articles/20210119/10224946078/new-york-times-decides-kids-are-playing-too-many-video-games-during-pandemic.shtml

One of the most predictable things in the world is that if anything is going on in the universe, people will try to find some way to make video games into a villain over it. This is doubly true if there are children within a thousand miles of whatever is going on. Notable when these claims arise is the velocity with which any nuance or consideration of a countervailing opinion is chucked out the window.

Meanwhile, most of the world, and the United States in particular, is suffering in all manner of ways from the COVID-19 pandemic. Hundreds of thousands dead. Millions falling ill. Economic fallout for large swaths of the public. High tensions due to all of this, compounded by a mad would-be king inciting violence in the house of government. And, even for those not suffering health or massive economic crises, there's the simple matter that we're all more isolated, all home more often, and all mired in a severe lack of socialization and life-affirming activities.

And it's in this environment, apparently, that the New York Times has decided to chide parents for letting their kids play video games and allowing more screen time more generally.

The article, which ran on January 16, quoted some experts and presented a lot of “scary” numbers about screentime. But it also glossed over the fact that video games and the internet have helped many people, kids and adults, stay connected and sane during this terrible time.

The whole post is also oddly bookended by a random small family that is currently struggling during the pandemic. Their son plays a lot of video games as a way to connect with his friends. His father and mother are concerned about how much time he spends in front of the screen, but also know it’s one of the few ways he has to safely socialize while covid-19 runs wild across the world. This is a hard situation I imagine many parents around the globe are going through right now. But highlighting only kids and how much screentime they are using ignores that all of us, not just children and teens, are dealing with increased screentime and a lack of real human interaction. Instead, the article goes on and on about how potentially unhealthy and dangerous all this screentime could be for kids. How kids need to disconnect more. How kids are playing too much Roblox.

This whole diatribe is off for a number of reasons. First, let's start off with the obvious: these are not normal times. If experts want to make arguments or present data that one amount of screen time or another, or even certain amounts of video game playing, is harmful to children, I'm open to those arguments. They need to come with actual scientific data, but I'm open to them. But during a pandemic, when most children are incredibly isolated from their normal activities -- team athletics, outdoor play with other children, school and after-school activities, etc. -- someone is going to have to tell me how increased time playing video games or in front of a computer screen is somehow more harmful than the void of any affirming activity. There are only so many books a child is going to read. Only so many games of cards. Only so much time in imaginative play, or in discussion with his or her parents. Now is not a normal time, so why are we grading parents by normal rules?

Hell, even the experts on the matter have made their recommendations for screen time during the pandemic a moving target.

Dr. Jenny Radesky, a pediatrician who studies children’s use of mobile technology at the University of Michigan, said she did countless media interviews early in the pandemic, telling parents not to feel guilty about allowing more screen time, given the stark challenges of lockdowns. Now, she said, she’d have given different advice if she had known how long children would end up stuck at home.

“I probably would have encouraged families to turn off Wi-Fi except during school hours so kids don’t feel tempted every moment, night and day,” she said, adding, “The longer they’ve been doing a habituated behavior, the harder it’s going to be to break the habit.”

It's also very much worth keeping in mind that discussions on recommended limits to screen time and, even more so video games, are relatively new things given the rapid pace with which technology has been developed. And those recommendations regarding screen time for children have been moving targets over the years. New studies come out all the time on the topic and recommendations from experts likewise get updated.

Moving targets upon moving targets. If you're getting the sense that what experts say about all of this during the COVID-19 pandemic has a make-it-up-as-we-go quality to it, ding ding ding!

And instead of any nuance afforded to the fact that video games have changed wildly to become multiplayer social platforms as much as games, and what that means for children who need to socialize during a pandemic, the article instead just further vilifies game-makers.

Children turn to screens because they say they have no alternative activities or entertainment — this is where they hang out with friends and go to school — all while the technology platforms profit by seducing loyalty through tactics like rewards of virtual money or “limited edition” perks for keeping up daily “streaks” of use.

“This has been a gift to them — we’ve given them a captive audience: our children,” said Dr. Dimitri Christakis, director of the Center for Child Health, Behavior and Development at Seattle Children’s Research Institute. The cost will be borne by families, Dr. Christakis said, because increased online use is associated with anxiety, depression, obesity and aggression — “and addiction to the medium itself.”

To give the Times an ounce of credit, that quote is immediately followed by an acknowledgement that Christakis' claims aren't actually borne out by anything other than association metrics. In other words, correlation rather than causation. So why bother even including the quote at all?

To conclude: these are not normal times. An over-indulgence of video games in lieu of other healthy activities is surely not optimal for the health and growth of children. But right now there are severe limits on those other healthy activities. And if some gaming gets children in touch with their friends who they can't see otherwise, vilifying video games makes zero sense.

from the sigh dept.
Mon, 4 Jan 2021 09:31:46 PST
60 Minutes Episode Is Pure Misleading Moral Panic About Section 230; Blames Unrelated Issues On It
Mike Masnick
https://beta.techdirt.com/articles/20210104/01172345990/60-minutes-episode-is-pure-misleading-moral-panic-about-section-230-blames-unrelated-issues-it.shtml

I have a browser open with about a dozen different bad and wrong takes on Section 230 that one day I may write about, but on Sunday night, 60 Minutes jumped to the head of the line with an utterly ridiculous moral panic filled with false information on Section 230. The only saving grace of the program was that at least they spoke with Jeff Kosseff, author of the book on Section 230 (which is an excellent read). However, you can tell from the way they used Jeff that someone in the editorial meeting decided "huh, we should probably find someone to be the 'other' side of this debate, so we can pretend we're even-handed," and then sprinkled in Jeff to explain the basics of the law (which they would then ignore in the rest of the report).

It's almost difficult to describe just how bad the 60 Minutes segment is. It is, quite simply, blatant disinformation. I guess somewhat ironically, much of the attack on 230 talks about how that law is responsible for disinformation. Which is not true. Other than, perhaps, this very report that is itself pure disinformation.

What's most astounding about the piece is that almost everything it discusses has nothing to do with Section 230. As with so many 230 stories, 60 Minutes producers actually seem upset about the 1st Amendment and various failures by law enforcement. And somehow... that's the fault of Section 230. It's somewhat insane to see a news organization like 60 Minutes basically go on an all-out assault on the 1st Amendment.

The central stories in the piece involve people who (tragically!) have been harassed online. One case involves a woman who was falsely blamed by some nutjob conspiracy theorists for having brought COVID-19 to the United States. Because of that, she and her family received death threats, which is absolutely terrible, but has nothing to do with Section 230. 60 Minutes points out that law enforcement didn't care and said that the death threats weren't enough of a crime. But... uh... then shouldn't 60 Minutes be focused on the failures of law enforcement to deal with threats (which actually can be a crime if they fall into the category of "true threats")? Instead, somehow this is Section 230's fault? How?

And it gets worse. 60 Minutes trots out the bogeyman of "anonymous internet trolls," even though this comes right after 60 Minutes shows that the nutjob conspiracy theorist who started this has a name and is well known (as a nutjob conspiracy theorist). The whole setup here is bizarre. The death threats are awful, and if they are criminal, then the problem is with the police and the FBI who the show says did nothing. If they're not criminal, then they're not breaking the law. So, the reason there's "no one to sue" is not because of Section 230, but because no laws were broken. But that's not how 60 Minutes' Scott Pelley frames it.

Right about now you might be thinking, they should sue. But that's the problem. They can't file hundreds of lawsuits against internet trolls hiding behind aliases. And they can't sue the internet platforms because of that law known as Section 230 of the Communications Decency Act of 1996. Written before Facebook or Google were invented, Section 230 says, in just 26 words, that internet platforms are not liable for what their users post.

Over and over again, the report blames Section 230 for all of this. Incredibly, at the end of the report, they admit that the video from that nutjob conspiracy theorist was taken down from YouTube after people complained about it. In other words Section 230 did exactly what it was supposed to do in enabling YouTube to pull down videos like that. But, of course, unless you watch the entire 60 Minutes segment, you'll miss that, and still think that 230 is somehow to blame.

The second half is basically more of the same. It talks about two more unfortunate stories that actually suggest Section 230 is working correctly. The first involves Lenny Pozner, who has been fighting back against insane conspiracy theorists who have gone after him since his son was killed in the Sandy Hook shooting in 2012. But, again, Pozner's story shows that Section 230... works? After going on for a few moments about how legitimately awful Pozner's situation is, Pelley reveals that Facebook, YouTube and others have been super responsive to Pozner and are quick to pull down information that he, and a non-profit he set up, flag as problematic. The segment talks about how Pelley sent an open letter to Mark Zuckerberg, and then admits that since then Facebook has been super responsive:

After the letter, a Facebook manager called Pozner.

Lenny Pozner: It began a relationship with Facebook that helped them learn about the material that is being posted on their platform and how it is abusive, defamatory

Scott Pelley: Have you seen a difference, a practical difference in Facebook?

Lenny Pozner: Yes, it's almost all gone.

So, um, why are we blaming Section 230 again? It sounds like the system is working. The same is true in the next story, which involves Andy Parker, the father of Alison Parker -- a reporter who was tragically murdered by a fired co-worker live on air in the middle of an interview. Parker has wanted those videos off of social media. And... that's basically what happened.

Lenny Pozner flagged Alison Parker videos for YouTube to remove.

YouTube wrote us, "There is no place on YouTube for content that exploits this horrendous act, and we've spent the last several years investing in tools and policies to quickly remove it." YouTube told us it now prioritizes all requests from Pozner's HONR Network.

But 60 Minutes says this is proof that the platforms' moderation doesn't work?

Andy Parker: I really expected them to do the right thing. Their motto was, "Don't be evil." And for a while, they did a pretty good job of it. But now, they are the personification of evil.

Huh? What? But...?

That's when the report finally admits that the first couple profiled, falsely blamed for bringing COVID-19 to the US, also were successful in getting the video pulled down. And... then they still blame Section 230 -- the same Section 230 that enables YouTube to pull down those videos:

Scott Pelley: Based on what you've had to learn about all of these things, what do you think the solution could be?

Matt Benassi: This is really, really hard, right? 'Cause Section 230. When that was written, it was probably done with the intent that social media companies would police themselves in some manner. And social media companies haven't done that very well.

Except... the segment shows they did police themselves.

And then the segment ends in the most bizarre fashion, trying to at least nod towards the point that Section 230 being revoked would completely change the internet, but... I mean... this is just word salad:

But making social media liable would also mean Facebook, Twitter, even Wikipedia and Yelp, couldn't exist as we know them. President-elect Biden wants to revoke Section 230. The federal government is already suing to break up Facebook and Google. No one can say what social media 2.0 will look like or whether the innocent will ever be protected from a world wide web of lies.

What do the antitrust lawsuits have to do with Section 230? Why mention that? It's a total non sequitur. And getting rid of Section 230 does not "protect the innocent from a world wide web of lies." Because most lies are protected by the 1st Amendment, not 230. And in the rare cases they are not, that's an issue for law enforcement, not Section 230.

It's really becoming difficult to not believe that major media companies are, themselves, choosing to air blatant anti-internet propaganda. You may recall that one of the revelations from the Sony hack a few years back was that the big movie studios got together to plot out a strategy for undermining the internet, which included using media properties they own to run a smear campaign of reports and articles. Whether or not that's the intention, it certainly has the same effect here. CBS provides no disclaimer about the fact that it is owned by Viacom, one of the companies who was involved in that plot.

Nor does 60 Minutes note that its own site is protected by Section 230. Nor does the segment point out that Section 230 protects free speech online and protects users themselves. The brief clip of Jeff Kosseff just gives a basic description of part of the law, but not any of the important nuance (that Jeff knows and explains literally every day).

It's pure propaganda. And it's an online piece that seems to be suggesting (falsely) that without 230, we'd no longer have misinformation online. It's bonkers.

And, finally, it's insane that a news organization like CBS, which has faced many defamation cases over the years, is more or less promoting more defamation cases. I've never quite seen anything like it. But CBS/Viacom and 60 Minutes should be ashamed of putting on this garbage. It's not informing people. It's misinforming them.

from the oh-come-on dept.
Wed, 9 Dec 2020 09:32:32 PST
Biden's Top Tech Advisor Trots Out Dangerous Ideas For 'Reforming' Section 230
Mike Masnick
https://beta.techdirt.com/articles/20201208/17023245848/bidens-top-tech-advisor-trots-out-dangerous-ideas-reforming-section-230.shtml

It is now broadly recognized that Joe Biden doesn't like Section 230 and has repeatedly shown he doesn't understand what it does. Multiple people keep insisting to me, however, that once he becomes president, his actual tech policy experts will understand the law better, and move Biden away from his nonsensical claim that he wishes to "repeal" the law.

In a move that is not very encouraging, Biden's top tech policy advisor, Bruce Reed, along with Common Sense Media's Jim Steyer, has published a bizarre and misleading "but think of the children!" attack on Section 230 that misunderstands the law, misunderstands how it impacts kids, and suggests incredibly dangerous changes to Section 230. If this is the kind of policy recommendation we're to expect over the next four years, the need to defend Section 230 is going to remain pretty much the same as it's been over the last few years.

Let's break down the piece and its myriad problems.

Mark Zuckerberg makes no apology for being one of the least-responsible chief executives of our time. Yet at the risk of defending the indefensible, as Zuckerberg is wont to do, we must concede that given the way federal courts have interpreted telecommunications law, some of Facebook's highest crimes are now considered legal.

Uh, wait. No. There's a very sketchy sleight-of-word right here in the opening, claiming that "Facebook's highest crimes are now considered legal." That is wrong. Any law that Facebook violates, it is still held liable for. The point of Section 230 is that Facebook (and any website) should not be held liable for any laws that its users violate. Reed and Steyer seek to elide this very important distinction in a pure "blame the messenger" way.

It may not have been against the law to livestream the massacre of 51 people at mosques in Christchurch, New Zealand or the suicide of a 12-year-old girl in the state of Georgia. Courts have cleared the company of any legal responsibility for violent attacks spawned by Facebook accounts tied to Hamas. It's not illegal for Facebook posts to foment attacks on refugees in Europe or try to end democracy as we know it in America.

This is more of the same. The Hamas claim is particularly bogus. The lawsuit in that case involved some plaintiffs who were harmed by Hamas... and decided that the right legal remedy was to sue Facebook because some Hamas members used Facebook. There was no attempt to even show that the injuries the plaintiffs faced had anything to do with Hamas using Facebook. The cases were tossed because Section 230 did exactly the right thing: note that the legal liability should be on the parties actually responsible. We don't blame AT&T when a terrorist makes a phone call. We don't blame Ford because a terrorist drives a Ford car. We shouldn't blame Facebook just because a terrorist uses Facebook.

This is fairly basic stuff, and it is shameful for Reed and Steyer to misrepresent things in such a way that is designed to obfuscate the actual details of the legal issues at play, while purely pulling at heartstrings. But the heartstring-pulling was just beginning, because this whole piece shifts into the typical "but think of the children!" pandering quite quickly.

Since Section 230 of the 1996 Communications Decency Act was passed, it has been a get-out-of-jail-free card for companies like Facebook and executives like Zuckerberg. That 26-word provision hurts our kids and is doing possibly irreparable damage to our democracy. Unless we change it, the internet will become an even more dangerous place for young people, while Facebook and other tech platforms will reap ever-greater profits from the blanket immunity that their industry enjoys.

Of course, it hasn't been a get-out-of-jail-free card for any of those companies. The law has never barred federal criminal prosecutions, as federal crimes are exempt from the statute. Almost every Section 230 case has been about civil disputes. It's also shameful that Reed and Steyer seem to mix up the differences between civil and criminal law.

Also, I'd contest the argument that it's Section 230 that has made the internet a dangerous place for kids or democracy. Section 230 has enabled many, many forums and spaces for young people to congregate and communicate -- many of which have been incredibly important. It's where many LGBTQ+ kids have found like-minded people and discovered they're not alone. It's where kids who are interested in niche areas or specific communities have found others with similar views. All of that is possible because of Section 230.

Yes, there is bullying online, and that's a problem, but Section 230 has also enabled tremendous variation and competition in how different websites respond to that, with many creating quite clever ideas in how to deal with the downsides of purely open communication. Changing Section 230 will likely remove that freedom of experimentation.

It wasn't supposed to be this way. According to former California Rep. Chris Cox, who wrote Section 230 with Oregon's Sen. Ron Wyden, "The original purpose of this law was to help clean up the internet, not to facilitate people doing bad things on the internet." In the 1990s, after a New York court ruled that the online service provider Prodigy could be held liable in the same way as a newspaper publisher because it had established standards for allowable content, Cox and Wyden wrote Section 230 to protect "Good Samaritan" companies like Prodigy that tried to do the right thing by removing content that violated their guidelines.

But through subsequent court rulings, the provision has turned into a bulletproof shield for social media platforms that do little or nothing to enforce established standards.

This is just flat out wrong, and it's embarrassing that Reed and Steyer are repeating this out-and-out myth. You will find no sites out there, least of all Facebook (the main bogeyman named in this article), "that do little or nothing to enforce established standards." Facebook employs tens of thousands of content moderators, and has a truly elaborate system for reviewing and modifying its ever-changing standards, which it tries to enforce.

We can agree that the companies may fail to catch everything, but that's not because they're not trying. It's because it's impossible. That was the very basis of 230: recognizing that an open platform is literally impossible to fully police, and 230 would enable sites to try different systems for policing it. What Reed and Steyer are really saying is that they don't like how Facebook has chosen to police its platform. Which is a reasonable argument to make, but it's not because of 230. It seems to be because Steyer and Reed are ignorant of what Facebook has actually done.
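A back-of-envelope calculation shows why "fully policing" an open platform is a fantasy. All of the figures below are assumptions chosen for illustration, not Facebook's actual numbers:

```python
# Illustrative arithmetic only: both figures below are assumptions,
# not real platform statistics.
posts_per_day = 1_000_000_000  # assumed: a billion pieces of content daily
accuracy = 0.999               # assumed: 99.9% of moderation calls are correct

mistakes_per_day = posts_per_day * (1 - accuracy)
print(f"Wrong moderation calls per day: {mistakes_per_day:,.0f}")

# Even a 0.1% error rate yields about a million mistakes every single day --
# each one a potential headline about "failed" moderation.
```

The absolute numbers scale with volume, so any platform big enough to matter will always produce an endless supply of individual moderation failures, no matter how much it spends.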

Facebook and other platforms have saved countless billions thanks to this free pass. But kids and society are paying the price. Silicon Valley has succeeded in turning the internet into an online Wild West — nasty, brutal, and lawless — where the innocent are most at risk.

Bullshit. Again, Facebook employs tens of thousands of moderators and actually takes a fairly heavy hand in its moderation practices. To say that this is a "Wild West" is to express near total ignorance of how content moderation actually works at Facebook. Facebook spends more on moderation than Twitter makes in revenue. To say that it's "saving billions" thanks to this "free pass" is to basically admit that you don't know what you're talking about.

The smartphone and the internet are revolutionary inventions, but in the absence of rules and responsibilities, they threaten the greatest invention of the modern world: a protected childhood.

This is "but think of the children" moral panicking. Yes, we should be concerned about how children use social media, but Facebook, like most other sites doesn't allow users to have accounts if they're under 13-years old, and the problem being discussed is not about 230, but rather about teaching children how to be more discerning digital citizens when they're online. And this is important, because it's a skill they'll need to learn. Trying to shield them from absolutely everything -- rather than giving them the skills to navigate it -- is a dangerous approach that will leave kids unprepared for life on the internet.

But Reed and Steyer are all in on the "think of the children" moral panic... so much so that they (and I only wish I were joking) compare children using social media... to child labor and child trafficking:

Since the 19th century, economic and technological progress enabled societies to ban child labor and child trafficking, eliminate deadly and debilitating childhood diseases, guarantee universal education and better safeguard young children from exposure to violence and other damaging behaviors. Technology has tremendous potential to continue that progress. But through shrewd use of the irresponsibility cloak of Section 230, some in Big Tech have turned the social media revolution into a decidedly mixed blessing.

Oh come on. Those things are not the same. This entire piece is a masterclass in extrapolating a few worst case scenarios and insisting that they're happening much more frequently than they really are. Eventually the piece finally gets to its suggestion on "what to do about it." And the answer is... destroy Section 230 in a way that won't actually help.

But treating platforms as publishers doesn't undermine the First Amendment. On the contrary, publishers have flourished under the First Amendment. They have centuries of experience in moderating content, and the free press was doing just fine until Facebook came along.

That... completely misses the point. Publishers can handle things because they review every bit of content that goes out in their publication. The reason 230 treats sites that host third-party content differently from publishers publishing their own content is that the two things are not the same. And if websites had to review every bit of user content, as publishers do, then... we'd have many fewer spaces online where people can communicate. It would massively stifle speech online.

The tech industry's right to do whatever it wants without consequence is its soft underbelly, not its secret sauce.

But it's NOT a "right to do whatever it wants without consequence." Not even remotely. The sites themselves cannot break the law. The sites have very, very strong motivations to moderate -- including pressure from their own users (because if they don't do the right thing, their users will go elsewhere), the press, and (especially) from advertisers. We've seen just in the past few months that advertisers pulling their ads from Facebook has been an effective tool in getting Facebook to rethink its policies.

The idea that because 230 is there, Facebook and other sites do nothing is a myth. It's a myth that Reed and Steyer are exploiting to make you think that you have to "save the children." It's bullshit and they should be ashamed to peddle myths. But they lean hard into these myths:

Instead of acknowledging Facebook's role in the 2016 election debacle, he slow-walked and covered it up. Instead of putting up real guardrails against hate speech, violence, and conspiracy videos, he has hired low-wage content moderators by the thousands as human crash dummies to monitor the flow. Without that all-purpose Section 230 shield, Facebook and other platforms would have to take responsibility for the havoc they unleash and learn to fix things, not just break them.

This is... not an accurate portrayal of anything. It's true that Zuckerberg was initially reluctant to believe that Facebook had a role in 2016 (and there are still legitimate questions as to how much of an impact Facebook actually had, or whether it was just a convenient scapegoat for a poorly run Hillary Clinton campaign). But by 2017, Facebook had found religion and completely revamped its moderation processes regarding election content. Yes, it did hire thousands of content moderators. But it's bizarre that Reed and Steyer finally admit this way down in the article, after paragraphs upon paragraphs insisting that Facebook does no moderation, doesn't care, and doesn't need to do anything.

But more to the point: if they don't want Facebook to hire all those content moderators, but do want Facebook to stop all the bad stuff online... how the hell do they think Facebook can do that? Their answer amounts to "wave a magic wand." They say to take away Facebook's 230 protections, as if that will magically solve things. It won't.

It would mean far more takedowns of content, including content from marginalized voices. It would mean Facebook would likely have to hire many more of those content moderators to review much more content. And, most importantly, it means that no competitor could ever be built to challenge Facebook, because it would be the only company that could afford such compliance costs.

And, the article gets worse. Reed and Steyer point to FOSTA as an example of how to reform 230. Really.

So the simplest way to address unlimited liability is to start limiting it. In 2018, Congress took a small step in that direction by passing the Stop Enabling Sex Traffickers Act and the Allow States and Victims to Fight Online Sex Trafficking Act. Those laws amended Section 230 to take away safe harbor protection from providers that knowingly facilitated sex trafficking.

Right, and what was the result? It certainly didn't do what the people promoting it expected. Craigslist shut down its personals section, clearing the field for Facebook to launch its own dating service. In other words, it gave more power to Facebook.

More importantly, it has been used to harm sex workers, putting many lives at risk, and to shut down places where adults could discuss sex, all while making it harder for police to find sex traffickers. The end result has actually been an increase, rather than a decrease, in ads for sex online.

In other words, citing FOSTA as a "good example" of how to amend Section 230 suggests whoever is citing it doesn't know what they're talking about.

Congress could continue to chip away by denying platform immunity for other specific wrongs like revenge porn. Better yet, it could make platform responsibility a prerequisite for any limits on liability. Boston University law professor Danielle Citron and Brookings Institution scholar Benjamin Wittes have proposed conditioning immunity on whether a platform has taken reasonable efforts to moderate content.

We've debunked this silly, silly proposal before. There are almost no sites that don't do moderation. They all have "taken reasonable efforts" to moderate, except for perhaps the most extreme. Yet this whole article was about Facebook and YouTube -- both of which could easily show that they've "taken reasonable efforts" to moderate content online.

So, if this is their suggestion... it would literally do nothing to help the "problems" they insisted were there for YouTube and Facebook. And, instead, what would happen is smaller sites would never get a chance to exist, because Facebook and YouTube would set the "standard" for how you deal with content moderation -- just like how the EU has now set YouTube's expensive ContentID as "the standard" for any site dealing with copyright-covered content.

So this proposal does nothing to change Facebook or YouTube's policies, but locks them in as the dominant players. How is that a good idea?

But Reed and Steyer suggest maybe going further:

Washington would be better off throwing out Section 230 and starting over. The Wild West wasn't tamed by hiring a sheriff and gathering a posse. The internet won't be either. It will take a sweeping change in ethics and culture, enforced by providers and regulators. Instead of defaulting to shield those who most profit, the United States should shield those most vulnerable to harm, starting with kids. The "polluter pays" principle that we use to mitigate environmental damage can help achieve the same in the online environment. Simply put, platforms should be held accountable for any content that generates revenue. If they sell ads that run alongside harmful content, they should be considered complicit in the harm. Likewise, if their algorithms promote harmful content, they should be held accountable for helping redress the harm. In the long run, the only real way to moderate content is to moderate the business model.

Um. That would kill the open internet. Completely. Dead. And it's a stupid fucking suggestion. The "pollution" they are discussing here is First Amendment-protected speech. This is why thinking of it as analogous to pollution is so dangerous. They are advocating for government rules that would stifle free speech. Massively. And, again, the only companies that could afford to comply are the biggest ones. It would destroy smaller sites. And it would destroy the ability for you or me to talk online.

There's more in the article, but it's all bad. That this is coming from Biden's top tech advisor is downright scary. It is as destructive as it is ignorant.

]]>
this is a problem https://beta.techdirt.com/comment_rss.php?sid=20201208/17023245848
Wed, 7 Oct 2020 10:44:00 PDT Texas Grand Jury Indicts Netflix For 'Lewd Exhibition' Of Children In Its Movie 'Cuties' Tim Cushing https://beta.techdirt.com/articles/20201006/15035245453/texas-grand-jury-indicts-netflix-lewd-exhibition-children-movie-cuties.shtml https://beta.techdirt.com/articles/20201006/15035245453/texas-grand-jury-indicts-netflix-lewd-exhibition-children-movie-cuties.shtml It seems impossible that 2020 could get any stupider. But here we are, watching in bemusement as a showboating prosecutor talks a grand jury in a tiny Texas county into indicting an online streaming service for… let's check the record… "promotion of lewd visual material depicting child."

Here's "liberty loving conservative" (and state rep) Matt Schaefer's tweet, which contains a snapshot of the indictment.

Here's what the tweet says above the indictment photo:

Netflix, Inc. indicted by grand jury in Tyler Co., Tx for promoting material in Cuties film which depicts lewd exhibition of pubic area of a clothed or partially clothed child who was younger than 18 yrs of age which appeals to the prurient interest in sex

Go ahead and jump to the replies if you enjoy watching a bunch of people who don't understand the First Amendment or state law cheer on this showy act of futility.

The indictment [PDF] states that Netflix broke the law by distributing the film "Cuties" via its streaming service. Jurisdiction is presumably proper because even Tyler County residents can subscribe to the service. If you're not familiar with "Cuties," it's a coming-of-age film dealing with a Senegalese preteen who begins to emulate the sexualization of other females while growing up as a Muslim in Paris, France. It won awards at the Sundance Film Festival and flew under the radar until Netflix began its promotion of the film, which centered on the more questionable depictions of underage girls engaging in hyper-sexualized behavior.

All hell broke loose for a few weeks last month. Calls to boycott Netflix filled social media services and a number of US politicians decided this was the thing they should be spending their time on as thousands died from COVID, businesses closed forever and unemployment remained high.

Here's just some of the legislative-level furor that followed Netflix's release of the film.

Senator Bob Hall (R-Canton) swore to file a bill that would outlaw pedophilia in the state constitution, Texas Attorney General Ken Paxton joined two other state attorney generals in a letter asking Netflix to remove the film, and U.S. Senator Ted Cruz (R-TX) asked U.S. Attorney General William Barr to investigate the company for “the filming of minors engaging in sexually explicit conduct.” State Rep. James White (R-Tyler) likewise wrote Paxton asking for an investigation into the film.

Now that we're all caught up, let's look at the indictment and see if we can find the fatal flaw:

"Knowingly promote[d] visual material which depicts the lewd exhibition of the genitals or pubic area of a clothed or partially clothed child who was younger than 18 years of age at the time the visual material was created, which appeals to the prurient interest in sex, and has no serious literary, artistic, political, or scientific value…"

Anyone can get an indictment. This much is known about grand juries. But securing a conviction is going to be a hell of a lot more difficult. The prosecutor is going to have to convince a judge (and possibly a jury) that a film that won awards at an international film festival contains no "serious literary or artistic value." That's even harder to argue under the Miller test established by the Supreme Court, which says the work has to be considered as a whole in terms of its artistic merits, not just the cringier parts that prompted backlash all over the internet. That should be enough to sink the criminal case Tyler County DA Lucas Babin is bringing against Netflix.

There's some pandering going on here, but it originates in the prosecutor's office. This will score points with the kind of people willing to award points for performative wastes of public funds. No one else will be affected. "Cuties" will enjoy another few days of internet infamy, along with its US distributor. But no one's going to jail because Netflix distributed this movie, no matter how much one prosecutor wants it to happen.

]]>
child-beauty-pageants-expected-to-remain-unaffected https://beta.techdirt.com/comment_rss.php?sid=20201006/15035245453
Thu, 23 Jan 2020 10:44:00 PST Academic Consensus Growing: Phones And Social Media Aren't Damaging Your Kids Mike Masnick https://beta.techdirt.com/articles/20200119/17264343766/academic-consensus-growing-phones-social-media-arent-damaging-your-kids.shtml https://beta.techdirt.com/articles/20200119/17264343766/academic-consensus-growing-phones-social-media-arent-damaging-your-kids.shtml We've pointed out for a while now that every generation seems to have some sort of moral panic over whatever is popular among kids. You're probably aware of more recent examples, from rock music to comic books to Dungeons and Dragons to pinball machines (really). Of course, in previous generations there were other things, like chess and the waltz. Given all that, for years we've urged people not to immediately jump on the bandwagon of assuming new technology must also be bad for kids. And, yet, so many people insist they are. Senator Josh Hawley has practically trademarked his claim that social media is bad for kids. Senator Lindsey Graham held a full hearing all of which was evidence free, moral panicking about social media and the children -- and because of that he's preparing a new law to completely upend Section 230 in the name of "protecting the children" from social media.

Not that it's likely to stop grandstanding politicians, but over in academia, where they actually study these things, there's a growing consensus that social media and smartphones aren't actually bad for kids. While some academics made claims about potential harm a decade or so ago, none of their predictions have proven accurate, and some of those academics have since revised their earlier research, in one case even admitting that they caused an unnecessary panic:

The debate about screen time and mental health goes back to the early days of the iPhone. In 2011, the American Academy of Pediatrics published a widely cited paper that warned doctors about “Facebook depression.”

But by 2016, as more research came out, the academy revised that statement, deleting any mention of Facebook depression and emphasizing the conflicting evidence and the potential positive benefits of using social media.

Megan Moreno, one of the lead authors of the revised statement, said the original statement had been a problem “because it created panic without a strong basis of evidence.”

A few different "studies of studies" are showing that there's little to no evidence to support harm from these popular technologies.

The latest research, published on Friday by two psychology professors, combs through about 40 studies that have examined the link between social media use and both depression and anxiety among adolescents. That link, according to the professors, is small and inconsistent.

“There doesn’t seem to be an evidence base that would explain the level of panic and consternation around these issues,” said Candice L. Odgers, a professor at the University of California, Irvine, and the lead author of the paper, which was published in the Journal of Child Psychology and Psychiatry.

There's a lot more in that NY Times article, or you can read through pretty much all of the recent academic research on the topic.

Of course, the real question is just how silly will Senators Hawley, Graham and others look as they continue to insist that social media and phones are harming the children?

]]>
another-techno-panic https://beta.techdirt.com/comment_rss.php?sid=20200119/17264343766
Thu, 31 Oct 2019 10:50:51 PDT Australia's Idiotic War On Porn Returns, This Time Using Facial Recognition Karl Bode https://beta.techdirt.com/articles/20191030/10032243289/australias-idiotic-war-porn-returns-this-time-using-facial-recognition.shtml https://beta.techdirt.com/articles/20191030/10032243289/australias-idiotic-war-porn-returns-this-time-using-facial-recognition.shtml For years now, governments around the world have attempted to block, filter, or otherwise restrict the public's access to porn. And for just as long those efforts have routinely and repeatedly fallen flat on their face. Whether it's the UK's bungled and incoherent plan to employ age-checks to restrict porn access, or Utah's seemingly endless efforts to fiter porn entirely, history is filled with examples of how trying to thwart porn simply doesn't work. Filters are easy to bypass and tend to cause more problems than they solve. Waging war on porn at scale always ends in wasted money and headaches.

Apparently learning nothing from that time a teenage kid bypassed Australia's $89 million porn filters in a matter of minutes, Australia's back with a new idea to combat porn: restricting access to it via the use of facial recognition technology:

"Under the proposal from the Department of Home Affairs, a computer user’s face would be matched to images from official identity documents. It does not say how the user would submit a facial image at the beginning of each online session."

Obviously, the government retaining a massive database of not only citizens' facial scans, but also of who is or isn't watching porn, isn't being received well by privacy advocates. A parliamentary committee of the Australian House of Representatives began an inquiry last month fielding feedback on the effectiveness of age verification systems for adult websites and internet gambling. A filing from Australia's Department of Home Affairs championed the idea, suggesting an expansion of the government's existing facial recognition database to restrict access to online porn.

Beyond that, it's pretty clear that nobody can really explain how it will work, which is always a good sign for these types of efforts. And nobody in the Australian government wants to actually own or even talk about the dumb plan:

"The Department of Home Affairs did not respond to questions about the proposal, and the attorney general’s office, when asked to comment on the legal ramifications of the system, directed all questions to Home Affairs."

It's not clear how many times we need to go through this before governments realize that restricting access to internet porn on any meaningful scale is not only impossible given technical countermeasures like proxies and VPNs, it usually causes more problems than it solves -- be that legitimate websites being caught in the filter trap, or collected data being compromised by hackers or used by governments in ways not originally advertised.

In a couple of years, after millions have been wasted, Australia will shelve the project and life will go on, with people worried about illegal underage access to porn doing what they should have done from the start: a little thing called responsible parenting.

]]>
this-isn't-going-to-work-out-you-ninnies https://beta.techdirt.com/comment_rss.php?sid=20191030/10032243289
Fri, 18 Oct 2019 10:44:00 PDT Congressional Reps Targeting Homegrown Terrorism Are Pushing A Bill That Would Allow Congress To Subpoena Citizens' Communications Tim Cushing https://beta.techdirt.com/articles/20191016/10345943206/congressional-reps-targeting-homegrown-terrorism-are-pushing-bill-that-would-allow-congress-to-subpoena-citizens-communications.shtml https://beta.techdirt.com/articles/20191016/10345943206/congressional-reps-targeting-homegrown-terrorism-are-pushing-bill-that-would-allow-congress-to-subpoena-citizens-communications.shtml In the name of securing the homeland, Congressional reps are tossing around the idea of regulating online speech. This isn't the first effort of its type. There's always someone on Capitol Hill who believes the nation would be safer if the First Amendment didn't cover quite so much speech. But this latest effort is coming directly from the Congressional committee that oversees homeland security efforts, as the Hill reports.

Civil liberties and technology groups have been sharply critical of a draft bill from House Homeland Security Committee Democrats on dealing with online extremism, saying it would violate First Amendment rights and could result in the surveillance of vulnerable communities.

The whole thing sounds a bit innocuous. At first. The bill would create a bipartisan commission to develop recommendations for Congress to address online extremism. The committee would have to balance these recommendations with existing speech protections. But it's easy to see how certain inalienable rights will become more alienable if this committee decides national security interests are more important than the rights of the people it's securing.

When you get into the details, you begin to see how this isn't really about making Congress do more to address the problem. It's about regulating online speech via Congressional action. The end result will be censorship. And self-censorship in response to the chilling effect.

The government-appointed body would be given the power to subpoena communications, a sticking point that raised red flags for First Amendment advocates concerned about government surveillance.

A source familiar with the legislation told The Hill they were immediately concerned that the subpoena power could be abused, questioning whether it would unintentionally create another avenue for the government to obtain private conversations on social media between Americans.

The draft bill would require companies to "make reasonable efforts" to remove any personally identifiable information from any communications they handed over. But that provision has not satisfied tech and privacy groups.

This isn't about moderating public posts on social media platforms. It will likely end up affecting those eventually, but the draft bill appears to allow the committee to target personal communications, which are usually private. Whether or not there are robust protections in place to strip identifying info doesn't really matter. A Congressional committee with the power to subpoena the communications of people not actually under investigation by the committee isn't the sort of thing anyone should be encouraging, no matter the rationale.

Social media platforms have been doing more to address concerns of online radicalization, but their efforts never seem to satisfy political leaders. The efforts have routinely resulted in collateral damage, not the least of which is the removal of evidence of criminal activity from the internet.

Moderation at scale is impossible. The imperfections of algorithms, combined with the human flaws of the thousands of moderators employed by social media platforms, have turned online moderation into a mess that satisfies no one and does harm to free speech protections. Any Congressional rep with the ability to perform a perfunctory social media search can find something to wave around in hearings about online radicalization and internet companies' unwillingness to clean up the web. It doesn't mean they're right. It just shows it's impossible to satisfy everyone.

In this case, the Congressional committee appears to be targeting white nationalist extremists. Just because the target has shifted to homegrown threats doesn't make the proposal any less dangerous. Even if it never results in the subpoenaed harvesting of communications, it could still encourage the federal government (and the local agencies that work with it) to expand existing social media monitoring programs. These also utilize imperfect AI and flawed humans. And they will also result in the over-policing of content. Unfortunately, these efforts will utilize actual police, so it's not just the First Amendment being threatened.

]]>
hey-internet-rando,-the-government-wants-to-know-what-you've-been-talking-ab https://beta.techdirt.com/comment_rss.php?sid=20191016/10345943206
Thu, 3 Oct 2019 10:44:00 PDT DOJ Using The FOSTA Playbook To Attack Encryption Mike Masnick https://beta.techdirt.com/articles/20191002/23130843110/doj-using-fosta-playbook-to-attack-encryption.shtml https://beta.techdirt.com/articles/20191002/23130843110/doj-using-fosta-playbook-to-attack-encryption.shtml For years now, the various DOJ folks pushing to break encryption have whined and complained that the tech industry won't even consider having an adult conversation about encryption. This, of course, has never been true. Indeed, in just the past few weeks we've highlighted two separate examples of attempts to bring together law enforcement folks and technology/cryptography experts to see if there are legitimate ways to move the conversation forward. That first one came up with an interesting and useful framework for judging any conversation about "lawful access" to encrypted communications, while the second demonstrated just how much various tech companies have been doing over the years -- in particular in helping law enforcement deal with the issue of child abuse.

And what do they get for all that? First, a horrific article in the NY Times that accurately highlights the awfulness of child sexual abuse online... but oddly frames the efforts that various tech companies have put into helping law enforcement as... evidence of not caring about the problem. And, of course, suggests that encryption is part of the problem:

After years of uneven monitoring of the material, several major tech companies, including Facebook and Google, stepped up surveillance of their platforms. In interviews, executives with some companies pointed to the voluntary monitoring and the spike in reports as indications of their commitment to addressing the problem.

But police records and emails, as well as interviews with nearly three dozen local, state and federal law enforcement officials, show that some tech companies still fall short. It can take weeks or months for them to respond to questions from the authorities, if they respond at all. Sometimes they respond only to say they have no records, even for reports they initiated.

And when tech companies cooperate fully, encryption and anonymization can create digital hiding places for perpetrators. Facebook announced in March plans to encrypt Messenger, which last year was responsible for nearly 12 million of the 18.4 million worldwide reports of child sexual abuse material, according to people familiar with the reports. Reports to the authorities typically contain more than one image, and last year encompassed the record 45 million photos and videos, according to the National Center for Missing and Exploited Children.

This is following the FOSTA/SESTA game plan. Highlight legitimately horrific examples of terrible things people have done (in FOSTA's case, sex trafficking; here, child porn). Then follow it up by blaming the internet platforms and technology, even when those platforms have actively helped law enforcement over and over again. Encryption is repeatedly cast as truly evil.

Increasingly, criminals are using advanced technologies like encryption to stay ahead of the police.

"Advanced technologies"? And then:

Offenders can cover their tracks by connecting to virtual private networks, which mask their locations; deploying encryption techniques, which can hide their messages and make their hard drives impenetrable; and posting on the dark web, which is inaccessible to conventional browsers.

And again:

Tips included tutorials on how to encrypt and share material without being detected by the authorities.

Yes, encryption can be used to hide bad stuff. No doubt about it. However, it also protects the privacy and security of everyone else as well. There are real tradeoffs here to be discussed -- and we shouldn't forget that one element in that discussion is that law enforcement does have other tools to track down and find those involved in child porn. That encryption blocks some evidence does not mean there are not other ways for them to collect evidence -- as we've seen in many other cases.

But, given that Attorney General William Barr has been on an anti-encryption kick lately, as has FBI Director Chris Wray, it shouldn't be a huge surprise that they've teamed up for a DOJ "symposium" more or less building on the NY Times article (for which the DOJ appears to have been a major source), featuring an entirely one-sided lineup of speakers to discuss the scary-sounding:

Lawless Spaces: Warrant-Proof Encryption and Its Impact On Child Exploitation Cases

Barr is speaking, as is Wray. So is Deputy Attorney General Jeffrey Rosen, who also spent the summer trashing encryption. There are also two foreign speakers. There's Peter Dutton, the Home Affairs Minister from Australia, who led that country's efforts to backdoor encryption with a bunch of ridiculously misleading claims about how companies that offer encryption should be blamed for anyone using it for illegal activity. Then there's the UK's Home Secretary, Priti Patel, who we had just mentioned earlier this week for her statements that encryption "empowers criminals."

There does not appear to be a single cryptographer on the program. There does not appear to be a single technologist on the program. There does not appear to be anyone who can provide even the slightest counterweight to the idea that encryption is just a tool for criminals. Contrast this to the sessions we talked about at the opening of this piece. The Carnegie Endowment and Stanford each actually invited people from a variety of different viewpoints and made sure to have actual experts involved. The DOJ is not doing that. This is pure theater as part of a public relations campaign to undermine the encryption that keeps us all safe.

Yes, you can point out very real and horrifying examples of people abusing just about any technology. But in a sane world, you then start looking at the actual size of the problem, the actual risks, the alternative approaches, and the costs and benefits associated with all of them. Not a single person on the DOJ's list of speakers seems equipped to do that. The fact that we've quoted each and every one of them right here on Techdirt staking out an extreme anti-encryption position over and over again suggests that, unlike the tech industry, which has held and participated in various discussions, the DOJ just wants to set up scare-mongering stories in the press before pushing for legislation that will destroy encryption and put us all at risk. For our safety.

As you'll recall, AG Barr himself had ominously warned that if the tech industry didn't "engage" then eventually there would be some sort of "incident" to "galvanize public opinion" against encryption:

Obviously, the Department would like to engage with the private sector in exploring solutions that will provide lawful access. While we remain open to a cooperative approach, the time to achieve that may be limited. Key countries, including important allies, have been moving toward legislative and regulatory solutions. I think it is prudent to anticipate that a major incident may well occur at any time that will galvanize public opinion on these issues.

Well, the industry was willing to engage and explain the costs of undermining encryption. But rather than understand that, it now looks like Barr is working overtime to manufacture that "major incident" to galvanize public opinion against their own security. It's a shame the NY Times decided to help.

]]>
watch-out https://beta.techdirt.com/comment_rss.php?sid=20191002/23130843110
Fri, 12 Jul 2019 09:29:00 PDT Senator Graham Spreads A Bunch Of Nonsense About 'Protecting Digital Innocence' Online Mike Masnick https://beta.techdirt.com/articles/20190711/01362142564/senator-graham-spreads-bunch-nonsense-about-protecting-digital-innocence-online.shtml https://beta.techdirt.com/articles/20190711/01362142564/senator-graham-spreads-bunch-nonsense-about-protecting-digital-innocence-online.shtml We warned last week that Senator Lindsey Graham was holding a "but think of the children online" moral panic hearing. Indeed, it happened. You can watch the whole 2 hours, but... I wouldn't recommend it (I did it for you, though). Most of it is the usual moral panic, technologically illiterate nonsense we've all come to expect from Congress. Indeed, in a bit of good timing, the Pessimist's Archive just tweeted out a clip of a 1993 Senate hearing in which then Senator Joe Lieberman flipped out about evil video games. Think about this, but two hours, and a wider array of nonsense:

It starts out with a prosecutor from South Carolina, Duffie Stone, moral panicking about basically everything. Encryption is evil. Children are being sex trafficked online. And, um, gangs are recruiting members with (gasp) music videos. Later he complains that some of those kids (gasp!) mock law enforcement in their videos. Something must be done! The second speaker, a law professor, Angela Campbell, claims that we need more laws "for the children!" She goes further and says that the FTC should go after Google and others for not magically stopping scammy companies from existing. Then there was this guy, Christopher McKenna, from an organization ("Protect Young Eyes!") dedicated to moral panics, telling all sorts of unbelievable anecdotes about evil predators stalking young people on Instagram and "grooming" them. Remember, the actual data on this kind of activity shows that it's quite rare (not zero, and that's not excusing it when it does happen, but the speaker makes it sound like every young girl on Instagram is likely to be at risk of sex trafficking). He also asks the government to require an MPAA/ESRB-style "rating" system for apps -- apparently unaware that laws attempting to require such ratings have been struck down as unconstitutional, and the MPAA/ESRB ratings only exist through voluntary agreements.

There's also... um... this:

It's the app where every kid, regardless of age, has access to the Discover News section, where they are taught how to engage in risky sexual behavior, such as hookup, group, anal, or torture sex, how to sell drugs, and how to hide internet activity from parents using "incognito mode."

He's describing Snapchat. I've used Snapchat for years and, uh, I've never come across any of that. Also, the complaint about incognito mode is... pretty messed up, considering that it's a tool for protecting privacy. This is all straight from the standard moral panic playbook. He also claims that on Twitter "hardcore porn and prostitution" are "everywhere" -- which is also news to me (and I use Twitter a lot). And he whines that VPNs are too easy to get -- and then later whines that it's "too hard" to protect our privacy. Um, restricting VPNs will harm our privacy. It's a hodgepodge of utter nonsense.

There was also John Clark from NCMEC -- the National Center for Missing and Exploited Children. NCMEC actually does good work in helping platforms screen out and spot child porn. However, Clark contributes to the scare-mongering about just how awful the internet is. He also flat out lies. At one point during the panel, Senator Ted Cruz asks Clark about FOSTA and what it's done so far. Clark flat out lies and says that FOSTA took down Backpage. This is false. Backpage was taken down and its founders arrested before FOSTA was even signed into law.

The only semi-reasonable panelist was the last one, Stephen Balkam, from the Family Online Safety Institute. While McKenna mocks the idea that "parents have a role" by pointing out that parents can't watch over their kids every hour of every day (duh), Balkam points out that what we should be doing is not watching over our kids all the time, but rather training them and educating them to know how to be good digital citizens online and to avoid trouble. But that kind of message was basically ignored by the Senators, because what fun is actually respecting our kids and teaching them how to be smart internet users? Instead, most of the panel focuses on crazy anecdotes and salacious claims about internet services that make them sound a hell of a lot more insane than any of those platforms actually are.

Later, Senator John Kennedy asks the guy from "Protect Young Eyes" if Apple can build a filter that will magically help parents block kids from ever seeing sexually explicit material. McKenna stumbles and admits he has no idea, leading Balkam to finally have to jump into the conversation (he's the only panelist that no Senator had called on throughout the entire ordeal) to point out that all platforms have some forms of parental controls. But Kennedy cuts him off and says "but can it be done?" Balkam stutters a "yes," which is not accurate -- since Kennedy is asking for something impossible. But then Kennedy suggests that Congress write a law that requires companies like Apple and Google to install filters (something that's already been ruled unconstitutional).

Kennedy's idea is... nutty. He includes the obligatory "I don't know how any of this is done" comment before suggesting a bunch of impossible ideas.

Could Apple, for example, design a program that a parent could opt into, and the instructions to Apple would be "design a program that will filter all information that my daughter or son may see that would be sexually exploitative"? Maybe "filter all pictures or written references to human genitalia." Can that be done? ... Isn't that the short way home here?

[....]

So could we write legislation, or promulgate a rule, that says "here's the thing that a reasonable parent would do to protect his or her child from seeing this stuff." And we do that in conjunction with somebody that has the obvious expertise. And you filter everything. I don't know how to do it. I can't write software. Maybe it's to prevent any pictures of human genitalia. Or prohibit any reference to sexual activity. I don't know. The kids aren't gonna like it, but that's not who we're trying to please here. Why couldn't that be done?

Well, the Constitution is why it can't be done, Senator. Also, a basic understanding of technology. Or the limits of filter technology. Block all mention of sexual activity? Sure, then kids will use slang. Good luck keeping up with that. Block all pictures of genitalia -- then say goodbye to biology texts online. Or pages about breast cancer. This is all stuff that lots of people have studied for decades, and Kennedy is displaying his ignorance about the Constitution, the law, the internet, the technology, and just about everything else as well. Including kids.

Balkam points out that there are lots of private companies already making such filters, but Kennedy keeps saying "can we write a law" and "can we require every device have these filters" and Balkam looks panicked, noting he has no idea whether they can write such a law (answer: they cannot, at least not if they want it to pass Constitutional muster).

Senator Blackburn... brings up Jeffrey Epstein. Who, as far as we know... didn't use the internet to prey on girls. But according to Blackburn, Epstein proves the problems of the internet. Because. Senator Hawley then completely makes up a claim that YouTube is deliberately pushing kids to pedophiles and refuses to do anything about it. He claims -- incorrectly -- that Google admitted that it knows it sends videos of kids to pedophiles (and, he claims, allows the pedophiles to contact the kids) and that it deliberately has decided not to stop this. This misrepresents... basically everything once again.

Senator Thom Tillis then grandstands that it's all the parents' fault -- and if a kid gets a mobile phone and lies about his age, we should be... blaming the parents for "giving the kids a lethal device." No hyperbole and grandstanding there, huh? He's also really focused on "lethality." He later claims that the internet content itself is "lethal."

Towards the end, the Senators all gang up on Section 230. Senator Cruz asks his FOSTA question (leading NCMEC's Clark to falsely state that it was necessary to take down Backpage), and then Blumenthal calls 230 "the elephant in the room" and suggests that there needs to be a "duty of care" to get companies to do anything. It seems like Hawley is already gone by this time, but no one seems to point out that any such duty of care would likely lead to much greater censorship on these platforms, in direct contrast with his demand that the companies censor less.

Nevertheless, Senator Graham closes the hearing by saying that he thinks the companies need to "earn" their CDA 230 protections (which is part of Hawley's nonsense bill). Graham suggests that Congress needs to come up with "best business practices" and platforms should only get 230 protections if they "meet those best business practices."

Who knew the Republican Party was all about dictating business standards. What happened to the party of getting government out of business?

Who knows what will actually come out of this hearing, but it was mostly a bunch of ill-informed or misinformed, technologically illiterate grandstanding and moral panic nonsense. In other words, standard operating procedure for most of Congress.

]]>
moral-panics https://beta.techdirt.com/comment_rss.php?sid=20190711/01362142564
Thu, 20 Jun 2019 12:08:25 PDT No, Your Kid Isn't Growing Horns Because Of Cellphone Use Karl Bode https://beta.techdirt.com/articles/20190620/10225542438/no-your-kid-isnt-growing-horns-because-cellphone-use.shtml https://beta.techdirt.com/articles/20190620/10225542438/no-your-kid-isnt-growing-horns-because-cellphone-use.shtml This week, the Washington Post grabbed plenty of attention for a story that claimed that kids are actually growing "horns" because of cell phone use. The story, which leans on 2016 and 2018 research out of Australia, was cribbing off of this more nuanced piece by the BBC on how skeletal adaptation to modern living is kind of a thing. The Post's more inflammatory take was accompanied by a wide variety of other stories proclaiming that today's children are growing horns and bone spurs because they use their durn cellphones too much!

The Washington Post put it this way, with an accompanying, scary X-Ray pulled from the initial research:

"What we have not yet grasped is the way the tiny machines in front of us are remolding our skeletons, possibly altering not just the behaviors we exhibit but the bodies we inhabit. New research in biomechanics suggests that young people are developing hornlike spikes at the back of their skulls — bone spurs caused by the forward tilt of the head, which shifts weight from the spine to the muscles at the back of the head, causing bone growth in the connecting tendons and ligaments."

The problem is that while the researchers did find that human skeletons are shifting and changing in the modern era due to postural and other behaviors, they weren't able to prove that cellphones were the culprit. There's a wide variety of modern human behaviors that could influence skeletal shifts, from watching television and reading books to terrible posture resulting from a lack of meaningful exercise. Only a few reporters could be bothered to note that at no point did the researchers directly, actually link the "horns" to cellphone use. In fact, technology isn't even mentioned in the source data:

"The researchers don’t mention technology or smartphones at all in their 2018 research, but they do make a statement in the discussion section of their 2016 paper. They make an educated guess that the prevalence of enthesophytes may have to do with “the increased use of hand-held technologies from early childhood."

"Their research does not prove that device use causes these bony appendages. They don’t even claim that device use and appendages are correlated. They simply make an educated guess in the discussion section, pointing to a topic for future research."

As journalist Caroline Haskins notes, the whole hysteria is reminiscent of the "smartphone pinky" scare that bubbled up a few years ago, which proclaimed that people's fingers were being "deformed" by the way they hold their electronic gadgets and smartphones. And it's tangentially related to the recent panic over the "Momo" hoax, which proclaimed that a viral game making the rounds on services like WhatsApp and YouTube involved a demonic-looking chicken lady goading young children into acts of violence or even suicide.

We love a good moral panic. And such panics often go viral because Americans are (if that hadn't been made clear in recent years) immeasurably susceptible to bullshit. But it's a problem made so much worse by a media that can't just focus on the amazing science and technology news and issues of the day, but instead quickly falls prey to nonsensical bullshit to generate additional ad revenue. And because the debunking stories see a quarter (or less) of the attention of the original inflammatory reports ("A lie can travel halfway around the world while the truth is still putting on its shoes," as the old saying goes), there's a huge chunk of the public walking around with fluff and nonsense in their heads where factual data should be.

]]>
another moral techno-panic https://beta.techdirt.com/comment_rss.php?sid=20190620/10225542438
Thu, 23 May 2019 09:38:03 PDT Forget 'Breaking Up' Internet Companies, Senator Josh Hawley Says They Should All Die Because They're Too Popular Mike Masnick https://beta.techdirt.com/articles/20190522/10364942262/forget-breaking-up-internet-companies-senator-josh-hawley-says-they-should-all-die-because-theyre-too-popular.shtml https://beta.techdirt.com/articles/20190522/10364942262/forget-breaking-up-internet-companies-senator-josh-hawley-says-they-should-all-die-because-theyre-too-popular.shtml We've had our issues with politicians like Senator Elizabeth Warren whose plans to "break up" big internet companies don't seem to make much sense, but it appears that Senator Josh Hawley has decided to take things to another level of insanity altogether. In an op-ed for USA Today, Hawley makes the argument that Facebook, Instagram and Twitter should all die. And while there are plenty of people who appear to support a dead Facebook in response to that company's long history of sketchy practices, that's not really the reason Hawley wants them dead.

He wants them dead because they're too popular. Hawley cherry picks some evidence to suggest that using social media is bad for our health.

And in order to guarantee an audience big enough to make their ads profitable, big tech has developed a business model designed to do one thing above all: addict.

Facebook, Twitter, Instagram — they devote massive amounts of money and the best years of some of the nation’s brightest minds to developing new schemes to hijack their users’ neural circuitry. That’s because social media only works — to make money, anyway — if it consumes users’ time and attention, day after day. It needs to replace the various activities we enjoyed and did perfectly well before social media existed.

This hearkens back to nearly every other overblown, ridiculous moral panic of yesteryear. Television, radio, video games, novels, comic books, Dungeons & Dragons, pinball, rock and roll. They've all received this nutty treatment. Even chess.

"A pernicious excitement to learn and play chess has spread all over the country, and numerous clubs for practicing this game have been formed in cities and villages. Why should we regret this? It may be asked. We answer, chess is a mere amusement of a very inferior character, which robs the mind of valuable time that might be devoted to nobler acquirements, while it affords no benefit whatever to the body. Chess has acquired a high reputation as being a means to discipline the mind, but persons engaged in sedentary occupations should never practice this cheerless game; they require out-door exercises--not this sort of mental gladiatorship."

Or, remember the report from 1909 by the "NY Society for the Prevention of Cruelty to Children"

This new form of entertainment has gone far to blast maidenhood ... Depraved adults with candies and pennies beguile children with the inevitable result. The Society has prosecuted many for leading girls astray through these picture shows, but GOD alone knows how many are leading dissolute lives begun at the 'moving pictures.'

Hawley's op-ed is of a piece with those previous moral panics. It's kind of amusing for a guy who claims to be a "free market, less government intervention" conservative to now stand up and argue for literally shutting down private enterprises because they're popular, but politics and hypocrisy go hand in hand.

Of course, to make his point, Hawley wants to tie popular social media to another moral panic: "drugs!"

Let’s be clear. This is a digital drug. And the addiction is the point. Addiction is what Mark Zuckerberg is selling.

Like other drugs, this one hurts its users. Attention spans dull. Tempers quicken. Relationships fray.

And those are the benign effects. The Journal of Pediatrics recently noted a surge in attempted suicide: more than double the attempts over the last decade for those under 19, with a tripling among girls and young women 10 to 24. The study’s authors can’t prove social media is to blame, but they strongly suspect it plays a critical role. Congress has a duty to investigate that potential link further.

Meanwhile, as we noted just earlier this week, another comprehensive study did not find any evidence to support the idea that social media is driving depression. But who needs facts when you have a moral panic to sell.

]]>
say what now? https://beta.techdirt.com/comment_rss.php?sid=20190522/10364942262
Tue, 19 Feb 2019 06:07:22 PST United States Gifted With 33rd National Emergency By President Who Says It's Not Really An Emergency Tim Cushing https://beta.techdirt.com/articles/20190216/09011641612/united-states-gifted-with-33rd-national-emergency-president-who-says-not-really-emergency.shtml https://beta.techdirt.com/articles/20190216/09011641612/united-states-gifted-with-33rd-national-emergency-president-who-says-not-really-emergency.shtml President Trump has declared a national emergency.

This is a thing presidents can do. And they've been doing it since 1979, when President Carter responded to a hostage situation in Iran by declaring a national emergency. We've spent four decades in perpetual emergency mode. With Trump's announcement, Americans are now subject to 33 concurrent national emergencies, all of which grant the president a bunch of extra (and surprising!) powers, and encourage the government to start clawing back rights and privileges from the American people.

The declaration on the White House website is at least mostly coherent. It says there's a national security/humanitarian crisis at the southern border because, um, immigrants are still trying to migrate to the United States.

The current situation at the southern border presents a border security and humanitarian crisis that threatens core national security interests and constitutes a national emergency. The southern border is a major entry point for criminals, gang members, and illicit narcotics. The problem of large-scale unlawful migration through the southern border is long-standing, and despite the executive branch's exercise of existing statutory authorities, the situation has worsened in certain respects in recent years. In particular, recent years have seen sharp increases in the number of family units entering and seeking entry to the United States and an inability to provide detention space for many of these aliens while their removal proceedings are pending. If not detained, such aliens are often released into the country and are often difficult to remove from the United States because they fail to appear for hearings, do not comply with orders of removal, or are otherwise difficult to locate.

This statement may be coherent, but it's also mostly untrue. Southern border apprehensions are down to a quarter of the peak they reached in 2000. There have been increases in recent years of families seeking entry, but how that translates to a national security emergency is anyone's guess. The claim that immigrants blow off hearings is completely false. The DOJ's own data shows that 60-75% of non-detained immigrants show up for court appearances.

The other fudged claim -- somewhat muddied in the White House statement but somehow made more clear during the President's rambling press conference -- is the assertion that a porous border without The Wall/Fence is allowing drugs and trafficked humans to come pouring into the United States. The DEA has repeatedly stated that most drugs make their way into the US through legal points of entry. Why? Because it's way more efficient to move drugs with large vehicles, rather than a handful of mules walking through unguarded areas.

President Trump completely undercut his own national emergency declaration during his Rose Garden press conference. Trump said he didn't actually need to declare an emergency to secure border wall funds, but thought this would be faster than the usual appropriations process.

"I could do the wall over a longer period of time. I didn't need to do this, but I'd rather do it much faster."

We are subject to a national emergency that isn't an emergency, based on assumptions made by a president who refuses to listen to the government agencies he's involving in his manufactured crisis. On top of that, this is only the second declared national emergency that actively involves the military. This should be of great concern to all Americans, including Trump supporters, as it siphons resources usually deployed elsewhere in the world and directs them towards a domestic crisis that isn't actually a crisis.

The only other national emergency to involve the US military was the one George W. Bush issued three days after the 9/11 attacks in 2001. We've all witnessed the explosive expansion of government power flowing from this declaration and other Congressional responses to the terrorist attacks. Here we are with no attacks, living in an era of unprecedented safety, and the president of the country has just invoked expansive powers to deal with an immigration influx that has been trending downward for nearly two decades.

]]>
nation-forced-to-hold-breath-until-president-given-what-he-wants https://beta.techdirt.com/comment_rss.php?sid=20190216/09011641612
Tue, 12 Feb 2019 09:53:16 PST Key Supporter Of FOSTA, Cindy McCain, Misidentifies 'Different Ethnicity' Child; Claims Credit For Stopping Sex Trafficking That Wasn't Mike Masnick https://beta.techdirt.com/articles/20190209/00191041565/key-supporter-fosta-cindy-mccain-misidentifies-different-ethnicity-child-claims-credit-stopping-sex-trafficking-that-wasnt.shtml https://beta.techdirt.com/articles/20190209/00191041565/key-supporter-fosta-cindy-mccain-misidentifies-different-ethnicity-child-claims-credit-stopping-sex-trafficking-that-wasnt.shtml In the wake of 9/11, the Metropolitan Transit Authority (MTA) in New York City hired an ad agency, Korey Kay & Partners, to come up with a "creative exercise" in dealing with the post-9/11 world. They came up with slogan "If you see something, say something" and plastered it all over subways. Incredibly, the MTA trademarked the term (despite its lack of "use in commerce") and later licensed it to DHS (insanely, the MTA has been known to threaten others for using the slogan). However, despite now sounding like common wisdom, the program has been an utter disaster that has not stopped a single terrorist, but has created massive hassles for innocent people, and law enforcement who have to deal with busybodies freaking out about "weird stuff."

Take, for example: Cindy McCain. The wife of the late Senator John McCain recently decided she had seen something and had to say something. Specifically, as she herself claims, she was at an airport and saw a woman with a child of a "different ethnicity." And, rather than thinking "how nice" or "maybe I shouldn't be racist," she thought "I must go tell the police." As she told an Arizona radio station:

“I came in from a trip I’d been on and I spotted—it looked odd—it was a woman of a different ethnicity than the child, this little toddler she had, and something didn’t click with me,” McCain said in the radio interview. “I went over to the police and told them what I saw, and they went over and questioned her, and, by God, she was trafficking that kid.”

By God, no she wasn't. The police did go check it out, and whatever McCain thinks happened... did not.

Phoenix police said Wednesday that while officers did respond to the Jan. 30 call, at McCain’s request, they were able to determine “there was no evidence of criminal conduct or child endangerment.”

McCain is now trying to brush away the criticism by insisting she was just seeing something and saying something.

Of course, there's more backstory here and it's kind of important. Over the last few years, McCain has been really heavily focused on playing up the exaggerated moral panic around sex trafficking. As we've discussed many times, sex trafficking is a real issue, but a very small one. The numbers around it are massively exaggerated or distorted, leading to crazy moral panics, and a desire by the police and the press to rush to talk about breaking up sex trafficking rings that don't seem to actually exist.

A more cynical person than I might point out that Cindy McCain's focus on "sex trafficking" seemed to coincide with her focus on Backpage, and a desire to extract some level of revenge from that company's owners, who also, for a time, owned the Phoenix New Times, which published a series of unflattering articles about Cindy McCain (and John McCain).

But, even leaving that aside, in the run up to the debate over FOSTA, multiple people told me that more sane and reasonable versions of the bill were well positioned to move forward until Cindy McCain got involved. Prior to that, there had been real, serious discussions, understanding the problems with the FOSTA/SESTA approach, and an attempt to create a more reasonable policy to deal with (what little) sex trafficking that actually happens online. However, then Cindy McCain "got involved" and basically everyone was told that the awful approach found in FOSTA was what would be in the law.

Of course, since then, we've highlighted how FOSTA has failed miserably. Just as many of us had predicted, it has resulted in widespread censorship, many closed services, an increase in online sex ads, and more women's lives at risk.

And there's Cindy McCain, "seeing something," "saying something," and then taking credit for stopping trafficking when she was really just hassling a diverse family. Maybe we should stop listening to Cindy McCain on this particular topic.

]]>
see-something,-shut-up https://beta.techdirt.com/comment_rss.php?sid=20190209/00191041565
Mon, 4 Feb 2019 10:41:16 PST Another Pre-Super Bowl 'Sex Trafficking Sting' Busts A Bunch Of People Trying To Buy Sex From Cops Pretending To Be Teens Tim Cushing https://beta.techdirt.com/articles/20190202/13591541514/another-pre-super-bowl-sex-trafficking-sting-busts-bunch-people-trying-to-buy-sex-cops-pretending-to-be-teens.shtml https://beta.techdirt.com/articles/20190202/13591541514/another-pre-super-bowl-sex-trafficking-sting-busts-bunch-people-trying-to-buy-sex-cops-pretending-to-be-teens.shtml Every Super Bowl is greeted with the same breathless stories about sex trafficking. As thousands of visitors descend on the unlucky host of The Big Game™, local law enforcement agencies -- sometimes accompanied by the DHS -- are there to claim there will be a sex trafficking victim for every Super Bowl attendee. Hundreds of law enforcement officers perform sweeps costing taxpayers millions of dollars. And every year, it's the same story: very little sex trafficking found, but a whole lot of sex buyers and sex workers are cited and/or jailed.

Prostitution may be the oldest profession, but it couldn't have been far ahead of "law enforcement spokesperson." Someone is always on the scene to spout meaningless numbers to press stenographers in order to perpetuate the myth that large gatherings = sex trafficking en masse. Few journalists dig into these claims.

Elizabeth Nolan Brown of Reason does perform the due diligence local journalists won't. Following the 2017 Super Bowl, Brown obtained booking sheets to see if law enforcement had snagged dozens of sex traffickers in the 750+ arrests made pre-Super Bowl.

Super Bowl 2017 was held in Houston, which sits in Harris County, Texas. Each day, the county posts its previous 24-hours worth of arrests on the Harris County Sheriff's Office (HCSO) website. The arrest report for February 6, 2017, contains more than 11 pages of arrests, including 12 for prostitution, a lot of DUI and driving-on-a-suspended-license charges, some marijuana possession, several assaults, theft, forgery, driving without a seatbelt, one "parent contributing to truancy," and a few for racing on the highway. The February 7 HCSO arrest log shows three arrests for prostitution. But neither reveals a single arrest for sex trafficking, soliciting a minor, pimping, promoting prostitution, compelling prostitution, or any other charges that might suggest forced or voluntary sex trade.

Maggie McNeill, an actual sex worker, has also debunked victorious pre-Super Bowl claims delivered by police hype men. According to law enforcement, "42" victims of sex trafficking were "contacted" during prostitution sweeps prior to the 2016 Super Bowl. But there's not 42 of anything in this mess:

42 potential human trafficking victims were contacted during the three weeks leading up to the big game…More than half of the victims were put in direct contact with an advocate or support group and…the task force arrested or cited 30…alleged clients of possible prostitutes…one “girl” was arrested for prostitution and resisting arrest…another “girl” was cited for prostitution…four people were cited for aiding in prostitution, two were cited for loitering with intent for prostitution, one potential victim disclosed other sex crimes and kidnapping, and two human trafficking cases involved a 15-year-old and a 17-year-old from Sacramento which resulted in a human trafficking arrest…one arrest for violation of a domestic violence restraining order and three arrests for other warrants…

As McNeill points out, the 15-year-old was a runaway with mental health issues -- a far cry from the panicked claims that sex traffickers are lurking everywhere to poach teens off the street. As for the 17-year-old, she was apparently being "trafficked" by a slightly older friend, who was also arrested.

As even McNeill notes, sex trafficking does happen. It just happens far less frequently than government officials claim, even as they use it to justify sweeps that do nothing more than temporarily inconvenience a few johns or, worse, punch holes in legal protections for third-party service providers. None of what's detailed above justifies all of the following, as summarized by McNeill in her inimitable prose:

Vast amounts of hype, blanketing an entire metropolis with pigs, spooks and g-men, millions wasted, thousands of words of anti-whore propaganda bloviated out, and for what? 21 women were given a phone number, 30 guys were tricked by cops with fake ads, two adult and two underage sex workers were arrested, six other people were arrested on bullshit charges, and four people arrested on warrants. The other 21 women were essentially just made up; “potential trafficking victims”? Really? Yeah, well every sex worker in the Bay Area is a potential police brutality and police rape victim, but I don’t see them counting that statistic.

Last year's Super Bowl featured the same stories. Law enforcement officials claimed they had busted 94 men in a "sex trafficking sting." In reality, all they'd done was arrest 94 men for trying to solicit sex from law enforcement officers pretending to be underage girls. Again, a couple dozen "potential victims" were "contacted" and referred to outreach groups, but nowhere in the arrest records will you find an actual sex trafficker, despite 57 officers putting in "20 hour days" for 11 straight days prior to the Super Bowl.

This year's stories have arrived and the numbers delivered by law enforcement again suggest two possibilities: either sex trafficking isn't nearly as prevalent as they believe it is, or the business model has some serious flaws.

Days before the New England Patriots and Los Angeles Rams are set to face off in Atlanta, 33 people were busted for sex trafficking in the lead-up to the big game, federal law enforcement officials announced Wednesday.

In a press conference outlining Super Bowl LIII safety measures, Department of Homeland Security Secretary Kirstjen Nielsen announced the arrests, adding four victims have been rescued.

33 traffickers and only four victims. Sure, the ratio's never going to be one-to-one, but you'd think it would be a lot closer than that. It's unlikely any trafficker is trafficking only one victim. In order for it to be the multi-billion dollar industry alarmists claim it is, traffickers most likely have at least a few victims each just to ensure profitability. Even if these traffickers are cut-and-run experts, you'd think the ratio would be more like three-to-one at worst, rather than ten-to-one.

The narrative about mass sex trafficking -- probably involving large numbers of underage victims -- is already falling apart. The DHS is being willfully vague about the arrests, but statements from local law enforcement agencies point to a bunch of cops sitting at desks pretending to be teenagers. And there appears to be a lack of underage victims, which kind of undercuts the usefulness of playing internet dress-up with pervs around the country.

On Jan. 23 and 24, Homeland Security assisted in a joint operation in Douglas County using undercover officers, social media sites and local hotel rooms, the Douglasville Police Department said Wednesday. Sixteen people were arrested, according to police, and the youngest person involved was 17. The timing of the crackdown was related to the Super Bowl, police said.

I not so boldly predict this narrative will collapse on itself just like it has following every other Super Bowl. The Big Game may draw big numbers, but it's not a sex trafficking mecca, nor is it an almost-invisible symptom of a supposedly multi-billion dollar problem.

]]>
sex-trafficking-still-a-relatively-safe-profession https://beta.techdirt.com/comment_rss.php?sid=20190202/13591541514
Tue, 29 Jan 2019 09:37:15 PST How My High School Destroyed An Immigrant Kid's Life Because He Drew The School's Mascot Mike Masnick https://beta.techdirt.com/articles/20190121/12100941436/how-my-high-school-destroyed-immigrant-kids-life-because-he-drew-schools-mascot.shtml https://beta.techdirt.com/articles/20190121/12100941436/how-my-high-school-destroyed-immigrant-kids-life-because-he-drew-schools-mascot.shtml Late last year, Pro Publica and the NY Times published an incredible, long and infuriating article, mostly about how a high school in NY destroyed an immigrant student's life, due to a mixture of moral panics about "MS-13" gang activity (whipped up by the federal government), over-aggressive policing within schools, and deeply troubling decisions by ICE. The story touches on a number of things that we normally write about -- and I've been stewing over writing a post for weeks. The topics herein are most frequently covered on this site by Tim Cushing, rather than me. But I took this article, because the high school at the center of the article, Huntington High School in Suffolk County, New York, is the high school I attended. It's the high school I went to for 4 years, and it's the high school where I gave a speech at graduation on the same football field you can see in one of the photos used to illustrate the story.

Everything about the article is infuriating in so many ways that it's been difficult to figure out where to even start, but if we have to start someplace, let's start with this: the rise of embedding police into schools -- so-called School Resource Officers (SROs), who are employed by the local police, but whose "beat" is a school. Those officers report to the local police department and not the school, and can, and frequently do, have different priorities. We've long raised concerns about the increased policing of schools. Traditionally, schools handled their own disciplinary matters directly, within the school, with a focus on what was best for the learning environment of the students. They were not always good at this, but adding in an element where the end result could be criminal charges has always seemed misguided, and never more so than in this particular story and the case of "Alex" in the news story.

As the article notes, this trend of putting police in schools came about as a result of the original "famous" school shooting, the one at Columbine, which resulted in a variety of moral panics:

CONGRESS FIRST PROVIDED funding to bring full-time police officers into schools after the 1999 Columbine shooting. The number of these resource officers has doubled in the last decade, according to the National Association of School Resource Officers. Some 80 percent of high schools with more than 1,000 students have them. Schools with large populations of black and Latino students are more likely to have a resource officer than schools that are majority white. After the school shooting this year in Parkland, Florida, Trump called for police officers on every campus.

The position of school resource officer is a hybrid of conflicting roles: counselor, teacher and cop. “You have to have a person who can be caring and loving, but on the flip of a switch, turn into a law-enforcement warrior,” said Mac Hardy, a spokesman for the resource officers association.

That was a few years after I had graduated from the school. We had security guards, but they were not actually police. They didn't carry guns. They didn't have the power to arrest people. And they certainly didn't write up secret reports and send them to ICE leading to the deportation of students. But, apparently, we live in different times.

The second disturbing moral panic in the story is around gang activity, and specifically worries about MS-13.

Huntington High administrators say there has never been any MS-13 presence at the school. Unlike a number of other Long Island high schools, Huntington High says nothing about gang activity on its website; instead it offers guidance on throwing snowballs (“dangerous”) and keeping the hallways clear (“essential”).

That sounds about right. I'm sure there is some gang activity and some violence among students at the school. There was when I was there. I don't know how accurate it is, but I remember being told at the time that Huntington had been selected for some study because its population was a pretty close match for the demographic diversity of the entire US. You had some rich families, some poor, and plenty of middle class. You had kids of every color and nationality. There were all sorts of groupings. The first time I saw a handgun was when a student (who I barely knew) was showing it off in his locker. There were fights and localized gangs, but hardly anything that crazy. It really doesn't sound like that much has changed. But, with the President and others continually exaggerating the idea of "MS-13 gangs," some police and some schools seem to have bought into the moral panic -- including the police sent to high schools. And even though some have suggested not going overboard with these things, that kind of nuance appears to have gotten lost.

In Suffolk County, although resource officers have been in the schools for two decades, their roles are expanding. In 2017, the Police Department sent officers into Huntington High and other schools to train administrators and teachers to identify gang members. The presentations focused on items like plastic rosaries, blue bandannas, anything with horns and the numbers 504 and 503, written in notebooks or on hands. One slide, which was used in community presentations, featured a group of young men holding up the Salvadoran flag at a Central American pride parade.

Some police officers cautioned that these symbols could also mean a student was being pressured to join or just trying to look cool, and that symbols can have multiple meanings. The same way metal-heads might draw a pentagram, or wannabe punks might draw the anarchy sign (a letter A inside a circle), some students might draw MS-13 symbols, unaware that adults could take those doodles as proof of membership. One law-enforcement officer told me about being called in by a Long Island school after a student drew the signs for both MS-13 and a rival Mexican gang in his notebook. The officer explained that a real gang member would not draw signs of a gang he wasn’t a member of — the drawings were not incriminating, just dumb. But not all officers were as clear about these nuances.

In the case of Alex, in this story, these kinds of warnings apparently created the problem. His problems started... because he wore some blue sneakers and a security guard thought it was a gang symbol:

Alex knew that MS-13 claimed Nike Cortez shoes and blue bandannas, so he made sure to avoid them. In the spring of 2017, school security guards stopped him as he walked down the hall wearing bright blue sneakers that his mother picked out for him as a gift for accompanying her to an immigration appointment in Queens. They said the blue of the shoes was the color of MS-13. They also searched Alex’s bag, on which he had written “504,” and found that he had doodled the name of his Honduran hometown and a devil with horns. Without explaining why, the security guards photographed the drawings before giving Alex his books back. When Alex got home that day, he buried the shoes in a closet and didn’t wear them again, even on weekends.

Even trying not to wear anything that might mark him as a gang member was interpreted... as being a gang member:

He stopped wearing his Honduran sports jerseys and his bracelet with the colors of the flag. He avoided talking to anyone he didn’t already know well. He and his two best friends decided it was safest to wear all black to school to avoid being tagged as gang members. But when they showed up in their matching outfits, the security guards said they couldn’t dress like that because it looked as if they were trying to start a gang.

Oh, and about that "devil" drawing mentioned above. That apparently was a key part of where everything went wrong for Alex. Except... the freaking school mascot is the "Blue Devil" and has been since at least well before I went to the school. And that's what the drawing was:

A few weeks later, on May 4, 2017, Alex was daydreaming as his algebra teacher introduced yet another indecipherable math operation. Without thinking, he began doodling in pencil on the school calculator he was using. When the bell rang, he handed it back in. That afternoon, security staff pulled Alex out of English class and took him to the office of Brenden Cusack, the principal. When Alex walked in, he saw the calculator on Cusack’s desk. Through an interpreter, Cusack asked Alex if he had drawn the number 504 on the case, and Alex said he had. Then Cusack produced the security guard’s photos of Alex’s drawing of devil horns and told him that the doodles signified MS-13.

Alex told me he would never have written on a wall or desk in this American school, and he knew it was wrong to draw on the school-issued calculator, but he was surprised to be taken to the principal for something he saw as a form of fidgeting. He tried to defend himself; the devil was the school mascot, after all, and 504 was the Honduras country code. “For the police, it’s a gang thing, but for us, it’s about being proud of your country,” he later told me. To Cusack, Alex’s distinctions didn’t seem to matter. The principal signed an incident report that said Alex had been caught in possession of “gang paraphernalia” and had been “defacing school property with gang signs.” Alex said that Cusack told him that he would be suspended for three days and that the doodles would be reported to Fiorillo.

Indeed, Alex appears to have been proud to show off his school spirit:

When his parents had extra money, he asked for a T-shirt, sweatshirt or backpack emblazoned with Huntington High’s name and its mascot, the blue devil with horns.

But combine all of this and you end up with him being deported as a supposed MS-13 member. First, as mentioned above, he was suspended for three days over this moral panic concerning his doodling of the school mascot. Then, apparently, the local school police officer, Andrew Fiorillo, was given the "incident report" about this, leading him to share that with his police department... which later (of course) shared the information with ICE.

It is most likely that as Alex sat at home during his suspension, Fiorillo received word of the doodling incident. While Fiorillo told me he didn’t remember details about Alex’s case, Huntington High has a policy of calling him in as soon as a staff member sees something that could be gang-related, according to a former principal, Carmela Leonardi, who retired in 2015. “The minute you see a gang sign, you need to intervene,” she said. “First, we’d try to get Drew involved, and say, ‘Have you seen this kid outside of the school talking to people?’ Because sometimes you do that in your notebook because you’re trying to seem cool, or because you’re a little idiot.”

Once Fiorillo knew about Alex’s drawings, he would have had to fill out a form and send the information on to the department’s criminal-intelligence unit. Although Suffolk County school resource officers are allowed to use their judgment about reporting infractions like marijuana possession or writing on school walls, their 2017 handbook requires them to write up gang activity, no matter how trivial. School resource officers are not detectives, and they don’t generally go further than passing on what they are told and observe themselves, according to Gerard Gigante, Suffolk County’s chief of detectives.

There's a lot more in the story, but a few months later, out of the blue, ICE showed up at his house and detained him. He had no idea why, but that was the last time he saw his home in Huntington.

... when the ICE agents came to Alex’s house on June 14, 2017, he was shocked into silence. It was only when they were far from Huntington, passing through unfamiliar, rundown Long Island towns, that he was able to get out the words to ask why he was being arrested. Alex says the agent first asked him to guess, and then told him, “We received a report a while ago from the school that you were a gang member, and that’s why.” Behind the tinted windows, his confusion resolved into fear for himself and his parents. “I felt so bad,” he said, “because I was thinking that my mom and dad were going to suffer.”

The article details how everyone just kept passing the buck, rather than taking responsibility for this weird game of disciplinary telephone, where a doodle of the school mascot eventually leads to deportation:

When I asked Fiorillo if he had known that his information was shared with ICE, he demurred. “I can’t speak to what they do, they being a federal government agency,” he said. “I don’t work with them.” Testimony at an immigration hearing by another Suffolk County school resource officer, George Politis of Brentwood High, whose information collected in school was found in ICE memos, shed some light on the process. Asked what happened after he wrote a report, he said: “It’s submitted, and then I don’t know how it’s disseminated from there. We enter it on a computer, and then it goes to whoever wants to read it within the department.”

Meanwhile, the high school -- my freaking high school -- did nothing to help. In fact, they appeared to actively block any attempt to help, with the school principal claiming they couldn't help for privacy reasons:

Palacios asked his client’s teachers for letters of support. But the teachers refused, saying the administration wouldn’t allow it. Alex’s father and the parents of many of the other detained Huntington students also approached their children’s teachers for letters and were also turned down. Cusack, the principal, told me he had been caught off guard by the requests and worried that having staff write about students to third parties would violate students’ privacy rights.

The article notes that the ACLU sued over a large number of similar situations (though, because Alex had just turned 19 and was no longer considered a minor, his case was not included). The result of that lawsuit showed that this combination of moral panics, school police officers, and ICE gone nuts meant a bunch of kids being detained (and some deported) over little more than random accusations that some of them might have done something vaguely gang-like.

The lead case involved a Brentwood High student, Noel (his middle name), who ICE said was dangerous because he had been seen with suspected MS-13 members and had written the number 503 in a school notebook. ICE labeled Noel a “gang member” when he was detained, then downgraded him to a “probable member” and finally, on the day of his hearing, settled on calling him a person identified by a school resource officer as “associated” with the gang. In an immigration courthouse in lower Manhattan, Judge Aviva Poczter ordered Noel’s immediate release, noting that 503 is a country code. “I think this is slim, slim evidence on which to base the continuing detention of an unaccompanied child,” Poczter said.

In other hearings, ICE presented evidence pulled from the Suffolk Police Department’s gang database. Again and again, judges found that the material — a student cited for a gang tattoo who didn’t have a tattoo; a photo of a group of suspected gang members that did not include the student in question — was far too weak or inaccurate to detain the students. In the cases involving Huntington students, the “Huntington High resource officer” kept coming up. In one case, he reported that one student was “found to be in possession of MS-13 drawings in his school work.” In another, he reported that a student had written “MS13” on his arm. Ultimately, 30 of the 32 teenagers in the ACLU lawsuit were freed, including Palacios’ client, who returned to school.

The article goes on and on with much more detail, and background about Alex and his family -- and how a judge in his asylum case ignored the (lack of) evidence and ordered him deported. And then, the government kept pressuring him not to fight deportation, basically making his life a living hell until he felt he had no choice but to accept deportation.

All because he'd drawn the freaking school mascot. And because we've put police where they don't belong. And because of moral panics over "gang violence" that is not nearly as big a problem as gets hyped up by the media... and the President of the United States (for a good backgrounder on the actual threats of MS-13, I recommend the thorough This American Life episode, which shows (1) that MS-13 is much smaller than people claim, (2) that the violence is mostly directed at other immigrant kids, (3) that the police who claim to be so concerned about MS-13 seem to mostly ignore or deny actual reports of MS-13 violence when it involves immigrant kids, and (4) that the police only really care in the rare instances when it impacts white, American-born people).

Meanwhile, the Department of Homeland Security is celebrating this program of detaining and deporting kids who probably haven't done anything wrong as they continue to expand it:

But across Long Island, immigrant students who get in trouble for minor offenses still risk the same chain of overreactions that led to Alex’s deportation. In August 2018, the school district for Bellport High banned students from drawing devil horns and the numbers 503 and 504, or posting them on their private social-media pages. By December, the ACLU identified about 20 new minors around the country arrested by ICE on shaky gang claims, and it sued to force ICE to reveal the total number of minors who have been detained. ICE now says Operation Matador will be permanent on Long Island. This fall, the initiative won an annual award from the Department of Homeland Security for best new ICE program.

After the article came out, the school district posted a letter in response, which calls the details of the article "upsetting," but hardly suggests that the school is engaging in any serious self-reflection on its role in all of this:

While it would be simple to argue statements and context in numerous places within the article, it does not change the fact that the events, as presented, are beyond upsetting. We deeply regret the harm faced by any family in our community who has been separated from a child. In that light, systems and processes at the high school will be reviewed thoroughly in an effort to maintain a safe haven, as well as the happiness and well-being of all students. We could not ask for a more caring and compassionate group of school staff members, who routinely place the needs of children before their own.

And while it says that it will do this "thorough" review, the letter, at the same time, suggests only minor modifications to having a police officer in the school:

We have enjoyed a productive working relationship with the area’s SRO through the years. He has helped and guided numerous students and families in our district and others. In light of current national and local concerns, however, we believe that we must advocate for an additional layer of organization addressing the relationship between schools districts and the Police Department. This can be accomplished through formulation of a Memorandum of Understanding. It is our firm belief that such an agreement would establish formal procedural guidelines associated with the SRO position, as well as with information flow and restrictions. It is our additional belief that this would not only provide guidance and protection for schools, school staff and students, but for the SRO’s and Department as well.

That seems like too little too late.

Honestly, so much of the article is a demonstration of how little things snowball and overreactions create horrific situations. Putting police in schools was never a good idea -- but extra fear about high-profile school shootings encouraged doing that as a "solution" that isn't much of a solution (how often have you heard about SROs stopping a school shooting?). The panic over MS-13 and "gangs" has resulted in people freaking out over anything they perceive as a gang indicator. In many ways, it actually reminds me of the "Satanic Panic" from back in the 1980s, when adults panicked over "the kids" somehow being evil, seizing on even the slightest "evidence" to support their own delusions.

It is deeply disturbing that this happens anywhere, but the fact that I'm so familiar with this particular school makes it that much more painful to me, personally. That school, its teachers and other students, are certainly a big part of who I am today. And today I'm ashamed that that very same school had any role in this travesty, completely ruining a kid's life because he had a little school spirit (likely much more school spirit than I ever had).

While writing this, I was trying to recall the details of the graduation speech I gave 25 years ago at Huntington High School. It's possible that the printed out text is in a box somewhere at my parents' house -- which is still mere blocks away from the school. I don't remember it exactly, but I do recall, with tremendous clarity, that the key theme was about learning how to keep things in perspective, and about not getting carried away, especially based on trends or peer pressure. It probably was not a very good speech (a friend at the time noted that the other speaker that day gave a "reach for the clouds" message, while mine was a "but keep your feet planted on the ground" kind of speech). However, it certainly seems that many, many people these days could gain from internalizing that message -- including at the very high school I went to.

]]>
blue devils https://beta.techdirt.com/comment_rss.php?sid=20190121/12100941436
Fri, 9 Nov 2018 10:46:32 PST Manhattan DA Cy Vance Says The Only Solution To Device Encryption Is Federally-Mandated Backdoors Tim Cushing https://beta.techdirt.com/articles/20181105/11024640983/manhattan-da-cy-vance-says-only-solution-to-device-encryption-is-federally-mandated-backdoors.shtml https://beta.techdirt.com/articles/20181105/11024640983/manhattan-da-cy-vance-says-only-solution-to-device-encryption-is-federally-mandated-backdoors.shtml Because no one has passed legislation (federal or state) mandating encryption backdoors, Manhattan DA Cy Vance has to publish another anti-encryption report. An annual tradition dating back to 2014 -- the year Apple announced default encryption for devices -- the DA's "Smartphone Encryption and Public Safety" report [PDF] is full of the same old arguments about "lawful access" and evidence-free assertions about criminals winning the tech arms race. (h/t Riana Pfefferkorn)

You'd think there would be some scaling back on the alarmism, what with the FBI finally admitting its locked device count had been the victim of software-based hyperinflation. (Five months later, we're still waiting for the FBI to update its number of locked devices.) But there isn't. Vance still presents encryption as an insurmountable problem, pointing mainly to Apple's repeated patching of security holes cops also found useful as his leading indicator.

The report is a little shorter this year, but it does contain just enough stuff to be persuasive to those easily persuaded by emotional appeals. Vance runs through a short list of awful crimes solved by device access (child porn, assault) and another list of crimes unsolved (molestation, murder), designed to make people's hearts do all their thinking. While it's certainly true some horrible criminal acts will directly implicate device encryption, the fact of the matter is that a majority of the locked phone-centric criminal acts are the type that won't make headlines or motivate lawmakers.

More than a third of these cases involve minor crimes like theft and check kiting. Another 20% consists of "sex crimes," which encompasses prostitution -- a crime where law enforcement sometimes chooses to believe the device itself is an "instrument of crime," never mind what other evidence might be hidden inside it.

So, more than half the crime involving locked phones isn't the sort of stuff that suggests encryption backdoors are the key to making New York City a safer place to reside. The stuff Vance throws in about unlocked devices producing exonerating evidence is a dodge. It's meant to show how granting law enforcement carte blanche access would be a net benefit for the public. But the examples given use stuff like cell site location info and social media app data -- things that could be obtained from third parties without having to go through the locked phone.

Then there's the other part of this argument Vance leaves completely undiscussed: if someone's phone contains exonerating evidence, it's very likely they'll provide officers with this evidence voluntarily, either by unlocking the device or handing over the relevant info/files. Using the very small percentage of cases where exonerating evidence may be recovered from locked phones as an argument for mandated backdoors is incredibly disingenuous.

And that's all this "report" is: a petition for federally-legislated encryption backdoors.

III. Federal Legislation Remains the Only Answer

[...]

For the reasons advanced in each of our prior Reports, national legislation of the sort we have proposed remains the most rational and least intrusive means to require device manufacturers to comply with lawful court orders in serious criminal cases upon a finding of probable cause.

"Most rational and least intrusive." I guess creating new security holes in millions of personal devices isn't "intrusive." And if this wasn't enough of a laugher, Vance ends his report with this sentence:

[O]ur Office stands willing to assist Congress and all relevant stakeholders in the effort to find a more rational balance among the interests of device makers, consumers and law enforcement in the regulation of smartphone encryption.

When your conclusion is that the only solution is federally-mandated encryption backdoors, you cannot honestly assert you're seeking to "balance" the interests of everyone involved. The only interest served by mandated backdoors is law enforcement's. Portraying device encryption as a threat to public safety is intellectually dishonest. Vance's own numbers undercut his threat level claims and his repeated failure to even generate serious discussion among federal legislators shows it's probably time for the Manhattan DA to retire his annual alarmism.

]]>
picking-up-the-torch-the-FBI-accidentally-dropped https://beta.techdirt.com/comment_rss.php?sid=20181105/11024640983
Thu, 8 Nov 2018 03:23:00 PST NPR Posits Nazis Are Recruiting All Of Our Children In Online Games With Very Little Evidence Timothy Geigner https://beta.techdirt.com/articles/20181106/12562340993/npr-posits-nazis-are-recruiting-all-our-children-online-games-with-very-little-evidence.shtml https://beta.techdirt.com/articles/20181106/12562340993/npr-posits-nazis-are-recruiting-all-our-children-online-games-with-very-little-evidence.shtml At this point, journalistic handwringing over the assumed dangers of video games has moved beyond annoyance levels and into the trope category. Violence, aggression, becoming sedentary, and the erosion of social skills have all been claimed to be outcomes of video games becoming a dominant choice for entertainment among the population that isn't collecting social security checks, and all typically with little to no evidence backing it up. This has become so routine that one can almost copy and paste past responses into future arguments.

But NPR really went full moral panic mode with a post that essentially claimed the recruitment of children into right-wing and Nazi extremist groups is a full-on thing, while an actual analysis of what it relied on to make that claim reveals, well, very little of substance at all.

Yesterday, NPR published an article titled “Right-Wing Hate Groups Are Recruiting Video Gamers.” It’s the latest, most exaggerated version of a gaming-flavored narrative woven by elite media orgs in an apparent attempt to explain the rise of right-wing extremism in America. This article claims that games “have become one avenue for recruitment by right-wing extremist groups”; to support this, the reporter opens her story with a tale of a 15-year-old Counter-Strike: Global Offensive player whose father, John—no last name given—was one day startled to see neo-Nazi propaganda his son had printed out.

Yesterday’s NPR article, which attempted to make this case, was riddled with the sort of factual elisions one would expect out of propaganda journalism. On the basis of one real-life example and three interviews with apparent experts, the writer claims that gamers are getting plucked out of shooty-shooty games and dropped right into neo-Nazi forums. The most basic problem here feels beneath mention: inflating one anonymous father and his anonymous son’s journey through the bad net into an entire movement is preposterous. Had the reporter spoken to even two, three or four kids who had been rescued from the clutches of Fortnite extremists, it still wouldn’t have been enough. “Where,” one would ask, “is the sense of scale?”

And, as Kotaku notes, the problem of scale in all of this leads one to the next obvious question. If this were really happening at a level that would warrant sounding the parental klaxons, then how has it been missed by gaming journalists, journalists writ large, and the federal authorities? Are we really expected to accept, all on the back of one anecdotal story and a couple of experts contacted for comment, that the much wider world has missed Nazi recruitment in online gaming entirely? I don't know if y'all have noticed, but there's been a bit of a focus on Nazis and white nationalism as of late. This just flew entirely under the radar?

Even the experts cited in the NPR piece leave much to be desired. Christian Picciolini, who I have heard speak in other forums and who I generally found to be bright and trustworthy, comes off looking foolish when asked to flesh out his concerns as outlined in the NPR post.

Picciolini, who describes himself as a “former white supremacist leader,” came onto Kotaku’s radar in July, when he hosted a Reddit AMA. In it, he claimed that right-wing extremists go into multiplayer games to recruit vulnerable demographics into their cause. Intrigued, my colleague Kashmir Hill and I reached out to Picciolini to hear more. We were curious about the right-wing movers and shakers who could fit an entire political pitch into a Fortnite match.

When we asked Picciolini for evidence of his claim and an interview, he referred us to “the many who have experienced the recruitment” and attached a few screenshots of Nazi imagery in open world games like All Points Bulletin. He also forwarded a screenshot of the game Active Shooter, a school shooting simulator, which was pulled from Steam before its release. Another screenshot was from a YouTube video titled “Fag Jews” in which someone named AuTiSmGaMiNg played Call of Duty. It had 11 views.

If that response is the best that can be mustered -- from someone who is supposed to be an expert witness to the core claim that Nazis are recruiting children in online games, no less -- then this is going to look like ginned-up panic-mongering. And that's a very real problem, given that there are some very serious social issues we're dealing with in this country right now, including those surrounding xenophobic white nationalist groups. That problem does exist, but when the fight against it is carried out by people willing to inflate the specifics, it's easy to see how this can all end in a boy-who-cried-wolf scenario.

Given the large swaths of the population now playing video games on the regular, a post like NPR's can only serve to damage its reputation for journalism.

When fear-mongering moves into spaces that require rigorous investigative reporting and large-scale interviewing, it stumbles into the danger zone of modern journalism: “This wild, but unlikely thing is happening, widely. Please panic.”

That sort of thing might sell as a story in the short term, but it does long term damage in building trust with the reading audience. That's far more dangerous, actually, than whatever tiny number of Nazis are actually trying to lure kids into Nazism via the video game vector.

]]>
nein https://beta.techdirt.com/comment_rss.php?sid=20181106/12562340993
Thu, 1 Nov 2018 03:22:03 PDT Georgia Government Officials Celebrate Halloween By Engaging In Pointless Hassling Of Sex Offenders Tim Cushing https://beta.techdirt.com/articles/20181031/14533540953/georgia-government-officials-celebrate-halloween-engaging-pointless-hassling-sex-offenders.shtml https://beta.techdirt.com/articles/20181031/14533540953/georgia-government-officials-celebrate-halloween-engaging-pointless-hassling-sex-offenders.shtml Across the state of Georgia (and in other places around the nation), idiots in power are scoring points with the idiots in the electorate by engaging in "for the children" bullshit targeting sex offenders. The Sheriff of Butts County (not a typo) decided to plant signs in the yards of all registered sex offenders, which should ensure only pleasant things happen to parolees following the terms of their release.

As Sheriff, there is nothing more important to me than the safety of your children. This Halloween, my office has placed signs in front of every registered sex offender's house to notify the public that it's a house to avoid. Georgia law forbids registered sex offenders from participating in Halloween, to include decorations on their property. With the Halloween on the square not taking place this year, I fully expect the neighborhoods to be very active with children trick-or-treating. Make sure to avoid houses which are marked with the attached posted signs in front of their residents. I hope you and your children have a safe and enjoyable Halloween. It is an honor and privilege to serve as your sheriff.

(These signs are placed In accordance with Georgia Law O.C.G.A. 42-1-12-i(5) which states the Sheriff shall inform the public of the presence of sexual offenders in each community)

Sheriff Gary Long isn't making anyone safer by doing this, no matter what his self-congratulatory post says. The law he cites doesn't require the placement of signs in sex offenders' yards. If it did, these signs would already be in place and there'd be no reason for Sheriff Long to brag about his pointless waste of time on Facebook.

The state already has a law in place banning sex offenders from decorating their houses, handing out candy to children, or even turning their outside lights on. All of that should be enough to deter trick-or-treaters from visiting sex offenders' residences. The planting of signs is an unjustified additional punishment handed down for specious reasons, one that provides an opportunity for everyone who agrees with Long's self-serving idiocy to hurl invective, garbage, or whatever else is on hand in the general direction of property bearing these signs.

This won't make the kids safer. A 2009 study showed no spike in sex offender activity around Halloween.

States, municipalities, and parole departments have adopted policies banning known sex offenders from Halloween activities, based on the worry that there is unusual risk on these days. The existence of this risk has not been empirically established. National Incident-Based Reporting System crime report data from 1997 through 2005 were used to examine daily population adjusted rates from 67,045 nonfamilial sex crimes against children aged 12 years and less. Halloween rates were compared with expectations based on time, seasonality, and weekday periodicity. Rates did not differ from expectation, no increased rate on or just before Halloween was found, and Halloween incidents did not evidence unusual case characteristics. Findings were invariant across years, both prior to and after these policies became popular. These findings raise questions about the wisdom of diverting law enforcement resources to attend to a problem that does not appear to exist.

Law enforcement resources are better used ensuring children are safe by patrolling neighborhoods and increasing law enforcement presence in heavily-trafficked areas. Children are hundreds of times more likely to be hit by cars than snagged by a sex offender on Halloween (and, indeed, any day of the year). Additional officers deployed to neighborhoods might also deter something that actually happens far more often on Halloween than other holidays.

According to the National Safety Council, children are more than twice as likely to be hit by a car and killed on Halloween than on any other day of the year. And as for keeping the general public safe, vandalism spikes by 24% on Halloween, making it the night with the most vandalism of the year.

Even more absurd than Sheriff Long's plan is Grovetown, Georgia Mayor Gary E. Jones' idea. He's just going to lock the "problem" up for the night.

Paroled sex offenders won’t have the chance to encounter trick-or-treaters in Grovetown, Ga., this Halloween.

That’s because Mayor Gary E. Jones plans to round them up. Jones this week revealed his plan to keep 25 to 30 local paroled sex offenders under the watchful eyes of five law enforcement officers at city hall for three hours next Wednesday as kids go door to door for candy.

Technically, this may be legal under the state's expansive sex offender laws. It doesn't sound all that Constitutional, which may result in a courtroom challenge in the near future. Mayor Jones has a perfectly good reason to do this, though: a long history of zero incidents on Halloween in his town. Jones claims this is being done "across the state," but WQAD reports "no other surrounding counties" are engaging in this technically-legal roundup.

If Jones was really concerned about safety and crime during Halloween, he would have his law enforcement out on the streets, rather than sitting guard at City Hall. And if criminals who've already paid their debt to society can be locked up for nebulous reasons, why isn't Jones tossing everyone ever picked up on vandalism charges into the ad hoc lockup for the night? It seems like they might pose more of a safety issue than the sex offenders Mayor Jones believes -- without a shred of evidence -- would kidnap trick-or-treaters if not otherwise detained.

And all of this doesn't even get to the problems of the sex offender registry itself and the fact it contains people who did nothing more than have sex with a 17-year-old when they were 20 or engaged in sexting with another teen. Or the fact that kids are far more likely to be abused by someone they know and trust, rather than some stranger offering candy on Halloween. All of this is willfully ignored by law-and-order types like Sheriff Long and Mayor Jones to score points with constituents who are equally oblivious. It's just another form of security theater -- one that has a lot to say about safety, but actually does nothing to make anyone safer.

]]>
your-tax-dollars-being-made-stupider https://beta.techdirt.com/comment_rss.php?sid=20181031/14533540953
Mon, 15 Oct 2018 10:44:00 PDT 'Missing, Sex Trafficked' Children Neither Missing, Nor Victims Of Sex Trafficking Mike Masnick https://beta.techdirt.com/articles/20181014/23430140840/missing-sex-trafficked-children-neither-missing-victims-sex-trafficking.shtml https://beta.techdirt.com/articles/20181014/23430140840/missing-sex-trafficked-children-neither-missing-victims-sex-trafficking.shtml For quite some time we've highlighted the horrible laws being pushed by aggressively misrepresenting the size of the problem of sex trafficking -- and especially sex trafficking of children. This is not to say that it never happens. Nor is it to suggest that the crime of sex trafficking, especially of minors, is not horrific and hugely problematic. But we shouldn't overreact to false information. A year ago, we looked at some of the numbers being presented in favor of passing FOSTA, and found they were almost entirely bullshit. This included Rep. Ann Wagner's (who is the leading pusher of bad laws around "sex trafficking") claim child sex trafficking alone was a $9.5 billion industry. As we noted, this number came from a bizarre nonsensical extrapolation of a very misleading and confused report by ICE that covered issues of smuggling (not just sex trafficking). Other stats -- such as the supposed number of kids "lured" into sex trafficking -- showed even more extrapolation, while police were finding very, very few actual cases of this happening.

But, the narrative has been set and the media makes it into reality, even if... it's not. Take this headline from the NY Post from last week, claiming "123 missing children found in Michigan during sex trafficking operation":

Wow. That would be a pretty astoundingly successful police operation, and certainly gives weight to the idea that so many kids are lured into sex trafficking rings and then disappear and go missing. Except... details matter. And deep in the NY Post story they actually admit that out of the 123 missing kids only three were "identified as possible sex trafficking victims." So, uh, why does the headline suggest that all 123 kids are sex trafficking victims when it's not clear if any are, and clear that the vast majority are not?

And then there's this: only four of the kids were actually missing.

What's more, all but four of the "missing children" were not actually missing. In the remaining cases, minors were listed in a police database as missing but had since been found or returned home on their own. "Many were (homeschooled)," Lt. Michael Shaw told The Detroit News. "Some were runaways as well."

Indeed, if you look at the report, it notes that all of the kids outside of those four "were found safe with their parents or guardians."

So, remember, the headline screamed that 123 missing children were found in a sex trafficking "operation." Now it seems that most of them were "found" at home with their parents, and only three of them might have been victims of sex trafficking. These seem like important details, especially when you have elected officials like Rep. Ann Wagner pushing a vast surveillance bill on the basis of the problem of sex trafficking. Pushing bogus information like over a hundred missing kids being engaged in sex trafficking only helps build that narrative -- around a problem that appears to be much, much more limited than the media or lying politicians will let on.

]]>
moral-panics-are-the-worst https://beta.techdirt.com/comment_rss.php?sid=20181014/23430140840
Fri, 6 Jul 2018 10:47:27 PDT What Soda Taxes And Lead Paint Have To Do With Internet Regulation Cathy Gellis https://beta.techdirt.com/articles/20180704/13054740174/what-soda-taxes-lead-paint-have-to-do-with-internet-regulation.shtml https://beta.techdirt.com/articles/20180704/13054740174/what-soda-taxes-lead-paint-have-to-do-with-internet-regulation.shtml They say that laws are like sausages, and you should never watch either be made if you don't want to be sick. But some manufacturing processes are more disgusting than others, and if we don't want to suffer ill-effects, we need to keep an eye on the worst of them.

As others have discussed, the new California Consumer Privacy Act (CCPA) is at best a law with troubling aspects, if not completely chilling for future Internet businesses and even non-commercial online expression. True, there may be the opportunity to amend it before it goes into effect to dull the worst of it, but how we find ourselves in this position where we are stuck with a ticking time bomb of a law that we now need to fix is a story worth telling, because if it could happen once it could happen again. And already has.

Which is why I'm going to tell the story about how California just banned soda taxes (in fact, not coincidentally, right around the same time that it passed the CCPA).

To understand what happened, one first needs to understand a bit about the California Constitution. In addition to setting up the typical branches of government (legislative, executive, judicial), it also allows for a form of direct democracy through ballot initiatives. Ballot initiatives generally only need a simple majority to pass, but once passed, they can be very difficult, if not impossible, to un-pass or modify without another ballot measure. Even when ballot measures only amend statutory code, and not the Constitution itself, the legislature can be prevented from making any modifications to that new language, no matter how necessary those changes may be, unless the ballot initiative allows the legislature to act. And even if the initiative does permit it, it may require a super-majority of the legislature to make any changes -- a much harder bar to clear than the simple majority typically required to pass legislation.

The upshot is that an awful lot of California law and policy can depend on the initiative process -- and thus a whole lot can depend on who is able to use it to push forth the policy they prefer. In one sense, it's hard to get a new initiative on the ballot: it requires hundreds of thousands of signatures to qualify. But it turns out that for people who have a lot of money, it's not all that hard. Some estimate that it may take only $3-4 million to acquire enough signatures to get any initiative on the ballot.

Of course, whether such an initiative would pass is a separate question, but there are a few factors that make the odds pretty good. One is that it's very difficult for the electorate to make informed choices, and I don't say that as any sort of insult to the average California voter. In the most recent election this past June I timed how long it took to figure out who and what to vote for and clocked it at a whole hour. And that's with me, a lawyer practiced in reading and evaluating law and policy, living in an unincorporated area of California, meaning that I was spared having to wade through any city candidate or ballot measure choices. I just had to vote on candidates for all county, state, and federal offices, and on all county and state ballot measures. And this was in June, where there were fewer choices all around than there will be in November, yet it still took an hour to make any sort of responsible decisions before I was prepared to head to the polls. Of course, not everyone has that hour, and for many it will likely take longer, which means that the electorate tends to be dependent on campaign advertising to help them make those choices. But if someone has a few million dollars to spend to get an initiative on the ballot, they may easily have a few more, or a lot more, to spend on that advertising, and their opponents, no matter how principled in their opposition, just as easily may not.

The reality is that anyone who can spend a few million dollars to get an initiative on the ballot can use that money to put an electoral gun to the head of policymakers and force them to legislate for their desired policy in exchange for withdrawing the initiative from the upcoming election. Because at least if the policy gets implemented via the legislature's hand, rather than through the initiative process, the legislature might be able to temper some of its language. Also, by being an ordinary bill, it would theoretically be more changeable in the future, subject only to ordinary legislative majorities and not dependent on someone funding a new initiative that could successfully override it.

As this article in the Sacramento Bee describes, the soda tax ban is a case study of this dynamic. A business group wrote a proposal that would have created some significant limitations in the state's ability to raise revenue. It then shopped around the proposed initiative until it found someone willing to underwrite the signature-gathering necessary to get it on the ballot. That someone turned out to be the beverage industry, which generally hates soda taxes.

The relative merits of soda taxes are beyond the scope of this post. Suffice it to say, certain California communities like them, often as a way of raising revenue for public health programs and deterring the over-consumption of unhealthy drinks. Several of these communities have already passed a few such taxes.

But after the beverage industry underwrote the effort to get enough signatures to qualify the tax-limiting initiative for the ballot, an initiative that did more than just ban soda taxes but instead affected the state's taxation ability more broadly, the legislature found itself having to play electoral roulette: perhaps the ballot measure might fail and everything would be fine, but if it passed, it risked messing up the fiscal health of the state and all the policies and programs the legislature wanted to fund. So it capitulated and did a deal with the initiative's sponsor to bar any other California communities from passing their own soda taxes for the next 12 years in exchange for having the ballot initiative withdrawn.

In fact, June was a busy month for legislative capitulation, because right around the same time that the legislature did that deal it also did a deal with the sponsors of the "Consumer Right to Privacy Act of 2018" initiative that had also qualified for the November ballot.* Because that initiative, if it passed, would definitely cripple the Internet, the legislature instead agreed to pass the CCPA, which will only probably cripple it, but at least has the potential for improvement.

And that's what this post is really about, this extortionate ability for basically anyone with $4 million to spend to blackmail the legislature to set aside its own legislative judgment and build into California law whatever terrible policy the person with the money wants. Sure, for any policy that is so awful or unpopular there's always the chance that it might lose at the polls come Election Day, and from time to time ballot initiatives do get shot down. But it's very easy for garbage to get through, and wealthy minority voices count on that possibility when they try to ram through all sorts of policies that aren't necessarily good ones for Californians or its businesses – including on matters of tech policy.

On our best days these tech policy challenges require careful, nuanced treatment. We should look to the legislature, and legislators, to provide that treatment before imposing drastic changes in the law. But they can't give these regulatory proposals the attention they deserve if, for a mere $4 million or so, people can force them to rush through law drafted without any of the care or transparency sound regulation requires.

And when they are forced to pass a law like that, as they were just now with the CCPA, it is unlikely to be something we should cheer.

* Also, per the Los Angeles Times article linked above, "A third proposal, asking taxpayers to subsidize lead paint cleanup projects, was withdrawn by paint companies in exchange for lawmakers scrapping a slate of bills designed to impose new rules on the industry."

]]>
public-policy-on-sale-now! https://beta.techdirt.com/comment_rss.php?sid=20180704/13054740174
Wed, 11 Oct 2017 09:24:16 PDT Deputy AG Pitches New Form Of Backdoor: 'Responsible Encryption' Tim Cushing https://beta.techdirt.com/articles/20171010/16223238384/deputy-ag-pitches-new-form-backdoor-responsible-encryption.shtml https://beta.techdirt.com/articles/20171010/16223238384/deputy-ag-pitches-new-form-backdoor-responsible-encryption.shtml The DOJ is apparently going to pick up where the ousted FBI boss James Comey left off. While Attorney General Jeff Sessions continues building his drug enforcement time machine, Deputy AG Rod Rosenstein is keeping the light on for Comey's prophesies of coming darkness.

Rosenstein recently gave a speech at the US Naval Academy on the subject of encryption. It was… well, it was pretty damn terrible. Once again, a prominent law enforcement official is claiming to love encryption while simultaneously extolling the virtues of fake encryption with law enforcement-ready holes in it.

The whole thing is filled with inadvertently hilarious assertions, like the following:

Encryption is a foundational element of data security and authentication. It is essential to the growth and flourishing of the digital economy, and we in law enforcement have no desire to undermine it.

Actually, Rosenstein has plenty of desire to do that, which will be amply demonstrated below, using his own words.

But the advent of “warrant-proof” encryption is a serious problem. Under our Constitution, when crime is afoot, impartial judges are charged with balancing a citizen’s reasonable expectation of privacy against the interests of law enforcement. The law recognizes that legitimate law enforcement needs can outweigh personal privacy concerns.

The law indeed recognizes this and provides law enforcement access to communications, documents, etc. with the proper paperwork. What the law cannot do is ensure the evidence is intact, accessible, or exactly what law enforcement is looking for.

Rosenstein is disingenuously reframing the argument as lawful access v. personal privacy, when it's really about law enforcement's desires v. user security. The latter group -- users -- includes a large percentage of people who've never been suspected of criminal activity, much less put under investigation. Weakened encryption affects everyone, not just criminal suspects.

Our society has never had a system where evidence of criminal wrongdoing was totally impervious to detection, especially when officers obtain a court-authorized warrant. But that is the world that technology companies are creating.

Our society has had plenty of systems where evidence was "impervious to detection." Calls, text messages, emails, personal conversations, passed notes, dead drops, coded transmissions, etc. have existed for years without law enforcement complaining about everything getting so damn dark. Law enforcement has never had 100% access to means of communications even with the proper paperwork in hand. And yet, police departments and investigative agencies routinely solved crimes, even without access to vast amounts of personal communications.

Rosenstein follows this loop a few times, always arriving at the same mistaken conclusion: law enforcement should be able to access whatever it wants so long it has a warrant. Why? Because it always used to be able to. Except for all those times when it didn't.

Since Rosenstein isn't willing to handle the encryption conversation with any more intellectual honesty than the departed James Comey, he's forced to come up with new euphemisms for encryption backdoors. Here's Rosenstein's new term for non-backdoor encryption backdoors.

Responsible encryption is achievable. Responsible encryption can involve effective, secure encryption that allows access only with judicial authorization.

At worst, this means some sort of built-in backdoor, sort of like what BlackBerry uses for its non-enterprise customers. Nearly as bad, this possibly means key escrow. These are the solutions Rosenstein wants, but he doesn't even have the spine to take ownership of them. Not only does the Deputy AG want tech companies to implement whatever the fuck "responsible encryption" is, he wants them to bear all expenses, cope with customers fleeing the market for more secure options, and be the focal point for the inevitable criticism.

Such a proposal would not require every company to implement the same type of solution. The government need not require the use of a particular chip or algorithm, or require any particular key management technique or escrow. The law need not mandate any particular means in order to achieve the crucial end: when a court issues a search warrant or wiretap order to collect evidence of crime, the provider should be able to help.

In other words, the private sector needs to build the doors and hold the keys. All the government needs to do is obtain warrants.

Rosenstein just keeps piling it on. He admits law enforcement hasn't been able to guilt tech companies into backdooring their encryption. That's the old way. Going forward, the talking points will apparently portray tech companies as more interested in profits than public safety.

The approach taken in the recent past — negotiating with technology companies and hoping that they eventually will assist law enforcement out of a sense of civic duty — is unlikely to work. Technology companies operate in a highly competitive environment. Even companies that really want to help must consider the consequences. Competitors will always try to attract customers by promising stronger encryption.

That explains why the government’s efforts to engage with technology giants on encryption generally do not bear fruit. Company leaders may be willing to meet, but often they respond by criticizing the government and promising stronger encryption.

Of course they do. They are in the business of selling products and making money.

In other words, tech companies are doing it "for the clicks" -- a super-lazy argument often used to belittle things someone disagrees with (and a phrase that has since been supplanted by "fake news"). This sort of belittling is deployed by (and created for) the swaying of the smallest of minds.

Having painted the tech industry as selfish, Rosenstein airlifts himself to the highest horse in the immediate area.

We use a different measure of success. We are in the business of preventing crime and saving lives.

The Deputy AG makes a better point when he calls out US tech companies for acquiescing to ridiculous censorship demands from foreign governments. If companies are willing to oblige foreign governments with questionable human rights records, why can't they help out the US of A?

It's still not a very strong point, at least not in this context. But it is something we've warned against for years here at Techdirt: you humor enough stupid demands from foreign governments and pretty soon all of them -- including your own -- are going to start asking for favors.

It would be a much better argument if it wasn't tied to the encryption war Rosenstein's fighting here. Comparing censorship efforts and VPN blocking to the complexities of encryption isn't an apples-to-apples comparison. Blocking or deleting content is not nearly the same thing as opening up all users to heightened security risks because the government can't get at a few communications.

Whatever it is Rosenstein's looking for, he's 100% sure tech companies can not only provide it, but should also bear all liability for anything that might go wrong.

We know from experience that the largest companies have the resources to do what is necessary to promote cybersecurity while protecting public safety. A major hardware provider, for example, reportedly maintains private keys that it can use to sign software updates for each of its devices. That would present a huge potential security problem, if those keys were to leak. But they do not leak, because the company knows how to protect what is important. Companies can protect their ability to respond to lawful court orders with equal diligence.

It's that last sentence that's a killer. This is Rosenstein summing up his portrayal of tech companies as callous, profit-seeking nihilists with a statement letting everyone know the DOJ will pin all the blame for any future security breaches on the same companies who got on board with the feds' "nerd harder" demands.

This is a gutless, stupid, dishonest speech -- one that deliberately misconstrues the issues and lays all the blame, along with all the culpability, on companies unwilling to sacrifice users' security just because the government feels it's owed access in perpetuity.

]]>
laugh-and-the-world-laughs-with;-pull-this-crap-and-you're-on-your-own https://beta.techdirt.com/comment_rss.php?sid=20171010/16223238384
Wed, 27 Sep 2017 09:19:00 PDT SESTA Is Being Pushed As The Answer To A Sex Trafficking 'Epidemic' That Simply Doesn't Exist Tim Cushing https://beta.techdirt.com/articles/20170925/16401838283/sesta-is-being-pushed-as-answer-to-sex-trafficking-epidemic-that-simply-doesnt-exist.shtml https://beta.techdirt.com/articles/20170925/16401838283/sesta-is-being-pushed-as-answer-to-sex-trafficking-epidemic-that-simply-doesnt-exist.shtml The rationale behind the Section 230-upending SESTA bill is that sex trafficking is such a huge problem, some collateral damage is a small price to pay. The push begins with the targeted criminal behavior itself. No one wants to appear as though they're opposed to fighting trafficking, so that scores some quick wins with a few legislators. It continues with inflated numbers suggesting trafficking has become a multi-billion dollar industry here in the US.

Two backers of an earlier human trafficking bill -- Rep. Bob Goodlatte and Rep. Ann Wagner -- both cited unsupported numbers while discussing the criminal activity. Goodlatte claimed "child sex trafficking alone is a $9.8 billion industry." Wagner's money quote was about the same -- $9.5 billion -- but didn't narrow it down to just child sex trafficking.

It doesn't matter whether the number includes children or not. The numbers are false. The Washington Post dug into the stats and couldn't find anything independently verifiable that added up to the $9 billion price tag asserted here. What WaPo found was that the $9 billion was a worldwide estimate based on some very questionable extrapolation from a few small data sets with large sampling errors. The paper tracked the numbers all the way back to figures provided by ICE in 2003 -- a worldwide estimate that also included human smuggling.

Other reports have suggested an incredible amount of profit per exploited person:

The ILO in 2014 released another report on human trafficking with updated profit estimates. This report provided a calculation of $26 billion in profits for “forced sexual exploitation” in the 36 industrialized countries, based on the assumption of 300,000 prostitutes, earnings of about $115,000 a year, and profits of $80,000.

These are the sort of numbers being pointed to by supporters of SESTA. This is the extremely fuzzy math that leads SESTA frontman Sen. Rob Portman to declare sex trafficking an "epidemic" in America. The real numbers are never cited because they'll never convince anyone to sign off on an internet-damaging bill like SESTA. Elizabeth Nolan Brown does the actual math using actual FBI crime figures and there's nothing approaching a $10 billion/year trafficking epidemic.

Human trafficking arrests are almost nonexistent in most states, according to the FBI's newly released U.S. crime statistics for 2016.

Part of the Uniform Crime Reporting (UCR) project, the new data on sex and labor trafficking shows that arrests for either offense are rare and that many suspected incidents of trafficking did not ultimately yield results.

Lots of law enforcement resources get poured into human trafficking investigations but the expenditures vastly outweigh the results.

For instance, Florida reported 105 investigations into human-trafficking offenses in 2016 but zero human trafficking arrests last year. Nevada worked on 140 human trafficking investigations but made only 40 arrests on trafficking charges. Louisiana looked into 123 potential cases of human trafficking but only arrested 16 people for it.

Last year, supporters of the Justice for Victims of Trafficking Act were claiming 1,000 children become victims of sex trafficking in Ohio alone every year. These numbers were based on the same sort of small data set + sampling errors + baseless extrapolation used to reach the national figures.

[T]he study’s authors took that 15-per-year figure [number of minor victims IDed in Toledo] and applied it to all girls ages 12 through 17 in the state of Ohio. That population, 337,961, yields an estimate of 202 girls per year.

Then, the commission multiplied 202 by five, because a University of Toledo study claimed that each sex trafficking victim they interviewed knew an average of five more underaged minors "not known to law enforcement, but who were engaging in the sex trade."

One big problem: The Ohio study did not control for the fact that Toledo’s child sex trafficking rates were the highest in the state, which inflates their estimate even before multiplying it by five.

The commission then threw in some boys for good measure, reasoning that being gay, transgender or a runaway are "risk factors" for becoming a child sex trafficking victim. Because 3 to 5 percent of the overall US population identifies as gay, lesbian, bisexual or transgender, the committee added 63 males to the estimated number of child victims.

The attorney general’s trafficking study concluded that an annual 1,078 minors in Ohio were potential victims.

More than 1,000 children exploited every year by traffickers! And by the wiliest of traffickers, apparently. Actual arrests in Ohio for trafficking? Five in 2014. Zero in 2015 and 2016.
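To see just how thin the math is, the commission's chain of multiplications can be reproduced in a few lines. This is only a sketch built from the figures quoted above; the variable names are mine, and the small gap from the study's 1,078 comes from rounding in the study's intermediate steps.

```python
# Reproducing the Ohio commission's estimate using the figures
# quoted above (variable names are illustrative, not the study's).

statewide_estimate = 202  # Toledo's 15/year applied to 337,961 Ohio girls
peer_multiplier = 5       # each victim said to know ~5 more minors in the trade
added_males = 63          # boys added based on 3-5% LGBT population share

total = statewide_estimate * peer_multiplier + added_males
print(total)  # 1073 -- vs. the study's claimed 1,078
```

Each step multiplies the sampling error of the step before it, which is how a count of 15 known victims in one high-rate city becomes a four-digit statewide "epidemic" that actual arrest figures never come close to matching.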

Even some of those nominally supportive of SESTA are finding it difficult to reason with citizens affected by government-led sex trafficking hysteria. Roseville (CA) Police had to take to Facebook to combat misinformation being spread about sex trafficking by a viral social media post. The post detailed the "suspicious" activity of a man spotted in a grocery store parking lot. To those passing around the post, "suspicious man" = "proof of rampant human trafficking." The Roseville PD responded with some nice, cold facts.

[T]he post mentions that the suspicious man was probably a human trafficker looking to kidnap children. This is highly unlikely, as kidnapping by strangers is a rare crime in the United States. Stranger abductions of children are so frightening and so unusual that when they do happen, they make national news. According to national research, children taken by strangers or slight acquaintances represent only one-hundredth of 1 percent (.01%) of all missing children.

[...]

The Roseville Police Department has never taken a report of anyone being kidnapped by a stranger and forced into the sex trade. Our vice officers have interviewed numerous prostitutes and exploited victims over the years, and asked them how they got into their situations. None have said they were originally kidnapped.

[...]

We recently conducted undercover operations in retail areas, and found no evidence that human traffickers were there recruiting strangers.

Human trafficking in the US does exist. No one's denying that -- and no one's denying that it's devastating for those victims and their families. But it's not the multi-billion-dollar epidemic it's portrayed as by politicians and SESTA supporters. The problem should be addressed, but there are plenty of laws on the books already that allow for the pursuit and prosecution of actual sex traffickers. Throwing third-party service providers into the mix does nothing more than allow the government to attack third parties (because it's easier) rather than engage in the more difficult work of targeting traffickers themselves.

]]>
reality-check https://beta.techdirt.com/comment_rss.php?sid=20170925/16401838283
Mon, 25 Sep 2017 06:09:03 PDT British News Channel Touts Amazon Bomb Materials Moral Panic That Ends Up Being About Hobbyists And School Labs Timothy Geigner https://beta.techdirt.com/articles/20170922/09311238268/british-news-channel-touts-amazon-bomb-materials-moral-panic-that-ends-up-being-about-hobbyists-school-labs.shtml https://beta.techdirt.com/articles/20170922/09311238268/british-news-channel-touts-amazon-bomb-materials-moral-panic-that-ends-up-being-about-hobbyists-school-labs.shtml Moral panics take many forms, from Dungeons & Dragons being a lure to satanism in the eyes of parents to the wonderful theory that playing chess would turn children into violent psychopaths. What these moral panics tend to have in common is the extraction of seemingly nefarious details on a subject which, stripped of context, are interpreted in a demonizing manner and then exported for public consumption. Thus the public often gets well-meaning but highly misleading information on the terribleness of some innocuous thing.

This practice continues to this day, oftentimes helped along by a media environment desperate for clicks and eyeballs. A recent example is Britain's Channel 4 News finding that Amazon's algorithm had a habit of recommending combinations of products that appeared designed for terrorist-style explosives.

Channel 4 News has discovered that Amazon's algorithm guides users to the necessary chemical combinations for producing explosives and incendiary devices. Ingredients which are innocent on their own are suggested for purchase together as "Frequently bought together" products, as it does with all other goods.

Ingredients for black powder and thermite are grouped together under a “Frequently bought together” section on listings for specific chemicals. Steel ball bearings often used as shrapnel in explosive devices, ignition systems and remote detonators are also readily available; some promoted by the website on the same page as these chemicals as products that “Customers who bought this item also bought”.

Anyone reading this report would reach the obvious conclusion: either Amazon has enough customers trying to make terror-bombs that the algorithm is reacting to them, or Amazon is purposefully pushing and radicalizing innocent product purchasers into bomb-making terror demons. Channel 4 noted that beyond the chemicals needed to produce "black powder" and thermite, Amazon commonly listed ball bearings, ignition systems, and switch-detonators alongside them as items frequently purchased with those products. Even the saltiest among us would forgive a public reading all of this for losing its mind over the report.

Except, of course, all of this comes along with a perfectly innocuous explanation, as detailed by Pinboard's Maciej Cegłowski.

The 'common chemical compound' in Channel 4's report is potassium nitrate, an ingredient used in curing meat. If you go to Amazon's page to order a half-kilo bag of the stuff, you'll see the suggested items include sulfur and charcoal, the other two ingredients of gunpowder. (Unlike Channel 4, I am comfortable revealing the secrets of this 1000-year-old technology.) The implication is clear...But as a few more minutes of clicking would have shown, the only thing Channel 4 has discovered is a hobbyist community of people who mill their own black powder at home, safely and legally, for use in fireworks, model rockets, antique firearms, or to blow up the occasional stump.

Yes, making black powder is perfectly legal in the UK, and for good reason. Hobbyists use it all the time. It's so popular, in fact, because it's a difficult substance to set off by accident. As for the ball bearings, those go in a ball mill or drum, which is used to mix the powders together and get the particles to a like size, important for their use in black powder. They aren't shrapnel at all.

The ball bearings Amazon is recommending are clearly intended for use in the ball mill. The algorithm is picking up on the fact that people who buy the ingredients for black powder also need to grind it. It's no more shocking than being offered a pepper mill when you buy peppercorns.

As for the thermite and the "widely available chemical" the Channel 4 piece goes on about, it essentially describes the chemicals needed to make thermite, plus magnesium ribbon. As Cegłowski notes, this combination produces what is called a thermite reaction. If that term sounds familiar, it's probably because you performed the thermite reaction in chemistry class.

The thermite reaction is performed in every high school chemistry classroom, as a fun reward for students who have had to suffer through a baffling unit on redox reactions. You mix the rust and powdered aluminum in a crucible, light them with the magnesium ribbon, and watch a jet of flame shoot out, leaving behind a small amount of molten iron. The mixed metal powders are hard to ignite (that's why you need the magnesium ribbon), but once you get them going, they burn vigorously.

The main consumer use for thermite, as far as I can tell, is lab demonstrations and recreational chemistry. Importantly, thermite is not an explosive—it will not detonate. So Channel 4 has discovered that fireworks enthusiasts and chemistry teachers shop on Amazon.

So, the moral panic ginned up by Channel 4 essentially amounts to hobbyists and chemistry teachers getting a little convenient help from Amazon's algorithm as they go about their day. Not exactly the "holy shit, everyone is making bombs!" story that the "news" piece wanted to tell, but it has the advantage of actually being true. Perhaps most amazing is how bereft of common sense Channel 4's claims were. After all, were its assertions true, why in the world aren't there bombs going off in record numbers using these chemical combinations?

If nothing else, however, this story does serve as a nice "anatomy of a moral panic", as the Idle Words post so helpfully titles this whole episode.

]]>
oops https://beta.techdirt.com/comment_rss.php?sid=20170922/09311238268