Congressional Panel On Internet And Disinformation... Includes Many Who Spread Disinformation Online
Mike Masnick | Thu, 25 Mar 2021 09:33:34 PDT
https://beta.techdirt.com/articles/20210324/15214046487/congressional-panel-internet-disinformation-includes-many-who-spread-disinformation-online.shtml

We've pointed out a few times how silly all these Congressional panels on content moderation are, but the one happening today is particularly silly. One of the problems, of course, is that while everyone seems to be mad about Section 230, they seem to be mad about it for opposite reasons, with Republicans wanting the companies to moderate less, and Democrats wanting the companies to moderate more. That's only one of many reasons why today's hearing, like those in the past, is so pointless. These hearings tend to bog down in silly "but what about this particular moderation decision" questions, which will then be presented in a misleading or out-of-context fashion, allowing the elected official to grandstand about how they "held big tech's feet to the fire" or some such nonsense.

However, Cat Zakrzewski, over at the Washington Post, has highlighted yet another reason why this particular "investigation" into disinformation online is so disingenuous: a bunch of the Republicans on the panel exploring how these sites deal with mis- and disinformation are themselves guilty of spreading disinformation online.

A Washington Post analysis found that seven Republican members of the House Energy and Commerce Committee who are scheduled to grill the chief executives of Facebook, Google and Twitter about election misinformation on Thursday sent tweets that advanced baseless narratives of election fraud, or otherwise supported former president Donald Trump’s efforts to challenge the results of the presidential election. They were among 15 of the 26 Republican members of the committee who voted to overturn President Biden’s election victory.

Three Republican members of the committee, Reps. Markwayne Mullin (Okla.), Billy Long (Mo.) and Earl L. “Buddy” Carter (Ga.), tweeted or retweeted posts with the phrase “Stop the Steal” in the chaotic aftermath of the 2020 presidential election. Stop the Steal was an online movement that researchers studying disinformation say led to the violence that overtook the U.S. Capitol on Jan. 6.

Cool cool.

Actually, this highlights one of the many reasons why we should be concerned about all of these efforts to force these companies into a particular path for dealing with disinformation online. Because once we head down the regulatory route, we're going to reach a point in which the government is, in some form, determining what is okay and what is not okay online. And do we really want elected officials, who themselves were spreading disinformation and even voted to overturn the results of the last Presidential election, to be determining what is acceptable and what is not for social media companies to host?

As the article itself notes, rather than have a serious conversation about disinformation online and what to do about it, this is just going to be yet another culture war. Republicans are going to push demands to have these websites stop removing their own efforts at disinformation, and Democrats are going to push the websites to be more aggressive in removing information (often without concern for the consequences of such demands -- which often lead to the over-suppression of speech).

One thing I think we can be sure of is that Rep. Frank Pallone, who is heading the committee for today's hearing, is being laughably naïve if he actually believes this:

Rep. Frank Pallone Jr. (N.J.), the Democrat who chairs the committee, said any member of Congress using social media to spread falsehoods about election fraud was “wrong,” but he remained optimistic that he could find bipartisan momentum with Republicans who don’t agree with that rhetoric.

“There’s many that came out and said after Jan. 6 that they regretted what happened and they don’t want to be part of it at all,” Pallone said in an interview. “You have to hope that there’s enough members on both sides of the aisle that see the need for some kind of legislative reform here because they don’t want social media to allow extremism and disinformation to spread in the real world and encourage that.”

Uh huh. The problem is that those who spread disinformation online don't think of it as disinformation. And they see any attempt to cut back on their ability to spread it as (wrongly) "censorship." Just the fact that the two sides can't even agree on what is, and what is not, disinformation should give pause to anyone seeking "some kind of legislative reform" here. While the Democrats may be in power now, that may not last very long, and they should recognize that if it's the Republicans who get to define what is and what is not "disinformation," it may look very, very different from what the Democrats think.

Utah Governor Signs New Porn Filter Law That's Just Pointless, Performative Nonsense
Karl Bode | Thu, 25 Mar 2021 06:27:34 PDT
https://beta.techdirt.com/articles/20210322/07595646464/utahs-latest-porn-filter-bill-is-pointless-performative-nonsense.shtml

For decades now Utah legislators have repeatedly engaged in theater in their doomed bid to filter pornography from the internet. And repeatedly those lawmakers run face first into the technical impossibility of such a feat (it's trivial for anybody who wants porn to bypass filters), the problematic collateral damage that inevitably occurs when you try to censor such content (filters almost always wind up with legit content being banned), and a pesky little thing known as the First Amendment. But annoying things like technical specifics or the Constitution aren't going to thwart people who just know better.

For months now Utah has been contemplating yet another porn filtering law, this time HB 72. HB 72 pretends that it's going to purge the internet of its naughty bits by mandating active adult content filters on all smartphones and tablets sold in Utah. Phone makers would enable filters by default (purportedly because enabling such restrictions by choice is just too darn difficult), and require that mobile consumers in Utah enter a passcode before disabling the filters. If these filters aren't enabled by default, the bill would hold device manufacturers liable, up to $10 per individual violation.

On Tuesday, Utah Governor Spencer Cox signed the bill into law, claiming its passage would send an “important message” about preventing children from accessing explicit online content:

"Rep. Susan Pulsipher, the bill’s sponsor, said she was “grateful” the governor signed the legislation, which she hopes will help parents keep their children from unintended exposure to pornography. She asserts that the measure passes constitutional muster because adults can deactivate the filters, but experts said it still raises several legal concerns."

The AP story takes the "view from nowhere" or "both sides" US journalism approach to the story, failing to note that it's effectively impossible to actually filter porn from the internet, largely because the filters (be they adult controls on a device or DNS blocklists) can usually be disabled by a toddler with a modicum of technical aptitude. It also fails to note that filters almost always cause unintended collateral damage to legitimate websites.

The AP also kind of buries the fact that the bill is more about performative posturing than productive solutions. The law literally won't take effect unless five other states pass equivalent laws, something that's not going to happen in part because most states realize what a pointless, Sisyphean effort this is:

"Moreover, the rule includes a huge loophole: it doesn’t take effect until five other states pass equivalent laws. If none pass before 2031, the law will automatically sunset. And so far, Utah is the only place that’s even got one on the table. “We don’t know of any other states who are working on any plans right now,” says Electronic Frontier Foundation media relations director Rebecca Jeschke."

There's also, again, that whole First Amendment thing. There is apparently something in the water at the Utah legislature that makes state leaders incapable of learning from experience when it comes to technical specifics or protected speech.

Obviously, this will go about as well as all the previous efforts of this type, including the multi-state effort by the guy who tried to marry his computer to mandate porn filters in numerous states under the false guise of combatting "human trafficking." And it will fail because these are not serious people or serious bills; they're just folks engaged in performative nonsense for a select audience of the perpetually aggrieved. Folks who simply refuse to realize that the solution to this problem is better parenting and personal responsibility, not shitty, unworkable bills or, in this case, legislation that does nothing at all.

City Of London Police Parrot Academic Publishers' Line That People Visiting Sci-Hub Should Be Afraid, Very Afraid
Glyn Moody | Thu, 25 Mar 2021 03:23:34 PDT
https://beta.techdirt.com/articles/20210323/09223246476/city-london-police-parrot-academic-publishers-line-that-people-visiting-sci-hub-should-be-afraid-very-afraid.shtml

Techdirt has been following the saga of the City of London Police's special "Intellectual Property Crime Unit" (PIPCU) since it was formed back in 2013. It has not been an uplifting story. PIPCU seems to regard itself as Hollywood's private police force worldwide, trying to stop copyright infringement online, but without much understanding of how the Internet works, or even regard for the law, as a post back in 2014 detailed. PIPCU rather dropped off the radar, until last week, when its dire warnings about a new, deadly threat to the wondrous world of copyright were picked up by a number of gullible journalists. PIPCU's breathless press release reveals the shocking truth: innocent young minds are being encouraged to access knowledge, funded by the public, as widely as possible. Yes, PIPCU has discovered Sci-Hub:

Sci-Hub obtains the papers through a variety of malicious means, such as the use of phishing emails to trick university staff and students into divulging their login credentials. Sci Hub then use this to compromise the university's network and download the research papers.

That repeats an unsubstantiated claim about Sci-Hub that has frequently been made by academic publishers. And simply using somebody's login credentials does not constitute "compromising" the university's network, since at most it gives access to course details and academic papers: believe it or not, students are not generally given unrestricted access to university financial or personnel systems. The press release goes on:

Visitors to the site are very vulnerable to having their credentials stolen, which once obtained, are used by Sci-Hub to access further academic journals for free, and continue to pose a threat to intellectual property rights.

This is complete nonsense. It was obviously written by someone who has never accessed Sci-Hub, since there is no attempt anywhere to ask visitors for any information about anything. The site simply offers friction-free access to 85 million academic papers -- and not "70 million" papers as the press release claims, further proof the author never even looked at the site. Even more ridiculous is the following:

With more students now studying from home and having more online lectures, it is vital universities prevent students accessing the stolen information on the university network. This will not only prevent the universities from having their own credentials stolen, but also those of their students, and potentially the credentials of other members of the households, if connected to the same internet provider.

When students are studying from home, they won't be using the university network if they access Sci-Hub, but their own Internet connection. And again, even if they do visit, they won't have their credentials "stolen", because that's not how the site works. And the idea that members of the same household could also have their "credentials" stolen simply by virtue of being connected to the same Internet provider is so wrong you have to wonder whether the person writing it even knows how the modern (encrypted) Internet works.

But beyond the sheer wrongness of the claims being made here, there's another, more interesting aspect. Techdirt readers may recall a post from a few months back that analyzed how publishers in the form of the Scholarly Networks Security Initiative were trying to claim that using Sci-Hub was a terrible security risk -- rather as PIPCU is now doing, and employing much of the same groundless scare-mongering. It's almost as if PIPCU, always happy to toe Big Copyright's line, has uncritically taken a few talking points from the Scholarly Networks Security Initiative and repackaged them in the current sensationalist press release. It would be great to know whether PIPCU and the Scholarly Networks Security Initiative have been talking about Sci-Hub recently. So I've submitted a Freedom of Information request to find out.

Follow me @glynmoody on Twitter, Diaspora, or Mastodon.

NFL's Thursday Night Football Goes Exclusive To Amazon Prime Video
Timothy Geigner | Wed, 24 Mar 2021 20:03:38 PDT
https://beta.techdirt.com/articles/20210324/10225846485/nfls-thursday-night-football-goes-exclusive-to-amazon-prime-video.shtml

While denialism over cord-cutting is still somewhat a thing, a vastly larger segment of the public can finally see the writing on the wall. While the cable industry's first brave tactic in dealing with the cord-cutting issue was to boldly pretend as though it didn't exist, industry executives have more recently realized that there is a bloodbath coming their way. Few roadblocks remain in the way of a full-on tsunami of cord-cutting, and one of the most significant of those is still live sports broadcasting. This, of course, is something I've been screaming about on this site for years: the moment that people don't need to rely on cable television to follow their favorite sports teams live, cable will lose an insane number of subscribers.

Over the past few years, the major American sports leagues have certainly inched in that direction. Notable for this post, 2017 saw the NFL ink a new mobile streaming deal with Verizon. The NFL had a long partnership with Verizon for mobile streaming already, but the notable aspect of the new deal was that NFL game streaming was suddenly not exclusive. Other streaming services could get in the game. And, while you can't draw a direct line to it, the tangential story of how the NFL just inked an exclusive deal with Amazon Prime for the broadcast rights for Thursday Night Football certainly shows you where this is all heading.

The deal runs from 2023 to 2033 and, according to a report from CNBC, will see Amazon pay $1 billion per year for the TNF package. Thursday Night Football is the NFL's newest and cheapest TV package, but the deal lets Amazon creep closer to parity with the NFL's other licensees, mainstream TV networks like Fox Sports, ABC/ESPN (Disney), CBS (Viacom), and NBC (Comcast). CNBC's report has the other four channels paying upward of $2 billion per year each, and unlike Amazon, the TV networks get to take turns airing the Super Bowl.

The exclusivity for Amazon seems like a mistake for the NFL, which really should want its product viewed in as many places as possible. On the other hand: 1 billion dollars a year. The Thursday lineups are typically one or two games each Thursday, far fewer than the slate of Sunday games. It's an incredible amount of money to pay just so Amazon can exclusively show the NFL's worst games of the week. But it also shows that Amazon understands the power and draw of live sports like this, and that the NFL understands the power and draw of streaming services.

Building on that point, the NFL is also loosening up what its other broadcast partners can do in terms of streaming games.

The NFL's new deal contains streaming provisions for the other providers, too. Each network can now simulcast their games on their streaming service, and some deals scored one or two streaming-exclusive games. Disney's ABC and ESPN games are also allowed on ESPN+, and ESPN+ will get one exclusive game per season, the London "International Series" game. NBC games can also appear on the streaming service Peacock, and Peacock is getting "an exclusive feed of a select number of NFL games." CBS can stream games on Paramount+. Fox Sports, which wasn't part of Disney's acquisition of Fox, apparently has a streaming service called "Tubi," which can now simulcast the Fox games.

All of which is to say that the NFL is widely opening up its games to be streamed in more and more places. This shouldn't come as the world's biggest surprise, frankly. The NFL is a money-making operation and it does its marketing and promotional work better than most leagues. The very smart people handling broadcast contracts for the league certainly can see where the future of broadcasting games is, and it sure looks like they are only going further and further into streaming.

If the other pro sports leagues follow suit, the end of cable television as we know it is nigh.

Content Moderation Case Study: Huge Surge In Users On One Server Prompts Intercession From Discord (2021)
Copia Institute | Wed, 24 Mar 2021 15:46:32 PDT
https://beta.techdirt.com/articles/20210324/15363146488/content-moderation-case-study-huge-surge-users-one-server-prompts-intercession-discord-2021.shtml

Summary: A wild few days for the stock market resulted in some interesting moderation moves by a handful of communications/social media platforms.

A group of unassociated retail investors (i.e. day traders playing the stock market with the assistance of services like Robinhood) gathering at the Wall Street Bets subreddit started a mini-revolution by refusing to believe Gamestop stock was worth as little as some hedge funds believed it was.

The initial surge in Gamestop's stock price was soon followed by a runaway escalation, some of it a direct response to a hedge fund's large (and exposed) short position. Melvin Capital -- the hedge fund targeted by Wall Street Bets denizens -- had announced its belief Gamestop stock wasn't worth the price it was at and had put its money where its mouth was by taking a large short position that would only pay off if the stock price continued to drop.

As the stock soared from less than $5/share to over $150/share, people began flooding to r/wallstreetbets. This forced the first moderation move. Moderators briefly took the subreddit private in an attempt to stem the flow of newcomers and get a handle on the issues these sorts of influxes bring with them.

Wall Street Bets moved some of the conversation over to Discord, which prompted another set of moderation moves. Discord banned the server, claiming users routinely violated guidelines on hate speech, incitement of violence, and spreading misinformation. This was initially viewed as another attempt to rein in vengeful retail investors who were inflicting pain on hedge funds: the Big Guys making sure the Little Guys weren't allowed on the playing field. (Melvin Capital received a $2.75 billion cash infusion after its Gamestop short was blown up by Gamestop's unprecedented rise in price.)

But it wasn't as conspiratorial as it first appeared. The users who frequented a subreddit that described itself as "4chan with a Bloomberg terminal" were very abrasive, and the addition of mics to the mix at the Discord server made things worse by doubling the amount of noise -- noise that often included hate speech and plenty of insensitive language.

The ban was dropped and the server was re-enabled by Discord, which announced it was stepping in to more directly moderate content and users. With over 300,000 users, the server had apparently grown too large, too quickly, making it all but impossible for Wall Street Bets moderators to handle on their own. This partially reversed the earlier narrative, turning Discord into the Big Guy helping out the Little Guy, rather than allowing them to be silenced permanently due to the actions of their worst users.

Decisions to be made by Discord:

  • Do temporary bans harm goodwill and chase users from the platform? Is this the expected result when this happens?

  • Is participating directly in moderation of heavily-trafficked servers scalable?

  • How much moderation should be left in the hands of server moderators? Should they be allowed more flexibility when moderating questionable content that may violate Discord rules but is otherwise still legal?

Questions and policy implications to consider:
  • Are temporary bans of servers more effective than other, more scaled escalation efforts? Are changes more immediate?

  • Is the fallout from bans offset by the exit of problem users? Or do server bans tend to entrench the worst users to the detriment of new users and moderators who are left to clean up the mess?

  • As more users move to Discord, is the platform capable of stepping in earlier to head off developing problems before they reach the point a ban is warranted?

  • Does offloading moderation to users of the service increase the possibility of rules violations? If so, should Discord take more direct control earlier when problematic content is reported?

Resolution: The Wall Street Bets Discord server is still up and running. Its core clientele likely hasn't changed much, which means moderation is still a full-time job. An influx of new users following press coverage of this particular group of retail traders may dilute the user base, but it's unlikely to turn WSB into a genteel community of stock market amateurs. Discord's assistance will likely be needed for the foreseeable future.

Originally published on the Trust & Safety Foundation website.

Drone Company Wants To Sell Cops A Drone That Can Break Windows, Negotiate With Criminals
Tim Cushing | Wed, 24 Mar 2021 13:38:06 PDT
https://beta.techdirt.com/articles/20210318/13031246449/drone-company-wants-to-sell-cops-drone-that-can-break-windows-negotiate-with-criminals.shtml

A drone manufacturer really really wants cops to start inviting drones to their raiding parties. This will bring "+ whatever" to all raiding party stats, apparently. BRINC Drones is here to help... and welcomes users to question the life choices made by company execs that led to the implementation of its splash page.

If these cops don't really look like cops to you, you're not alone. And by "you," I also mean BRINC Drones, which apparently wants to attract the warriors-in-a-war-zone mindset far too common in law enforcement. BRINC has a new drone -- one that presents itself as being just as warlike as its target audience.

Drones are definitely an integral part of the surveillance market. BRINC wants to make them an integral part of the "drug raids and standoffs with reluctant arrestees" market. Sure, anyone can smash a window. But how cool would it be if a drone could do it?

The LEMUR is built by BRINC Drones to help police locate, isolate, and communicate with suspects. It has an encrypted cellphone link for two-way communication and can right itself if it crashes upside down. But it’s that remarkable glass smasher that sets it apart from the many other police drones we’ve seen.

BRINC says the 5-inch blade has tungsten teeth and can spin at up to 30,000 RPM. It’s enough to break tempered, automotive, and most residential glass. It’s an add-on feature to the drone, but it can be quickly attached with three thumb screws.

Whatever tactical gains might be made by a two-way communication device for negotiations will presumably be undone by the Black Mirror-esque destruction of windows by a remotely controlled flying nuisance. Assuming the suspect isn't able to, I don't know, throw a coat over the drone, negotiations will proceed between the human person and the bug-like drone sitting on the ground in front of them.

And let's not underplay the window-smashing. Cops do love them some broken windows. Break a window, justify your policing, as the old "broken windows" philosophy goes. "Command presence" is the term often deployed to excuse the physical destruction that precedes physical violence by police officers. Disorient and disarm. That's why cops smash all the doors and windows they can when raiding houses.

But if you give cops a specialized tool that is cheap to buy and cheap to replace, it will swiftly move from a last resort to Plan A. Case in point: flashbang grenades. These are not harmless weapons. They are war weapons designed to disorient lethal forces. Instead of being used in only the most desperate of situations, they're used as bog standard raid initiators. That's how they end up in the beds of toddlers, resulting in severe burns -- something the involved cops claimed was an innocent mistake. How could they have known the house might have contained children, they said, stepping over a multitude of children's toys scattered across the lawn of the house they were raiding.

This drone will become as common as a flashbang grenade if they're cheap enough to obtain. The difference between a severely burned toddler and a flayed toddler is something the courts will get to sort out. And no matter how the court decision goes for cops, no one can put the skin back on injured toddlers. "By any means necessary," say drug warriors, forgetting the Constitution and a bunch of other state-level safeguards are in place to supposedly prevent the ends from justifying the means.

But there's even more here. And the "more" is inadvertently hilarious. BRINC claims its drone can open doors. But that's only true if by "open" you mean "make incrementally more open." Check out the drone "opening" a door in this BRINC promotional video.

LOL.

It gets funnier when you add physics to the mix. BRINC promises cops a warlike machine. Science says these drones can be swiftly turned from aggressors to victims simply by allowing them to operate in the advertised manner.

Viewers of Battlebots know if a whirling blade comes into contact with a stationary object, at least SOME of the energy is absorbed by the blade. A 2.4-pound drone would be knocked reeling from a 30,000 rpm collision. The video doesn’t actually show that moment of impact, but the fact it hits the floor upside down suggests there’s no fancy electronics or damping to keep it stable.

Sure, we can laugh at this now. And we can hope our local law enforcement officials aren't so taken in by a presentation that keeps its boots slick with saliva. But tech will keep moving forward and BRINC's fantasies will edge closer to reality.

But what we have to ask ourselves (and hope our government agencies will consider) is how much this actually might subtract from the deadly human costs of police-citizen interactions. Sending a drone smashing through a window hardly sounds like de-escalation, even if the end result is a walkie-talkie hitching a ride on a modded tech toy. There's still a lot of intrinsic value in human interactions. Putting a flying buffer between law enforcement and those they're attempting to "save" sounds like a recipe for more violence, rather than less. The physical approach of dystopia through a recently shattered window is hardly calming, especially for those already on edge.

Beware Of Facebook CEOs Bearing Section 230 Reform Proposals
Mike Masnick | Wed, 24 Mar 2021 12:05:07 PDT
https://beta.techdirt.com/articles/20210324/10392546486/beware-facebook-ceos-bearing-section-230-reform-proposals.shtml

As you may know, tomorrow Congress is having yet another hearing with the CEOs of Google, Facebook, and Twitter, in which various grandstanding politicians will seek to rake Mark Zuckerberg, Jack Dorsey, and Sundar Pichai over the coals regarding things that those grandstanding politicians think Facebook, Twitter, and Google "got wrong" in their moderation practices. Some of the politicians will argue that these sites left up too much content, while others will argue they took down too much -- and either way they will demand to know "why" individual content moderation decisions were made differently than they, the grandstanding politicians, wanted them to be made. We've already highlighted one approach that the CEOs could take in their testimony, though that is unlikely to actually happen. This whole dog and pony show seems to be all about no one being able to recognize one simple fact: that it's literally impossible to have a perfectly moderated platform at the scale of humankind.

That said, one thing to note about these hearings is that each time, Facebook's CEO Mark Zuckerberg inches closer to pushing Facebook's vision for rethinking internet regulations around Section 230. Facebook, somewhat famously, was the company that caved on FOSTA, and bit by bit, Facebook has effectively led the charge in undermining Section 230 (even as so many very wrong people keep insisting we need to change 230 to "punish" Facebook -- which just isn't true). Facebook is now perhaps the leading voice for changing 230, because the company knows that it can survive without it. Others? Not so much. Last February, Zuckerberg made it clear that Facebook was on board with the plan to undermine 230. Last fall, during another of these Congressional hearings, he more emphatically supported reforms to 230.

And, for tomorrow's hearing, he's driving the knife further into 230's back by outlining a plan to further cut away at 230. The relevant bit from his testimony is here:

One area that I hope Congress will take on is thoughtful reform of Section 230 of the Communications Decency Act.

Over the past quarter-century, Section 230 has created the conditions for the Internet to thrive, for platforms to empower billions of people to express themselves online, and for the United States to become a global leader in innovation. The principles of Section 230 are as relevant today as they were in 1996, but the Internet has changed dramatically. I believe that Section 230 would benefit from thoughtful changes to make it work better for people, but identifying a way forward is challenging given the chorus of people arguing—sometimes for contradictory reasons—that the law is doing more harm than good.

Although they may have very different reasons for wanting reform, people of all political persuasions want to know that companies are taking responsibility for combatting unlawful content and activity on their platforms. And they want to know that when platforms remove harmful content, they are doing so fairly and transparently.

We believe Congress should consider making platforms’ intermediary liability protection for certain types of unlawful content conditional on companies’ ability to meet best practices to combat the spread of this content. Instead of being granted immunity, platforms should be required to demonstrate that they have systems in place for identifying unlawful content and removing it. Platforms should not be held liable if a particular piece of content evades its detection—that would be impractical for platforms with billions of posts per day—but they should be required to have adequate systems in place to address unlawful content.

Definitions of an adequate system could be proportionate to platform size and set by a third-party. That body should work to ensure that the practices are fair and clear for companies to understand and implement, and that best practices don’t include unrelated issues like encryption or privacy changes that deserve a full debate in their own right.

In addition to concerns about unlawful content, Congress should act to bring more transparency, accountability, and oversight to the processes by which companies make and enforce their rules about content that is harmful but legal. While this approach would not provide a clear answer to where to draw the line on difficult questions of harmful content, it would improve trust in and accountability of the systems and address concerns about the opacity of process and decision-making within companies.

As reform ideas go, this is certainly less ridiculous and braindead than nearly every bill introduced so far. It attempts to deal with the largest concerns that most people have -- what happens when illegal, or even "lawful but awful," activity is happening on websites and those websites have "no incentive" to do anything about it (or, worse, incentive to leave it up). It also responds to some of the concerns about a lack of transparency. Finally, to some extent it makes a nod at the idea that the largest companies can handle some of this burden, while other companies cannot -- and it makes it clear that it does not support anything that would weaken encryption.

But that doesn't mean it's a good idea. In some ways, this is the flip side of the discussion that Mark Zuckerberg had many years ago regarding how "open" Facebook should be regarding third party apps built on the back of Facebook's social graph. In a now infamous email, Mark told someone that one particular plan "may be good for the world, but it's not good for us." I'd argue that this 230 reform plan that Zuckerberg lays out "may be good for Facebook, but not good for the world."

But it takes some thought, nuance, and predictions of how this plays out to understand why.

First, let's go back to the simple question of what problem we're actually trying to solve. Based on the framing of the panel -- and of Zuckerberg's testimony -- it certainly sounds like there's a huge problem of companies not having any incentive to clean up the garbage on the internet. We've certainly heard many people claim that, but it's just not true. It's only true if you think that the only incentives in the world are the laws of the land you're in. But that's not true and has never been true. Websites do a ton of moderation/trust & safety work not because of what legal structure is in place but because (1) it's good for business, and (2) very few people want to be enabling cesspools of hate and garbage.

If you don't clean up garbage on your website, your users get mad and go away. Or, in other cases, your advertisers go away. There are plenty of market incentives to make companies take charge. And of course, not every website is great at it, but that's always been a market opportunity -- and lots of new sites and services pop up to create "friendlier" places on the internet in an attempt to deal with those kinds of failures. And, indeed, lots of companies have to keep changing and iterating in their moderation practices to deal with the fact that the world keeps changing.

Indeed, if you read through the rest of Zuckerberg's testimony, it's one example after another of things that the company has already done to clean up messes on the platform. And each one describes putting huge resources in terms of money, technology, and people to combat some form of disinformation or other problematic content. Four separate times, Zuckerberg describes programs that Facebook has created to deal with those kinds of things as "industry-leading." But those programs are incredibly costly. He talks about how Facebook now has 35,000 people working in "safety and security," which is more than triple the 10,000 people in that role five years ago.

So, these proposals to create a "best practices" framework, judged by some third party, in which you only get to keep your 230 protections if you meet those best practices, won't change anything for Facebook. Facebook will argue that its practices are the best practices. That's effectively what Zuckerberg is saying in this testimony. But that will harm everyone else who can't match that. Most companies aren't going to be able to do this, for example:

Four years ago, we developed automated techniques to detect content related to terrorist organizations such as ISIS, al Qaeda, and their affiliates. We’ve since expanded these techniques to detect and remove content related to other terrorist and hate groups. We are now able to detect and review text embedded in images and videos, and we’ve built media-matching technology to find content that’s identical or near-identical to photos, videos, text, and audio that we’ve already removed. Our work on hate groups focused initially on those that posed the greatest threat of violence at the time; we’ve now expanded this to detect more groups tied to different hate-based and violent extremist ideologies. In addition to building new tools, we’ve also adapted strategies from our counterterrorism work, such as leveraging off-platform signals to identify dangerous content on Facebook and implementing procedures to audit the accuracy of our AI’s decisions over time.

And, yes, he talks about making those rules "proportionate to platform size" but there's a whole lot of trickiness in making that work in practice. Size of what, exactly? Userbase? Revenue? How do you determine and where do you set the limits? As we wrote recently in describing our "test suite" of internet companies for any new internet regulation, there are so many different types of companies, dealing with so many different markets, that it wouldn't make any sense to apply a single set of rules or best practices across each one. Because each one is very, very different. How do you apply similar "best practices" on a site like Wikipedia -- where all the users themselves do the moderation -- to a site like Notion, in which people are setting up their own database/project management setups, some of which may be shared with others. Or how do you set up the same best practices that will work in fan fiction communities that will also apply to something like Cameo?

And, even the "size" part can be problematic. In practice, it creates so many wacky incentives. The classic example of this is in France, where stringent labor laws kick in only for companies at 50 employees. So, in practice, there are a huge number of French companies that have 49 employees. If you create thresholds, you get weird incentives. Companies will seek to limit their own growth in unnatural ways just to avoid the burden, or if they're going to face the burden, they may make a bunch of awkward decisions in figuring out how to "comply."

And the end result is just going to be a lot of awkwardness and silly, wasteful lawsuits for companies arguing that they somehow fail to meet "best practices." At worst, you end up with an incredible level of homogenization. Platforms will feel the need to simply adopt identical content moderation policies to ones who have already been adjudicated. It may create market opportunities for extractive third party "compliance" companies who promise to run your content moderation practices in the identical way to Facebook, since those will be deemed "industry-leading" of course.

The politics of this obviously make sense for Facebook. It's not difficult to understand how Zuckerberg gets to this point. Congress is putting tremendous pressure on him and continually attacking the company's perceived (and certainly, sometimes real) failings. So, for him, the framing is clear: set up some rules to deal with the fake problem that so many insist is real, of there being "no incentive" for companies to do anything to deal with disinformation and other garbage, knowing full well that (1) Facebook's own practices will likely define "best practices" or (2) that Facebook will have enough political clout to make sure that any third party body that determines these "best practices" is thoroughly captured so as to make sure that Facebook skates by. But all those other platforms? Good luck. It will create a huge mess as everyone tries to sort out what "tier" they're in, and what they have to do to avoid legal liability -- when they're all already trying all sorts of different approaches to deal with disinformation online.

Indeed, one final problem with this "solution" is that you don't deal with disinformation by homogenization. Disinformation and disinformation practices continually evolve and change over time. The amazing and wonderful thing that we're seeing in the space right now is that tons of companies are trying very different approaches to dealing with it, and learning from those different approaches. That experimentation and variety is how everyone learns and adapts and gets to better results in the long run, rather than saying that a single "best practices" setup will work. Indeed, zeroing in on a single best practices approach, if anything, could make disinformation worse by helping those with bad intent figure out how to best game the system. The bad actors can adapt, while this approach could tie the hands of those trying to fight back.

Indeed, that alone is the very brilliance of Section 230's own structure. It recognizes that the combination of market forces (users and advertisers getting upset about garbage on the websites) and the ability to experiment with a wide variety of approaches, is how best to fight back against the garbage. By letting each website figure out what works best for their own community.

As I started writing this piece, Sundar Pichai's testimony for tomorrow was also released. And it makes this key point about how 230, as is, is how to best deal with misinformation and extremism online. In many ways, Pichai's testimony is similar to Zuckerberg's. It details all these different (often expensive and resource intensive) steps Google has taken to fight disinformation. But when it gets to the part about 230, Pichai's stance is the polar opposite of Zuckerberg's. Pichai notes that they were able to do all of these things because of 230, and changing that would put many of these efforts at risk:

These are just some of the tangible steps we’ve taken to support high quality journalism and protect our users online, while preserving people’s right to express themselves freely. Our ability to provide access to a wide range of information and viewpoints, while also being able to remove harmful content like misinformation, is made possible because of legal frameworks like Section 230 of the Communications Decency Act.

Section 230 is foundational to the open web: it allows platforms and websites, big and small, across the entire internet, to responsibly manage content to keep users safe and promote access to information and free expression. Without Section 230, platforms would either over-filter content or not be able to filter content at all. In the fight against misinformation, Section 230 allows companies to take decisive action on harmful misinformation and keep up with bad actors who work hard to circumvent their policies.

Thanks to Section 230, consumers and businesses of all kinds benefit from unprecedented access to information and a vibrant digital economy. Today, more people have the opportunity to create content, start a business online, and have a voice than ever before. At the same time, it is clear that there is so much more work to be done to address harmful content and behavior, both online and offline.

Regulation has an important role to play in ensuring that we protect what is great about the open web, while addressing harm and improving accountability. We are, however, concerned that many recent proposals to change Section 230—including calls to repeal it altogether—would not serve that objective well. In fact, they would have unintended consequences—harming both free expression and the ability of platforms to take responsible action to protect users in the face of constantly evolving challenges.

We might better achieve our shared objectives by focusing on ensuring transparent, fair, and effective processes for addressing harmful content and behavior. Solutions might include developing content policies that are clear and accessible, notifying people when their content is removed and giving them ways to appeal content decisions, and sharing how systems designed for addressing harmful content are working over time. With this in mind, we are committed not only to doing our part on our services, but also to improving transparency across our industry.

That's standing up for the law that helped enable the open internet, not tossing it under the bus because it's politically convenient. It won't make politicians happy. But it's the right thing to say -- because it's true.

Verizon Again Doubles Down On Yahoo After 6 Years Of Failure
Karl Bode | Wed, 24 Mar 2021 10:51:04 PDT
https://beta.techdirt.com/articles/20210323/08410346475/verizon-again-doubles-down-yahoo-after-6-years-failure.shtml

You might recall that Verizon's attempt to pivot from grumpy old telco to sexy new Millennial ad brand hasn't been going so well. Oddly, mashing together two failing 90s brands in AOL and Yahoo, and renaming the coagulated entity "Oath," didn't really impress many people. The massive Yahoo hack, a controversy surrounding Verizon snoopvertising, and the face plant by the company's aggressively hyped Go90 streaming service (Verizon's attempt to make video inroads with Millennials) didn't really help.

By late 2018 Verizon was forced to acknowledge that its Oath entity was effectively worthless. By 2019, Verizon wound up selling Tumblr to WordPress owner Automattic at a massive loss after a rocky ownership stretch. Throughout all of this, Verizon has consistently pretended that this was all part of some amazing, master plan.

Those claims surfaced again this week with Verizon announcing that the company would be doubling and tripling down on the Yahoo experiment. For one, the company is launching Yahoo Shops, "a new marketplace destination featuring a curated, native shopping experience tailored to the user including innovative tech, from shoppable video to 3D try-ons, and more." It's also shifting its business model to focus more on subscriptions through Yahoo Plus, hoping to add on to the 3 million people that, for some reason, subscribe to products like Yahoo Fantasy and Yahoo Finance.

Again though, all of this sounds very much like unsurprising and belated efforts to mimic products and services that already exist. While surely somebody somewhere finds these efforts enticing, the fact that this is the end result of Verizon's $4.48 billion Yahoo acquisition in 2017 and its $4.4 billion AOL acquisition in 2015 is just kind of...meh. It's in no way clear how Verizon intends to differentiate itself in the market, and people who cover telecom and media for a living continue to find Verizon's persistence both adorable and amusing.

Again, for companies that have spent the better part of a generation as government-pampered natural monopolies, creativity, competition, innovation, and adaptation are alien constructs. Both AT&T and Verizon have thrown countless billions at trying to become disruptive players in new media and advertising, and the end result has been nothing but a parade of stumbles. In fact AT&T's probably been a better poster child for this than even Verizon, given it spent $200 billion on megamergers only to lose around 8 million TV subscribers in just a few years. Growth for growth's sake isn't a real strategy.

The ultimate irony is that both companies even managed to successfully convince regulators at the FCC to effectively self-immolate, and even that couldn't buy either company the success they crave and believe they're owed. Neither did the billions gleaned from the Trump tax cuts, which resulted in more layoffs than innovation. There are oodles of lessons here for those looking to learn from them, but absolutely no indication that's actually going to ever happen.

Daily Deal: Way Pro No Code Landing Page Builder
Daily Deal | Wed, 24 Mar 2021 10:46:04 PDT
https://beta.techdirt.com/articles/20210324/10152546484/daily-deal-way-pro-no-code-landing-page-builder.shtml

Creating a site has been made easier with Way Pro. This landing page builder is an easy, no-code, and component-ready platform that helps you execute lead generation campaigns faster. It comes with tons of templates and components that make your page beautiful and powerful. With a responsive design, Way Pro makes every element and section mobile-ready. Its quick setup takes less than 30 seconds to publish a landing page and start getting results. You can also easily export your leads with a click. It's on sale for $35.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.

If Trump Ever Actually Creates A Social Network Of His Own, You Can Bet It Will Rely On Section 230
Mike Masnick | Wed, 24 Mar 2021 09:33:00 PDT
https://beta.techdirt.com/articles/20210323/07195146472/if-trump-ever-actually-creates-social-network-his-own-you-can-bet-it-will-rely-section-230.shtml

There have been rumors for ages that former President Donald Trump might "start" a social network of his own, and of course, that talk ramped up after he was (reasonably) banned from both Twitter and Facebook. Of course Trump is not particularly well known for successfully "starting" many businesses. Over the last few decades of his business career, he seemed a lot more focused on just licensing his name to other businesses, often of dubious quality. So it was no surprise when reports came out last month that, even while he was President, he had been in talks with Parler to join that site in exchange for a large equity stake in the Twitter-wannabe-for-Trumpists. For whatever reason, that deal never came to fruition.

But, over the weekend, Trump spokesperson (and SLAPP suit filer) Jason Miller told Fox News that Trump was preparing to launch his own social network in the next few months. Amusingly, right before Miller made this claim, he noted exactly what I had said about how Trump being banned from Twitter and Facebook wasn't censorship, since Trump could get all the press coverage he wanted:

“The president’s been off of social media for a while,” he told Fox News Media Buzz host Howard Kurtz, “[but] his press releases, his statements have actually been getting almost more play than he ever did on Twitter before.”

But he then followed that up with an offhand comment saying:

I do think that we’re going to see President Trump returning to social media in probably about two or three months here with his own platform.

And this is something that I think will be the hottest ticket in social media, it’s going to completely redefine the game, and everybody is going to be waiting and watching to see what exactly President Trump does. But it will be his own platform.

Many, many people have assumed that -- just like revealing his tax returns, infrastructure week, and his shiny new healthcare plan -- this announcement was just bluster and nonsense with no actual expectation that anything will ever be done. And that is perhaps likely. Even Trump's normal allies seem less than thrilled with the idea, though mainly because it may lead to further fragmenting among the "social media websites for MAGA conspiracy theorists." Others have, quite reasonably, pointed out that a social media site built on Trump's cult of personality is likely to be crazy boring and just not that interesting.

However, I kind of do hope that it actually comes to be, if only to see just how quickly Trump's new social network has to rely on Section 230 to defend itself in court. Remember, Trump spent the last year of his presidency slamming Section 230 (which he completely misrepresented multiple times and never seemed to actually understand). You may recall that one of the parting shots of his presidency was to try to block military funding if Congress wouldn't completely repeal Section 230.

But, of course, if a TrumpBook ever came into actual existence, you can bet that (1) it, like Parler, would need to speedrun the content moderation learning curve, and (2) would certainly be subject to some lawsuits regarding whatever insane crap its users would post. Trump's own comments on his own site would not be protected by Section 230, as that would be content created by an "employee" of the site itself, but the site would be protected from liability from whatever nonsense his sycophantic fans posted. And you can bet that his lawyers (assuming he could find any who would work for him) would very quickly lean on Section 230 to protect the company from any such lawsuits.

I mean, we've already seen Trump rely on anti-SLAPP laws in court, despite demands to "open up our libel laws." So he's already got a precedent for relying on the very same laws he hates in court. Hell, Trump has even relied on Section 230 in court to argue that he wasn't legally liable for his own retweets.

So, sure, let him start his own social network, and then be forced to recognize how Section 230 is actually something that he needs.

Despite A Decade Of Complaints, US Wireless Carriers Continue To Abuse The Word 'Unlimited'
Karl Bode | Wed, 24 Mar 2021 06:39:25 PDT
https://beta.techdirt.com/articles/20210322/10241046468/despite-decade-complaints-us-wireless-carriers-continue-to-abuse-word-unlimited.shtml

Way back in 2007, Verizon was forced to strike an agreement with the New York State Attorney General for falsely marketing data plans with very obvious limits as "unlimited." For much of the last fifteen years, numerous other wireless carriers, like AT&T, have also had their wrists gently slapped for selling "unlimited" wireless service that was anything but. Despite this, there remains no clear indication that the industry has learned much of anything from the punishment and experience. Most of the companies whose wrists were slapped have, unsurprisingly, simply continued on with the behavior.

The latest case in point is Boost Mobile, a prepaid wireless provider that was shoveled over to Dish Network as part of the controversial T-Mobile Sprint merger. For years the company has been selling prepaid "unlimited" data plans that aren't, by any definition of the word, unlimited -- in part because once users hit a bandwidth consumption threshold (aka a "limit"), they find their lines slowed to roughly 2G speeds (somewhere around 128 kbps) for the remainder of the billing period.

No regulators could be bothered to thwart this behavior, so it fell to the advertising industry's self-regulatory organization, the National Advertising Division (NAD), to dole out the wrist slaps this time. The organization last week told Boost that it should stop advertising its data plans as unlimited, after getting complaints from AT&T -- a company that spent a decade falsely advertising its plans as unlimited:

"AT&T had challenged Boost for its “Unlimited Data, Talk & Text” claims, asserting that the prepaid brand’s 4G LTE data plans are throttled to 2G speeds once a monthly data cap is hit. For the “Talk & Text” portion, NAD sided with Boost, saying the company was able to support its message."

Carriers (including AT&T) have historically tried to claim that a connection is still technically "unlimited" if you slow it to substandard speeds, something regulators and the courts haven't agreed with. NAD didn't much like this explanation either, noting that trying to use modern services on the equivalent of a 1998 ISDN line amounts to the same outcome:

"At 2G speeds, many of today's most commonly used applications such as social-media, e-mail with attachments, web browsing on pages with embedded pictures, videos and ads and music may not work at all or will have such significant delays as to be functionally unavailable because the delays will likely cause the applications to time out,” NAD stated in its decision."

Granted, NAD's punishments never really carry much weight. As a self-regulatory organization, NAD's function is basically to pre-empt tougher, more comprehensive regulatory action on things like false advertising (action that is already pretty rare in telecom). So what usually happens is the organization steps in and doles out a few wrist slaps for ads that have already been running for a year or two, leaving little incentive for real reform in an industry long known for its falsehoods. Which is precisely why we keep reading this same story in the press with little substantive change.

Wed, 24 Mar 2021 03:35:25 PDT
Sidney Powell Asks Court To Dismiss Defamation Lawsuit Because She Was Just Engaging In Heated Hyperbole... Even When She Was Filing Lawsuits
Tim Cushing
https://beta.techdirt.com/articles/20210323/13071246480/sidney-powell-asks-court-to-dismiss-defamation-lawsuit-because-she-was-just-engaging-heated-hyperbole-even-when-she-was-filing.shtml

In January, Dominion Voting Systems sued former Trump lawyer Sidney Powell for defamation. The voting machine maker claimed the self-titled "Kraken" was full of shit -- and knowingly so -- when she opined (and litigated!) that Dominion had ties to the corrupt Venezuelan government and that it had rigged the election against Donald Trump by changing votes or whatever (Powell's assertions and legal filings were based on the statements of armchair experts and conspiracy theorists).

Sidney Powell has responded to Dominion's lawsuit with what is, honestly, about the best defense she could possibly muster. And that defense is, "I have zero credibility when it comes to voting fraud allegations and certainly any reasonable member of the public would know that." From Powell's motion to dismiss [PDF]:

Determining whether a statement is protected involves a two-step inquiry: Is the statement one which can be proved true or false? And would reasonable people conclude that the statement is one of fact, in light of its phrasing, context and the circumstances surrounding its publication…

Analyzed under these factors, and even assuming, arguendo, that each of the statements alleged in the Complaint could be proved true or false, no reasonable person would conclude that the statements were truly statements of fact.

In other words, these allegations were just Powell's "heated" opinions and should be viewed as protected expression. These wild accusations based on hearsay and YouTube videos were nothing more than contributions to the "robust discourse" surrounding the 2020 election.

As political speech, it lies at the core of First Amendment protection; such speech must be “uninhibited, robust, and wide-open.” N.Y. Times Co., 376 U.S. at 270. Additionally, in light of all the circumstances surrounding the statements, their context, and the availability of the facts on which the statements were based, it was clear to reasonable persons that Powell’s claims were her opinions and legal theories on a matter of utmost public concern. Those members of the public who were interested in the controversy were free to, and did, review that evidence and reached their own conclusions—or awaited resolution of the matter by the courts before making up their minds. Under these circumstances, the statements are not actionable.

Maybe so, as far as public appearances go. But Powell also made the same allegations in her election-related litigation. Somehow, Powell evidently feels that calling her statements nothing more than "protected expression" should contribute to her defense against defamation claims, rather than add to the weaponry Dominion can deploy against her.

All the allegedly defamatory statements attributed to Defendants were made as part of the normal process of litigating issues of momentous significance and immense public interest. The statements were tightly focused on the legal theories they were advancing in litigation and the evidence they had presented, or were going to present, to the courts in support of their claims that the presidential election was stolen, denying millions of Americans their constitutional rights to “one person, one vote” by deliberately mis-counting ballots, diminishing the weight of certain ballots while enhancing the weight of others and otherwise manipulating the vote tabulation process to achieve a pre-determined result.

It's a solid defense. Sort of. Claiming your wild speculation was just mildly-informed wild speculation that anyone of a reasonable mind would have viewed as nothing more than highly opinionated hot takes on election fraud is a good way to get out of defamation lawsuits. Powell isn't wrong here: discussions about issues of public interest are given more First Amendment leeway, especially when both parties involved are public figures.

But this defense ignores one critical fact -- one Dominion has accounted for. This "robust discussion" wasn't limited to press conferences and Fox News appearances. It was also the basis for lawsuits filed by Sidney Powell -- lawsuits in which she presented these same allegations as facts backed by sworn statements. Sure, it takes a court to sort the baseless allegations from the actionable ones, but filing a lawsuit in a court and signing it means the plaintiff believes all allegations to be true until otherwise proven false. And while there are some protections for allegations made in court, it's pretty tough to argue averred statements of fact are also just harmless opinion tossed into the highly charged political ether.

Powell's response claims her comments fall into the "exaggeration and hyperbole" end of the spectrum -- an area of opinion that gets a lot of First Amendment coverage because it's both heated and open to interpretation by "reasonable" people. But "exaggeration and hyperbole" isn't generally welcome in sworn pleadings. Knowingly shoveling bullshit into a courtroom and asking the court to weigh in on its relevance and honesty isn't something courts tend to tolerate. It's this exact thing that has led to Michigan state officials asking the court system to sanction Powell for her bad faith litigation.

We'll see where the court takes it from here, but it's hard to see a court responding favorably to a motion to dismiss that basically says no one should take Powell's allegations seriously… except for courts handling cases in which she's the one filing complaints.

Tue, 23 Mar 2021 20:06:24 PDT
New Year, Same You: Twitch Releases Tools To Help Creators Avoid Copyright Strikes, Can't Properly Police Abuse
Timothy Geigner
https://beta.techdirt.com/articles/20210318/10311146445/new-year-same-you-twitch-releases-tools-to-help-creators-avoid-copyright-strikes-cant-properly-police-abuse.shtml

Readers here will remember that the last quarter of 2020 was a very, very bad time for streaming platform Twitch. It all started when the RIAA came calling on the Amazon-owned platform, issuing a slew of DMCA takedown notices over all sorts of music included in the recorded streams of creators. Instead of simply taking the content down and issuing a notice to creators, Twitch perma-deleted the content in question, giving creators no option to file a counternotice. After an explosive backlash, Twitch apologized, but still didn't offer any clarity or tools for creators to understand what might be infringing content and what was being targeted. Instead, during its remote convention, Twitch only promised more information and tools in the coming months.

Five months later, Twitch has finally informed its creators of the progress it's made on that front: tools on the site to help creators remove material flagged as infringing and some more clarity on what counts as infringement.

Twitch announced in an email to streamers that the site has added new tools today to help creators see where they stand with takedown requests and copyright strikes. Twitch also added tools to let streamers mass delete their recorded streams. It’s a smart move because it gives streamers better tools to play on the right side of copyright law. (If you don’t, and you rack up enough copyright strikes, you get permabanned.)

Now, in the event that a streamer gets hit with a DMCA takedown request, it’ll show up in their on-site inbox; Twitch’s video producer will also show the number of copyright strikes a channel has received. In addition, streamers can now unpublish or delete all their VODs at once (or in batches of 20 at a time).

It's not that Twitch's new tools are a bad thing. More clarity for creators and an increased ability to granularly address DMCA notices for their content are decidedly good things. But it all feels extremely late in coming because, obviously, Twitch should have known that DMCA notices targeting creators from the copyright industries would be a thing.

But if you're looking for encouraging signs that Twitch is getting its shit together in policing its own site, you certainly won't find it in the story of how one creator got one of his accounts banned for an account name that was "harassment via username." The user he's accused of harassing would appear to be... himself.

It might be an understatement to say that popular Minecraft YouTuber and streamer George “GeorgeNotFound’’ Davidson had a weird weekend. Within two days, he got banned from Twitch, possibly un-banned, definitely banned again, and unbanned (again?). Why? “Harassment via username,” according to Twitch. Problem is, the only person he could have possibly been harassing was himself.

The new, different ban email from Twitch accused him of “harassment via username” and once again informed him that the suspension was indefinite—aka, a ban. This one also further elaborated on what exactly he might have done, without telling him exactly what he definitely did. Examples included “having a username that explicitly insults another user,” “having a username that threatens negative action towards another user,” and “having a username that promotes self-harm in conjunction with malicious chat activity, such as telling another user to kill themselves.”

Obviously, "ThisIsNotGeorgeNotFound" does not do any of those things. And, yet, his account was banned for a second time. He has since had the account unbanned. Which, fine, but what the hell is going on at Twitch that it never seems to get any of this right? And if the platform can't be trusted to do something as relatively simple as properly policing creators' handles, why would anyone have any confidence that it's going to navigate waters as treacherous as the Copyright Seas any better?

Look, Twitch grew up fast. And nobody expects any growing platform to be perfect from the get go. But with the backing of a parent company like Amazon, it certainly should be able to do better by its creators than this.

Tue, 23 Mar 2021 15:53:05 PDT
Connecticut Legislature Offers Up Bill That Would Make Prison Phone Calls Free
Tim Cushing
https://beta.techdirt.com/articles/20210318/11293946447/connecticut-legislature-offers-up-bill-that-would-make-prison-phone-calls-free.shtml

A lot of rights just vanish into the ether once you're incarcerated. Some of this makes sense. You have almost no privacy rights when being housed by the state. Your cell can be searched and your First Amendment right to freedom of association can be curtailed in order to prevent criminal conspiracies from being implemented behind bars.

But rights don't disappear completely. The government has an obligation to make sure you're cared for and fed properly -- something that rarely seems to matter to jailers.

Treating people as property has negative outcomes. Not only are "good" prisoners expected to work for pennies a day, but their families are expected to absorb outlandish expenses just to remain in contact with their incarcerated loved ones. The government loves its paywalls and it starts with prison phone services.

Cellphone adoption changed the math for service providers. After a certain point, customers were unwilling to pay per text message. And long distance providers realized there was little they could do to keep screwing over phone users who called people outside of their area codes. Some equity was achieved once providers realized "long distance" was only a figure of profitable speech and text messages were something people expected to be free, rather than a service that paid phone companies per character typed.

But if you're in prison, it's still 1997. The real world is completely different but your world is controlled by companies that know how to leverage communications into a profitable commodity. As much as we, the people, apparently hate the accused and incarcerated, they're super useful when it comes to funding local spending. Caged people are still considered "taxpayers," even when they can't generate income or vote in elections.

So, for years, we've chosen to additionally punish inmates by turning basic communication options into high priced commodities. And we've decided they don't have any right to complain, even when the fees are astronomical or prison contractors are either helping law enforcement listen in to conversations with their legal reps or making it so prohibitively expensive only the richest of us can support an incarcerated person's desire to remain connected to their loved ones.

Connecticut legislators have had enough. Whether it will be enough to flip the status quo table remains to be seen. But, for now, a bill proposed by the Connecticut House aims to strip the profit from for-profit service providers, as well as the for-profit prisons that pad their budgets with kickbacks from prison phone service providers. (h/t Kathy Morse)

Connecticut holds the dismal distinction of being the state with the most expensive prison phone calls in the country. But a new bill in the state legislature may soon make Connecticut the first state to make prison phone calls free.

Senate Bill 520 would require Connecticut state prisons to offer telephone or other communication to incarcerated people free of charge, at a minimum of 90 minutes per day. The state could not collect any revenue from operating these services.

Seems like a reasonable response. 90 minutes per day should make most calls from prisons free for all but the most talkative. And I hope those profiting from these services socked some money away for a legislative rainy day. They've certainly had the opportunity. As this report notes, prison call services raked in over $13 million in fees in 2018 alone. There's no reason to believe this amount declined in 2019 or 2020, especially when 2020 gave people millions of reasons to avoid in-person visits with anyone.

The bill [PDF] is short and sweet -- somewhat of a surprise considering it was crafted by public servants who often seem to believe they're being paid by the word. Here it is in its entirety:

AN ACT CONCERNING THE COST OF TELECOMMUNICATION SERVICES FOR INCARCERATED PERSONS.

Be it enacted by the Senate and House of Representatives in General Assembly convened:

That title 18 of the general statutes be amended to require the Department of Correction to provide voice or other telecommunication services to incarcerated persons free of cost for a minimum of ninety minutes per day.

Statement of Purpose: To provide certain cost-free telecommunication services for incarcerated persons.

As it says on the tin, the purpose of the legislation is to provide prisoners with free phone calls, rather than allow them to be subjected to per-minute fees last viewed as "reasonable" sometime in the early 1990s. (And only viewed as "reasonable" by long distance providers, not the captive market they provided service to. [And "captive" means people who have few options in terms of service providers, not just those locked behind physical bars.])

Expect significant pushback. And it won't just be coming from prison phone service providers like Securus. It will also come from local law enforcement agencies which receive a percentage of these fees -- something most people would call a kickback, even if law enforcement continues to argue that it isn't.

If this passes, Connecticut would be the first state to make prison phone calls free across its entire prison system. Pockets of prison phone fee resistance have popped up elsewhere (New York City, San Francisco), but nothing like this has yet been implemented at the state level.

This is the sort of legislation that should be adopted across the nation. Prisons -- for better or worse -- are a public service. They shouldn't be subject to the predatory behavior of private companies. Making it prohibitively expensive to talk to loved ones should be considered "cruel," if not "unusual." It serves no deterrent effect. All it does is enforce the unspoken fact that people in prisons are no longer considered "people." That's not how our justice system is supposed to work.

Tue, 23 Mar 2021 13:30:00 PDT
Techdirt Podcast Episode 275: The State Of Trust & Safety
Leigh Beadon
https://beta.techdirt.com/articles/20210323/12320046479/techdirt-podcast-episode-275-state-trust-safety.shtml

For some reason, a lot of people who get involved in the debate about content moderation still insist that online platforms are "doing nothing" to address problems -- but that's simply not true. Platforms are constantly working on trust and safety issues, and at this point many people have developed considerable expertise regarding these unique challenges. One such person is Alex Feerst, former head of Trust & Safety at Medium, who joins us on this week's episode to clear up some misconceptions and talk about the current state of the trust and safety field.

Follow the Techdirt Podcast on Soundcloud, subscribe via Apple Podcasts, or grab the RSS feed. You can also keep up with all the latest episodes right here on Techdirt.

Tue, 23 Mar 2021 12:07:25 PDT
North Carolina Legislators Push Bill That Would Prevent Cops, Prosecutors From Charging Six-Year-Olds For Picking Flowers
Tim Cushing
https://beta.techdirt.com/articles/20210319/13081346457/north-carolina-legislators-push-bill-that-would-prevent-cops-prosecutors-charging-six-year-olds-picking-flowers.shtml

This is today's law enforcement. While there are multiple societal and criminal problems that deserve full-time attention, our tax dollars are paying cops to turn our children into criminals. We don't have the luxury of pretending this isn't happening. Schools have welcomed cops into their confines, turning routine disciplinary problems into police matters.

While there may be some schools plagued by actual violent criminal activity, the stories that most often rise to the surface are those that involve violence by (uniformed) adults being inflicted on children. And I don't just mean legal minors -- a group that usually involves anyone under the age of 18. We're talking actual kids.

Here's a brief rundown of some notable cases involving "school resource officers," a term that suggests these cops aren't actually just cops, but rather an integral part of the school disciplinary system. But when SROs deal with children, they treat them just like they treat hardened criminals.

This is a post about cops in schools I put together back in 2013. In this one, students were arrested for engaging in a water balloon fight, a 14-year-old was arrested for wearing an NRA shirt, and a DC cop gave a 10-year-old a concussion for ditching out on his music class. That's the tip of the ugly iceberg covered in this post.

But let's look at a few more incidents.

- Cops arrested a 12-year-old for pointing "finger guns" at classmates.

- Cops strip searched an 8-year-old while "investigating" feces found on a school bathroom floor.

- Orlando (FL) police officers arrested a six-year-old, zip tying her hands. One cop said the child looked like an "infant." The arresting officer was later fired for not asking permission to arrest someone under the age of 12.

- A five-year-old was hogtied by an SRO for allegedly "battering" a school employee.

That's how we've chosen to run schools in this nation. Students are just grist for the "criminal justice" mill when cops are involved. Problems better handled by administrators and parents are turned over to government employees with guns and a toolset that turns every misbehaving student into a criminal on the verge of becoming hardened.

But that's only the entry point. Cuffing elementary school kids and hauling them off to face criminal charges is only the beginning. This dumps them into a system that is inclined to believe cops and view accused persons -- no matter their age -- as items to be processed and disposed of.

This report for the Winston-Salem (NC) Journal shows what happens once cops are done turning misbehaving students into criminal defendants. If you think this nation won't tolerate criminal court proceedings involving kindergarten students, well… you just don't know what we're capable of.

The 6-year-old dangled his legs above the floor as he sat at the table with his defense attorney, before a North Carolina judge.

He was accused of picking a tulip from a yard at his bus stop, his attorney Julie Boyer said, and he was on trial in juvenile court for injury to real property.

The boy's attention span was too short to follow the proceedings, Boyer said, so she handed him crayons and a coloring book.

"I asked him to color a picture," she said, "so he did."

That's just the beginning of the report: a case involving a picked flower and a six-year-old who had no idea what the criminal justice system was willing to do to him. In child porn cases, minors are considered unable to give consent to the sexual acts perpetrated upon them. But during criminal proceedings against minors, we're apparently supposed to believe minors know the intricacies of local ordinances and only violate them with the intent of committing criminal acts.

Sure, the NC Juvenile Justice Division may require parents to take part in court proceedings against their children, but it also apparently expects children to defend themselves against criminal charges -- something they're obviously incapable of doing. Hiring a lawyer helps but, as can be seen by this case, it doesn't stop courts from going through the ridiculous motions of, say, prosecuting a six-year-old for picking a flower.

Things might change in North Carolina, though. The Juvenile Justice division has offered its support of legislation that would raise the minimum age for criminal prosecution to 10. The Justice division would actually like to see it raised to 14, but state legislators seem unwilling to protect prepubescents from the machinations of a justice system that relies heavily on plea deals and -- despite stating otherwise -- tends to view accused people as guilty.

Then there's the other problem, which probably can't be fixed with legislation. The juvenile "justice" system plays favorites, starting with the law enforcement agency performing the arrest.

From 2015 through 2018 nearly 7,300 complaints were filed against children age 6 to 11 years old, according to numbers from the state Juvenile Justice section.

Of those complaints, 47% were against Black children, 40% were against white children and 7% against Hispanic or Latino children.

In general, 22% of the state's population is Black, 70% is white and 10% is Hispanic.
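Run the quick arithmetic on those (rounded) figures and the skew jumps out. Here's a minimal sketch using nothing but the percentages quoted above:

# Compare each group's share of complaints to its share of the state population.
# Percentages are the rounded figures cited in the report above.
complaint_share = {"Black": 47, "white": 40, "Hispanic/Latino": 7}    # % of complaints, ages 6-11, 2015-2018
population_share = {"Black": 22, "white": 70, "Hispanic/Latino": 10}  # % of state population

for group, share in complaint_share.items():
    ratio = share / population_share[group]
    print(f"{group} children: about {ratio:.1f}x their share of the population")

Black children show up in these complaints at more than twice their share of the population; white children at a little over half of theirs.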

That's how it works in North Carolina. White kids are usually taken to their parents. Minority kids are fed to the system. And the poorer you are, the worse it is. Courts punish kids and parents who are unable to attend hearings or court-ordered programs due to a lack of reliable transportation or conflicts with work schedules. For wealthy residents, court cases involving their kids are a mild inconvenience. For everyone else, they're capable of disrupting lives, ending employment, and saddling families with the stigma of criminal convictions. And all for doing nothing more than picking a flower at a bus stop.

Fortunately for parents, judges are willing to exercise the discretion law enforcement agencies and prosecutors won't. The child who picked a tulip had his case dismissed once the judge got to see the facts of the case. But even this dismissal meant his parents had to ensure their child appeared in court and had legal representation.

Legal defenders of children point out this isn't the only time prosecutors have been willing to throw the book at minors legally (and mentally) incapable of defending themselves against criminal charges.

Other cases have involved young children who have broken windows at a construction site with older friends and stood on a chair and thrown a pencil at a teacher, attorneys said. Another case involved sexual exploration with another child, attorneys said.

One of Mitchell's youngest clients was a 9-year-old with autism whose response to a teacher resulted in him being found guilty of assault on a government official.

Even if the state legislature manages to raise the age to 10, North Carolina will still be one of the worst states in the nation when it comes to accusing children of criminal acts. Only 12 states still set the minimum for prosecutions at ten. Most go higher. Some don't specify an age at all, apparently believing prosecutors are capable of exercising discretion. But, for years, North Carolina scraped along the bottom, allowing prosecutions against children as young as six years of age. That law was passed in 1979, but there's nothing on record that indicates why legislators thought justice would be better served by running kids this young through the system.

Then there are the schools, which are at least as culpable as any of the other government participants in the prosecution of children barely old enough to attend school.

Most of the complaints for kids under 12 come from schools, according to Juvenile Justice data.

From 2015 to 2018, 87% of the complaints against 6-year-olds and 58% of the complaints against 10-year-olds were from schools.

If administrators can't figure out how to effectively discipline their newest additions to their rosters without involving people with guns and prosecutors who wouldn't know discretion if it raided their house and arrested their children, then it definitely needs to be addressed with legislation that alters the contours of these judgment calls. Administrators have failed to exercise good judgment. So have the prosecutors who have relied on similarly logic-free cops to feed them underage defendants.

With any luck, the law will pass and we'll only be subjected to horror stories about kids over the age of ten being prosecuted for throwing pencils or picking flowers or whatever. Unfortunately, mindset can't be legislated. And, as long as administrators would rather throw children to the uniformed wolves for minor infractions, the justice system will never find itself running low on pre-teen defendants.

Tue, 23 Mar 2021 10:50:21 PDT
Senator Mark Warner Doesn't Seem To Understand Even The Very Basic Fundamentals Of Section 230 As He Seeks To Destroy It
Mike Masnick
https://beta.techdirt.com/articles/20210322/17020246469/senator-mark-warner-doesnt-seem-to-understand-even-very-basic-fundamentals-section-230-as-he-seeks-to-destroy-it.shtml

On Monday morning, Protocol hosted an interesting discussion on Reimagining Section 230 with two of its reporters, Emily Birnbaum and Issie Lapowsky. It started with those two reporters interviewing Senator Mark Warner about his SAFE TECH Act, which I've explained is one of the worst 230 bills I've seen and would effectively end the open internet. For what it's worth, since posting that I've heard from a few people that Senator Warner's staffers are now completely making up lies about me to discredit my analysis, while refusing to engage on the substance, so that's nice. Either way I was curious to see what Warner had to say.

The Warner section begins at 12 minutes into the video if you want to just watch that part and it's... weird. It's hard to watch this and not come to the conclusion that Senator Warner doesn't understand what he's talking about. At all. It's clear that some people have told him about two cases in which he disagrees with the outcome (Grindr and Armslist), but that no one has bothered to explain to him any of the specifics of either those cases, or what his law would actually do. He also doesn't seem to understand how 230 works now, or how various internet websites actually handle content moderation. It starts out with him (clearly reading off a talking point list put in front of him) claiming that Section 230 has "turned into a get out of jail free card for large online providers to do nothing for foreseeable, obvious and repeated misuse of their platform."

Um. Who is he talking about? There are, certainly, a few smaller platforms -- notably Gab and Parler -- that have chosen to do little. But the "large online platforms" -- namely Facebook, Twitter, and YouTube -- all have huge trust & safety efforts to deal with very difficult questions. Not a single one of them is doing "nothing." Each of them has struggled, obviously, in figuring out what to do, but it's not because of Section 230 giving them a "get out of jail free card." It's because they -- unlike Senator Warner, apparently -- recognize that every decision has tradeoffs and consequences and error bars. And if you're too aggressive in one area, it comes back to bite you somewhere else.

One of the key points that many of us have tried to raise over the years is that any regulation in this area should be humble in recognizing that we're asking private companies to solve big societal problems that governments have spent centuries trying, and failing, to solve. Yet, Warner just goes on the attack -- as if Facebook is magically why bad stuff happens online.

Warner claims -- falsely -- that his bill would not restrict anyone's free speech rights. Warner argues that Section 230 protects scammers, but that's... not true? Scammers still remain liable for any scam. Also, I'm not even sure what he's talking about because he says he wants to stop scamming by advertisers. Again, scamming by advertisers is already illegal. He says he doesn't want the violation of civil rights laws -- but, again, that's already illegal for those doing the discriminating. The whole point of 230 is to put the liability on the actual responsible party. Then he says that we need Section 230 to correct the flaws of the Grindr ruling -- but it sounds like Warner doesn't even understand what happened in that case.

His entire explanation is a mess, which also explains why his bill is a mess. Birnbaum asks Warner who from the internet companies he consulted with in crafting the bill. This is actually a really important question -- because when Warner released the bill, he said that it was developed with the help of civil rights groups, but never mentioned anyone with any actual expertise or knowledge about content moderation, and that shows in the clueless way the bill is crafted. Warner's answer is... not encouraging. He says he talked with Facebook and Google's policy people. And that's a problem, because as we recently described, the internet is way more than Facebook and Google. Indeed, this bill would help Facebook and Google by basically making it close to impossible for new competitors to exist, while leaving the market to those two. Perhaps the worst way to get an idea of what any 230 proposal would do is to only talk to Facebook and Google.

Thankfully, Birnbaum immediately pushed back on that point, saying that many critics have noted that smaller platforms would inevitably be harmed by Warner's bill, and asking if Warner had spoken to any of these smaller platforms. His answer is revealing. And not in a good way. First, he ignores Birnbaum's question, and then claims that when Section 230 was written it was designed to protect startups, and that now it's being "abused" by big companies. This is false. And Section 230's authors have said this is false (and one of them is a colleague of Warner's in the Senate, so it's ridiculous that he's flat out misrepresenting things here). Section 230 was passed to protect Prodigy -- which was a service owned by IBM and Sears. Neither of those were startups.

Birnbaum: Critics have said that small platforms and publishers will be disproportionately harmed by some of these sweeping Section 230 reforms, including those contained within your bill. So did you have an ongoing conversation with some of those smaller platforms before the bill was introduced? Are you open to any changes that would ensure that they are not disproportionately harmed while Facebook just pays more, which they can afford?

Warner: Section 230 in the late '90s was then about protecting those entrepreneurial startups. What it has transformed into is a "get-out-of-jail-free" card for the largest companies in the world, to not moderate their content, but frankly, to ignore repeated misuse and abuse in a way that we've tried to address.

What an odd way to respond to a question about smaller websites -- to immediately focus on the largest companies, and not ever address the question being raised.

Lapowsky jumps in to point out that Warner is not answering the question, and that to just focus on the (false) claim that the "big tech" platforms use 230 as a "get out of jail free card" ignores all the many smaller sites who use it to help deal with frivolous and vexatious litigation. Lapowsky follows that up by noting, correctly, that it's really the 1st Amendment that protects many of the things that Warner is complaining about, and that Section 230 has the procedural benefits that help get such cases kicked out of court earlier. Her question on this is exactly right and really important: Facebook and Google can spend the money to hire the lawyers to succeed on 1st Amendment grounds on those cases. Smaller platforms (like, say, ours) cannot.

Warner, looking perturbed, completely misses the point, and stumbles around with a bunch of half sentences before finally trying to pick a direction to go in. But one thing Warner does not do is actually answer Lapowsky's question. He just repeats what he claims his law will do (ignoring the damage it will actually do). He also claims that the law is being used against the wishes of the authors (the authors have explicitly denied this). He also claims -- based on nothing -- that the courts have "dramatically expanded" what 230 covers, and that other lawmakers don't understand the difference between the 1st Amendment and 230.

And then things go completely off the rails. Lapowsky pushes back, gently, on Warner's misunderstanding of the point and intent of 230, and Warner cuts her off angrily, again demonstrating his near total ignorance of the issue at hand and refusing to address her actual point, but just slamming the table insisting that the big companies are somehow ignoring all bad stuff on their websites. This is (1) simply not true and (2) completely unrelated to the point Lapowsky is making about every other website. What's incredible is how petulant Warner gets when asked to defend just the very basics of his terrible law.

Lapowsky: There's also another part of your bill, though, that deals with affirmative defense requirements. And the idea is basically so defendants couldn't just immediately fast track to the 230 defense to get cases quickly dismissed. And this is something a lot of critics say, effectively, guts the main purpose of Section 230 protections. So tell me a little bit about why you introduced this requirement.

Warner: Are you saying that the original intent of Section 230 was to in a sense, wipe away folks' legal rights?

Lapowsky: Not the intent, but certainly—

Warner: But if we're gonna go back to the intent of the legislation, versus the way the courts have so dramatically expanded what was potentially the original intent, I think it's one of the reasons why we're having this debate. And candidly, some policymakers may not be as familiar with the nuance and the differential between First Amendment rights, which we want to stand by and protect, and what we think has been the misuse of this section and the over-expansion of the court's rulings. We want to draw it back in and to make sure that things that are already illegal — like for example, illegal paid scams that take place on a lot of these platforms, I actually think there should be an ability to bring a suit against those kinds of illegal scams. The idea that you can flash your "get-out-of-jail" Section 230 card up front, before you even get to the merits of any of those discussions, I just respectfully think ought to not be the policy of the United States.

Lapowsky: My understanding of the intent was that this was a bill that was meant to encourage good faith efforts to moderate content, but also protect companies when they get things wrong, when they don't catch all the content or when they take something down that they shouldn't have. And obviously, this was written at a time when the internet—

Warner: Can I just ask, are you saying that Section 230 has reinforced good faith intent on moderation? Again, if that's your view of how it's been used by the large platforms, we just have a fundamental disagreement. I think Section 230 has been used and abused by the most powerful companies in the world.

Lapowsky: I wouldn't—

Warner: [They've been allowed] to not act responsibly, and instead it has allowed whether it's abuse of civil rights, abuse of individuals' personal behaviors as in the Grindr case, whether it's for large platforms to say, "Well, I know this scam artist is paying me to put up content that probably is ripping people off, but I'm going to use Section 230 as a way to prevent me from acting responsibly and actually taking down that content." So if you don't believe those things are happening, then that's a position to have, again, respectfully, I would just fundamentally disagree with you.

Lapowsky: It's not my position—

Warner: Emily, are there other questions? I thought we were gonna hear from a variety of questions. I'm happy to do this debate but I thought that—

There's so much to comment on here. First, Lapowsky is asking a specific question that Warner either does not understand or does not want to answer. She's pointing out, accurately, what 230 actually does and how it protects lots and lots of internet users and sites, beyond the "big" guys. And Warner is obsessing over some perceived problem that he fundamentally does not seem to understand. First of all, no large online platform wants scammers on their website. They don't need to hide behind Section 230, because public pressure in the form of angry users, journalists exposing the bad behavior, and just common sense has every major online site seeking to take down scams.

Warner's bill doesn't do anything to help in that situation other than make sure that if a smaller platform fucks up and misses a scammer, then suddenly they'll face crippling liability. The big platforms -- that Warner is so sure are doing nothing at all -- have massive trust and safety operations on the scale that no other site could possibly match. And they're going to miss stuff. You know why? Because that's the nature of large numbers. You're going to get stuff wrong.

As for the Grindr case, that actually proves the opposite point. The reason the Grindr case was a problem was not that Grindr fucked up, but that law enforcement ignored Matthew Herrick's complaint against his vengeful ex for too long. And eventually they got it right and arrested his ex who had abused Grindr to harass Herrick. Making Grindr liable doesn't fix law enforcement's failures. It doesn't fix anything. All it does is make sure that many sites will be much more aggressive in stifling all sorts of good uses of their platform to make sure they don't miss the rare abusive uses. This is fundamentally why 230 is so important. It creates the framework that enables nearly every platform to work to minimize the mistakes without fearing what happens if they get it wrong (exactly as Lapowsky pointed out, and which Warner refuses to address).

At this point, Lapowsky again tries to explain in more detail what she's asking, and a clearly pissed off Warner cuts her off, ignores her and turns to the other reporter, Birnbaum, to ask if she has any other questions for him, snottily noting that he expected questions from listeners. Lapowsky tries again, pointing out that she thinks it's important to hear Warner respond to the actual criticisms of his bill (rather than just repeating his fantasy vision of what is happening and what his bill does).

Finally, Lapowsky is able to raise one of the key problems we raised in our article: that the SAFE TECH Act, by wiping out 230 protections for any content for which money exchanges hands, is way too broad and would remove 230 for things like web hosting or any kind of advertising. Warner goes on a long rambling rant about how he thinks this should be debated around conference tables as they "iterate," but then also says that the companies should be forced to come to hearings to defend their content moderation practices. Then, rather than actually responding to the point that the language is incredibly broad, he immediately focuses in on one extreme case, the Armslist case, and demands to know Lapowsky's view on what should happen with that site.

But... notice that he never actually answers her question about the incredibly broad language in the bill. It's incredibly ridiculous to focus on an extreme outlier to defend language that would basically impact every website out there by removing any 230 protections for web hosts. This is the worst kind of political grandstanding. Take one extreme example, and push for a law that will impact everyone, and if anyone calls you on the broad reach, just keep pointing at that extreme example. It's disgusting.

At the end, Warner states that he's open to talking to smaller platforms, which is kind of laughable, considering that his staffers have been going around trashing and lying about people like myself that have pointed out the problems with his bill.

Either way, the interview makes clear that Warner does not understand how content moderation works, or what his bill actually does. Clearly, he's upset about a few extreme cases, but he doesn't seem to recognize that in targeting what he believes are two bad court rulings, he would completely upend how every other website works. And when pushed on that, he seems to get angry about it. That's not a good way for legislation to be made.

Tue, 23 Mar 2021 10:45:21 PDT
Daily Deal: Mini Wipebook Scan (2-Pack)
Daily Deal
https://beta.techdirt.com/articles/20210323/09592546477/daily-deal-mini-wipebook-scan-2-pack.shtml

What do you get when you cross a whiteboard and a notebook? Wipebook's technology transforms conventional paper into reusable and erasable surfaces. It has 10 double sided pages or 20 surfaces: 10 graph and 10 ruled. It's the perfect tool for thinkers, doers, and problem solvers. Use the Mini Wipebook to work things out, save to the cloud, and wipe old sketches completely clean. The Wipebook Scan App saves your work and uploads it to your favorite cloud services like Google Drive, Evernote, Dropbox, and OneDrive. This 2-pack is on sale for $52.95.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.

Tue, 23 Mar 2021 09:33:00 PDT
What I Hope Tech CEOs Will Tell Congress: 'We're Not Neutral'
Adam Kovacevich
https://beta.techdirt.com/articles/20210322/09521746466/what-i-hope-tech-ceos-will-tell-congress-were-not-neutral.shtml

The CEOs of Facebook, Google, and Twitter will once again testify before Congress this Thursday, this time on disinformation. Here's what I hope they will say:

Thank you Mister Chairman and Madam Ranking Member.

While no honest CEO would ever say that he or she enjoys testifying before Congress, I recognize that hearings like this play an important role -- in holding us accountable, illuminating our blind spots, and increasing public understanding of our work.

Some policymakers accuse us of asserting too much editorial control and removing too much content. Others say that we don’t remove enough incendiary content. Our platforms see millions of user-generated posts every day -- on a global scale -- but questions at these hearings often focus on how one of our thousands of employees handled a single individual post.

As a company we could surely do a better job of explaining -- privately and publicly -- our calls in controversial cases. Because it’s sometimes difficult to explain in time-limited hearing answers the reasons behind individual content decisions, we will soon launch a new public website that will explain in detail our decisions on cases in which there is considerable public interest. Today, I’ll focus my remarks on how we view content moderation generally.

Not “neutral”

In past hearings, I and my CEO counterparts have adopted an approach of highlighting our companies’ economic and social impact, answering questions deferentially, and promising to answer detailed follow up questions in writing. While this approach maximizes comity, I’ve come to believe that it can sometimes leave a false impression of how we operate.

So today I’d like to take a new approach: leveling with you.

In particular, in the past I have told you that our service is “neutral.” My intent was to convey that we don’t pick political sides, or allow commercial influence over our editorial content.

But I’ve come to believe that characterizing our service as “neutral” was a mistake. We are not a purely neutral speech platform, and virtually no user-generated-content service is.

Our philosophy

In general, we start with a Western, small-d democratic approach of allowing a broad range of human expression and views. From there, our products reflect our subjective -- but scientifically informed -- judgments about what information and speech our users will find most relevant, most delightful, most topical, or of the highest quality.

We aspire for our services to be utilized by billions of people around the globe, and we don’t ever relish limiting anyone’s speech. And while we generally reflect an American free speech norm, we recognize that norm is not shared by much of the world -- so we must abide by more restrictive speech laws in many countries where we operate.

Even within the United States, however, we choose to forbid certain types of speech which are legal, but which we have chosen to keep off our service: incitements to violence, hate speech, Holocaust denial, and adult pornography, just to name a few.

We make these decisions based not on the law, but on what kind of service we want to be for our users.

While some people claim to want “neutral” online speech platforms, we have seen that services with little or no content moderation whatsoever -- such as Gab and Parler -- become dominated by trolling, obscenities, and conspiracy theories. Most consumers reject this chaotic, noisy mess.

In contrast, we believe that millions of people use our service because they value our approach of airing a variety of views, but avoiding an "anything goes" cesspool.

We realize that some people won’t like our rules, and go elsewhere. I’m glad that consumers have choices like Gab and Parler, and that the open Internet makes them possible. But we want our service to be something different: a pleasant experience for the widest possible audience.

Complicated info landscape means tough calls

When we first started our service decades ago, content moderation was a much less fractious topic. Today, we face a more complicated speech and information landscape including foreign propaganda, bots, disinformation, misinformation, conspiracy theories, deepfakes, distrust of institutions, and a fractured media landscape. It challenges all of us who are in the information business.

All user-generated content services are grappling with new challenges to our default of allowing most speech. For example, we have recently chosen to take a more aggressive posture toward election- and vaccine-related disinformation because those of us who run our company ultimately don’t feel comfortable with our platform being an instrument to undermine democracy or public health.

As much as we aim to create consistent rules and policies, many of the most difficult content questions we face are ones we’ve never seen before, or involve elected officials -- so the questions often end up on my desk as CEO.

Despite the popularity of our services, I recognize that I’m not a democratically elected policymaker. I’m a leader of a private enterprise. None of us company leaders takes pleasure in making speech decisions that inevitably upset some portion of our user base - or world leaders. We may make the wrong call.

But our desire to make our platform a positive experience for millions of people sometimes demands that we make difficult decisions to limit or block certain types of controversial (but legal) content. The First Amendment prevents the government from making those extra-legal speech decisions for us. So it’s appropriate that I make these tough calls, because each decision reflects and shapes what kind of service we want to be for our users.

Long-term experience over short-term traffic

Some of our critics assert that we are driven solely by “engagement metrics” or “monetizing outrage” like heated political speech.

While we use our editorial judgment to deliver what we hope are joyful experiences to our users, it would be foolish for us to be ruled by weekly engagement metrics. If platforms like ours prioritized quick-hit, sugar-high content that polarizes our users, it might drive short term usage but it would destroy people’s long-term trust and desire to return to our service. People would give up on our service if it’s not making them happy.

We believe that most consumers want user-generated-content services like ours to maintain some degree of editorial control. But we also believe that as you move further down the Internet "stack" -- from applications like ours, to app stores, then cloud hosting, then DNS providers, and finally ISPs -- most people support a norm of progressively less content moderation at each layer.

In other words, our users may not want to see controversial speech on our service -- but they don’t necessarily support disappearing it from the Internet altogether.

I fully understand that not everyone will agree with our content policies, and that some people feel disrespected by our decisions. I empathize with those that feel overlooked or discriminated against, and I am glad that the open Internet allows people to seek out alternatives to our service. But that doesn’t mean that the US government can or should deny our company’s freedom to moderate our own services.

First Amendment and CDA 230

Some have suggested that social media sites are the “new public square” and that services should be forbidden by the government to block anyone’s speech. But such a rule would violate our company’s own First Amendment rights of editorial judgment within our services. Our legal freedom to prioritize certain content is no different than that of the New York Times or Breitbart.

Some critics attack Section 230 of the Communications Decency Act as a “giveaway” to tech companies, but their real beef is with the First Amendment.

Others allege that Section 230’s liability protections are conditioned on our service following a false standard of political “neutrality.” But Section 230 doesn’t require this, and in fact it incentivizes platforms like ours to moderate inappropriate content.

Section 230 is primarily a legal routing mechanism for defamation claims -- making the speaker responsible, not the platform. Holding speakers directly accountable for their own defamatory speech ultimately helps encourage their own personal responsibility for a healthier Internet.

For example, if car rental companies always paid for their renters’ red light tickets instead of making the renter pay, all renters would keep running red lights. Direct consequences improve behavior.

If Section 230 were revoked, our defamation liability exposure would likely require us to be much more conservative about who and what types of content we allowed to post on our services. This would likely inhibit a much broader range of potentially “controversial” speech, but more importantly would impose disproportionate legal and compliance burdens on much smaller platforms.

Operating responsibly -- and humbly

We’re aware of the privileged position our service occupies. We aim to use our influence for good, and to act responsibly in the best interests of society and our users. But we screw up sometimes, we have blind spots, and our services, like all tools, get misused by a very small slice of our users. Our service is run by human beings, and we ask for grace as we remedy our mistakes.

We value the public’s feedback on our content policies, especially from those whose life experiences differ from those of our employees. We listen. Some people call this “working the refs,” but if done respectfully I think it can be healthy, constructive, and enlightening.

By the same token, we have a responsibility to our millions of users to make our service the kind of positive experience they want to return to again and again. That means utilizing our own constitutional freedom to make editorial judgments. I respect that some will disagree with our judgments, just as I hope you will respect our goal of creating a service that millions of people enjoy.

Thank you for the opportunity to appear here today.

Adam Kovacevich is a former public policy executive for Google and Lime, former Democratic congressional and campaign aide, and a longtime tech policy strategist based in Washington, DC.

Tue, 23 Mar 2021 06:30:03 PDT Yet More Studies Show That 5G Isn't Hurting You Karl Bode https://beta.techdirt.com/articles/20210322/08522246465/yet-more-studies-show-that-5g-isnt-hurting-you.shtml https://beta.techdirt.com/articles/20210322/08522246465/yet-more-studies-show-that-5g-isnt-hurting-you.shtml On the one hand, you have a wireless industry falsely claiming that 5G is a near mystical revolution in communications, something that's never been true (especially in the US). Then on the other hand you have oodles of internet crackpots who think 5G is causing COVID or killing people on the daily, something that has also never been true. In reality, most claims of 5G health harms are based on a false 20 year old graph, and an overwhelming majority of scientists have made it clear that 5G is not killing you (in fact several incarnations are less powerful than 4G).

Last week, more evidence emerged that indicates that no, 5G isn't killing you. Researchers from the Australian Radiation Protection and Nuclear Safety Agency (ARPANSA) and the Swinburne University of Technology in Australia both released studies last week in the Journal of Exposure Science and Environmental Epidemiology. Both studies are among the first to look exclusively at 5G, and the only people who'll be surprised by their findings get all of their news from email forwards and YouTube. From an ARPANSA press statement on its first study's findings:

"‘In conclusion, a review of all the studies provided no substantiated evidence that low-level radio waves, like those used by the 5G network, are hazardous to human health,’ said Dr Karipidis, Assistant Director, Assessment and Advice at ARPANSA."

The second study, which focused on RF energy specifically in the millimeter wave band (the ultra-fast but limited range variant of 5G) also found no health impact that could be replicated by other studies:

"‘This meta-analysis of the experimental studies also presented little evidence of an association between millimetre waves and adverse health effects,’ said Dr Karipidis. "Studies that did report biological effects were generally not independently replicated and most of the studies reviewed employed low-quality methods of exposure assessment and control."

Now, that doesn't mean these studies are the definitive answer to questions surrounding 5G's impact on human health, but the evidence we do have continues to indicate that the technology isn't killing you. Meanwhile, the actual underlying scientific evidence is headed in the complete opposite direction of the conspiracy theorists and assorted dipshits who've been attacking telecom infrastructure (or employees) because some supplement-grifting nitwit said so on YouTube.

The reality is, and continues to be, that 5G isn't interesting enough to warrant hyperventilation over either its supposed revolutionary impact on communications or its supposed diabolical impact on human health. But since the boring truth isn't a real money maker, it continues to play second fiddle to bullshit, whether that bullshit is coming from the mouths of wireless carriers or complete crackpots.

Mon, 22 Mar 2021 20:05:43 PDT Sharyl Attkisson Lawsuit Against Rod Rosenstein Claiming She Was Hacked By Government Tossed Timothy Geigner https://beta.techdirt.com/articles/20210319/11052646456/sharyl-attkisson-lawsuit-against-rod-rosenstein-claiming-she-was-hacked-government-tossed.shtml https://beta.techdirt.com/articles/20210319/11052646456/sharyl-attkisson-lawsuit-against-rod-rosenstein-claiming-she-was-hacked-government-tossed.shtml Remember Sharyl Attkisson? If not, she is a former CNN and CBS journalist who made something of a name for herself both in reporting on the Obama administration, often critically, as well as for accusing that same administration of hacking into her computer and home network. Whatever you think of her reporting, her lawsuit against Eric Holder and the Justice Department over the hacking claims was crazy-pants. Essentially, she took a bunch of the same technological glitches all of us deal with on a daily basis -- flickering television screens, a stuck backspace key on her computer -- and wove that into a giant conspiracy against her and her reporting. She made a big deal in the suit, and her subsequent book on the matter, over some "computer experts" she relied on to confirm that she was a victim of government hacking, except those experts remained largely anonymous and were even, in some cases, third party people she'd never met. For that and other reasons related to how quickly she managed to do initial discovery, the case was tossed by the courts in 2019.

That didn't stop Attkisson's crusade against the government, however. In 2020, she filed suit against Rod Rosenstein, again accusing the government of spying on her and her family. To back this up, she again relied on an anonymous source, but that source has since been revealed. And, well...

The source was initially anonymous but later identified by Attkisson’s attorneys as Ryan White, an alleged former FBI informant. White is a QAnon conspiracy adherent who appears to have been the source of bizarre child-abuse allegations that Georgia attorney Lin Wood leveled at Chief Justice John Roberts last year, according to a report in the Daily Beast.

And so here we are yet again, with an extremely serious claim lodged against the federal government that relies on the tinfoil-hat crowd as "evidence." In addition, Attkisson again lays out the computer and network hacking claims, with a named "computer forensics" expert who apparently told her that there was spyware on her machine, that there were logs showing where these breaches originated (such as a Ritz Carlton hotel), and that the tools used for all of this appeared to be the sort typically only available to government actors. And here too, just as in her original lawsuit, the details and claims reveal that, like so many other conspiracy theories, this one has a duality problem: the federal government is supposedly so nefarious and so skilled at hacking that it completely compromised nearly every machine Attkisson used at work and at home, yet that same federal government was too stupid to mask the IP address from which it launched these attacks.

For example, her suit claims that these attacks were originally launched from the United States Postal Service in Baltimore, where some staff involved in infiltrating The Silk Road worked. The contention of her QAnon witness is that the spying on Attkisson somehow happened as an offshoot of a multi-agency task force against dark web dealings. And to believe all of that, you again have to believe that the government's l337 h4x0rs didn't bother to cover their USPS tracks.

But those are conversations about the merits of Attkisson's case. We don't really need to get that far, because her suit has again been tossed on essentially procedural grounds.

Bennett, an appointee of President George W. Bush, also ruled that there was inadequate indication that any surveillance of Attkisson involved activities in Maryland, which Bennett’s court has jurisdiction over.

“The Amended Complaint is devoid of any factual allegations with respect to actual conduct related to the alleged surveillance which occurred in Maryland,” Bennett wrote in his 20-page decision, issued on Tuesday. “The conclusory statements that the alleged surveillance was performed by individuals in Maryland, unsupported by any factual allegations, lie in contrast to the Plaintiffs’ numerous assertions regarding conduct performed and events which occurred in the Eastern District of Virginia.”

So, on the one hand, it's not as if the court is saying that Attkisson's claims are nonsense. And maybe this will lead to her refiling her lawsuit in the proper jurisdiction. On the other hand, it doesn't inspire a great deal of confidence in the merits of her claims or her legal team that they can't even get the case filed in the correct jurisdiction.

So, do I think this is the last we'll hear from Sharyl Attkisson's lawsuits over the supposed hacking of all her things? No, I doubt it. After all, she must certainly have another book to write and promote soon.

Mon, 22 Mar 2021 15:49:12 PDT It's The End Of Citation As We Know It & I Feel Fine Brian Frye https://beta.techdirt.com/articles/20210318/22393446451/end-citation-as-we-know-it-i-feel-fine.shtml https://beta.techdirt.com/articles/20210318/22393446451/end-citation-as-we-know-it-i-feel-fine.shtml Legal scholarship sucks. It’s interminably long. It’s relentlessly boring. And it’s confusingly esoteric. But the worst thing about legal scholarship is the footnotes. Every sentence gets one1. Banal statement of historical fact? Footnote. Recitation of hornbook law? Footnote. General observation about scholarly consensus? Footnote. Original observation? Footnote as well, I guess.

It’s a mess. In theory, legal scholarship should be free as a bird. After all, it’s one of the only academic disciplines to have avoided peer review. But in practice, it’s every bit as formalistic as any other academic discipline, just in a slightly different way. You can check out of Hotel Academia, but you can’t leave.

Most academic disciplines use peer review to evaluate the quality of articles submitted for publication. In a nutshell, anonymous scholars working in the same area read the article and decide whether it’s good enough to publish. Sounds great, except for the fact that the people reviewing an article have a slew of perverse incentives. After all, what if the article makes arguments you dislike? Even worse, what if it criticizes you? And if you are going to recommend publication, why not insist on citations to your own work? After all, it’s obviously relevant and important.

But the problems with peer review run even deeper. For better or worse, it does a pretty good job of ensuring that articles don't jump the shark and instead conform to the conventional wisdom of the discipline. Of course, conformity can be a virtue. But it can also help camouflage flaws. Peer review is good at catching outliers, but not so good at catching liars. As documented by websites like Retraction Watch, plenty of scholars have sailed through the peer review process by just fabricating data to support appealing conclusions. Diederik Stapel, eat your heart out!

Anyway, legal scholarship is an outlier, because there’s no peer review. Of course, it still has gatekeepers. But unusually, the people deciding which articles to publish are students, not professors. Why? Historical accident. Law was a profession long before it became an academic discipline, and law schools are a relatively recent invention. Law students invented the law review in the late 19th century, and legal scholars just ran with it.

Asking law students to evaluate the quality of legal scholarship and decide what to publish isn’t ideal. They don’t know anything about legal scholarship. They don’t even know all that much about the law yet. But they aren’t stupid! After all, they’re in law school. So they rely on heuristics to help them decide what to publish. One important heuristic is prestige. The more impressive the author’s credentials, the more promising the article. Or at least, chasing prestige is always a safe choice, a lesson well-observed by many practicing lawyers as well.

Another key heuristic is footnotes. Indeed, footnotes are almost the raison d’etre of legal scholarship. An article with no footnotes is a non-starter. An article with only a few footnotes is suspect. But an article with a whole slew of footnotes is enticing, especially if they’re already properly Bluebooked. After all, much of the labor of the law review editor is checking footnotes, correcting footnotes, adding footnotes, and adding to footnotes. So many footnotes!

Most law review articles have hundreds of footnotes. Indeed, the footnotes often overwhelm the text. It’s not uncommon for law review articles to have entire pages that consist of nothing but a footnote.

It’s a struggle. Footnotes can be immensely helpful. They bolster the author’s credibility by signaling expertise and point readers to useful sources of additional information. What’s more, they implicitly endorse the scholarship they cite and elevate the profile of its author. Every citation matters, every citation is good. But how to know what to cite? And even more vexing, how to know when a citation is missing? So much scholarship gets published, it’s impossible to read it all, let alone remember what you’ve read. It’s easy to miss or forget something relevant and important. Legal scholars tend to cite anything that comes to mind and hope for the best.

There’s gotta be a better way. Thankfully, in 2020, Rob Anderson and Trent Wenzel created ScholarSift, a computer program that uses machine learning to analyze legal scholarship and identify the most relevant articles. Anderson is a law professor at Pepperdine University Caruso School of Law and Wenzel is a software developer. They teamed up to produce a platform intended to make legal scholarship more efficient. Essentially, ScholarSift tells authors which articles they should be citing, and tells editors whether an article is novel.

It works really well. As far as I can tell, ScholarSift is kind of like Turnitin in reverse. It compares the text of a law review article to a huge database of law review articles and tells you which ones are similar. Unsurprisingly, it turns out that machine learning is really good at identifying relevant scholarship. And ScholarSift seems to do a better job at identifying relevant scholarship than pricey legacy platforms like Westlaw and Lexis.
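
To make the "Turnitin in reverse" idea concrete, here's a minimal Python sketch of the general technique: compare a draft against a corpus of prior articles and rank them by textual similarity. The toy corpus, the names, and the TF-IDF-plus-cosine-similarity approach are assumptions for illustration only -- this is not ScholarSift's actual code or method.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical corpus: titles mapped to the plain text of previously published articles.
corpus = {
    "Article A (fair use)": "the fair use doctrine in copyright law and transformative works ...",
    "Article B (qualified immunity)": "qualified immunity shields officers from civil liability ...",
    "Article C (section 230)": "section 230 and the moderation of user-generated content ...",
}

# The draft being checked for relevant prior scholarship.
draft = "an essay about copyright, fair use, and transformative appropriation art ..."

titles = list(corpus.keys())
vectorizer = TfidfVectorizer(stop_words="english")

# Fit one shared vocabulary over the corpus plus the draft; the draft is the last row.
matrix = vectorizer.fit_transform(list(corpus.values()) + [draft])

# Cosine similarity between the draft (last row) and every corpus article.
scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()

# Rank prior articles by similarity to the draft: a rough "consider citing these" list.
for title, score in sorted(zip(titles, scores), key=lambda pair: -pair[1]):
    print(f"{score:.3f}  {title}")

Run over a real corpus of published scholarship, the highest-scoring titles would amount to a rough "you should probably be citing these" list for an author or editor to scan.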

One of the many cool things about ScholarSift is its potential to make legal scholarship more equitable. In legal scholarship, as everywhere, fame begets fame. All too often, fame means the usual suspects get all the attention, and it’s a struggle for marginalized scholars to get the attention they deserve. Unlike other kinds of machine learning programs, which seem almost designed to reinforce unfortunate prejudices, ScholarSift seems to do the opposite, highlighting authors who might otherwise be overlooked. That’s important and valuable. I think Anderson and Wenzel are on to something, and I agree that ScholarSift could improve citation practices in legal scholarship.

But I also wonder whether the implications of ScholarSift are even more radical than they imagine. The primary point of footnotes is to identify relevant sources that readers will find helpful. That's great. And yet, it can also be too much. Often, people would rather just read the article and ignore the sources, which can become distracting, even overwhelming. Anderson and Wenzel argue that ScholarSift can tell authors which articles to cite. I wonder whether it couldn't also make citations pointless. After all, readers can use ScholarSift just as well as authors can.

Maybe ScholarSift could free legal scholarship from the burden of oppressive footnotes? Why bother including a litany of relevant sources when a computer program can generate it automatically? Maybe legal scholarship could adopt a new norm in which authors only cite works a computer wouldn’t flag as relevant? Apparently, it’s still possible. I recently published an essay titled “Deodand.” I’m told that ScholarSift generated no suggestions about what it should cite. But I still thought of some. The citation is dead; long live the citation.

Brian L. Frye is Spears-Gilbert Professor of Law at the University of Kentucky College of Law

[1] Orin S. Kerr, A Theory of Law, 16 Green Bag 2d 111 (2012) ("It is a common practice among law review editors to demand that authors support every claim with a citation. These demands can cause major headaches for legal scholars. Some claims are so obvious or obscure that they have not been made before. Other claims are made up or false, making them more difficult to support using references to the existing literature.").

Mon, 22 Mar 2021 13:42:48 PDT Drone Manufacturers Are Amping Up Surveillance Capabilities In Response To Demand From Government Agencies Tim Cushing https://beta.techdirt.com/articles/20210318/16202646450/drone-manufacturers-are-amping-up-surveillance-capabilities-response-to-demand-government-agencies.shtml https://beta.techdirt.com/articles/20210318/16202646450/drone-manufacturers-are-amping-up-surveillance-capabilities-response-to-demand-government-agencies.shtml The CBP loves its drones. It can't say why. I mean, it may lend them out to whoever comes asking for one, but there's very little data linking hundreds of drone flights to better border security. Even the DHS called the CBP's drone program an insecure mess -- one made worse by the CBP's lenient lending policies, which allowed its drones to stray far from the borders to provide dubious assistance to local law enforcement agencies.

The CBP's thirst for drones -- with or without border security gains -- is unslakeable. Thomas Brewster reports for Forbes that the agency is very much still in the drone business. It may no longer be using Defense Department surplus to fail at doing its job, but it's still willing to spend taxpayer money to achieve negligible gains in border security. And if the new capabilities present new constitutional issues, oh well.

This year, America’s border police will test automated drones from Skydio, the Redwood City, Calif.-based startup that on Monday announced it had raised an additional $170 million in venture funding at a valuation of $1 billion. That brings the total raised for Skydio to $340 million. Investors include blue-chip VC shops like Andreessen Horowitz, AI chipmaker Nvidia and even Kevin Durant, the NBA star.

The CBP is not alone. It has used government drones and private party drones to engage in border surveillance. But as prices continue to fall and the gap between government and private capabilities continues to narrow, the most bang for taxpayer buck may also be the most banging of constitutional rights only minimally observed near our nation's borders.

For the inland police, it's the same thing. Buy first and let the courts sort it out. Capabilities move drone surveillance far past the limitations of mounted cameras and law enforcement officers' eyes and ears. Pervasive, continuous surveillance is only a few dollars away. Sometimes, it's even free (as in taxpayer-funded lunches, not free as in freedom).

By Forbes’ calculation, based on documents obtained through Freedom of Information Act (FOIA) requests and Skydio’s public announcements, more than 20 police agencies across the U.S. now have Skydios as part of their drone fleets, including major cities like Austin and Boston, though many got one for free as part of a company project to help out during the pandemic.

The tech sector gains. So do its government patrons. Caesar has been rendered unto, but unto citizens, what? Well, the opportunity to be surveilled in greater detail for pennies on the dollar.

[Skydio] claims to be shipping the most advanced AI-powered drone ever built: a quadcopter that costs as little as $1,000, which can latch on to targets and follow them, dodging all sorts of obstacles and capturing everything on high-quality video. Skydio claims that its software can even predict a target’s next move, be that target a pedestrian or a car.

Seems like a problem. Surely we can count on the multiple layers of oversight to ensure we're not just characters in surveillance fanfic composed by people who never met a dystopia they didn't like.

Nope. Skydio is run by all-American boys who see themselves as updated Hardy Boys, providing tools to law enforcement to track down The Smugglers of Pirates Cove or whatever. The heads of Skydio rolled through MIT and Google before settling down to sell cheap surveillance gear to law enforcement agencies. And now they're lobbying the FAA to obtain clearance for surveillance drones to operate in air traffic space -- something the FAA tends to deny to hobbyists and researchers due to the possibility of interfering with airport operations.

Skydio has some big competitors in the market. DJI has taken the lead in supplying all and sundry with drones. But the company has taken a hit due to its link to Chinese manufacturing. Skydio is all about its USA location. It may use some Chinese components, but it assembles its products domestically, making it a safer bet for government agencies that have to comply with the latest wind-guided legislation thrust upon them by legislators who love scoring political points more than they love serving their constituents.

It's not just the CBP and a handful of local cops using Skydio's drones -- ones capable of keeping a very close eye on the movements of multiple people at one time. The DEA has also ordered some high-end Skydio drones to help it with whatever it imagines to be its primary purpose at this point. (The Drug War has been lost. We can only steal from the wallets of those still on the battlefield.)

DJI is hamstrung by anti-Chinese activity. But it will be back. Skydio -- which sells high-end cameras mounted to high-end drones -- doesn't face these obstacles. The winner of this arms race really doesn't matter. Drones will ultimately take over the job done by aircraft with higher buy-in costs and higher maintenance requirements.

And in the CBP's case, the eventual winners of this tech race will circle overland, unrestricted by constitutional niceties. The CBP is mostly out of the reach of courts and case law. Anything within 100 miles of a border (or port, or international airport) is considered fair game for intrusive searches and surveillance. We have people in power who can change this. But it seems unlikely they will. A vague threat is all that's needed to expand government power. Reining it back in requires thousands of dollars and voluminous legal arguments. The status quo only requires a shrug.

Mon, 22 Mar 2021 12:07:26 PDT Cop's Lies About A Traffic Stop Are Exposed By A Home Security Camera Located Across The Street Tim Cushing https://beta.techdirt.com/articles/20210319/14530946458/cops-lies-about-traffic-stop-are-exposed-home-security-camera-located-across-street.shtml https://beta.techdirt.com/articles/20210319/14530946458/cops-lies-about-traffic-stop-are-exposed-home-security-camera-located-across-street.shtml Cops lie.

This is undeniable. But why do cops lie? There seems to be little reason for it. Qualified immunity protects them against all but their most egregious rights violations. Internal investigations routinely clear them for all but their most egregious acts of misconduct. And police union contracts make it almost impossible to fire bad cops, no matter what they've done.

So, why do they lie? If I had to guess, it's because they've been granted so much deference by those adjudicating their behavior that "my word against theirs" has pretty much become the standard for legal proceedings. If a cop can push a narrative without more pushback than the opposing party's sworn statements, the cop is probably going to win.

This reliance on unreliable narrators has been threatened by the ubiquity of recording devices. Some devices -- body cameras, dashcams -- are owned by cops. And, no surprise, they often "fail" to activate these devices when some shady shit is going down.

But there are tons of cameras cops don't control. Every smartphone has a camera. And nearly every person encountering cops has a smartphone. Then there's the plethora of home security cameras whose price point has dropped so precipitously they're now considered as accessible as tap water.

The cops can control their own footage. And they do. But they can't control everyone else's. And that's where they slip up. A narrative is only as good as its supporting evidence. Cops refuse to bring their own, especially when it contradicts their narrative. But they can't stop citizens from recording their actions. This is a fact that has yet to achieve critical mass in the law enforcement community. A cop's word is only as good as its supporting facts. Going to court with alternative facts -- especially ones contradicted by nearby recording devices -- is a bad idea. (h/t TheUrbanDragon)

But that still doesn't stop cops from lying to courts. Cops in Lake Wales, Florida tried to claim a driver attacked them during a traffic stop -- something that could have resulted in a conviction on multiple felony charges. But footage obtained from a home security camera across the street from the traffic stop exposed the officers' sworn statements as perjury:

A Lake Wales man, who could have been sent to prison for years based on the claims in a police report, was saved by a home surveillance camera. It showed he didn’t attack an officer, as claimed in the report.

[...]

Officer [Colt] Black’s report said, “Cordero immediately exited the driver door and began to charge towards my patrol vehicle.”

It also indicated Cordero approached the officer with closed fists.

Sounds like an attempted assault on police officers -- an assault only negated by the swift (and brutal) acts of officers on the scene. But here's what really happened, according to an unblinking eye located across the street.

Cordero stood by his car for more than 20 seconds.

[...]

Black approached Cordero about 30 seconds later.

“He sucker-punched me from the back, right here, cracked a piece of my tooth out. I landed on the ground,” Cordero said.

Despite this being an assault on a citizen by Officer Black (with an assist by Officer Travis Worley), Officer Black claimed he "delivered an elbow strike" because he thought Cordero was reaching for a weapon. This lie was added to the lie that Cordero had "approached" the officers with "closed fists." The security camera recorded the whole thing, showing the officers attacking Cordero as he stood motionless by his car.

So, what was the excuse given after security cam footage showed Officer Black had lied? Officer Black lied again. He claimed he was unable to accurately recall the traffic stop because it was so "stressful."

After Cordero shared the footage with police, Officer Black wrote in another report, "I believe my perception was altered due to the high stress of the incident.”

If a regular traffic stop is so stressful it alters officers' recollection of events, no officer -- or at least not this officer -- should be considered trustworthy when it comes to testifying about traffic stops or any other unrecorded interactions with citizens. Presumably most interactions are stressful. But that's the job. And if the stress makes you make shit up about incidents that implicate a host of constitutional rights and people's actual physical freedom, you probably shouldn't be a cop.

Mon, 22 Mar 2021 10:48:26 PDT Senators Leahy And Tillis -- Both Strongly Supported By Hollywood -- Ask Merrick Garland To Target Streaming Sites Mike Masnick https://beta.techdirt.com/articles/20210319/00172446452/senators-leahy-tillis-both-strongly-supported-hollywood-ask-merrick-garland-to-target-streaming-sites.shtml https://beta.techdirt.com/articles/20210319/00172446452/senators-leahy-tillis-both-strongly-supported-hollywood-ask-merrick-garland-to-target-streaming-sites.shtml As you'll likely recall, at the very end of last year, Senator Thom Tillis, the head of the intellectual property subcommittee in the Senate, slipped a felony streaming bill into the grand funding omnibus. As we noted at the time, this bill -- which was a pure gift to Hollywood -- was never actually introduced, debated, or voted on separately. It was just introduced and immediately slipped into the omnibus. This came almost a decade after Senators had tried to pass a similar bill, connected to the SOPA/PIPA. You may even recall when Senator Amy Klobuchar introduced such a bill in 2011, Justin Bieber actually suggested that maybe Senator Klobuchar should be locked up for trying to turn streaming into a felony.

Of course, this whole thing was a gift to the entertainment industry, which has been a big supporter of Senator Tillis. With the flipping of the Senate, Senator Leahy has now become the chair of the IP subcommittee. As you'll also likely recall, he was the driving force behind the PIPA half of SOPA/PIPA, and has also been a close ally of Hollywood. So close, in fact, that they give him a cameo in every Batman film. Oh, and his daughter is literally one of Hollywood's top lobbyists in DC.

So I guess it's no surprise that Tillis and Leahy have now teamed up to ask new Attorney General Merrick Garland to start locking up those streamers. In a letter sent to Garland, they claim the following:

Unlawful streaming services cost the U.S. economy an estimated $29 billion per year. This illegal activity impacts growth in the creative industries in particular, which combined employ 2.6 million Americans and contribute $229 billion to the economy per year. In short, unlawful streaming is a threat to our creative industries and the economic security and well-being of millions of Americans.

If you've been following these stories long enough, you know where this number comes from. It's from a report put out by the US Chamber of Commerce's "Global IP Center" and written by NERA Consulting. The US Chamber of Commerce has always been a huge backer of stronger copyright -- mainly because the MPA pays it to be -- and NERA Consulting releases reports for Hollywood all the time. This report is not nearly as bad as some of their earlier reports, but it still makes a ton of assumptions about consumption that seem unlikely to be anywhere close to reality.

Either way, Tillis and Leahy want Garland to get down to doing exactly what Hollywood wants:

Now that you have been confirmed, will you commit to making prosecutions under the PLSA a priority? If so, what steps will you take during your first one hundred days to demonstrate your commitment to combating copyright piracy?

How quickly do you intend to update the U.S. Attorneys manual to indicate prosecutors should pursue actions under the PLSA?

Hurry up and throw streamers in jail!

As if recognizing just how bad this looks, they did include one final point as a sort of nod towards the fact that the DOJ probably shouldn't be going after ordinary everyday streamers.

When updating the U.S. Attorneys manual, what type of guidance do you intend to provide to make clear that prosecutions should only be pursued against commercial piracy services? Such guidance should make clear that the law does not allow the Department to target the ordinary activities of individual streamers, companies pursuing licensing deals in good faith, or internet service providers (ISPs) and should be reflective of congressional intent as reflected in our official record.

Just the fact that they need to include this certainly suggests that they know how dangerous the law they passed was, and how it could easily be misinterpreted and/or abused to go after such individuals or companies.

Hopefully, AG Garland realizes that he's got more important things to do than being Hollywood's latest cop on the beat.
