Mike Masnick's Techdirt Profile

About Mike Masnick, Techdirt Insider

Mike is the founder and CEO of Floor64 and editor of the Techdirt blog. He can be found on Twitter at www.twitter.com/mmasnick.

Posted on Techdirt - 6 January 2022 @ 12:11pm

Top Disney Lawyer To Become Top Copyright Office Lawyer, Because Who Cares About The Public Interest?

People at the Copyright Office seem to get mad at me every time I suggest that the Copyright Office is captured by Hollywood and point out how top officials there seem to bounce back and forth between the Copyright Office and Hollywood.

That’s not to say there aren’t some good people there, because there are. But the organization is dominated by former (and, if the past is any indication, soon-to-be-again) lobbyists and lawyers for the biggest copyright abusers on the planet. So it’s difficult to take the Office seriously as a steward of the public good (as it is supposed to be) when it’s currently headed by the former top lawyer at IFPI, who, before that, was the top IP lawyer for Time Warner. And when she then decides to hire Disney’s top “IP lawyer” as General Counsel of the Copyright Office (as has just been announced), it becomes really difficult not to be cynical.

This is what regulatory capture looks like.

But even worse, actions like this are why the public doesn’t believe in copyright. Over and over again, all we see is abuse of copyright; when the government then puts the same people who have abused copyright in charge of it at the Copyright Office, it makes the public cynical and (reasonably) distrustful of the Office’s intentions. That’s disappointing, as there are plenty of people with expertise in copyright law who would be great for the Copyright Office. But, for some reason, they never get hired into the top jobs unless they’ve spent time working for one of the giant Hollywood or recording industry organizations.

Posted on Techdirt - 6 January 2022 @ 09:32am

The Making Of A Moral Panic, Courtesy Of The NY Times

We’ve been talking a bit lately about how the media creates moral panics, especially ones that blame social media for problems that are, in large part, created by the media themselves.

And here’s another example of the virtuous cycle, in which the New York Times first gets to create a moral panic, and then gets to keep reporting on Congress “investigating” the moral panic the NY Times itself created. It started with an article in the NY Times discussing a website, which I will not name, that has created forums for those interested in suicide. The article is framed to say (1) that the website encourages suicide… and (2) that Section 230 is to blame for it. The reality, on both of those points, is a hell of a lot more complicated.

First off, discussions about “encouraging” suicide are always somewhat fraught. I’ve lost two friends to suicide, and it’s very, very natural to look for people to blame. But it’s often counterproductive, and no one can ever know for sure what actually caused someone to decide to end their life. A decade ago we talked about this a bit, in regards to two separate lawsuits looking to hold liable people who, it was argued, “drove” others to suicide. Except, as we noted at the time, when you blame people for “driving” or “encouraging” suicide, you actually give the act itself more power, because those thinking of killing themselves now know it will punish people who had been mean to them. In other words, trying to hold people liable for “encouraging” suicide can, unfortunately, encourage more suicide in and of itself.

Suicide itself is a very fraught topic. In early 2021, Katie Engelhart’s book The Inevitable: Dispatches on the Right to Die came out, and it’s worth reading. It made me, personally, feel conflicted about the idea of assisted suicide and the right to die — and reminded me that it’s impossible to decide that there’s a “right” answer here. Every case is unique and they all involve a whole bunch of difficult moral decisions that different people weigh in different ways. But blaming others for the very personal decisions that an individual makes seems incredibly dangerous. Yet, the entire structure of the NY Times piece seems to want to put the blame on a website. And, on Section 230.

But, as the article itself noted, the site in question exists because other sites removed such communities. It apparently was a response to Reddit shutting down a forum that discussed suicide:

It came online after Reddit shut down a group where people had been sharing suicide methods and encouraging self-harm. Reddit prohibited such discussion, as did Facebook, Twitter and other platforms. Serge wrote days after the new site opened that the two men had started working on it because they “hated to see the community disperse and disappear.” He assured users that “this isn’t our first rodeo and we know how to keep the website safe.”

It seems notable that Section 230’s encouragement for websites to determine on their own what content they find acceptable and what content they do not resulted in these major sites — Reddit, Facebook, Twitter and “other platforms” — not allowing such a discussion to happen on their websites. And yet… this separate community still formed. That should be notable, but the NY Times piece completely brushes past it. The fact is that people will form communities around such things. It is human nature. Indeed, it could be argued that if such communities had been allowed on places like Reddit, where they could be more directly and easily monitored by experts and professionals, there might have been more opportunity to intervene and to help troubled individuals.

Instead, by continually banning such communities, the end result is that they move to ever darker places online, where it is harder and harder to monitor them, and where it’s more likely that unhelpful people begin to exert more and more power over those sites. As we’ve discussed before (also in relation to a NY Times article!), so many of these kinds of attacks on the internet are really just people upset that the internet is shining light on larger societal problems that have not been fixed — including those around mental health. A better system would be one designed to figure out ways to intervene in sites like the one the NY Times covers, and to look for ways in which professionals could guide those who need real help to the kinds of resources they actually need.

Instead, it’s just victimization all the way down, with everyone looking to point the finger of blame, and no one looking to fix the underlying problem. They seem to think that if only this website were taken offline (despite the fact it was a response to other sites shutting down communities) then, magically, communities of people exploring how to kill themselves would disappear. That’s not how it works.

But, of course, once the NY Times has created the moral panic, grandstanding politicians leap in to fluff it up even more — not noting that this article alone almost certainly drove way more attention to the website than it had received in the past.

Just weeks after the NY Times told so many people where to go to learn about how to kill themselves, the same NY Times reporters announced triumphantly that Congress is on the job of investigating the site. Are they looking to fund more efforts to help deal with mental health issues? Of course not! Are they looking at ways to help guide troubled individuals to better, more helpful resources? What, are you a communist or something? No, Congress wants to punish this website and anyone else who promotes it (except, of course, for the NY Times, which only wrote a giant article about the site, telling people how to find it).

Responding to a New York Times investigation of the site published this month, the House Committee on Energy and Commerce on Monday released a bipartisan statement requesting briefings from search engines, web-hosting companies and other tech companies whose services might have been leveraged by the suicide site.

“It is imperative that companies take the threat of such sites seriously and take appropriate steps to mitigate harm,” said the statement from the panel, led by Representative Frank J. Pallone Jr., Democrat of New Jersey.

A representative for Microsoft’s search engine, Bing, told The Times last week that the company had altered its search engine to lower the ranking of the site, which has been linked to a trail of deaths. On Monday, Senator Richard Blumenthal, Democrat of Connecticut, sent a letter to Google and Bing asking the companies to fully remove the suicide site from their search results — a step further than either search engine was willing to take.

On Tuesday, Representative Lori Trahan, Democrat of Massachusetts, along with six other House members, wrote to Attorney General Merrick B. Garland asking what options the Justice Department had for investigating the site and its founders and what steps lawmakers could take to allow for a prosecution. Noting that other countries had taken steps to restrict access to the site, the lawmakers also asked about removing it from search results in the United States.

Look how proud they are of what they’ve set in action — without any recognition of how none of this actually helps and how much they themselves contributed to the promotion of the site. This is how the media creates a moral panic.

It would be great if we saw politicians respond to this by focusing on the underlying problem — but why do that when you can just randomly blame everyone’s favorite bogeyman, “big tech”?

Posted on Techdirt - 5 January 2022 @ 01:40pm

It's Great That Winnie The Pooh Is In The Public Domain; But He Should Have Been Free In 1982 (Or Earlier)

It’s been four years now since the US finally started allowing old works to enter the public domain after decades in which cultural landlords continually moved to actively remove works from the public domain. Every year since the US got back into the public domain business, we’ve happily run a game jam, encouraging people to make use of these newly public domain works, and this year is no different (check out the Gaming Like It’s 1926 game jam page if you’re interested!).

I’m not entirely sure why, but this year, people seem even more interested than in the past few years. We’ve received way more initial signups than in the past, and more community activity as well. I’m also seeing (outside of the game jam) more public awareness of these newly public domain works than in the past, when the public level of interest sometimes felt more muted. Hell, even Ryan Reynolds was quick to jump on the newly public domain works to help promote the MVNO Mint Mobile, in which he owns the largest stake.

Perhaps some of the excitement over this year’s public domain entries comes from the public getting used to the fact that every January 1st, new works enter the public domain. Or perhaps it just has to do with the prominence of some of this year’s works. When the 1923 class of the public domain came around, many people noted that there weren’t very many “big” cultural touchstones in that batch — and to some extent the same has been true of the last few years’ batches as well. The Great Gatsby had name recognition, but still felt kind of old and a bit stuffy.

This year’s inclusion of the first Winnie-the-Pooh book seems to have changed some of that. But, as Alan Cole rightly points out, it’s a complete travesty that Pooh wasn’t in the public domain decades ago.

As we’ve explained at length before, copyright term extension makes no sense, legally, ethically, or morally. The entire point of copyright law (in the US) is that it is an economic incentive to creators: if you create something creative and new, we give you an exclusive right to copy it for this length of time. If the work was then created, the incentive was enough. The deal was made. Clearly, the copyright term at the date of creation served its purpose — to make sure there was enough incentive to create that work. Extending the term of works already created does absolutely nothing to re-incentivize those old works. They were already made. All it does is take things away from the public. The public promised you an exclusive right for a certain number of years, and at the end the public was supposed to get access to those works.

In the case of Pooh, when A.A. Milne created it, copyright term in the US was 28 years, though it could be renewed for another 28 years. Thus, the maximum copyright that Milne could have possibly expected in the US was 56 years. In other words, he knew that when he published the work in the US, it would enter the public domain here by 1982 at the latest. The fact that Milne was British has no bearing on this, since he still chose to publish in the US under these rules, and that was clearly enough incentive at the time. (For what it’s worth, as I understand it, when he published the works in the UK, the term at the time was “life of the author plus 50” and seeing as he died in 1956, it would be expected that his works would enter the public domain in 2006).
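If you want to see just how lopsided the deal became, the arithmetic is simple enough to put in a few lines of Python. This is a back-of-the-envelope sketch, using only the term lengths discussed above:

# Back-of-the-envelope copyright term math for Winnie-the-Pooh (1926),
# using the term lengths discussed above. Illustrative only.

PUBLICATION_YEAR = 1926

# The deal Milne actually published under: a 28-year term,
# renewable once for another 28 years.
original_max_term = 28 + 28
original_pd_year = PUBLICATION_YEAR + original_max_term
print(f"Public domain under the original deal: {original_pd_year}")  # 1982

# The deal after decades of retroactive extensions: 95 years from
# publication, with the work entering the public domain on January 1
# of the following calendar year.
extended_term = 95
extended_pd_year = PUBLICATION_YEAR + extended_term + 1
print(f"Public domain under today's term: {extended_pd_year}")  # 2022

# The UK rule at the time: life of the author plus 50 years
# (Milne died in 1956).
uk_expected_pd_year = 1956 + 50
print(f"Expected UK public domain at publication: {uk_expected_pd_year}")  # 2006

print(f"Extra years taken from the public: {extended_pd_year - original_pd_year}")  # 40

Forty extra years, for a work that was already written, already published, and already profitable.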

Either way, it makes no sense at all that Pooh is only in the public domain now (and just the first book of Pooh). Cole’s piece goes much more in depth into the inherent trade-offs with copyright.

You may have noticed that most of the works discussed here are almost a century old. That is because 95 years is the length of copyright for many works; it is far too long. The most compelling arguments for copyright are about marshaling sufficient compensation to incentivize creators to work. And any work that still earns attention 95 years after publication has surely been lucrative enough that the author is compensated sufficiently. Or put another way, I doubt there were many artists or writers from 1926 who chose not to produce their best work because it might not receive royalties in 2022.

Extremely long-dated copyrights only matter to the wildly successful⁠—and if you are expecting to be wildly successful, you are likely to produce your work anyway. The additional years of copyright are what economists would call “inframarginal;” they don’t affect your decision because they don’t bring you close to the tipping point where you’d change your mind.

Given the costs of copyright⁠—that fewer people enjoy the work, that legal wrangling eats up resources, and that we’d often prefer to allocate rewards in society towards more current innovations⁠—it makes little sense to jealously guard intellectual property for as long as we do.

The issue, as always (which Cole doesn’t get into in his piece) is that many people — incorrectly — view copyright as some sort of moral right. Some of this is due to the concerted efforts by the copyright industry to pretend that this limited monopoly right is a form of “property” over the underlying work, and with that they have tried to establish some sort of analogy between tangible goods that you own and this limited legal right that was granted, with a time limit, in exchange for the act of creation.

But any rational look at the copyright system recognizes that’s (1) never been the purpose of copyright in the US and, even more importantly (2) does significantly more harm to the public than good — and that, therefore, it goes directly against the constitutional clause on copyrights, which only allows Congress to create a copyright system that “promotes the progress of science” (the useful arts part is about patents). Giving Disney the rights to control a cartoon bear for basically four extra decades doesn’t do that at all.

Posted on Techdirt - 5 January 2022 @ 09:29am

A Fight Between Facebook And The British Medical Journal Highlights The Difficulty Of Moderating 'Medical Misinformation'

There are multiple efforts under way in the US to pass laws that require social media sites to take down “medical misinformation.” As we’ve described repeatedly, these are really dangerous ideas. Bills like those from Senators Amy Klobuchar and Ben Ray Lujan seek to force social media to remove medical misinformation as declared by the Ministry of Truth… er… Secretary of Health & Human Services. Of course, it was not all that long ago that we had an administration that was actively anti-science, and wanted to declare anything that made the president look bad as “fake news.”

Also, in the midst of a pandemic, when the data and the science are rapidly evolving, what might seem reasonable at one point may later turn out to be misinformation — and vice versa. Forcing takedowns of supposed misinformation leads to all sorts of dangerous consequences. Hell, we saw this in China, where such a law was used to silence a doctor who tried to raise the alarm about COVID-19, and who was forced to apologize for spreading “untruthful information online.”

But there’s another aspect of this which people rarely try to deal with: content moderation involves a lot of very gray areas and an awful lot of context, much of which may not be immediately obvious. An ongoing war of words between the former British Medical Journal (now just “The BMJ”) and Meta/Facebook demonstrates nicely just how impossible it is to claim that “medical misinformation” must be taken offline. There’s a bit of background here, and it’s a, well, touchy subject, so try to go through the whole thing before you react.

First off, the BMJ is not, in any way, anti-vaccine. Somewhat famously, the BMJ was a key player in exposing the fraudulent behavior of Dr. Andrew Wakefield, whose fraudulent study created the modern anti-vax movement. That said, in November, The BMJ published an investigative journalism piece, based on a supposed “whistleblower,” suggesting that there were some data integrity issues with the way Pfizer’s vaccine was tested, specifically involving a research partner of Pfizer, Ventavia Research Group.

Ventavia responded to the allegations by noting that the supposed whistleblower in question had raised the issues a year earlier, and they were investigated and found to be unsubstantiated. That said, many reasonable people noted that this should be investigated further, and worried that it might further damage the public’s trust in science.

But, of course, you can fully predict what happened next. It didn’t just “damage the public’s trust in science,” the BMJ article instead was instantly championed by all of the big anti-vax voices all over social media as “proof” that the COVID vaccine was dangerous and rushed into approval — key talking points among that crowd, repeated despite tons of evidence that the vaccine is both incredibly effective and incredibly safe.

This led Lead Stories, a fact-checking organization, to fact-check the article, slap it with a “missing context” label, and call into question the way that people were interpreting the article:

Did the British Medical Association’s news blog reveal flaws that disqualify the results of a contractor’s field testing of Pfizer’s COVID-19 vaccine, and were the problems ignored by the Food & Drug Administration and by Pfizer? No, that’s not true: Pfizer and the FDA were made aware of the allegations about the contractor in 2020. Medical experts say the claims aren’t serious enough to discredit data from the clinical trials, which is also what Pfizer and the FDA say they concluded. The FDA says its position is unchanged: The benefits of the Pfizer vaccine far outweigh rare side effects and the clinical trial data are solid.

Because of this fact check, and because of the way the article was being used misleadingly by thousands of anti-vaxxers, users who tried to share The BMJ article were flagged with fact check warnings saying: “Missing context … Independent fact-checkers say this information could mislead people,” which is accurate, but incomplete, and very dependent on the context of who was sharing it and for what purpose.

The BMJ kinda flipped out about this and published an angry open letter to Mark Zuckerberg (who, I assure you, had nothing to do with the decision on the fact check and flagging). To be honest, I find the BMJ’s anger here completely disingenuous. They act like they don’t understand at all why Lead Stories highlighted the “missing context” point on their story, when the BMJ, of all publications, should be willing to acknowledge how its own article was being weaponized by ignorant anti-vaxxers.

But from November 10, readers began reporting a variety of problems when trying to share our article. Some reported being unable to share it. Many others reported having their posts flagged with a warning about “Missing context … Independent fact-checkers say this information could mislead people.” Those trying to post the article were informed by Facebook that people who repeatedly share “false information” might have their posts moved lower in Facebook’s News Feed. Group administrators where the article was shared received messages from Facebook informing them that such posts were “partly false.”

Readers were directed to a “fact check” performed by a Facebook contractor named Lead Stories.[2]

We find the “fact check” performed by Lead Stories to be inaccurate, incompetent and irresponsible.

— It fails to provide any assertions of fact that The BMJ article got wrong

— It has a nonsensical title: “Fact Check: The British Medical Journal Did NOT Reveal Disqualifying And Ignored Reports Of Flaws In Pfizer COVID-19 Vaccine Trials”

— The first paragraph inaccurately labels The BMJ a “news blog”

— It contains a screenshot of our article with a stamp over it stating “Flaws Reviewed,” despite the Lead Stories article not identifying anything false or untrue in The BMJ article

— It published the story on its website under a URL that contains the phrase “hoax-alert”

We have contacted Lead Stories, but they refuse to change anything about their article or actions that have led to Facebook flagging our article.

The BMJ open letter also gets unnecessarily snarky (which also seems out of character for a prestigious medical journal):

Rather than investing a proportion of Meta’s substantial profits to help ensure the accuracy of medical information shared through social media, you have apparently delegated responsibility to people incompetent in carrying out this crucial task.

That’s ridiculous. Clearly this is a difficult situation. Even if the reporting was accurate — there is crucial context here. Did the revelations support the claims of anti-vaxxers who were using it as evidence that the Pfizer vaccine was not safe? The answer is no, it did not. And there’s a strong argument that The BMJ could have and should have made that point a lot clearer in their own reporting, recognizing how the article would be weaponized by grifters and fed to the ignorant.

Lead Stories then responded to the BMJ, in fairly great detail, more or less saying “you can’t honestly be that naïve.”

It is ironic to read that BMJ.com objects to the headline on Lead Stories’ fact check of a BMJ.com article when the original BMJ piece carries a scare headline that oversells the whistleblower and overstates the jeopardy. Their November 2, 2021, headline “Covid-19: Researcher blows the whistle on data integrity issues in Pfizer’s vaccine trial” is the reason BMJ.com’s article has appeared in hundreds of Facebook posts and tweets, many by anti-vaccine activists using it as “proof” the entire clinical trial was fraudulent and the vaccine unsafe.

Lead Stories also points out that The BMJ’s headline to its article is extremely misleading, as it can be read to say that there were data integrity issues with the entirety of the Pfizer vaccine test, rather than 3 sites out of 153, and then also highlights that the whistleblower in question is not a scientist who is an expert on this. It also notes that the whistleblower appears to have some… questionable beliefs and associations regarding vaccines:

The BMJ.com article eventually gets around to saying she worked at the lab for just two weeks. But BMJ’s open letter fails to mention important context: The Brook Jackson Twitter account agreed with leading COVID misinformation-spreader Robert F. Kennedy Jr.’s criticism of the “Sesame Street” episode in which Big Bird encourages kids to get a COVID-19 vaccine. “Shocking, actually.” she wrote in a November 9, 2021, response to a Kennedy tweet blasting Sesame Street (archived here). Elsewhere on Twitter, the Brook Jackson account wrote to a vaccine-hesitant person that vaccination makes sense if a person is in a high-risk category. When the U.S. 5th Circuit Court of Appeals ruled against a federal employee vaccine mandate, she tweeted “HUGE!” and not with a frowny emoji.

Lead Stories talked to Jackson, looked at available documents (after BMJ refused to permit us to see their basis for the story and did not make the documents available on a transparency site). Unlike BMJ.com, Lead Stories then tested Jackson’s assertions with Pfizer, with the lab contractor in question and with the FDA and then published their responses. It’s not at all clear yet whether there are data integrity issues if you ask the other stakeholders, and that’s the crucial missing context. We also talked to experienced medical researchers for perspective, one of whose credentials BMJ editorial staff demeaned for reasons we can only imagine.

By talking to Ventavia, we contributed context BMJ.com missed: Ventavia said the whistleblower had not worked on the Pfizer trial, but Lead Stories set that straight by embedding in its story a copy of a letter, provided by Jackson in which she was expressly welcomed to the Pfizer trial team. That’s what we mean by context.

The BMJ has thus far failed to document what is “inaccurate” in the Lead Stories fact check, but again oversells by using that and other name-calling to vent frustration at our documentation of obvious missing context

All of this involves an awful lot of judgment calls, understanding of context, and a lot more. But under a law that requires the pulling down of medical misinformation, how the hell would anyone handle this kind of scenario? The BMJ story isn’t wrong per se, but there is a lot of important context that seems like it’s missing (which Lead Stories highlighted above). On top of that, there’s all the important context around how people are using the article and stretching an already weaker-than-it-seems story to pretend to be a lot more damning on the overall vaccine.

In other words, how the article is being represented and used is an important piece of context as well. And this is frequently the case with medical misinformation. People will take something that is factual or accurate, and present it out of context or in a misleading light, in order to make an argument that it doesn’t actually support. So which part is the “misinformation,” and how do you police that?

In an ideal world, we’d be able to see all the details and the back and forth, and figure it all out. Frankly, when I first heard about this — via The BMJ’s open letter — I initially thought that the details would support The BMJ, and that Facebook had mislabeled something (which, of course, happens all the time because of the old Masnick Impossibility Theorem). It was only after reading multiple articles on both sides of this, and going through the details of Lead Stories’ process, that I realized Lead Stories had (to me) the much stronger argument: there’s an awful lot of important context missing from The BMJ piece that you would hope a journal like that would have considered before publishing the article the way it did.

But to expect every social media platform to be able to make this determination on every piece of shared medical content out there is next to impossible — and putting legal liability on top of it, as Senators Klobuchar and Lujan want to do, would make it dangerously so.

Posted on Techdirt - 4 January 2022 @ 12:08pm

Eric Clapton Pretends To Regret The Decision To Sue Random German Woman Who Listed A Bootleg Of One Of His CDs On Ebay

There is no greater example of just how totally broken copyright is than the story of Eric Clapton suing a poor German woman — and winning — for copyright infringement after she listed (but did not sell) a bootleg CD that her late husband had purchased in a store. The woman had no idea it was a bootleg. She just knew that she had the CD and wanted to sell it, so she put it on eBay. Eric Clapton — who has been a despicably awful human being for decades — sued her over this and won. He won despite the fact that (1) she hadn’t bought the CD herself, and was just selling her late husband’s copy, (2) she had no idea it wasn’t authorized, (3) she didn’t actually sell it, as she quickly pulled down the listing, and (4) it was just one damn CD, listed for less than $12. And not only that: under German copyright law, she was told she also needed to pay Clapton’s legal fees.

Lots of people (reasonably) got mad at Clapton for pursuing this case, and we’ll get to that in a moment, but you should also be furious about copyright laws. Because that’s what makes this sort of absolute nonsense not just possible, but actively encouraged.

We’ve pointed out in the past that one of the biggest problems of copyright in the internet age is that it was designed for a time when “infringement” generally had to mean deliberate attempts by commercial entities to copy someone else’s work and profit off of it. The internet has laid bare just how unfit for purpose copyright is by suddenly turning us all into lawbreakers many times over every single day. At that point, it should be obvious that it’s the law that’s the main problem.

However, as we highlighted in a guest post a few years back, copyright hung on as relevant for a few decades in part because of the concept of “copyright toleration,” in which the vast, vast majority of those daily infringements were ignored by rights holders. But as that article detailed, we’ve seen less and less “toleration” these days, which explains things like the music industry’s nonsense demands for universal upload filters.

But, still, there remains some discretion in all of this, and that’s where Eric Clapton is still very much at fault. After this story came out, shortly before Christmas, and went viral with lots of people trashing Clapton for such nonsense, his team, trying to do a bit of damage control, put up a statement attempting to justify what happened. It’s… not particularly convincing.

Germany is one of several countries where sales of unauthorized and usually poor-quality illegal bootleg CDs are rife, which harms both the industry and purchasers of inferior product. Over a period of more than 10 years the German lawyers appointed by Eric Clapton, and a significant number of other well-known artists and record companies, have successfully pursued thousands of bootleg cases under routine copyright procedures.

So, it starts out with “the bootleggers made me do it.” And also a whiff of “well, the lawyers just kinda went off and did their own thing.” Which doesn’t actually help matters. When lawyers file lawsuits on your behalf, it’s still on your behalf and you’re still responsible for what they do. They represent you. If you don’t like how they represent you, well, that’s your problem.

It is not the intention to target individuals selling isolated CDs from their own collection, but rather the active bootleggers manufacturing unauthorised copies for sale. In the case of an individual selling unauthorised items from a personal collection, if following receipt of a “cease and desist” letter the offending items are withdrawn, any costs would be minimal, or might be waived.

Eric Clapton’s lawyers and management team (rather than Eric personally) identifies if an item offered for sale is illegal, and a declaration confirming that is signed, but thereafter Eric Clapton is not involved in any individual cases, and 95% of the cases are resolved before going to Court.

So, this is Clapton distancing himself from the lawsuit. Except, again, the lawyers filed it on his behalf and in his name. You can’t dismiss it so easily. If it’s other people doing it, then get better people. Also, a lot of this sounds like blaming the woman for not just settling up front. It’s very copyright-troll-like in its wording: “If only you had given in to our bullying up front, none of this would have happened.” It plays the victim because the woman didn’t just roll over when Clapton’s legal team sent its ridiculous threat letter.

And, again, if this was not who they intended to target, they could have stopped the case at any point, rather than push it forward. But they just kept pushing it forward. Indeed, again the statement blames the woman for responding in a dismissive fashion to their original cease-and-desist:

This case could have been disposed of quickly at minimal cost, but unfortunately in response to the German lawyers’ first standard letter, the individual’s reply included the line (translation): “feel free to file a lawsuit if you insist on the demands”. This triggered the next step in the standard legal procedures, and the Court then made the initial injunction order.

Note the passive voice here: “this triggered.” No, it “triggered” nothing. Apparently, the lawyers who Clapton hired decided to punish this woman because her reply was snarky, and filed a lawsuit because she listed a single CD on eBay. Again, take some responsibility.

If the individual had complied with the initial letter the costs would have been minimal. Had she explained at the outset the full facts in a simple phone call or letter to the lawyers, any claim might have been waived, and costs avoided.

More victim blaming. The entire whiny response is just blaming the victim over and over and over again. Then they blame her again for appealing the original injunction, which she was free to do because to anyone looking at the facts of the case, the whole thing seemed ridiculous. But Clapton blames her for appealing:

However, the individual appointed a lawyer who appealed the injunction decision. The Judge encouraged the individual to withdraw the appeal to save costs, but she proceeded. The appeal failed and she was ordered to pay the costs of the Court and all of the parties.

It was only after all this bad publicity (and a chance to spend seven paragraphs victim blaming) that Clapton’s management said he won’t take any further action. But even then, the letter does so in a way that still suggests the woman is to blame if any more costs accrue:

However, when the full facts of this particular case came to light and it was clear the individual is not the type of person Eric Clapton, or his record company, wish to target, Eric Clapton decided not to take any further action and does not intend to collect the costs awarded to him by the Court. Also, he hopes the individual will not herself incur any further costs.

Notice also what’s missing: there is no apology. There is no admission of doing anything wrong at all. There is only blaming the woman, the use of the passive voice, and trying to pass the buck from Clapton — in whose name, and over whose copyrights, this was done — by arguing that it’s all the lawyers’ fault. The lawyers Eric Clapton hired to file lawsuits on his behalf.

Copyright is broken and it’s a problem, but people like Eric Clapton demonstrate just how messed up the law is with nonsense like this.

Posted on Techdirt - 4 January 2022 @ 09:31am

Google Blocked An Article About Police From The Intercept… Because The Title Included A Phrase That Was Also A Movie Title

A week before Christmas, Radley Balko published a typically excellent story about the police chief in Little Rock, Arkansas, Keith Humphrey. It’s a good story, and you should read it. Humphrey, who was appointed police chief as part of a reformist campaign, has faced an ongoing campaign to take him down from stalwarts within the Little Rock police department, including a few others who wanted his job — but mainly from the local police union, the Fraternal Order of Police. Anyway, what caught my attention was that a few days after the article went live, The Intercept reported that it had been removed from Google search due to a DMCA copyright takedown notice.

This raised a lot of eyebrows, including questions of whether or not some of the characters who come out of the story negatively were abusing the DMCA to get the story disappeared from Google. It also surprised some people who didn’t realize that you could issue a DMCA complaint to Google to get something removed from search. Over the holidays, however, the actual story came out and it’s even dumber and more pointless than you could have imagined, but it does highlight (yet again) just how incredibly broken the copyright system is these days.

First off, the “Google removal” bit is nothing new. Even though you might think that DMCA takedowns should only be sent to sites that actually host the content in question, hosts are only one part of the DMCA 512 rules. That’s the part most people are familiar with: 512(c), with the rules for dealing with “information residing on systems or networks at direction of users.” That’s the part that has all the standard notification and takedown rules. But there’s also 512(d), which covers “information location tools” and says that if such a tool is notified of infringement — using the same method as in 512(c) — it has to “respond expeditiously to remove, or disable access to, the material that is claimed to be infringing or to be the subject of infringing activity.”

In other words, yes, if someone wants to block something from being found via Google, they can try to file a DMCA takedown claim, saying that the content is infringing. We’ve seen this used and abused plenty over the years. You may remember revenge pornster Craig Brittain who sought to use this system to get links to a bunch of articles about him removed from Google (this included the press release from the FTC about him settling with them for his sketchy revenge porn efforts). In fact, Brittain tried this multiple times.

Indeed, many copyright holding entities don’t even bother to go after the hosting of infringing materials — they find it more expedient to just have that content de-linked from Google. As Google notes in its transparency report, it has been asked to delete 5.5 billion URLs from its index. For what it’s worth, elsewhere, Google has reported that the vast majority of URLs it is told to delete aren’t even in its index — but it’s still pretty crazy. And while Google at least has a team that tries to review these requests, mistakes happen, because mistakes always happen at this scale.

In this case, this was clearly a mistake. But it’s an incredibly stupid mistake, so it’s worth highlighting. Notably, Google put the link to Balko’s story back into Google a few hours after The Intercept publicly complained about it, but it took another week or so until the actual DMCA notice made its way to the Lumen Database where we could finally see just what caused it. Was it the annoyed Fraternal Order of Police in Arkansas? Or just other annoyed cops?

No. It was a cybersecurity company that is apparently really bad at its job.

The notice came from Group-IB, a “cyber threat” company based in Singapore that claims to specialize in the “prevention of cyberattacks, online fraud, and IP protection.” It claims to be an “industry-leading cybersecurity solutions provider,” but it frankly looks like most of the other companies in the space, which probably shouldn’t exist. This notice was sent on behalf of a Russian firm: ООО “РАЗВЛЕКАТЕЛЬНЫЙ ОНЛАЙН-СЕРВИС.” As far as I can tell, this translates to “Online Entertainment Service Limited Liability Company” — about as generic a name as you can find. The company was only created in the summer of 2020, so it’s relatively new.

And, apparently, it hired Group-IB to issue takedown notices for a bunch of Netflix shows and movies. From the notice, I would guess that the Russian company is supposed to be trying to take down Russian translations of these Netflix shows, because while all of the names listed in the notice are from Netflix, they’re each listed with their English name… and their Russian name. And most of the URLs in the notice do appear to be to various sketchy film download sites. Also, in listing the “original URLs” (which are supposed to show the original copyright covered content), the notice lists both the American IMDB site URLs… and the Kinopoisk.ru links, which is a Russian IMDB-like site owned by Yandex, the big Russian internet company.

So, for example, the takedown for “Stranger Things” in this notice looks like this:

DESCRIPTION: series “Stranger Things / Очень странные дела” (2016)
ORIGINAL URLS:
01. https://www.kinopoisk.ru/series/915196/
02. https://www.imdb.com/title/tt4574334

So… it’s actually possible that this company was hired by Netflix, but that’s not entirely clear. Still, how does this lead to The Intercept having its story taken out of Google? Well, one of the takedowns was for the film The Old Guard, which is a Netflix production starring Charlize Theron, released in 2020. I’d never heard of it but it gets decent reviews on Rotten Tomatoes and apparently a sequel is being made.

Of course, you still may be shaking your head as to what any of this has to do with The Intercept’s story about Police Chief Keith Humphrey. But it’s right there in the takedown demand.

The other URLs listed do seem to lead to sketchy download sites, meaning they likely are pirated versions of the film. But why is The Intercept article targeted? The most obvious explanation — as stupid as it sounds — is that the subhead of The Intercept story mentions… “the old guard” as those trying to take down Chief Humphrey.

The title and subhead from The Intercept read:

BIG TROUBLE IN LITTLE ROCK
A Reformist Black Police Chief Faces an Uprising of the Old Guard

So… the most likely explanation here, as stupid as it seems, is that Netflix has some sort of deal with this silly Russian company, which hired the Singaporean “cybersecurity” firm Group-IB to “police” the internet for infringing copies of works in Russia… and in their lazy Googling for infringing copies of these Netflix shows and movies, they searched for “the old guard,” grabbed various URLs, and didn’t check all of them, meaning that The Intercept’s story about Chief Humphrey got caught up in the mess. And, especially over the holidays, with probably a lot of Google’s copyright takedown checkers on vacation, nobody caught that this was obviously a mistake until The Intercept (understandably) raised a stink.
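We can only guess at what Group-IB’s tooling actually looks like, but the failure mode is trivial to reproduce. Here is a purely hypothetical Python sketch (the URLs, headlines, and function are invented for illustration, not taken from any real notice) of what happens when a takedown bot flags search results on a title match alone, with no check that a page actually hosts the film:

# Hypothetical reconstruction of the failure mode described above: flag
# every search hit whose headline contains the work's title, without
# verifying that the page actually hosts infringing content. This is
# not Group-IB's real code; the URLs below are invented placeholders.

WORK_TITLE = "the old guard"

# Imagined search results for the phrase "the old guard"
search_hits = [
    ("https://sketchy-downloads.example/the-old-guard-2020-hd",
     "Download The Old Guard (2020) free in HD"),
    ("https://news-site.example/little-rock-police-chief",
     "A Reformist Black Police Chief Faces an Uprising of the Old Guard"),
]

def naive_flag(hits, title):
    """Return every URL whose headline mentions the title. No verification."""
    return [url for url, headline in hits if title in headline.lower()]

for url in naive_flag(search_hits, WORK_TITLE):
    print(f"Added to DMCA notice: {url}")

# Both URLs get flagged, including the news article that merely
# mentions "the old guard" in its subhead.

Even the laziest verification step, actually loading the flagged page, would have caught the mistake.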

For most normal people this would be yet another sign of how broken our copyright system has become, but unfortunately it’s the way things work these days.

Posted on Techdirt - 3 January 2022 @ 12:14pm

US Courts Realizing They Have A Judge Alan Albright Sized Problem In Waco

We’ve written a bit about Judge Alan Albright, the only judge in the US district court in Waco, Texas. Judge Albright, a former patent litigator, decided upon taking the bench that he’d run the friendliest court for patent cases in the entire country. He even went around advertising that patent plaintiffs should file there, and they’ve taken him up on it in droves. Since he’s the only judge in the district, all the cases get assigned to him, and, at last count, more than 25% of new patent cases are going to him. He’s so busy with patent cases that he had to hire a former patent troll lawyer as a magistrate judge to help him out.

He’s also, somewhat famously, been pissing off the notoriously pro-patent appeals court for patent cases, the Federal Circuit, by refusing to rule on transfer requests to more appropriate districts, while making the process for patent defendants more expensive and cumbersome. It got so bad that even the generally pro-patent Senator Thom Tillis sent a couple of letters to Supreme Court Chief Justice John Roberts (who oversees the court system) and to the USPTO, about Albright’s “forum selling.”

It took a little while, but the Administrative Office of the US Courts has finally responded to the letter sent by Tillis (and Senator Pat Leahy) to Chief Justice Roberts, noting that it appears to be somewhat aware of the problems with Judge Albright:

From a long-standing national policy perspective, the Judicial Conference strongly supports the random assignment of cases and the notion that all district judges remain generalists…. Random case assignment is used in all federal courts and operates to safeguard the Judiciary’s autonomy while deterring judge-shopping and the assignment of cases based on the perceived merits or abilities of a particular judge. It bears mentioning that in September 2021, I submitted my Final Report to Congress pursuant to Section (1)(e) of the Patent Pilot Program in Certain District Courts Act, Pub. L. No. 111-349 (2011) counseling against extending the Patent Pilot Program due, in part, to the Judiciary’s longstanding position on random case assignment and to help ensure that all district judges remain generalists.

While admitting that district courts, including the ones in the Western District of Texas “have wide latitude to establish case assignment systems,” the Administrative Office of the Courts still seems to recognize that there’s a problem in Waco and will explore if anything can be done:

Given these varied divisional case assignment policies as well as the concerns that you have raised, I have asked the Committee on Court Administration and Case Management, which has jurisdiction on matters affecting case management, to consider these issues and any recommendations that may be warranted.

That may sound rather muted, but it’s still quite noteworthy. The US court system is generally resistant to change, and just acknowledging that it understands the concerns of Tillis and Leahy, and will explore options, already suggests that others in the federal judiciary recognize just how sketchy and corrupt-appearing Judge Albright’s courtroom looks right now.

Furthermore, in his year-end report on the state of the federal judiciary, Chief Justice Roberts explicitly calls out Albright’s activity (though he’s too chickenshit to note that it’s just Judge Albright who’s the problem):

The third agenda topic I would like to highlight is an arcane but important matter of judicial administration: judicial assignment and venue for patent cases in federal trial court. Senators from both sides of the aisle have expressed concern that case assignment procedures allowing the party filing a case to select a division of a district court might, in effect, enable the plaintiff to select a particular judge to hear a case. Two important and sometimes competing values are at issue. First, the Judicial Conference has long supported the random assignment of cases and fostered the role of district judges as generalists capable of handling the full range of legal issues. But the Conference is also mindful that Congress has intentionally shaped the lower courts into districts and divisions codified by law so that litigants are served by federal judges tied to their communities. Reconciling these values is important to public confidence in the courts, and I have asked the Director of the Administrative Office, who serves as Secretary of the Judicial Conference, to put the issue before the Conference.

The Committee on Court Administration and Case Management is reviewing this matter and will report back to the full Conference. This issue of judicial administration provides another good example of a matter that self-governing bodies of judges from the front lines are in the best position to study and solve—and to work in partnership with Congress in the event change in the law is necessary.

This was one of only three specific topics he called out, so it seems pretty clear that he’s well aware of how Albright has made a complete mockery of his courtroom.

Posted on Techdirt - 3 January 2022 @ 09:20am

NY Senator Proposes Ridiculously Unconstitutional Social Media Law That Is The Mirror Opposite Of Equally Unconstitutional Laws In Florida & Texas

We’ve joked in the past about how Republicans hate Section 230 for letting websites moderate too much content, while Democrats hate it for letting websites not moderate enough content. Of course, the reality is they both are mad about content moderation (at different extremes) because they both want to control the internet in a manner that helps “their team.” But both approaches involve unconstitutional desires to interfere with 1st Amendment rights. For Republicans, it’s often the compelled hosting of speech, and for Democrats, it’s often the compelled deletion of speech. Both of those are unconstitutional.

On the Republican side, we’ve already seen states like Florida and Texas sign into law content moderation bills — and both have been blocked for being wholly unconstitutional.

We’ve already heard that some other Republican-controlled states have shelved plans for similar bills, realizing that all they’d be doing was setting taxpayer money on fire.

Unfortunately, it looks like the message has not made its way to Democratic-controlled states. California has been toying with unconstitutional content moderation bills, and now NY has one as well. Senator Brad Hoylman — who got his law degree from Harvard, where presumably they teach about the 1st Amendment — has proudly introduced a hellishly unconstitutional social media bill. Hoylman announces in his press release that the bill will “hold tech companies accountable for promoting vaccine misinformation and hate speech.”

Have you noticed the problem with the bill already? I knew you could. Whether we like it or not, the 1st Amendment protects both vaccine misinformation and hate speech. It is unconstitutional to punish anyone for that speech, and it’s even more ridiculous to punish websites that host that content, but had nothing to do with the creation of it.

Believe it or not, the actual details of the bill are even worse than Hoylman’s description of it. The operative clauses are outlandishly bad.

Prohibited activities. No person, by conduct either unlawful in itself or unreasonable under all the circumstances, shall knowingly or recklessly create, maintain or contribute to a condition in New York State that endangers the safety or health of the public through the promotion of content, including through the use of algorithms or other automated systems that prioritize content by a method other than solely by time and date such content was created, the person knows or reasonably should know:

1. Advocates for the use of force, is directed to inciting or producing imminent lawless action, and is likely to incite or produce such action;
2. Advocates for self-harm, is directed to inciting or producing imminent self-harm, and is likely to incite or produce such action; or
3. Includes a false statement of fact or fraudulent medical theory that is likely to endanger the safety or health of the public.

This is so dumb that it deserves to be broken down bit by bit. First off, any kind of content can conceivably “endanger the health and safety of the public.” That’s ridiculously broad. I saw an advertisement for McDonald’s today on social media. Does that endanger the health and safety of the public? It sure could. Second, the bill says no use of algorithms or other automated systems “other than solely by time and date such content was created,” meaning that search is right out. Want the most relevant search result for the medical issues you’re having? I’m sorry, sir, that’s not allowed in New York, as a result might endanger your health and safety.
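To see how sweeping that carve-out is, here is a minimal sketch (my own illustration, with invented posts; nothing here comes from the bill) contrasting the one ordering the bill permits, pure reverse chronology, with the kind of toy relevance ranking that makes search useful:

# Illustrative only: the single ordering the bill would allow (sorting
# solely by time and date of creation) versus a toy relevance ranking
# of the sort search engines rely on, which the bill's text would
# sweep in as "promotion of content."
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    created_at: int  # Unix timestamp

posts = [
    Post("Where can I find a cardiologist near me", created_at=1_640_000_000),
    Post("My cat did something funny today", created_at=1_641_000_000),
    Post("Local cardiology clinic has openings this week", created_at=1_639_000_000),
]

# Permitted under the bill: sort solely by time and date of creation.
chronological = sorted(posts, key=lambda p: p.created_at, reverse=True)

def relevance(post: Post, query: str) -> int:
    """Toy relevance score: how many query words appear in the post."""
    return len(set(post.text.lower().split()) & set(query.lower().split()))

# Risky under the bill: any ranking other than chronology, even one that
# surfaces the most useful answer to a medical question.
relevant = sorted(posts, key=lambda p: relevance(p, "cardiologist near me"),
                  reverse=True)

print([p.text for p in chronological])  # newest first: the cat post wins
print([p.text for p in relevant])       # the cardiologist question first

Under the bill as written, only the first of those two orderings is clearly safe to offer in New York.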

But it gets worse. The line that says…

Advocates for the use of force, is directed to inciting or producing imminent lawless action, and is likely to incite or produce such action

… is a weird one because clearly someone somewhere thought that this magical incantation might make this constitutional. The “directed to inciting or producing imminent lawless action, and is likely to incite or produce such action” is — verbatim — the Brandenburg test for a very, very limited exception to the 1st Amendment. But, do you notice the issue? Such speech is already exempted from the 1st Amendment. Leaving aside how astoundingly little content meets this test (especially the “imminent lawless action” part) this part of the law, at best, seems to argue that “unconstitutional speech is unconstitutional.” That’s… not helpful.

The second point is even weirder. It more or less tries to mirror the Brandenburg standard, but with a few not-so-subtle changes:

Advocates for self-harm, is directed to inciting or producing imminent self-harm, and is likely to incite or produce such action

Which is a nice try, but just because you mimicked the “inciting or producing imminent” part doesn’t let you get around the fact that discussions of “self-harm” in most cases remain constitutionally protected. So long as the conduct is not lawless, there’s a huge 1st Amendment problem here.

But the really problematic part is point 3:

Includes a false statement of fact or fraudulent medical theory that is likely to endanger the safety or health of the public.

Ooooooooooof. That’s bad. First of all, most “false statements of fact” and many “fraudulent medical theories” do in fact remain protected under the 1st Amendment. And, last I checked, New York is still bound by the 1st Amendment. Also, this is dumber than dumb. Remember, we’re in the middle of a pandemic and the science is changing rapidly. Lots of things we thought were clear at first turned out to be very different — don’t wear masks / wear masks, for example.

In fact, this prong most closely resembles how China first handled reports of COVID-19. Early on in the pandemic we wrote about how China’s laws against medical misinformation very likely helped COVID-19 spread much faster, because the Chinese government silenced Dr. Li Wenliang, one of the first doctors in China who called attention to the new disease. The police showed up to Dr. Li’s home and told him he had violated the law by “spreading untruthful information online” and forced him to take down his warnings about COVID-19.

And rather than realize just how problematic that was, Senator Hoylman wants to make it New York’s law!

It gets worse. The law, like most laws, has definitions. And the definitions are a mess. It uses an existing NY penal law definition of “recklessly” that requires those prosecuting violations to establish the state of mind of… algorithms? Again, the bill says that if an algorithm “recklessly” creates, maintains, or contributes to such banned information, it can violate the law. But the recklessness standard requires that a “person” be “aware of and consciously disregards a substantial and unjustifiable risk that such result will occur.” Good luck proving that about an algorithm.

Then we get to the enforcement provision. Incredibly, it makes this much, much worse.

Enforcement. Whenever there shall be a violation of this article, the attorney general, in the name of the people of the state of New York, or a city corporation counsel on behalf of the locality, may bring an action in the Supreme Court or federal district court to enjoin and restrain such violations and to obtain restitution and damages.

Private right of action. Any person, firm, corporation or association that has been damaged as a result of a person’s acts or omissions in violation of this article shall be entitled to bring an action for recovery of damages or to enforce this article in the Supreme Court or federal district court.

The government enforcing a speech code is already problematic — but then enabling this private right of action is just ridiculous. Think of how many wasteful, stupid lawsuits would be filed, within seconds of this law going into effect, by anti-vaxxers and anti-maskers against people online advocating in favor of vaccines, masks, and other COVID-preventative measures.

This law is so blatantly unconstitutional and problematic that it’s not even funny. And that’s not even getting to the simple fact that Section 230 pre-empts any such state law, as we saw in Texas and Florida. Hoylman, laughably, suggests that he can ignore the pre-emption issue in his press release by saying:

The conscious decision to elevate certain content is a separate, affirmative act from the mere hosting of information and therefore not contemplated by the protections of Section 230 of the Communications Decency Act.

Except that’s wrong. Section 230 specifically protects all moderation decisions, and that includes elevating content. That’s why Section 230 protects search results. And, as Jeff Kosseff rightly notes, the 2nd Circuit (which covers NY) already addressed this exact claim in the Force v. Facebook case (the ridiculous case that attempted to hold Facebook liable for terrorism that impacted the plaintiff, because some unrelated terrorists also used Facebook). There the court said, pretty clearly:

We disagree with plaintiffs’ contention that Facebook’s use of algorithms renders it a non-publisher. First, we find no basis in the ordinary meaning of “publisher,” the other text of Section 230, or decisions interpreting Section 230, for concluding that an interactive computer service is not the “publisher” of third-party information when it uses tools such as algorithms that are designed to match that information with a consumer’s interests. Cf., e.g., Roommates.Com, 521 F.3d at 1172 (recognizing that Matchmaker.com website, which “provided neutral tools specifically designed to match romantic partners depending on their voluntary inputs,” was immune under Section 230(c)(1)) (citing Carafano, Inc., 339 F.3d 1119); Carafano, 339 F.3d at 1124–25 (“Matchmaker’s decision to structure the information provided by users allows the company to offer additional features, such as ‘matching’ profiles with similar characteristics …, [and such features] [a]rguably promote[] the expressed Congressional policy ‘to promote the continued development of the Internet and other interactive computer services.’ 47 U.S.C. § 230(b)(1).”); Herrick v. Grindr, LLC, 765 F. App’x 586, 591 (2d Cir. 2019) (summary order) (“To the extent that [plaintiff’s claims] are premised on Grindr’s [user-profile] matching and geolocation features, they are likewise barred ….”).

So… the law clearly violates the 1st Amendment, is pre-empted by Section 230, and, if it went into practice, would be both wildly abused and dangerous.

What’s it got going for it?

Well, as Kosseff also points out, if it passed, and somehow the Texas/Florida laws were brought back from the dead, social media websites might get in trouble for leaving up the very same content they could get in trouble for taking down elsewhere. And, at least for those of us who write about content moderation, that will be amusing to cover. But, beyond that, this bill is complete garbage. It’s the mirror image of the garbage Florida and Texas passed — just as dumb, just as dangerous, and just as unconstitutional, merely at the other end of the spectrum.

Posted on Techdirt - 31 December 2021 @ 09:00am

New Year's Message: The Arc Of The Moral Universe Is A Twisty Path

As long-term readers of Techdirt know, each year since 2008 my final post of the year has been a kind of reflection on optimism. This tradition started after a few people asked why I seemed so optimistic when I spent all my time writing about scary threats to innovation, the internet, and civil liberties. And there is an odd contradiction in there, but it’s one that shows up among many innovation optimists. I’m reminded of Cory Doctorow’s eloquent response to those who called internet dreamers like John Perry Barlow “techno utopians.”

You don’t found an organization like the Electronic Frontier Foundation because you are sanguine about the future of the internet: you do so because your hope for an amazing, open future is haunted by terror of a network suborned for the purposes of spying and control.

And to some extent, my own thinking follows along those lines. I can see amazing, astounding opportunities to continue to make the world a better place through the power of the internet and innovation. I also think we have a bit of amnesia about just how much good the internet and innovation have already created for the world. But, that doesn’t mean we get to stop thinking about ways in which it might go wrong.

Just a few months ago, in a conversation with some friends in the tech policy world, I had to admit I was kind of surprised at how defeated they sounded. With dozens of laws being proposed (and a few getting passed) around the globe, at the federal level, and at the state level, there was a sense of despair among many internet supporters that the good parts of the internet were on their last legs. I can understand where this thinking is coming from, and yet… even with all that, I remain optimistic. That’s not to say I expect none of the bad laws to take effect and destroy some of the value of the internet. I’m pretty sure a few of them will, and their consequences will be bad.

But, perhaps I’ve reached the age where I recognize that there is no “end of history” and no final state of things. These very bad ideas may come into play, but the internet is amazingly resilient in routing around such nonsense, one way or another, over time. Martin Luther King Jr.’s famous quote is that “the arc of the moral universe is long, but it bends toward justice.” A similar kind of thing can be said about innovation. How it plays out may take quite a while, but it tends towards improving the world.

That’s not to say that there aren’t setbacks and problems and disasters — because obviously there are. But a key part of innovation is not just the act of creating something new and useful and getting it adopted by the world, but rather having society learn to adapt to it. I’m reminded of Clay Shirky discussing the innovation of the printing press, and how there was about a century of upheaval over that bit of innovation, until society really began to grapple with its power. Obviously, the internet has taken that to an entirely new level, and society is still very much adjusting.

Indeed, as we’ve noted repeatedly, many of the “problems” that are now blamed on the internet are actually problems that have existed in society for centuries, which we just see more of now because of the internet. I am still waiting for people to do a better job breaking down which of the problems commonly associated with the internet today are actually just the internet shining a light on existing problems vs. exacerbating or creating them (and also weighing those against the societal problems that have actually diminished thanks to the internet — because that’s a long list as well).

But, in the end, I have faith that society itself adapts. Not always neatly, and certainly not without many (potentially extremely problematic) mistakes. But society adapts. And the innovation drives it forward: not in a straight line, not without trips and falls, but eventually.

Indeed, despite the mess of the last few years — and especially “the narrative” that “everyone hates the internet” — I’ve been seeing more and more recognition that there are opportunities to return to an optimism about tech. Over the summer, I wrote about the concept of the Eternal October, bringing back an optimistic view of how tech and innovation can be good, but with the humility and wisdom gleaned from the mistakes of the past couple of decades.

History doesn’t end. It just teaches us more lessons. The question is what do we do with those lessons.

I’ve spent the past few months exploring these concepts more and more, and in the New Year expect to see a lot more writing on this. I’ve been talking to lots of people who are legitimately exploring ways to turn today’s innovation into something a lot more promising than it is, and it has me more excited than I’ve been in a while. And that’s even with all of the nonsense happening among policy makers and regulators around the world. Even as they do whatever it is that they do, actual innovators are heads down working on creating a better world.

More specific to what’s been happening here at Techdirt and the Copia Institute, we’ve been engaged in a number of different policy discussions to try to prevent governments from making things worse. The Copia Institute officially launched our Copia Gaming initiative (and we’ve been really busy on that front so stay tuned for a bunch of exciting announcements). We’ve also got some fun changes for Techdirt itself in store — including a big one that has been over two years in the making, but where we finally see some light at the end of a tunnel.

This year, we also removed all third-party ads from the site, along with all Google tracking (at some point next week, we’ll do our annual stats review — but for the first time without using Google Analytics, since that’s gone). Of course, that also means we’re more reliant than ever on our community supporting us, so please consider supporting the work we do if you can. A few months back, we finally moved on from our homemade “Insider Chat” and launched the Techdirt Insider Discord, which has been tremendous fun — and we’ve got more planned for that too.

On that note, my final paragraph of these final posts of the year is always about thanking all of you, the community here at Techdirt, for making this all worthwhile. I started Techdirt over twenty years ago as a fun project that allowed me to work out some of my own thoughts on the intersection of technology, innovation, business, and civil liberties, and over the years it’s grown, and I still am amazed each day that anyone pays any attention at all, let alone contributes to the discussions we have here. The community — of which you reading this are a key part — is integral to what makes Techdirt so much fun for me. You challenge me, make me think, introduce me to new ideas, help me explore impossibly challenging subjects, and just generally push me and the rest of Techdirt to be better. So thank you, once again, for making Techdirt such a special and wonderful place where we can share and discuss all of these ideas. I look forward to whatever happens as we enter 2022.

Posted on Techdirt - 30 December 2021 @ 12:11pm

Missouri Governor Still Expects Journalists To Be Prosecuted For Showing How His Admin Leaked Teacher Social Security Numbers

Missouri Governor Mike Parson is nothing if not consistent in his desire to stifle free speech. As you’ll recall, the St. Louis Post-Dispatch discovered that the state’s Department of Elementary and Secondary Education (DESE) website was programmed in such an incompetent fashion that it would reveal, to anyone who knew where to look, the social security numbers of every teacher and administrator in the system (including those no longer employed there). The reporting on the vulnerability followed ethical disclosure best practices exactly: gathering just enough evidence of the vulnerability, alerting the state to the problem, and not publishing anything until the vulnerability was fixed. The FBI told Missouri officials early on “that this incident is not an actual network intrusion,” and DESE initially wrote up a press release thanking the journalists for alerting them to it.

But then Parson blundered his way into making a mess of it, insisting that the reporters were hackers and ordering the Missouri Highway Patrol to “investigate” them for prosecution. When people mocked him for this, he doubled down by insisting that this was real hacking and that those reporting otherwise were part of “the fake news.”

A month later, DESE admitted that it had fucked up, apologized to all the teachers and administrators (current and former) whom its own incompetence had exposed, and offered them all credit monitoring. Notably, DESE did not apologize to the journalists who discovered this mess, and the governor has continued to stand by his call to prosecute them.

Earlier this week the Highway Patrol claimed it had completed its investigation… and turned the findings over to state prosecutors. That alone seems worrisome, as there’s nothing to turn over to prosecutors here beyond “our governor is a very foolish man, who can’t admit to his own failings.”

Capt. John Hotz said the results were turned over to Cole County Prosecuting Attorney Locke Thompson.

“The investigation has been completed and turned over to the Cole County Prosecutor’s office,” Hotz told the Post-Dispatch on Monday.

And the Governor still thinks the end result will be the prosecution of journalists for exposing the fact that his own administration ran a dangerously incompetent computer system that put 600,000 current and former state employees’ private info at risk:

Gov. Mike Parson on Wednesday expressed his opinion the Cole County prosecuting attorney would bring charges in the case of a Post-Dispatch reporter who alerted the state to a significant data vulnerability.

“I don’t think that’ll be the case,” Parson said when asked what he would do if the prosecutor didn’t pursue the case. “That’s up to the prosecutor; that’s his job to do.”

Parson’s continued insistence that this was unauthorized hacking is absolute garbage.

“If somebody picks your lock on your house — for whatever reason, it’s not a good lock, it’s a cheap lock or whatever problem you might have — they do not have the right to go into your house and take anything that belongs to you,” Parson said.

That analogy is dumb on multiple levels. The reporters didn’t pick any lock. They didn’t intrude anywhere they weren’t supposed to go. The state’s own website delivered the information to their computers, embedded in the HTML of publicly accessible pages. They didn’t access a system they weren’t permitted to access; they just went where they were allowed to go, and the state’s incompetent technologists handed them information it should never have exposed.
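To make that concrete, here’s a minimal sketch (in Python, using a purely hypothetical URL standing in for any public page; this is not the actual DESE site or its data) of what “viewing the HTML” amounts to. There’s no lock-picking involved: the server sends the entire page, sensitive fields included, to every browser that asks for it.

```python
# A minimal sketch: fetching a public web page the same way any browser does.
# The URL below is hypothetical and stands in for any publicly served page.
import urllib.request

url = "https://example.com/staff-directory"

with urllib.request.urlopen(url) as response:
    # The server voluntarily sends this HTML to every visitor who requests it.
    html = response.read().decode("utf-8", errors="replace")

# Whatever the site embedded in the markup is now sitting in the visitor's
# copy of the page. "Viewing source" is all it takes to read it.
print(html)
```

If a site embeds social security numbers in that markup, every visitor already has them the moment the page loads. The failure is entirely on the publishing side, not the reading side.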

Under Parson’s definition of “hacking,” it would be easy to turn anyone into a hacker: just expose data you shouldn’t expose on a website, and wait until someone visits the page. That’s not how this should work, and the fact that he’s still pressing this issue raises serious questions about Parson’s competence to do anything, let alone run an entire state.
