Techdirt. Easily digestible tech news... https://beta.techdirt.com/

Thu, 4 Mar 2021 19:45:38 PST
Twitter Opposes 'Tweet' Trademark Application For Bird Food Company
Timothy Geigner
https://beta.techdirt.com/articles/20210224/12034446312/twitter-opposes-tweet-trademark-application-bird-food-company.shtml

Way back in the simpler time of 2010, Mike wrote up an interesting piece on Twitter's trademark enforcement policies and how it handles third parties that interact with Twitter using Twitter-related terms. In short, Twitter built a reputation for itself in freely licensing these terms for use by third parties, believing that tools that made Twitter more useful were good for the platform overall. It was a smart, productive way of looking at protecting trademarks so as not to lose them to genericide.

Which is part of what makes it sort of strange that Twitter seems to take the opposite tack when it comes to totally unrelated business entities attempting to trademark terms like "tweet."

On Friday, Twitter filed a notice of opposition before the Trademark Trial and Appeal Board against applicant Puerto Rican company B. Fernandez & Hnos.’s application for the TWEET mark, asserting that it will be harmed if the applicant’s mark is registered.

Twitter pointed out that the messages on its platform are called tweets. The marks are used in connection with the aforementioned goods and services, along with other goods and services. Twitter argued that it has established extensive common law rights in the TWEET mark in connection with its goods and services and that the TWEET mark is distinctive.

There's no doubt that "tweet" has taken on fame as a result of Twitter's platform, trademarks, and marketing of itself. But there is still a matter of actual or potential customer confusion on specific uses to contend with, and the problem with that is that B. Fernandez & Hnos. is a maker of bird food. In that context, the term "tweet" doesn't call back to Twitter at all, because it fits naturally with the nature of the product in question.

For some reason, Twitter's opposition seems to think the opposite.

Twitter claimed that the applicant seeks to register the TWEET mark in International Class 31, covering bird food. However, Twitter alleged that “consumers will likely associate Applicant’s TWEET Mark with Twitter and the TWEET Goods and Service and will assume there is a relationship between Applicant and Twitter.” Twitter asserted that the applicant’s TWEET mark is identical to its TWEET mark, would be “advertised and/or sold in identical or similar channels of trade as Twitter’s Goods and Services”, and would “conflict with Twitter’s lawful and exclusive right to use the TWEET Mark nationwide in connection with Twitter’s Goods and Services.” Consequently, Twitter averred that this similarity is likely to cause consumer confusion, mistake or deception regarding the source, origin, or sponsorship of the respective goods and services.

In other words, Twitter's "tweet" is so famous that a brand of bird food that includes "tweet" will be seen as associated more with Twitter than with bird food. And that's plainly ridiculous.

And so, again, we're left with a company that acts quite well on one set of trademark issues, but is, at least, a bit overly aggressive on others.

Thu, 4 Mar 2021 16:06:48 PST
Washington State Also Spits On Section 230 By Going After Google For Political Ads
Cathy Gellis
https://beta.techdirt.com/articles/20210303/13560746356/washington-state-also-spits-section-230-going-after-google-political-ads.shtml

In the post the other day about Utah trying to ignore Section 230 so it could regulate internet platforms, I explained why it was important that Section 230 pre-empted these sorts of state efforts:

Just think about the impossibility of trying to simultaneously satisfy, in today's political climate, what a Red State government might demand from an Internet platform and what a Blue State might. That readily foreseeable political catch-22 is exactly why Congress wrote Section 230 in such a way that no state government gets to demand appeasement when it comes to platform moderation practices.

We don't have to strain our imaginations very hard, because with this lawsuit, by King County, Washington prosecutors against Google, we can see a Blue State do the same thing Utah is trying to do and come after a platform for how it handles user-generated content.

Superficially there are of course some differences between the two state efforts. Utah's bill ostensibly targets social media posts whereas Washington's law goes after political ads. What's wrong with Washington's law may also be a little more subtle than the abjectly unconstitutional attempt by Utah to trump internet services' expressive and associative rights. But these are not meaningful distinctions. In both cases it still basically all boils down to the same thing: a state trying to force a platform to handle user-generated content (which online ads generally are) the way the state wants by imposing requirements on platforms that will inevitably shape how they do so.

In the Washington case, prosecutors are unhappy that Google is apparently not following well enough the prescriptive rules Washington State established to help the public follow the money behind political ads. One need not quibble with the merit of what Washington State is trying to do, which, at least on first glance, seems perfectly reasonable: make campaign finance more transparent to the public. Nor is it necessary to take issue with the specific rules the state came up with to try to vindicate this goal. The rules may or may not be good ones, but whether they are good or not is irrelevant. That there are rules is the problem, and one that Section 230 was purposefully designed to avoid.

As discussed in that other post, Congress went with an all-carrot, no-stick approach in regulating internet content, giving platforms the most leeway possible to do the best they could to help achieve what Congress wanted overall: the most beneficial and least harmful content online. But this approach falls apart once sticks get introduced, which is why Congress included pre-emption in Section 230 so that states couldn't try to introduce their own. Yet that's what Washington is trying to do with its disclosure rules surrounding political ads: introduce sticks by imposing regulatory requirements that burden how platforms can facilitate user-generated content, in spite of Congress's efforts to relieve them of these burdens.

The burden is hardly incidental or slight. Remember that if Washington could enforce its own rules, then so could any other state or locality, even when those rules were far more demanding, or ultimately compromised this or any other worthy policy goal—either inadvertently or even deliberately. Furthermore, even if every state had good rules, the differences between them would likely make compliance unfeasible for even the best-intentioned platform. Indeed, even by the state's own admission, Google actually had policies aimed at helping the public learn who had sponsored the ads appearing on its services.

Per Google’s advertising policies, advertisers are required to complete advertiser identity verification. Advertisers seeking to place election advertisements through Google’s advertising networks are required to complete election advertisement verification. Google notifies all verified advertisers, including, but not limited to sponsors of election advertisements, that Google will make public certain information about advertisements placed through Google’s advertising networks. Google notifies verified sponsors of election advertisements that information concerning their advertisements will be made public through Google’s Political Advertising Transparency Report.

Google’s policy states:

With the information you provide during the verification process, Google will verify your identity and eligibility to run election ads. For election ads, Google will [g]enerate, when possible, an in-ad disclosure that identifies who paid for your election ad. This means your name, or the name of the organization you represent, will be displayed in the ad shown to users. [And it will p]ublish a publicly available Political Advertising transparency report and a political ads library with data on funding sources for election ads, the amounts being spent, and more.

Google notifies advertisers that in addition to the company’s online Political Advertising Transparency Report, affected election advertisements "are published as a public data set on Google Cloud BigQuery[,]" and that users "can export a subset of the ads or access them programmatically." Google notifies advertisers that the downloadable election ad "dataset contains information on how much money is spent by verified advertisers on political advertising across Google Ad Services. In addition, insights on demographic targeting used in political advertisement campaigns by these advertisers are also provided. Finally, links to the actual political advertisement in the Google Transparency Report are provided." Google states that public access to "Data for an election expires 7 years after the election." [p. 14-15]
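
As a technical aside on what "access them programmatically" means in practice: the downloadable data lives in Google BigQuery, and pulling a few rows takes only a handful of lines. The sketch below is an illustration only; the dataset and table names are recalled from memory (assumptions worth verifying against the current schema), and it requires your own Google Cloud credentials.

```python
# Minimal sketch of querying Google's public political ads data on BigQuery.
# The dataset/table names below are assumptions -- confirm them against the
# schema Google currently publishes before relying on this.
from google.cloud import bigquery

client = bigquery.Client()  # authenticates with your own GCP project

query = """
    SELECT *
    FROM `bigquery-public-data.google_political_ads.advertiser_stats`
    LIMIT 10
"""
for row in client.query(query).result():
    print(dict(row))
```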

Yet Washington is still mad at Google anyway because it didn't handle user-generated content exactly the way it demanded. And that's a problem, because if it can sanction Google for not handling user-generated content exactly the way it wants, then (1) so could any other state or any of the infinite number of local jurisdictions Google inherently reaches, (2) to enforce an unlimited number of rules, and (3) governing any sort of user-generated content that may happen to catch a local regulator's attention. Utah may today be fixated on social media content and Washington State political ads, but once they've thrown off the pre-emptive shackles of Section 230 they or any other state, county, city or smaller jurisdiction could go after platforms hosting any of the myriad other sorts of expression people use internet services to facilitate.

Which would sabotage the internet Congress was trying to foster with Section 230. Again, Congress deliberately gave platforms a free hand to decide how best to moderate user content so that they could afford to do their best at keeping the most good content up and taking the most bad content down. But with all these jurisdictions threatening to sanction platforms, trying to do either of these things can no longer be platforms' priority. Instead they will be forced to devote all their resources to the impossible task of trying to avoid a potentially infinite amount of liability. While perhaps at times this regulatory pressure might result in nudging platforms to make good choices for certain types of moderation decisions, it would be more out of coincidence than design. Trying to stay out of trouble is not the same thing as trying to do the best for the public—and often can turn out to be in direct conflict.

Which we can see from Washington's law itself. In 2018 prosecutors attempted to enforce an earlier version of this law against Google, which led it to declare that it would refuse all political ads aimed at Washington voters.

Three days later, on June 7, 2018, Google announced that the company’s advertising networks would no longer accept political advertisements targeting state or local elections in Washington State. Google’s announced policy was not required by any Washington law and it was not requested by the State. [p. 7]

Prosecutors may have been surprised by Google's decision, but no one should have been. Such a decision is an entirely foreseeable consequence, because if a law makes it legally unsafe for platforms to facilitate expression, then they won't.

Even the complaint itself, albeit perhaps inadvertently, makes clear what a loss for discourse and democracy it is when expression is suppressed.

As an example of Washington political advertisements Google accepted or provided after June 4, 2018, Google accepted or provided political advertisements purchased by Strategies 300, Inc. on behalf of the group Moms for Seattle that ran in July 2019, intended to influence city council elections in Seattle. Google also accepted or provided political advertisements purchased by Strategies 300, Inc. on behalf of the Seattle fire fighters that ran in October 2019, intended to influence elections in Seattle. [p. 9]

While prosecutors may frame it as scurrilous that Google accepted ads "intended to influence elections," influencing political opinion is at the very heart of why we have a First Amendment to protect speech in the first place. Democracy depends on discourse, and it is hardly surprising that people would want to communicate in ways designed to persuade on political matters.

Nor is the fact that they may pay for the opportunity to express it salient. Every internet service needs some way of keeping the lights on and servers running. That a service may sometimes charge people to use its systems to convey their messages doesn't alter the fact that it is still a service facilitating user-generated content, which Section 230 exists to protect and needs to protect.

Of course, even in the face of unjust sanction sometimes platforms may try to stick it out anyway, and it appears from the Washington complaint that Google may have started accepting ads again at some point after it had initially stopped. It also agreed to pay $217,000 to settle a 2018 enforcement effort—although, notably, without admitting to any wrongdoing, which is a crucial fact prosecutors omit in their current pleading.

On December 18, 2018, the King County Superior Court entered a stipulated judgment resolving Google’s alleged violations of RCW 42.17A.345 from 2013 through the date of the State’s June 4, 2018, Complaint filing. Under the terms of the stipulated judgment, Google agreed to pay the State $200,000.00 as a civil penalty and an additional $17,000.00 for the State’s reasonable attorneys’ fees, court costs, and costs of investigation. A true and correct copy of the State’s Stipulation and Judgment against Google entered by the King County Superior Court on December 18, 2018, is attached hereto as Exhibit B. [p. 8. See p. 2 of Exhibit B for Google expressly disclaiming any admission of liability.]

Such a settlement is hardly a confession. Google could have opted to settle rather than fight for any number of reasons. Even platforms as well-resourced as Google will still need to choose their battles. Because it's not just a question of being able to afford to hire all the lawyers you may need; you also need to be able to effectively manage them all, and every skirmish on every front that may now be vulnerable if Section 230 no longer effectively preempts those attacks. Being able to afford a fight means being able to afford it in far more ways than just financially, and thus it is hardly unusual for those threatened with legal process to simply try to purchase relief from onslaught instead of fighting for the just result.

Without Section 230, or its preemption provision, however, that's what we'll see a lot more of: unjust results. We'll also see less effective moderation as platforms redirect their resources from doing better moderation to avoiding liability instead. And we'll see what Google foreshadowed, with platforms withdrawing their services from the public entirely as it becomes financially prohibitive to pay off all the local government entities that might like to come after them. It will not get us a better internet or more innovative online services, nor will it solve any of the problems any of these state regulatory efforts hope to fix. It will only make everything much, much worse.

Thu, 4 Mar 2021 13:46:11 PST
Arizona Moves Forward With Law To Force Google & Apple To Open Up Payments In App Stores
Mike Masnick
https://beta.techdirt.com/articles/20210303/22592246359/arizona-moves-forward-with-law-to-force-google-apple-to-open-up-payments-app-stores.shtml

Arizona appears to be moving forward with an interesting (though, potentially unconstitutional) bill to say that Apple and Google would need to allow alternative payment systems in their app stores. I think this bill means well in that it's targeting what appears to be a real issue: the control that Apple (especially) and Google (to a lesser, but still significant extent) have over getting apps onto iOS and Android devices. Both companies take a pretty large cut out of in-app purchases -- basically 30% (it's a little more complicated than that).

The argument from both companies is that (1) it's their system and they're providing value by creating the very platform that effectively allows all these apps to exist in the first place, and (2) part of the value of having a single app store model is that it allows for more security and privacy protections for end users (that's a big part of Apple's argument, certainly). Google is slightly more open in that it does allow for sideloading and even third party app stores, but it strongly discourages such practices. And, there is some validity to that argument... but it's also partially nonsense. For many apps, Google and Apple aren't really adding that much value, and for them to demand such a large cut seems silly. 30% is also... quite a lot. It's way more than other platforms in more competitive situations take, which often take closer to 5 to 10%. That certainly suggests some rent seeking.
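
For a concrete sense of that gap, here's the back-of-the-envelope math on a hypothetical $9.99 in-app purchase (the price and the comparison rates are illustrative numbers only, not figures from the bill):

```python
# Developer's take on an illustrative $9.99 in-app purchase at different
# commission rates: the ~30% app store cut versus the 5-10% cited for
# more competitive platforms.
price = 9.99
for rate in (0.30, 0.10, 0.05):
    store_cut = price * rate
    developer_keeps = price - store_cut
    print(f"{rate:.0%} commission: store keeps ${store_cut:.2f}, developer keeps ${developer_keeps:.2f}")
# 30% -> store $3.00, developer $6.99; at 5-10% the store's cut is $0.50-$1.00
```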

That said, the bill has some issues as well. The biggest being that this is a state bill, which likely makes it unconstitutional. Regulating Apple and Google services like that likely violates the Commerce Clause, which limits the states' ability to pass laws that regulate "interstate" commerce. It seems like if this kind of law is being written, it should be a federal law, rather than a state one.

The other big question is what are the downstream impacts of such a bill. If Google and Apple rely on their cut of these in-app sales for revenue, and those effectively go away with such a law, then they're going to seek to make up that revenue elsewhere. Now, one hopes that they would do this by improving their offerings, adding additional value and figuring out ways to charge for those value-added features. And perhaps that would happen. But the fear is that the companies would seek to find a different revenue stream to tap -- such as charging for access to dev tools or even just to list an app on the app store. And, the end result of that might be to shut down or shut out smaller app developers.

The other odd thing about this bill is that it literally exempts the equivalent situation with video game consoles (which also take a ~30% cut):

The bill specifically exempts game consoles “and other special-purpose devices that are connected to the internet,” and it also bars companies like Apple and Google from retaliating against developers who choose to use third-party payment systems.

I don't quite understand this. If this approach is good for mobile phone devices, why shouldn't it also apply to video game consoles? I can't see any consistent reason to not treat the two similarly.

So, there does seem to be a legitimate concern about Apple and Google's effective control over the phone device software ecosystem. Perhaps it would be less of a problem if web apps had more access to core device functionality and could bypass the app stores entirely. Or, if sideloading were more common (or even allowed at all, which is not the case with iOS). However, that doesn't change the fact that this particular bill doesn't seem like the best way of dealing with this particular situation.

Thu, 4 Mar 2021 12:14:46 PST
US Navy On The Hook For 'Pirating' German Company's Software
Timothy Geigner
https://beta.techdirt.com/articles/20210304/08351446360/us-navy-hook-pirating-german-companys-software.shtml

A couple of years ago, we discussed the somewhat ironic story of a German software company suing the United States Navy for pirating its software. The initial story was a bit messy, but essentially the Navy tested out Bitmanagement's software and liked it well enough that it wanted to push the software out to hundreds of thousands of computers. After Bitmanagement sued for hundreds of millions of dollars as a result, the Navy pointed out that it had bought concurrent use licenses through a third party reseller. While Bitmanagement pointed out that it didn't authorize that kind of license itself, the court at the time noted that without a contractual arrangement between the company and the Navy, the Navy had an implied license for concurrent users and dismissed the case.

Bitmanagement appealed that ruling, however, arguing that the lower court stopped its analysis too soon. The story there is that such an implied license would require the Navy to track concurrent users across the 500k-plus computers it installed the software on, but it appears the Navy didn't bother to track concurrent users at all.

“We do not disturb the Claims Court’s findings. The Claims Court ended its analysis of this case prematurely, however, by failing to consider whether the Navy complied with the terms of the implied license,” the Appeals Court writes.

“The implied license was conditioned on the Navy using a license-tracking software, Flexera, to ‘FlexWrap’ the program and monitor the number of simultaneous users. It is undisputed that the Navy failed to effectively FlexWrap the copies it made,” the Court adds.

And just like that, the dismissal flips entirely and the Appeals Court has now remanded the case to determine damages. Again, Bitmanagement is asking for just under $600,000,000 in damages, given the wide scale of installations the Navy undertook with its software. With nothing tracking how many users concurrently used the software, the Navy doesn't really have any way to argue back that it complied with the implied license.
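
For readers unfamiliar with concurrent-use (or "floating") licensing, the obligation at issue is conceptually simple: cap how many copies run at once, and count every check-out and check-in. Here's a minimal sketch of that idea, purely as an illustration; it is not how Flexera's FlexWrap tooling actually works, and the seat count is made up:

```python
import threading

class ConcurrentLicensePool:
    """Toy floating-license tracker: at most `seats` simultaneous users."""

    def __init__(self, seats: int):
        self._sem = threading.BoundedSemaphore(seats)

    def checkout(self) -> bool:
        # Non-blocking: returns False when every seat is already in use.
        return self._sem.acquire(blocking=False)

    def checkin(self) -> None:
        self._sem.release()

pool = ConcurrentLicensePool(seats=38)  # hypothetical number of purchased seats
if pool.checkout():
    try:
        print("Seat acquired; the licensed software would run here")
    finally:
        pool.checkin()  # releasing the seat -- the kind of tracking the Navy never set up
else:
    print("Concurrent-use limit reached; no seat available")
```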

The real lesson in this is just how messy these sorts of copyright conundrums are. It's reasonable to believe that the Navy thought it was doing the right thing, even if it failed to comply with the implied license by monitoring concurrent users. But it's also reasonable for a software provider, with no evidence providing nuance, to simply see 500k-plus installations as mass copyright infringement.

But, in the eyes of the same United States that likes to put out reports on how terrible other countries are in respecting intellectual property rights, I guess the United States Navy is just a bunch of pirates now.

Thu, 4 Mar 2021 10:48:53 PST
Federal Legislators Take Another Run At Ending Qualified Immunity
Tim Cushing
https://beta.techdirt.com/articles/20210303/13084346355/federal-legislators-take-another-run-ending-qualified-immunity.shtml

Last summer as protests raged around the nation in response to the killing of an unarmed black man by a white Minnesota police officer, federal legislators offered up a solution to one of the hot garbage problems of our time. A federal police reform bill contained a number of fixes to policing in America, including one crucial element that would make it far easier for citizens to pursue lawsuits over rights violations: the termination of the qualified immunity defense.

Over the years, qualified immunity has morphed from a limited protection for officers to allow them to make split-second decisions in dangerous situations to a blanket excuse for rights violations. Thanks mainly to the US Supreme Court, qualified immunity now shields officers from large numbers of legitimate accusations of rights violations. SCOTUS has shifted the emphasis to judicial precedent, rather than any discussion of the alleged violations brought before federal court judges. As long as law enforcement personnel violate rights in new ways that aren't covered by existing precedent, the officers are allowed to dodge lawsuits, juries, and fact-finding.

The Supreme Court has made it easier for lower courts to dodge questions about rights violations -- and, in turn, prevent them from establishing new precedent -- by forcing them to defer to a limited test that only involves established precedent and a very limited examination of the facts of the case. Only recently has the Supreme Court realized it may have had this wrong. Two remands to the Fifth Circuit Court of Appeals (the circuit most protective of cops) in the past few months indicate the nation's top court now feels the lower courts have followed its damaging instructions too closely.

So, there may be hope going forward. But it will be slow in arriving and still somewhat limited by the Supreme Court's precedential blanket instructions on QI cases. Nonetheless, there is hope.

What may be faster-acting is some federal legislation. Far too often, courts defer to legislators who seemingly have zero interest in deterring the wreckage qualified immunity has wrought. Asking politicians to go head-to-head with some of their most powerful supporters is kind of a non-starter. But if it's legislation courts are demanding, at least a few legislators are willing to give it to them.

The last effort to eliminate qualified immunity died quietly, even as cities continued to burn. The effort has been renewed by a bipartisan group of legislators who have seen immunity and the damage done and refuse to offer their tacit blessing of this accountability escape hatch by doing nothing. Akela Lacy has more details for The Intercept:

Rep. Ayanna Pressley and Sens. Ed Markey and Elizabeth Warren, Democrats of Massachusetts, are introducing a bill to fully end qualified immunity, a legal doctrine that protects police and law enforcement officials from civil liability in cases where they are accused of violating someone's constitutional rights.

The "Ending Qualified Immunity Act" [PDF] would do exactly that, building on Rep. Justin Amash's attempt to terminate this bullshit last year, when the irons were hot and setting fire to precinct houses. The bill notes law enforcement has been on the wrong side of history since the Ku Klux Klan Act of 1871. Since then, law enforcement hasn't bothered to correct its course. It engages in biased policing pretty much all the time and sinks its funding into efforts that reinforce its foregone (and often bigoted) conclusions.

As the bill points out, qualified immunity actually subverts the intention of federal legislators. It was created solely by a single court with no deference to legislators who had already expressed their intent through this legislation, which created a cause of action for citizens whose rights had been violated.

This doctrine of qualified immunity has severely limited the ability of many plaintiffs to recover damages under section 1983 when their rights have been violated by State and local officials. As a result, the intent of Congress in passing the law has been frustrated, and Americans’ rights secured by the Constitution have not been appropriately protected.

In short, screw qualified immunity. It undercuts the Constitution as well as legislative intent. With this bill, QI would no longer be considered a defense to allegations of rights violations.

It shall not be a defense or immunity to any action brought under this section that the defendant was acting in good faith, or that the defendant believed, reasonably or otherwise, that his or her conduct was lawful at the time when it was committed. Nor shall it be a defense or immunity that the rights, privileges, or immunities secured by the Constitution or Federal laws were not clearly established at the time of their deprivation by the defendant, or that the state of the law was otherwise such that the defendant could not reasonably have been expected to know whether his or her conduct was lawful.

This doesn't prevent cops from escaping civil rights lawsuits. They still can. But they can't do it with a motion to dismiss prior to any fact-finding. Instead, they'll have to deal with lawsuits like most civilians have to: by bringing their own evidence and waiting for a judge to rule on the merits. In some cases, this will mean going to trial. And going to trial should never be considered a failure of the system. That's supposed to be the desired outcome. Instead, we've been given years of cops pressing the eject button and simply nodding along as allegations remain unaddressed, even when the courts are still supposed to assume plaintiffs' allegations are true.

This won't be the litigation apocalypse cops will claim it to be. Instead, it will put them on the same playing field the rest of us have to work with. Government employees should be holding themselves to higher standards. This bill only demands law enforcement officers abide by the same rules governing non-cop-related litigation.

Thu, 4 Mar 2021 10:45:00 PST
Daily Deal: Avanca T1 Bluetooth Wireless Earbuds
Daily Deal
https://beta.techdirt.com/articles/20210304/10183546364/daily-deal-avanca-t1-bluetooth-wireless-earbuds.shtml

With the Avanca T1 Bluetooth Wireless Earbuds, you can enjoy your favorite songs with complete wireless freedom. Enjoy deep bass and high-quality audio in a compact and stylish design. The earbuds automatically connect to your iPhone or Android phone via Bluetooth, allowing you to immediately listen to your favorite music. You can easily pause, resume, and switch to the next song via touch controls. You can also answer and hang up phone calls with just a tap on the earphones. These wireless earbuds are perfect for everyday use with a battery life of up to 30 hours. They're on sale for $29.95.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.

Thu, 4 Mar 2021 09:32:32 PST
CIA To FOIA Requester: Assassination Attempts Are Illegal So Of Course We Don't Have Any Records About Our Illegal Assassination Attempts
Tim Cushing
https://beta.techdirt.com/articles/20210228/11234946336/cia-to-foia-requester-assassination-attempts-are-illegal-so-course-we-dont-have-any-records-about-our-illegal-assassination.shtml

The CIA has delivered a rather curious response to a records requester. J.M. Porup sent a FOIA request to the agency asking it for documents about its rather well-documented assassination attempts and received a very curious non-answer from the US's foremost spooks.

The CIA’s response to the question about assassinations wasn’t a denial that it had engaged in such activity. It just explained that such activity is illegal: “Please refer to Executive Order 12333 which describes the conduct of intelligence activities, citation 2.11, which pertains to the prohibition on assassinations,” the brief response from the CIA read.

No Glomar. No "no records found." No complaints that the request was too burdensome. No invocation of national security exemptions. Just this, which basically says, "Hey guys, assassination is illegal." And, of course, it is. But that hasn't stopped the CIA from engaging in assassination attempts.

The Church Committee exposed this (along with a long list of other violations by government agencies) back in the 1970s. In fact, one of the smoking gun moments of the Church Committee hearings was the production of a non-smoking poison dart gun developed by the CIA. And, as Matthew Gault points out for Vice, the CIA spent years trying to make a Fidel Castro death look like an accident.

The Agency attempted to lace Castro’s shoes with thallium salts in an attempt to make his hair fall out, developed a special hallucinogen it planned to spray on him during a live broadcast, and created a pen that concealed a hypodermic needle full of poison it planned to use against Castro.

And that's just assassination attempts targeting this particular politician. The CIA has global reach and endless potential. That it has been mostly ineffective is beside the point. The CIA has created records detailing its assassination attempts. Citing an Executive Order forbidding government employees from engaging in assassination attempts is a non-starter, especially when there's already documentation in the (regular) history books.

Now, if we want to grant the CIA more credibility than it actually deserves, we can read this Executive Order citation as a barely coded message: of course the CIA doesn't have records pertaining to assassination attempts because what government agency in its right mind would do anything with inculpatory documents other than feed them to the nearest shredder? That's the best case scenario: the CIA has been illegally destroying documents detailing its illegal activities.

The worst case scenario is the CIA has plenty of documents on hand but is choosing to hide behind an Executive Order that forbids the things the CIA has already done and may, in fact, still be doing.

The answer is, of course, fuck right off with this. J.M. Porup will be suing the CIA over these documents, which will at least force it to drop its Executive Order pretense and engage this request a bit more honestly. At that point, it will have access to a bunch of (slightly more legitimate) FOIA exemptions. But until it's willing to address this more honestly, it can't expect "well, no, that would be illegal" to be a satisfactory answer. Courts aren't going to be receptive to this particular strain of bullshit, even if they've been willing to grant a whole lot of leeway on the national security front historically.

Thu, 4 Mar 2021 07:01:08 PST
AT&T Spins Off DirecTV After Losing Billions On Its TV Dreams
Karl Bode
https://beta.techdirt.com/articles/20210225/14040846321/att-spins-off-directv-after-losing-billions-tv-dreams.shtml

We've noted a few times how giant telecom providers, as companies that have spent the better part of the last century as government-pampered monopolies, are adorable when they try (then inevitably fail) to innovate or seriously compete in more normal markets. Verizon's attempt to pivot from curmudgeonly old phone company to sexy new ad media darling, for example, has been a cavalcade of clumsy errors, missteps, and wasted money.

AT&T has seen similar issues. Under former CEO Randall Stephenson, AT&T spent nearly $200 billion on mergers with DirecTV and Time Warner, hoping this would secure its ability to dominate the pay TV space through brute force. But the exact opposite happened. Saddled with so much debt from the deal, AT&T passed annoying price hikes on to its consumers. It also embraced a branding strategy so damn confusing -- with so many different product names -- it even confused its own employees.

As a result, AT&T intended to dominate the pay TV space but has instead lost 8 million pay TV subscribers since early 2017. Hoping to buy itself a little financial breathing room, AT&T has been shopping DirecTV around for months. But with few suitors interested in paying for a traditional satellite TV provider in the middle of a cord cutting revolution, AT&T instead last week settled on spinning off DirecTV and the rest of its pay TV operations into a new company. Under this new structure, AT&T will retain a 70% majority stake, with the other 30% being owned by private-equity giant TPG.

As part of the deal, AT&T valued the new DirecTV at around $16.2 billion, a massive loss from the $67 billion (including debt) AT&T paid for DirecTV back in 2015. AT&T begrudgingly admitted in a statement this wasn't a particularly impressive feat:

"With our acquisition of DirecTV, we invested approximately $60 billion in the US video business," AT&T said in materials distributed to reporters. "It's fair to say that some aspects of the transaction have not played out as we had planned, such as pay TV households in the US declining at a faster pace across the industry than anticipated when we announced the deal back in 2014. In fact, we took a $15.5 billion impairment on the business in 4Q20."

The deal buys AT&T a little financial leeway (immediately countered by its recent huge payout to grab additional wireless spectrum), but does little to change the underlying equation in AT&T's attempt to dominate video. One of the bigger ironies is that AT&T spent countless man hours and millions in lobbying to grease the regulatory skids for its domination of television, be it the repeal of net neutrality, all the efforts to kiss up to the Trump administration, or the long legal battle over the anti-competitive impact of its Time Warner deal.

Yet all that money, energy, and political power couldn't buy AT&T the kind of innovative chops needed to make inroads in the TV sector in the way they'd originally intended. It would be funny if not for the 54,000 AT&T employees laid off since 2017 in a bid to help manage megamerger debt. There was a real human cost to AT&T's ambition that the press, more interested in hyping pre-merger "synergy" claims than tracking the deal's actual impact, usually can't be bothered to talk much about.

Thu, 4 Mar 2021 04:01:08 PST
How The Third Party Cookie Crumbles: Tracking And Privacy Online Get A Rethink
Mike Masnick
https://beta.techdirt.com/articles/20210303/17051346358/how-third-party-cookie-crumbles-tracking-privacy-online-get-rethink.shtml

Google made some news Wednesday by noting that once it stops using 3rd party cookies to track people, it isn't planning to replace such tracking with some other (perhaps more devious) method. This news is being met cynically (not surprisingly), with people suggesting that Google has plenty of 1st party data, and really just doesn't need 3rd party cookie data any more. Or, alternatively, some are noting (perhaps accurately) that since Google has a ton of 1st party data -- more than just about anyone else -- this could actually serve to lock in Google's position and diminish the alternatives from smaller advertising firms who rely on 3rd party cookies to bootstrap enough information to better target ads. Both claims might be accurate. Indeed, in the "no good deed goes unpunished" category, the UK has already been investigating Google's plans to drop 3rd party cookies on the grounds that it's anti-competitive. This is at the same time that others have argued that 3rd party cookies may also violate some privacy laws.

And, yes, it's possible that it can be both good for privacy and anti-competitive, which raises all sorts of interrelated issues.

In theory cookies should have been very pro-privacy. After all, they put data on end users' computers, where users have control over it. Users can delete those cookies or block them from being placed. In theory. The reality, though, is that deleting or blocking cookies takes a lot of effort, and while there are some services that help you out, they're not always great. In an ideal world, we would have built tools that made it clearer to end users what information cookies were tracking, and what was being done with that information -- as well as consumer-friendly tools to adjust things. But that's not the world we ended up in. Instead, we ended up in a world where the hamfisted use of 3rd party cookies is generally just kinda creepy. In the past, I've referred to it as the uncanny valley of advertising: where the advertising is not so well targeted as to be useful, but just targeted enough to be creepy and annoying by reminding you that you're being tracked.
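
To ground that a bit: a cookie is just a small named value a server asks the browser to store and send back, and "blocking" or "deleting" it is nothing more than the client declining to keep or return it. Here's a minimal sketch using Python's standard library; the cookie name and ad domain are made up for illustration:

```python
from http.cookies import SimpleCookie

# What a third-party ad server's Set-Cookie header boils down to
# (the name and domain here are invented for illustration).
cookie = SimpleCookie()
cookie["uid"] = "abc123"
cookie["uid"]["domain"] = ".ads.example"
cookie["uid"]["path"] = "/"
cookie["uid"]["max-age"] = 60 * 60 * 24 * 365  # a year of tracking
print("Set-Cookie:", cookie["uid"].OutputString())

# "Blocking" is simply the client refusing to store (and later send) it.
blocked_domains = {".ads.example"}
cookie_jar = {}
if cookie["uid"]["domain"] not in blocked_domains:
    cookie_jar["uid"] = cookie["uid"].value
print("Stored cookies:", cookie_jar)  # empty, so the tracker never gets its ID back
```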

The actual death knell for 3rd party cookies happened a while back. Firefox and Safari phased out 3rd party cookies a long time ago, and Google announced plans to do the same a year ago, with an actual target date for implementation a year from now. Today's news was more about what happens next, with Google promising not to use some sneaky method to basically replace cookies with something even worse. There is a concerted effort by some to track you through a "hashed email address". This is really creepy and kinda sketchy.

As a side note, a few years back, we were approached by a company doing this. They basically asked us to hand over a hashed set of emails we had collected. We looked over the details, and highlighted that they wanted us to use their hash, meaning that they could easily reverse the hash and figure out the emails. We explained that they must be mistaken, because that's really not all that different from just handing over emails, which would be a violation of our own privacy policy. We were told that, no, the whole idea was everyone had to use the same hash, and it was fine because the email addresses were hashed (ignoring the point we made about that being meaningless if everyone is using the same hash). We rejected this deal, even though they were actually offering decent money. I do sometimes wonder how many other publishers just coughed up everyone's emails, though.
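
To make the concern concrete: a shared, unsalted hash of an email address is trivially matchable (and effectively reversible) by anyone who already holds plaintext addresses, because they can simply recompute the same hash. Here's a minimal sketch of the problem; the SHA-256 choice and the normalization step are my assumptions for illustration, not the vendor's actual scheme:

```python
import hashlib

def shared_hash(email: str) -> str:
    # Everyone normalizes and hashes the same way, with no salt or secret,
    # precisely so the values match across publishers.
    return hashlib.sha256(email.strip().lower().encode()).hexdigest()

# A publisher uploads its "anonymized" list...
uploaded = {shared_hash("alice@example.com"), shared_hash("bob@example.com")}

# ...and anyone holding plaintext emails can re-identify people in it by
# recomputing the hash -- a simple dictionary lookup, no cryptography broken.
for candidate in ["alice@example.com", "carol@example.com"]:
    if shared_hash(candidate) in uploaded:
        print(f"{candidate} is re-identified despite the hashing")
```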

So, Google's latest point is that it's not going to use some other unique identifier, and recognizes that the hashed-email-based identifier is a bad idea:

We realize this means other providers may offer a level of user identity for ad tracking across the web that we will not — like PII graphs based on people’s email addresses. We don’t believe these solutions will meet rising consumer expectations for privacy, nor will they stand up to rapidly evolving regulatory restrictions, and therefore aren’t a sustainable long term investment. Instead, our web products will be powered by privacy-preserving APIs which prevent individual tracking while still delivering results for advertisers and publishers.

Instead, Google is pushing for a different kind of solution -- what it has referred to for a while now as a "Privacy Sandbox." The idea is not to track individuals but rather to dump you into a "cohort" of similar users, thereby not needing unique identifiers, just slightly more general ones. Google has taken to calling this cohort setup "Federated Learning of Cohorts", or FLoC, which it recently declared to be 95% as good at targeting ads, but in a less creepy way.
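
To give a rough feel for the cohort idea, here's a deliberately simplified toy -- not Google's actual FLoC algorithm, which used a SimHash-style clustering of browsing history -- that maps a browsing profile onto one of a few thousand cohort IDs (the cohort count is an arbitrary example):

```python
import hashlib

NUM_COHORTS = 2048  # arbitrary number chosen for illustration

def toy_cohort_id(visited_domains: set) -> int:
    # Collapse the browsing profile into one of NUM_COHORTS buckets. Note the
    # simplification: a plain hash only groups *identical* domain sets, whereas
    # FLoC's SimHash was designed so *similar* histories land near each other.
    canonical = ",".join(sorted(visited_domains))
    digest = hashlib.sha256(canonical.encode()).digest()
    return int.from_bytes(digest[:4], "big") % NUM_COHORTS

print(toy_cohort_id({"news.example", "recipes.example", "shoes.example"}))
```

The point of the design is that many people share each cohort ID, so what advertisers see describes a crowd rather than a person.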

In many ways, this is obviously better than the use of full-on individual tracking via 3rd party cookies (or hashed emails). It's sort of a step away from individual targeting and at least a very slight movement back towards contextual advertising, which is something I've argued both Google and Facebook should do. But it's still not ideal. You still have the concerns about how much data Google has about you, and you still have the concerns about whether or not this locks in Google's position. Those don't go away with this move.

And, of course, there's the other framework to think about this: the never-ending threat of new privacy laws. So much of the focus on privacy legislation is (stupidly) about fighting the last battles, and that's why things like the GDPR and California's CCPA focused on useless and counterproductive cookie notifications. In some ways, this could be seen as a step towards getting ahead of that coming meteor, sidestepping it by saying "okay, okay, there are no more third party cookies."

In the end, you can't argue that this is a great solution or a terrible one. It is... just a change. A change that helps one aspect of how our current online privacy paradigm works, but which might cause other problems. It's good in that it's a further step towards the end of 3rd party cookies, which have been abused in creepy ways for too long. But it doesn't really fix overall privacy issues, and could still help lock Google into a position of dominance.

Wed, 3 Mar 2021 19:56:08 PST
Another Game Developer DMCAs Its Own Game In Dispute With Publisher
Timothy Geigner
https://beta.techdirt.com/articles/20210303/09453946352/another-game-developer-dmcas-own-game-dispute-with-publisher.shtml

Way back in early 2019, we wrote about an odd story with a game developer DMCAing its own game on Valve's Steam platform over a dispute with its publisher. The short version of the story is that the developer accused the publisher of ghosting out on royalty payments, so the takedown allowed the developer to wrestle back control of the game and put it back up themselves. Steam, which has a reputation of being far more friendly to publishers than developers, in this case actually helped the developer wade through getting control of its game.

And now, two years later, it's happening again. Frogwares, developer of The Sinking City game, issued a DMCA notice for the game to Steam. At issue again is the publisher, Nacon in this case, being accused both of skipping out on royalty payments last summer and of cracking Frogwares' game and altering it, putting out a completely unauthorized version. See, due to the royalty issues, Frogwares had already pulled the game off of digital storefronts last summer. Suddenly, Nacon published a new version of the game on Steam in the past few days. The details as laid out by Frogwares on that last bit are... quite a thing.

In a post it put up yesterday afternoon, Frogwares further detailed the situation, writing, “[T]o our great surprise, we found a new version of The Sinking City was uploaded to Steam and launched, but Frogwares didn’t deliver such a version… Nacon, under the management of its president Alain Falc, asked some of their employees to crack, hack and pirate our game, change its content in order to commercialize it under their own name, and this is how they did it.”

The game developer’s post goes on to share a variety of information that, Frogwares writes, is evidence proving the French publisher bought The Sinking City from a separate platform and altered the game’s data to hide its tracks. This included replacing online retailer Gamesplanet’s logo in the opening credits and loading screen as well as removing a dynamic “Play More” option from the main menu that pointed players towards Frogwares’ other games and acted as a non-intrusive security measure by connecting to external servers.

Nacon claims otherwise, of course. The publisher says it has a contractual arrangement with Frogwares, that the new release is authorized, and that all is on the up and up. But two facts seem to suggest that might not be true. For starters, if this were an authorized release, why the mucking about with buying and cracking other copies of the game from other storefronts? Assuming the evidence Frogwares is putting out there is true, there should be no need to do any of that if there is an arrangement between developer and publisher.

But Nacon knows all of that, as it's been locked in a legal battle in French courts over the rights to publish the game for months. From a statement Frogwares put out:

Regarding our use of a DMCA to remove the game from Steam. We believe in a very short time, we were able to collect extremely strong evidence to indicate this version of the game was pirated and contains content that Nacon has absolutely no rights to – namely The Merciful Madness DLC. A DMCA notice proved to be our most effective tool to give us time to gain further potential evidence and to also start the required and lengthy additional legal processes to prevent this from happening again.

We are aware that a final ruling on whether Frogwares are obligated to deliver a Steam version has yet not been made and could take years. As it stands, we have an appeals court ruling saying, until further notice Frogwares do not need to deliver a Steam version to Nacon. In the meantime, Nacon decided to take justice into their own hands and release a pirated build.

Which sort of makes that publisher a pirate if true. And this is the sort of piracy that damned well should be punished.

Wed, 3 Mar 2021 15:41:18 PST
Content Moderation Case Study: Decentralized Social Media Platform Mastodon Deals With An Influx Of Gab Users (2019)
Copia Institute
https://beta.techdirt.com/articles/20210303/14474346357/content-moderation-case-study-decentralized-social-media-platform-mastodon-deals-with-influx-gab-users-2019.shtml

Summary: Formed as a more decentralized alternative to Twitter that allowed users to more directly moderate the content they wanted to see, Mastodon has experienced slow, but steady, growth since its inception in 2016.

Unlike other social media networks, Mastodon is built on open-source software and each "instance" (server node) of the network is operated by users. These separate "instances" can be connected with others via Mastodon's interlinked "fediverse." Or they can remain independent, creating a completely siloed version of Mastodon that has no connection with the service's larger "fediverse."

This puts a lot of power in the hands of the individuals who operate each instance: they can set their own rules, moderate content directly, and prevent anything the "instance" and its users find undesirable from appearing on their servers. But the larger "fediverse" -- with its combined user base -- poses moderation problems that can't be handled as easily as those presenting themselves on independent "instances." The connected "fediverse" allows instances to interact with each other, allowing unwanted content to appear on servers that are trying to steer clear of it.

That's where Gab -- another Twitter alternative -- enters the picture. Gab has purposely courted users banned from other social media services. Consequently, the platform has developed a reputation for being a haven for hate speech, racists, and bigots of all varieties. This toxic collection of content/users led to both Apple and Google banning Gab's app from their app stores.

Faced with this app ban, Gab began looking for options. It decided to create its own Mastodon instance. With its server now technically available to everyone in the Mastodon "fediverse," instances not explicitly blocking Gab's "instance" could find Gab content available to their users -- and could also find Gab's users directing content at their own users. It also allowed Gab to utilize the many different existing Mastodon apps to sidestep the app bans handed down by Google and Apple.
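
Conceptually, the moderation lever each instance holds is a domain-level block applied to incoming federated activity. The sketch below illustrates that idea only; it is not Mastodon's actual code:

```python
from urllib.parse import urlparse

# Per-instance setting: an admin's "defederation" list of blocked domains.
BLOCKED_DOMAINS = {"gab.com"}

def accept_activity(activity: dict) -> bool:
    # Drop any incoming ActivityPub activity whose actor lives on a blocked
    # domain before it ever reaches local users' timelines.
    actor_domain = urlparse(activity.get("actor", "")).netloc.lower()
    return actor_domain not in BLOCKED_DOMAINS

incoming = {"type": "Create", "actor": "https://gab.com/users/example"}
print(accept_activity(incoming))  # False: this instance silently refuses it
```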

Decisions to be made by Mastodon:

  • Should Gab (and its users) be banned from setting up "instances," given that they likely violate the Mastodon Server Covenant?

  • Is it possible to moderate content across a large number of independent nodes?

  • Is this even an issue for Mastodon itself to deal with, given that the individuals running different servers can decide for themselves whether or not to allow federation with the Gab instance?

  • Given the open source and federated nature of Mastodon, would there reasonably be any way to stop Gab from using Mastodon?

Questions and policy implications to consider:

  • Will moderation efforts targeting the "fediverse" undercut the independence granted to "instance" owners?

  • Do attempts to attract more users create moderation friction when the newly-arriving users create content Mastodon was created to avoid?

  • If Mastodon continues to scale, will it always face challenges as certain instances are created to appeal to audiences that the rest of the “fediverse” is trying to avoid?

  • Can a federated system, in which unique instances choose not to federate with another instance, such as Gab, work as a form of “moderation-by-exclusion”?

Resolution: Mastodon's founder, Eugen Rochko, refused to create a blanket ban on Gab, leaving it up to individual "instances" to decide whether or not to interact with the interlopers. As he explained to The Verge, a blanket ban would be almost impossible, given the decentralized nature of the service.

On the other hand, most "fediverse" members would be unlikely to have to deal with Gab or its users, considering the content contained in Gab's "instance" routinely violates the Mastodon "covenant." Violating these rules prevents instances from being listed by Mastodon itself, lowering the chances of other "instance" owners inadvertently adding toxic content and users to their server nodes. And Rochko himself encouraged users to preemptively block Gab's "instance," resulting in even fewer users being affected by Gab's attempted invasion of the Mastodon fediverse.

But running a decentralized system creates an entirely new set of moderation issues, which has turned Mastodon itself into a moderation target. Roughly a year after the Gab "invasion," Google threatened to pull Mastodon-based apps from its store for promoting hate speech, after users tried to get around the Play Store ban by creating apps that pointed to Mastodon "instances" filled with hateful content. Google ultimately decided to leave Mastodon-based apps up, but appears ready to pull the trigger on a ban in the future.

Originally posted to the Trust & Safety Foundation website.

Wed, 3 Mar 2021 13:31:39 PST
Court Tells Government It Can't Hide Behind Its Third-Party DNA Analysis Vendor To Withhold Evidence
Tim Cushing
https://beta.techdirt.com/articles/20210227/16435246333/court-tells-government-it-cant-hide-behind-third-party-dna-analysis-vendor-to-withhold-evidence.shtml

The government says we have no right to access information about its law enforcement "means and methods." To give these secrets away is to instigate a criminal apocalypse.

That's the argument the government has made to protect everything from sketchy confidential informant testimony to Stingray devices. Even when the public has a pretty good idea about what's going on, the government still argues the public can't be trusted. Stingrays aren't a big secret anymore. And confidential informants are only trustworthy until the government decides they aren't and starts feeding them to the criminal justice system.

The government has obligations to the public. Court cases have a presumption of openness -- what happens there can be accessed by everyone. To dodge this, the government seals cases and demands ex parte hearings that cut the defense side out of the equation.

The government also avails itself of a number of private contractors. The government is big enough it can't do everything by itself. And it doesn't hurt that its contracts with private companies help keep some of its questionable activities out of the public eye.

Ask a private company to do your dirty work and you can fend off judges and presumptions of transparency. Add law enforcement "means and methods" arguments to claims about trade secrets and you can wield the private sector against the public for as long as possible.

For the most part this process works. Every so often a federal judge kicks back, prompting everyone involved to come up with better arguments as to why defendants shouldn't be allowed to take a deep look at the evidence being used against them.

Government agencies have ditched cases when defendants have asked about cell tower spoofers or forensic software used to generate evidence against them. But they only do this when courts have decided the people whose life and liberty are at stake deserve answers.

If a court doesn't act to intercede, the government will continue to wield the private sector against the public sector. In cases where proprietary software is involved, the government will allow private companies to assert that giving defendants a chance at a fair trial would undercut the contractors' ability to turn a profit.

When these private entities intercede, they're asking the courts to declare it's more important for these companies to remain viable than allow Americans to fully exercise their rights.

Fortunately, courts haven't always been sympathetic to the arguments the government has raised on behalf of its private contractors. Among the more frequent private intercessors are DNA analysis companies, which argue that revealing their algorithms would cause the collapse of the private DNA-sequencing industry… starting with those who have aided the government the most.

Not true, says at least one federal court. In at least one case involving DNA evidence, a federal court has said hiding behind trade secrets and confidentiality agreements doesn't serve the public. If the government wants to use evidence derived from proprietary software, it had better be ready to share that software with the person it's accusing of criminal acts.

The EFF's intercession into another case involving DNA software and government/private sector secrecy has paid off for the defendant. The basic tenets of due process say criminal defendants must have access to the evidence used against them. Private contractors like Cybergenetics -- which is hoping to shield its "trade secrets" -- are subject to the same discovery rules that affect the government.

A short ruling [PDF] issued by a Pennsylvania federal court says private contractors working with the government are obligated to hand over information to criminal defendants.

The court resists the government's resistance:

The Government resists disclosure of the source code on grounds that Cybergenetics considers it a trade secret, and that disclosure is not necessary. The Court has considered the present record, including the amicus submission made on Defendant’s behalf and Dr. Perlin’s declaration. Here, there can be no dispute that the DNA evidence is central to the case against Defendant.

And if it's central, it must be disclosed:

Based on all applicable factors and considerations previously identified in my January 21 Order, Paragraph 5)2c of the Amended Subpoena Schedule, attached as Exhibit 2 to Docket No. 73, will not be quashed.

There are some limitations -- like the possible deployment of a protective order that will shield this info (at least temporarily) from public view. But the overriding presumption is transparency. If the government wants to use evidence derived from a private company's DNA analysis, it has an obligation to let the defendant examine it. The company's concerns about its proprietary calculations ultimately make no difference. If it wants to work with the government, it needs to be prepared to hand over this info to criminal defendants.

We'll have to see where it goes from here, but this ruling makes it clear private contractors are considered public when they choose to do business with public agencies. To rule otherwise is to allow the government to have its evidence and hide it too. That's not how America works.

]]>
time-to-play-fair,-g-men https://beta.techdirt.com/comment_rss.php?sid=20210227/16435246333
Wed, 3 Mar 2021 12:14:30 PST Broadband ISP Frontier Just Keeps Happily Ripping People Off With Bogus Fees, And Zero Real Repercussions Karl Bode https://beta.techdirt.com/articles/20210216/07340346251/broadband-isp-frontier-just-keeps-happily-ripping-people-off-with-bogus-fees-zero-real-repurcussions.shtml https://beta.techdirt.com/articles/20210216/07340346251/broadband-isp-frontier-just-keeps-happily-ripping-people-off-with-bogus-fees-zero-real-repurcussions.shtml When you're a natural monopoly in America you get away with a lot. Take, for example, Frontier Communications, which has spent the last few years stumbling in and out of bankruptcy while dodging no shortage of scandals, including allegations of subsidy fraud. Last year, Frontier got a light wrist slap for fraudulently charging its customers a "rental" fee for modems they already owned. The company also paid a tiny $900,000 fine last year to Washington State AG Bob Ferguson for using bogus fees to rip off the company's captive subscriber base.

Of particular annoyance in consumer complaints has been the company's $4 per month "Internet Infrastructure Surcharge," which is a completely nonsensical, bullshit charge the company levies below the line. The surcharge doesn't really go to "infrastructure" (that's what your entire bill is for). What it does do is give Frontier a way to continually increase consumer prices while falsely advertising a lower rate. Other ISPs engage in similar behavior with little real penalty (see CenturyLink's "Internet Cost Recovery" fee).

While the $900,000 Washington State AG fine is semi-helpful, like most US regulatory "penalties" it's a tiny fraction of the money made via the dubious business practice. And while the company stopped charging the fee in Washington, it still charges it across the rest of its 22-state footprint. Note that Frontier has 3,735,000 broadband subscribers, each paying $4 a month in completely erroneous surcharges. That's nearly $15 million in bullshit charges in just one month, or $180 million in dodgy revenue every year.
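For anyone who wants to check the math, here is the back-of-the-envelope calculation behind those figures, a rough illustration using the subscriber count cited above:

```python
# Rough back-of-the-envelope math for the surcharge figures cited above.
subscribers = 3_735_000   # Frontier broadband subscribers
fee = 4                   # monthly "Internet Infrastructure Surcharge", in dollars

monthly_total = subscribers * fee    # 14,940,000 -- "nearly $15 million"
yearly_total = monthly_total * 12    # 179,280,000 -- roughly $180 million

print(f"Monthly: ${monthly_total:,}")
print(f"Yearly:  ${yearly_total:,}")
```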

Facing only a light wrist slap for the practice, Frontier seems intent on doubling down on this behavior. The company this week announced it will be bumping the fee to $7 per month. Frontier attempted to explain away the bogus surcharge this way:

"The increase applies to Frontier customers based on individual service packages and reflects increasing maintenance and other network costs, including the rapidly rising costs of supporting our customers' increased Internet traffic and usage, and consumer demand for greater bandwidth, services, and other requirements that affect our Internet network. Customers on price-lock and promotional pricing will not see this increase until their terms expire."

But again, "maintenance and other network costs" is what the entirety of your bill is for, and the fee's real purpose is to help the iSP engage in false advertising on pricing.

Despite decades of this, federal regulators at the FCC have largely been utterly pathetic on this issue. While there were some basic rules requiring at least some transparency in pricing baked into the FCC's net neutrality rules, those were gutted by industry lobbyists during the Trump administration repeal. This kind of misleading pricing could also be mitigated via policies that push more competition to market, but since most US markets lack competitive options, and building more competitive options tends to upset politically powerful telecom monopolies, we usually only pay lip service to that concept as well.

]]>
do-not-pass-go,-do-not-collect-$200 https://beta.techdirt.com/comment_rss.php?sid=20210216/07340346251
Wed, 3 Mar 2021 10:47:09 PST New York City Shifting Mental Health Calls From NYPD To Actual Mental Health Professionals Tim Cushing https://beta.techdirt.com/articles/20210227/14092546332/new-york-city-shifting-mental-health-calls-nypd-to-actual-mental-health-professionals.shtml https://beta.techdirt.com/articles/20210227/14092546332/new-york-city-shifting-mental-health-calls-nypd-to-actual-mental-health-professionals.shtml In all honesty, we've been asking the police to do too much for years. If we really cared about the most vulnerable members of our community, we would stop calling cops to handle these situations. But for years, that's been pretty much our only option. We call 911 and 911 tends to send cops to deal with people in the throes of mental health crises.

This has worked out badly. Cops aren't trained to handle mental health issues. They're trained to apprehend criminals and meet latent threats with deadly force. People who just need a good doctor are ending up with bullets in them. In far too many cases, suicide threats end with the suicidal person dead. That's not what we want from the police. Unfortunately, that's all they really have to offer. And that's how courts end up excusing cops for, say, tasing a person doused in gasoline, ensuring the latent threat they posed became a reality, killing the person needing help, and burning down the house around him.

Cities are beginning to take another approach to mental health issues. Wellness checks are better handled by mental health professionals. It's a conclusion that seems obvious but is rarely embraced by law enforcement and the local governments overseeing them. There's a time and place for a law enforcement response. A mental health crisis isn't a police matter. Neither is homelessness. Neither is bog-standard trespassing, which often just means someone saw someone where they didn't expect to see someone.

Routing these calls to people trained to respond appropriately works. A pilot program in Denver, Colorado just wrapped up six months of rerouting, resulting in no deaths, no wounding, and no arrests, despite handling more than 350 calls. Police still handle most 911 calls, but even in a part-time capacity, Denver's new mental health response team has shown an improvement over how these calls have been handled historically.

And now it appears the largest police department in the nation will be handing off mental health calls to mental health professionals. The NYPD will no longer be handling some calls related to issues that really don't require a show of force in response. The program was first announced late last year in response to the killing of Daniel Prude -- a man suffering a mental breakdown -- by Rochester, New York police officers.

Mental health workers will replace police officers in responding to some 911 calls next year in New York City, Mayor Bill de Blasio announced Tuesday.

The test program, to be rolled out in two neighborhoods, will give mental health professionals the lead role when someone calls 911 because a family member is in crisis, officials said.

The initiative is modeled on existing programs in cities including Eugene, Oregon, where teams of paramedics and crisis workers have been responding to mental health 911 calls for more than 30 years.

The limited rollout is now expanding to cover one of New York City's largest boroughs.

New York City police will stay out of many mental health crisis calls and social workers will respond instead in parts of northern Manhattan starting this spring, an official told lawmakers Monday.

The test program will begin in three Harlem and East Harlem police precincts that together accounted for a highest-in-the-city total of over 7,400 mental health-related 911 calls last year, said Susan Herman, who heads a wide-ranging city mental health initiative called ThriveNYC.

The program will continue to expand for the next couple of years. NYPD officers will no longer be expected to handle certain calls and will be able to ask this unit for help if a call they respond to requires its expertise.

Officers will still respond to calls involving weapons or "imminent risk of harm." This leeway should keep mental health professionals out of harm's way. But it will also increase the risk that mental health crises will see force -- rather than knowledge and de-escalation -- deployed in response to certain 911 calls.

Still, it's a positive step. The NYPD -- and its union reps -- have been uninterested in seeing EMTs and healthcare professionals insert themselves into this part of the law enforcement equation. And the stats show the NYPD hasn't been as awful at handling health issues as some other police departments elsewhere in the nation. Fewer than 1 in 100 calls resulted in arrest. However, half of those calls ended with hospitalization. This could be a positive. Hospitalization is often the desired outcome in mental health crises. But the stats cited in this report do not break down hospitalizations to show which were due to injuries sustained during arrests/detainments and which were due to appropriate responses to mental health issues.

But overall this appears to be a positive step forward. It has worked elsewhere in the nation so there's no reason to believe this won't be a net gain for New York City residents. Cops don't have all the answers. And they certainly don't have all the training needed to handle problems better addressed by healthcare professionals. Anything that removes these judgment calls from the equation will help more New Yorkers stay alive and unharmed when suffering mental health issues.

]]>
keeping-more-people-alive-is-always-a-net-positive https://beta.techdirt.com/comment_rss.php?sid=20210227/14092546332
Wed, 3 Mar 2021 10:44:00 PST Daily Deal: The 2021 Ultimate Mixology And Cocktail Bundle Daily Deal https://beta.techdirt.com/articles/20210303/10093946353/daily-deal-2021-ultimate-mixology-cocktail-bundle.shtml https://beta.techdirt.com/articles/20210303/10093946353/daily-deal-2021-ultimate-mixology-cocktail-bundle.shtml Learn about popular liquors and up your mixology game with the 2021 Ultimate Mixology And Cocktail Bundle. The 5 courses cover gin, tequila, whiskey, rum, and vodka. In each course you will not only learn about the background and history of 20 of the most popular cocktails made with that particular liquor, but you will also learn the techniques that will enable you to mix truly world-class versions of each one. It's on sale for $30.

Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.

]]>
good-deals-on-cool-stuff https://beta.techdirt.com/comment_rss.php?sid=20210303/10093946353
Wed, 3 Mar 2021 09:37:47 PST Microsoft Attacks The Open Web Because It's Jealous Of Google's Success Mike Masnick https://beta.techdirt.com/articles/20210302/17264946349/microsoft-attacks-open-web-because-jealous-googles-success.shtml https://beta.techdirt.com/articles/20210302/17264946349/microsoft-attacks-open-web-because-jealous-googles-success.shtml Lots of attention has been paid to the mess down in Australia with its news link tax "bargaining code", and Facebook's response to it, including the eventual caving. So now both Google and Facebook have effectively agreed to pay a tax to Australia's largest media companies... for daring to send them free traffic. It's the most bizarre thing in the world. Imagine if every time TV stations ran an advertisement, they also had to pay the advertiser. That's what this is.

However, we should focus in a bit on Microsoft's role in all of this. First, before Google agreed to its deal, while it was still threatening to shut down news links in Australia, Microsoft stepped in and said it would gladly support the law. This was so transparently greedy of the company. Basically, Microsoft has realized that its failure to compete in the marketplace means it benefits from supporting this kind of law, knowing that one of two things will happen: (1) Google will bail out of a market, leaving it open to Microsoft, or (2) it'll just cost its competitor Google a lot of money.

The fact that it also fucks with the basic concept of the open web and not having to pay to link doesn't seem to enter into Microsoft's calculus at all. This takes Microsoft back to the shameful era in which it paid some godawful amount of money to political trickster Mark Penn not to help Microsoft better compete, but to simply attack Google like a political candidate. This is classic political entrepreneurship rather than market entrepreneurship. It's a sign of failure, when you're not trying to actually innovate, but simply abusing the political process to hamstring competitors.

But, in this case, it's even worse, because it's not just Google and Facebook that get screwed, but the entire concept of the open web.

And it gets worse. Microsoft seems so positively giddy about how this all worked out in Australia that it's now taken the campaign global. Microsoft President Brad Smith wrote a blog post calling for this policy to be adopted elsewhere. Incredibly, Smith seems to argue that the attack on the Capitol might not have happened if Google and Facebook were taxed this way globally. The whole thing is just... so obnoxiously dishonest. It bemoans the loss of "professional journalism" and blames it all on social media.

But that's garbage. Multiple studies have shown that Fox News was a bigger problem in spreading disinformation than social media. And remember that Fox News boss Rupert Murdoch is the main beneficiary of the Australian law. It's literally taking money from the less problematic spreader of disinfo and giving it to the more problematic one. But Smith/Microsoft act as if this is all for the good of society:

The ideas are straightforward. Dominant tech properties like Facebook and Google will need to invest in transparency, including by explaining how they display news content.

Even more important, the legislation will redress the economic imbalance between technology and journalism by mandating negotiations between these tech gatekeepers and independent news organizations. The goal is to provide the news organizations with compensation for the benefit derived by tech gatekeepers from the inclusion of news content on their platforms.

This has been Murdoch's framing all along and it's total bullshit. There is no "imbalance" here because there's nothing to negotiate over. You don't need to negotiate to link. It's a free thing, and as we saw when Facebook pulled its links, and all these news organizations started whining, it showed that those news organizations actually recognize they derive value from the free links. The whole thing is ridiculous.

And Smith tries to frame it as if it's protecting democracy and the American way. That's garbage.

These are now pressing questions for the Biden administration. Facebook and Google persuaded the Trump administration to object to Australia’s proposal. However, as the United States takes stock of the events on January 6, it’s time to widen the aperture.

The ultimate question is what values we want the tech sector and independent journalism to serve. Yes, Australia’s proposal will reduce the bargaining imbalance that currently favors tech gatekeepers and will help increase opportunities for independent journalism. But this a defining issue of our time that goes to the heart of our democratic freedoms. As we wrote in 2019, “The tech sector was born and has grown because it has benefited from these freedoms. We owe it to the future to help ensure that these values survive and even flourish long after we and our products have passed from the scene.”

The United States should not object to a creative Australian proposal that strengthens democracy by requiring tech companies to support a free press. It should copy it instead.

I'm an independent journalist. This isn't going to help me. There's no way in hell any of these solutions will ever involve us getting any money out of this, and if they did, then suddenly there would be all sorts of questions about whether or not we could even cover those companies fairly, since they'd be a key source of revenue. How the hell does this help democracy?

And, then it gets even worse. Soon after, Microsoft announced that it was teaming up with EU publishers to seek a similar link tax in the EU.

Microsoft is teaming up with European publishers to push for a system to make big tech platforms pay for news, raising the stakes in the brewing battle led by Australia to get Google and Facebook to pay for journalism.

The Seattle tech giant and four big European Union news industry groups unveiled their plan Monday to work together on a solution to “mandate payments” for use of news content from online “gatekeepers with dominant market power.”

They said they will “take inspiration” from proposed legislation in Australia to force tech platforms to share revenue with news companies and which includes an arbitration system to resolve disputes over a fair price for news.

Some people are seeing through all of this -- and even Inc. Magazine called out Microsoft as "the ultimate troll" for its stance here. But this isn't a game. This isn't just politics. This is about the nature of the open web.

And Microsoft is singing happily along as it helps regulators and old, failed publishers around the globe break the open web. All because they're jealous of Google.

It's not difficult to see the path this is likely to head down, and it's bad. A few countries force Google/Facebook to pay these old school publishers. Then, basically everyone else on the web notices this and says "hey, how come they get to link to me for free? Shouldn't they be paying me too?!?" And then, one by one, we'll just hear of every failed and flopped industry demanding free money from the companies that actually innovated. The music industry must be so excited. Book publishing? Absolutely. What about boxed software providers (hi, Microsoft!).

Basically, every industry that failed to adapt and innovate online is likely to go running to government demanding payment. And the very nature of the open internet ceases to exist the way it has for the past three decades. It's a terrible, terrible idea, and it was ridiculous that it went ahead in Australia. Microsoft, though, is an actual tech company that should know better; its trollish obsession with Google beating it in the market means it's willing to toss out the open internet if it thinks doing so will harm Google.

It's shameful and disgusting.

]]>
stop it https://beta.techdirt.com/comment_rss.php?sid=20210302/17264946349
Wed, 3 Mar 2021 06:23:01 PST After Hyping 5G For Years, Verizon Advises Users To Turn It Off To Avoid Battery Drain Karl Bode https://beta.techdirt.com/articles/20210302/09301446347/after-hyping-5g-years-verizon-advises-users-to-turn-it-off-to-avoid-battery-drain.shtml https://beta.techdirt.com/articles/20210302/09301446347/after-hyping-5g-years-verizon-advises-users-to-turn-it-off-to-avoid-battery-drain.shtml If you listen to Verizon marketing, it goes something like this: fifth generation (5G) wireless is going to absolutely transform the world by building the smart cities of tomorrow, revolutionizing medicine, and driving an ocean of innovation.

In reality, US 5G has largely landed with a thud, with studies showing how the US version is notably slower than overseas 5G (and in fact often slower than the 4G networks you're used to) because the US didn't do enough to drive middle-band spectrum to market. Contrary to Verizon's claims, it's not a technology that's likely to revolutionize medicine. Service availability also remains very spotty, and US consumers continue to pay some of the highest prices for mobile data in the developed world, regardless of standard.

Some variations of the technology are also a bit of a battery hog, something Verizon support was begrudgingly forced to acknowledge this week by informing users that if they want better battery life, they're better off turning 5G off:

"Verizon has spent years hyping 5G despite it bringing just a minor speed upgrade outside the limited areas where millimeter-wave spectrum has been deployed, but the carrier's support team advised users yesterday to shut 5G off if their phones are suffering from poor battery life. The tweet from VZWSupport, now deleted, said, "Are you noticing that your battery life is draining faster than normal? One way to help conserve battery life is to turn on LTE. Just go to Cellular > Cellular Data Options > Voice & Data and tap LTE."

The Tweet was deleted once it began getting attention from users and journalists amused by the disconnect given two straight years of marketing that claimed the technology was near-miraculous. Granted, companies like Apple have adapted to this reality by unveiling a "Smart Data mode" that shifts each phone from 5G to LTE when 5G speeds aren't necessary, conserving battery life. Samsung, Huawei, and other handset manufacturers have issued similar warnings about how the wireless industry's shiny new standard is a bit of an energy hog.

Again, that's not to say 5G isn't going to offer meaningful improvements, eventually. The standard ideally delivers significant improvements in speed, latency, and reliability (assuming it's actually available to you). But these improvements are more evolutionary than revolutionary. The problem: as wireless carriers have looked to use the standard to justify higher and higher rates, they've over-promised what it's capable of. That, in turn, is creating a consumer brand impression that associates 5G not with reliability but with hype and bluster -- the exact opposite of what marketing departments were hoping for.

]]>
not-the-revolution-we-were-promised https://beta.techdirt.com/comment_rss.php?sid=20210302/09301446347
Wed, 3 Mar 2021 03:23:01 PST Judge Presiding Over Arizona Prosecution Of Backpage Denies Discovery Requests Targeting Her Husband, Who Happens To Be State Attorney General Tim Cushing https://beta.techdirt.com/articles/20210228/17180446340/judge-presiding-over-arizona-prosecution-backpage-denies-discovery-requests-targeting-her-husband-who-happens-to-be-state.shtml https://beta.techdirt.com/articles/20210228/17180446340/judge-presiding-over-arizona-prosecution-backpage-denies-discovery-requests-targeting-her-husband-who-happens-to-be-state.shtml Here's one more horrifying postscript to the still-ongoing criminal prosecution(s) of Backpage's executives. Courts and attorneys general (including newly installed VP Kamala Harris) tried to run the company in on prostitution charges but often found their efforts rebuffed by courts who didn't see how hosting third-party ads was the same thing as aiding and abetting sex trafficking.

Prosecutions abounded. So did a cottage industry of pearl clutchers and hand wringers -- many of whom held powerful offices in Washington DC. These people were convinced the only way to fight sex trafficking was to punch holes in Section 230. Despite being warned against doing so by none other than the DOJ, they went ahead and passed FOSTA. This anti-sex trafficking law has been used exactly once in a criminal case since its inception.

But here's the new thing, via Stephen Lemons writing for Front Page Confidential. The undercurrent of corruption behind the Backpage prosecutions continues to flow. It was never meant to be a fair fight. It was meant to make Backpage an example after other online services managed to shrug off misguided investigations and prosecutions attempting to turn hosts into criminal confederates.

One of the goals of government work -- especially as it pertains to checks and balances -- is to avoid any appearances of impropriety. But in Arizona, appearances appear to be unimportant. Impropriety is in the eye of the beholder. And if the beholder wields less power, too fucking bad. Here's how things are being handled in the government's attempt to prosecute Michael Lacey and Jim Larkin of Backpage.

Appearance? No, actual impropriety!

A game of legal ping pong has ensued in the Lacey/Larkin case, with U.S. District Court Judge Susan Brnovich shooting down a defense subpoena seeking the same docs from her husband, Arizona Attorney General Mark Brnovich, as a public records request from the defendants, now pending at the AG’s office.

On February 11, the judge ruled against the defense’s motion for a subpoena to her spouse’s office, requesting “all correspondence or records” discussing the defendants, their case and the defunct listings website at the center of it all, Backpage.com.

Welp.

The only judge who has yet to recuse herself from this case is married to the state Attorney General who has made comments concerning Backpage that might be relevant to the case. And yet, Judge Brnovich sees nothing wrong with presiding over it and denying discovery materials to the people attempting to defend themselves -- when that discovery involves her husband.

Since Judge Brnovich is unwilling to address the obvious implications of her decision to stay involved with this case, the defendants have asked the Ninth Circuit Appeals Court to step in. Hopefully a set of judges far more impartial than Judge Susan Brnovich appears to be will force her to step down and hand the case to a judge who isn't married to a state Attorney General who has publicly discussed it.

Until then, Backpage is at the mercy of a system that seems willing to ignore the "checks and balances" ideals that make this country great, at least when they're respected.

Update: In the initial version of this post, we falsely claimed that state Attorney General Brnovich was prosecuting the case. It is actually a federal case, prosecuted by the DOJ. The subpoena just relates to statements made by the state Attorney General regarding Backpage. We regret the error.

]]>
i'm sorry but wtaf https://beta.techdirt.com/comment_rss.php?sid=20210228/17180446340
Tue, 2 Mar 2021 20:18:25 PST Not OK, Zoomer: Here's Why You Hate Videoconference Meetings -- And What To Do About It Glyn Moody https://beta.techdirt.com/articles/20210226/06104846322/not-ok-zoomer-heres-why-you-hate-videoconference-meetings-what-to-do-about-it.shtml https://beta.techdirt.com/articles/20210226/06104846322/not-ok-zoomer-heres-why-you-hate-videoconference-meetings-what-to-do-about-it.shtml With much of the world in various states of lockdown, the videoconference meeting has become a routine part of many people's day, and a hated one. A fascinating paper by Jeremy Bailenson, director of Stanford University's Virtual Human Interaction Lab, suggests that there are specific problems with videoconference meetings that have led to what has been called "Zoom fatigue", although the issues are not limited to that platform. Bailenson believes this is caused by "nonverbal overload", present in at least four different forms. The first involves eye gaze at a close distance:

On Zoom, behavior ordinarily reserved for close relationships -- such as long stretches of direct eye gaze and faces seen close up -- has suddenly become the way we interact with casual acquaintances, coworkers, and even strangers.

There are two aspects here. One is the size of the face on the screen, and the other is the amount of time a person is seeing a front-on view of another person's face with eye contact. Bailenson points out that in another setting where there is a similar problem -- an elevator -- people typically look down or avert their gaze in order to minimize eye contact with others. That's not so easy with videoconferencing, where looking away suggests lack of attention or loss of interest. Another problem with Zoom and other platforms is that people need to send extra nonverbal cues:

Users are forced to consciously monitor nonverbal behavior and to send cues to others that are intentionally generated. Examples include centering oneself in the camera's field of view, nodding in an exaggerated way for a few extra seconds to signal agreement, or looking directly into the camera (as opposed to the faces on the screen) to try and make direct eye contact when speaking.

According to Bailenson, research shows people speak 15% louder on videoconference calls compared to face-to-face interaction. Over a day, this extra effort mounts up. Also problematic is that it's hard to read people's head and eye movements -- important for in-person communication -- in a video call. Often they are looking at something that has popped up on their screen, or to the side, and it may be unclear whether the movement is a nonverbal signal about the conversation that is taking place. Another oddity of Zoom meetings is that participants generally see themselves for hours on end -- an unnatural and unnerving experience:

Imagine in the physical workplace, for the entirety of an 8-hr workday, an assistant followed you around with a handheld mirror, and for every single task you did and every conversation you had, they made sure you could see your own face in that mirror. This sounds ridiculous, but in essence this is what happens on Zoom calls. Even though one can change the settings to "hide self view," the default is that we see our own real-time camera feed, and we stare at ourselves throughout hours of meetings per day.

Finally, Bailenson notes that the design of cameras used for videoconferencing means that people tend to remain within a fairly tight physical space (the camera's "frustrum"):

because many Zoom calls are done via computer, people tend to stay close enough to reach the keyboard, which typically means their faces are between a half-meter and a meter away from the camera (assuming the camera is embedded in the laptop or on top of the monitor). Even in situations where one is not tied to the keyboard, the cultural norms are to stay centered within the camera's view frustrum and to keep one's face large enough for others to see. In essence users are stuck in a very small physical cone, and most of the time this equates to sitting down and staring straight ahead.

That's sub-optimal, because in face-to-face meetings, people move around: "they pace, stand up, stretch, doodle on a notepad, get up to use a chalkboard, even walk over to the water cooler to refill their glass", as Bailenson writes. That's important because studies show that movements help create good meetings. The narrow physical cone that most people inhabit during videoconferences is not just tiring, but reduces efficiency.

The good news is that once you analyze what the problems are with Zoom and other platforms, it's quite straightforward to tweak the software to deal with them:

For example, the default setting should be hiding the self-window instead of showing it, or at least hiding it automatically after a few seconds once users know they are framed properly. Likewise, there can simply be a limit to how large Zoom displays any given head; this problem is simple technologically given they have already figured out how to detect the outline of the head with the virtual background feature.
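To give a sense of how small those software changes really are, here is a hedged sketch of what an auto-hiding self-view and a cap on rendered face size might look like in a videoconferencing client. This is not Zoom's actual code or API; the class, constants, and field names are all hypothetical.

```python
# Illustrative sketch only -- not Zoom's actual client code or API.
import time

SELF_VIEW_TIMEOUT_S = 5     # hide the self-view a few seconds after the call starts
MAX_FACE_HEIGHT_PX = 240    # cap how large any participant's face is rendered


class CallWindow:
    def __init__(self):
        self.call_start = time.monotonic()
        self.show_self_view = True  # flipped off shortly after joining

    def layout(self, participants):
        """Apply the two tweaks suggested above on each rendering pass."""
        if time.monotonic() - self.call_start > SELF_VIEW_TIMEOUT_S:
            self.show_self_view = False
        for p in participants:
            p["render_height_px"] = min(p["face_height_px"], MAX_FACE_HEIGHT_PX)
        return participants
```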

Other problems can be solved by changing the hardware and office culture. For example, using an external webcam and external keyboard allows more flexibility and control over various seating arrangements. It might help to make audio-only Zoom meetings the default, or to use the old-fashioned telephone as an alternative to wall-to-wall videoconferencing. Exploring these changes is particularly important since it seems likely that working from home will remain an option or perhaps a requirement for many people, even after the current pandemic is brought under control. Now would be a good time to fight the fatigue it so often engenders.

Follow me @glynmoody on Twitter, Diaspora, or Mastodon.

]]>
fighting-fatigue https://beta.techdirt.com/comment_rss.php?sid=20210226/06104846322
Tue, 2 Mar 2021 16:22:09 PST The Unasked Question In Tech Policy: Where Do We Get The Lawyers? Cathy Gellis https://beta.techdirt.com/articles/20200511/09191844474/unasked-question-tech-policy-where-do-we-get-lawyers.shtml https://beta.techdirt.com/articles/20200511/09191844474/unasked-question-tech-policy-where-do-we-get-lawyers.shtml When we criticize Internet regulations like the CCPA and GDPR, or lament the attempts to roll back Section 230, one of the points we almost always raise is how unduly expensive these policy decisions can be for innovators. Any law that increases the risk of legal trouble increases the need for lawyers, whose services rarely come cheap.

But bare cost is only part of the problem. All too often, policymakers seem to assume an infinite supply of capable legal counsel, and it's an assumption that needs to be questioned.

First, there are not an infinite number of lawyers. For better or worse, the practice of law is a heavily regulated profession with significant barriers to entry. The legal industry can be fairly criticized, and often is, for making it more difficult and expensive to become a lawyer than perhaps it should be, but there is at least some basic threshold of training, competence, and moral character we should want all lawyers to have attained given the immense responsibility they are regularly entrusted with. These requirements will inevitably limit the overall lawyer population.

(Of course, there shouldn't be an infinite number of lawyers anyway. As discussed below, lawyers play an important role in society, but theirs is not the only work that is valuable. In the field of technology law, for example, our need for people to build new things should well outpace our need for lawyers to defend what has been built. We should be wary of creating such a need for the latter that the legal profession siphons off too much of the talent able to do the former.)

But even where we have lawyers we still need the right kind of lawyers. Lawyers are not really interchangeable. Different kinds of lawyering need different types of skills and subject-matter expertise, and lawyers will generally specialize, at least to some extent, in what they need to master for their particular practice area. For instance, a lawyer who does estate planning is not generally the one you'd want to defend you against a criminal charge, nor would one who does family law ordinarily be the one you'd want writing your employment manual. There are exceptions, but generally because that particular lawyer went out of their way to develop parallel expertise. The basic fact remains: simply picking any old lawyer out of the yellow pages is rarely likely to lead to good results; you want one experienced with dealing with the sorts of legal issues you actually have, substantively and practically.

True, lawyers can retrain, and it is not uncommon for lawyers to switch their focus and develop new skills and expertise at some point in their careers. But it's a problem if a disproportionate number start to specialize in the same area because, just as we need people available to work professions other than law, even within the law we still need other kinds of lawyers available to work on other areas of law outside these particular specialized areas.

And we also need to be able to afford them. We already have a serious "access to justice" problem, where only the most resourced are able to obtain legal help. A significant cause of this problem is the expense of law school, which makes it difficult for graduates to resist the siren call of more remunerative employment, but it's a situation that will only get worse if lawyer-intensive regulatory schemes end up creating undue demand for certain legal specializations. For example, as we increasingly pass a growing thicket of complex privacy regulations we create the need for more and more privacy lawyers to help innovators deal with these rules. But as the need for privacy lawyers outstrips the ready availability of lawyers with this expertise, it threatens to raise the costs for anyone needing any sort of lawyering at all. It's a basic issue of supply and demand: the more privacy lawyers that are needed, the more expensive it will be to attract them. And the more these lawyers are paid a premium to do this work, the more it will lure lawyers away from other areas that still need serving, thus making it all the more expensive to hire those who are left to help with it.

Then there is the question of where lawyers even get the expertise they need to be effective counsel in the first place. The dirty little secret of legal education is that, at least until recently, that expertise probably wasn't acquired at law school. Instead lawyers have generally been trained up on the job, and what newbie lawyers ended up learning has historically depended on what sort of legal job it was (and how good a legal job it was). Recently, however, there has been a growing recognition that it really doesn't make sense to graduate lawyers unable to competently do the job they are about to be fully licensed to do, and one way law schools have responded is by investing in legal clinics.

By and large, clinics are a good thing. They give students practical legal training by letting them basically do the job of a lawyer, with the benefit of supervision, as part of their legal education. In the process they acquire important skills and start to develop subject-matter expertise in the area the clinic focuses on, which can be in almost every practice area, including, as is relevant here, technology law. Meanwhile, clinics generally let students provide these legal services to clients far more affordably than clients would normally be able to obtain them, which partially helps address the access to justice problem.

However, there are still some significant downsides to clinics, including the inescapable fact that it is students who are basically subsidizing the legal services they are providing by having to pay substantial amounts of money in tuition for the privilege of getting to do this work. A recurrent theme here is that law schools are notoriously expensive, often underwritten with loans, which means that students, instead of being paid for their work, are essentially financing the client's representation themselves.

And that arrangement matters as policymakers remain inclined to impose regulations that increase the need for legal services without better considering how that need will be met. It has been too easy for too many to assume that these clinics will simply step in to fill the void, with an endless supply of students willing and able to pay to subsidize this system. Even if this supposition were true, it would still prompt the question of who these students are. The massive expense of law school is already shutting plenty of people out of the profession and robbing it of needed diversity by making it financially out of reach for too many, as well as making it impossible for those who do make it through to turn down more lucrative legal jobs upon graduation and take ones that would be more socially valuable instead. The last thing we need is a regulatory environment dependent on this teetering arrangement to perpetuate it.

Yet that's the upshot of much of the policy lawmakers keep crafting. For instance, in the context of Section 1201 Rulemakings, it has been openly presumed that clinics would always be available to do the massive amount of work necessary to earn back for the public the right to do something it was already supposed to be legally allowed to do. But it's not just these cited examples of copyright or privacy law that are a problem; any time a statute or regulatory scheme establishes an unduly onerous compliance requirement, or reduces any of the immunities and safe harbors innovation has depended on, it puts a new strain on the legal profession, which now has to come up with the help from somewhere.

At the same time, however, good policy doesn't necessarily mean eliminating the need for lawyers entirely, like the CASE Act tries to do. The bottom line is that legal services are not like other professional services. Lawyers play a critical role in upholding due process, and laws like the CASE Act that short-circuit those protections are a problem. But so are any laws that have the effect of interfering with that greater Constitutional purpose of the legal profession.

For a society that claims to be devoted to the "rule of law," ensuring that the public can realistically obtain any of the legal help it needs should be a policy priority at least on par with anything else driving tech regulation. Lawmakers therefore need to take care in how they make policy to ensure they do not end up distorting the availability and affordability of legal services in the process. Such care requires (1) carefully calibrating the burden of any imposed policy to not unnecessarily drive up the need for lawyers, and (2) specifically asking the question: who will do the work. They cannot continue to simply leave "insert lawyers here" in their policy proposals and expect everything to be fine. If they don't also pointedly address exactly where it is these lawyers will come from then it won't be.

]]>
they-don't-grow-on-trees https://beta.techdirt.com/comment_rss.php?sid=20200511/09191844474
Tue, 2 Mar 2021 13:30:00 PST Techdirt Podcast Episode 272: Section 230 Matters, With Ron Wyden & Chris Cox Leigh Beadon https://beta.techdirt.com/articles/20210302/13091746348/techdirt-podcast-episode-272-section-230-matters-with-ron-wyden-chris-cox.shtml https://beta.techdirt.com/articles/20210302/13091746348/techdirt-podcast-episode-272-section-230-matters-with-ron-wyden-chris-cox.shtml

Last week, we hosted Section 230 Matters, a virtual Techdirt fundraiser featuring a panel discussion with the two lawmakers who wrote the all-important text and got it passed 25 years ago: Chris Cox and Senator Ron Wyden. It was informative and entertaining, and for this week's episode of the podcast, we've got the full audio of the panel discussion about the history, evolution, and present state of Section 230.

Follow the Techdirt Podcast on Soundcloud, subscribe via iTunes, or grab the RSS feed. You can also keep up with all the latest episodes right here on Techdirt.

]]>
celebrating-25-years https://beta.techdirt.com/comment_rss.php?sid=20210302/13091746348
Tue, 2 Mar 2021 12:10:13 PST State Court Says Tennessee's Anti-SLAPP Law Is Constitutional, Shuts Down Litigant Involved In Baseless Libel Litigation Tim Cushing https://beta.techdirt.com/articles/20210228/10350146335/state-court-says-tennessees-anti-slapp-law-is-constitutional-shuts-down-litigant-involved-baseless-libel-litigation.shtml https://beta.techdirt.com/articles/20210228/10350146335/state-court-says-tennessees-anti-slapp-law-is-constitutional-shuts-down-litigant-involved-baseless-libel-litigation.shtml Tennessee is filled with awful legislators. Fortunately, despite itself, the legislature passed an anti-SLAPP law that appears to finally be putting an end to ridiculous libel lawsuits in the state. Prior to this, residents and libel tourists were abusing the law to do things like silence legitimate criticism and -- believe it or not -- sue a journalist for things said by someone he interviewed.

While the state legislature continues pissing tax dollars away asking the federal government to institute criminal penalties for flag burning and requesting state colleges forbid student-athletes from expressing anything other than reverence for the flag, state courts are quietly ensuring their better legislative efforts remain viable.

A short ruling [PDF] issued by a Tennessee circuit court says the state's anti-SLAPP law is not only constitutional, but serves a valuable purpose. (via Courthouse News)

The plaintiff -- Tiny House Chattanooga -- sued Sinclair Broadcasting after news coverage of the fallout from a reality program episode involving the tiny house manufacturer resulted in some acrimonious behavior by both parties: Mike Bedsole of Tiny House and the nominal recipients of his tiny house, Rebecah and Ben Richards. The couple was apparently promised a house -- and some TV coverage -- but received neither.

They went to court, and because the title is in Bedsole's name, the judge considered the couple tenants and evicted them.

"Based on the fact that the builder has the title in his name he had to rule in position to the builder and he gave us 10 days to vacate the property. And during those 10 days the builder took the house off the property," he said.

The couple said they have repeatedly written to Bedsole asking where their home is, but have not received a response from him.

Bedsole sued Sinclair for KABC's coverage. KABC filed an anti-SLAPP motion asking for the defamation lawsuit to be dismissed. Bedsole then filed this cross motion, asking the court to examine the constitutionality of the state's anti-SLAPP law.

The anti-SLAPP law wins. Bedsole loses.

The TPPA, at least in the eyes of this court, is clearly predicated upon public policy concerns. "The purpose of this chapter is to encourage and safeguard the constitutional rights of persons to petition, to speak freely, to associate freely, and to participate in government to the fullest extent permitted by law…" T.C.A. 20-17-102. There can be no serious question that the intent of the legislature in passing this statute was to effect a more beneficial public policy.

To better benefit the public, the anti-SLAPP law allows for fee-shifting, placing the burden back on the party engaging in litigation solely for the purpose of silencing criticism. This is integral and constitutional, says the court.

This additional remedy is significant. [...] [A] greater burden has been placed upon the plaintiff than the mere requirements of Rules 8 and 12 of the Tennessee Rules of Civil Procedure.

However, that's not the only outcome available. Like any other litigation, the decision to shift fees still lies with the court. The new law simply provides another outlet for it to use should it decide the litigation has been pursued in bad faith.

The provisions of the TPPA do not mandate any particular result but leave the ultimate decision to within the discretion of the trial court. None of these provisions remove from the trial court the authority to interpret and apply the applicable law.

Since there's no removal of power, the law is constitutional. The challenge fails and Bedsole and his company will no longer be able to sue journalists for reporting on a housing dispute, no matter how unfavorable the coverage is to Bedsole. This ruling solidifies the state's anti-SLAPP law, making it more resilient against future challenges of its constitutionality. Bad faith litigants should be further deterred from filing bogus libel lawsuits and/or claiming the law bypasses the Constitution at the state level.

]]>
maybe-work-on-your-reputation-rather-than-suing-third-parties https://beta.techdirt.com/comment_rss.php?sid=20210228/10350146335
Tue, 2 Mar 2021 10:46:36 PST ICE Is Also Using Utility Databases Run By Private Companies To Hunt Down Undocumented Immigrants Tim Cushing https://beta.techdirt.com/articles/20210228/09111246334/ice-is-also-using-utility-databases-run-private-companies-to-hunt-down-undocumented-immigrants.shtml https://beta.techdirt.com/articles/20210228/09111246334/ice-is-also-using-utility-databases-run-private-companies-to-hunt-down-undocumented-immigrants.shtml ICE has always had a casual relationship with the Fourth Amendment. Since it's in the business of tracking foreigners, it has apparently decided the rights traditionally extended to them haven't actually been extended to them.

Anything not nailed down by precedential court decisions or federal legislation gets scooped up by ICE. This includes location data pulled from apps that would appear to be subject to Supreme Court precedent on location tracking. ICE routinely engages in warrantless device searches -- something its legal office has failed to credibly justify in light of the Riley decision. And the Fourth Amendment -- along with judicial oversight -- is swept away completely by ICE's practice of deploying pre-signed warrants to detain immigrants. The agency is also not above forging judges' signatures to send "dangerous" immigrants packing.

The latest exposure of ICE's tactics shows it will gather everything and anything to hunt down people who, for the most part, are just trying to give their families a better shot at survival. Whatever can be had without a warrant will be had. That's the message being sent by ICE, and relayed to us by Drew Harwell of the Washington Post. (h/t Magenta Rocks)

U.S. Immigration and Customs Enforcement officers have tapped a private database containing hundreds of millions of phone, water, electricity and other utility records while pursuing immigration violations, according to public documents uncovered by Georgetown Law researchers and shared with The Washington Post.

ICE’s use of the private database is another example of how government agencies have exploited commercial sources to access information they are not authorized to compile on their own. It also highlights how real-world surveillance efforts are being fueled by information people may never have expected would land in the hands of law enforcement.

I'm not a law enforcement professional. Nor am I an immigration and customs officer. But it beggars belief that utility records can provide evidence of illegal immigration. While I understand ICE is likely using the records to find people it has already flagged as illegal immigrants, the justification for demanding these records is nonexistent. ICE may want to match names to addresses but it's on shaky legal ground when it demands records under the theory that utility bills may offer evidence of illegal immigration.

And yet, ICE can do this. This is the Third Party Doctrine in action. If immigrants give their names to utility companies, ICE can get this info without a warrant. It's a "voluntary" exchange, even though there's nothing voluntary about exchanging personal info to access the little things in life that make it worth living, like electricity and indoor plumbing.

But is it evidence of illegal immigration? There's a lot that's still unsettled at the point ICE obtains this information. Immigration status is ultimately handled by judges. Until then, everything else is apparently fair game, including utility bill records.

At the top of this evidentiary food chain is a private company. And that company doesn't appear to care who accesses its database or for what reason.

CLEAR is run by the media and data conglomerate Thomson Reuters, which sells “legal investigation software solution” subscriptions to a broad range of companies and public agencies. The company has said in documents that its utility data comes from the credit-reporting giant Equifax. Thomson Reuters, based in Toronto, also owns the international news service Reuters as well as other prominent subscription databases, including Westlaw.

Thomson Reuters has not provided a full client list for CLEAR, but the company has said in marketing documents that the system has been used by police in Detroit, a credit union in California and a fraud investigator in the Midwest. Federal purchasing records show that the departments of Justice, Homeland Security and Defense are among the federal agencies with ongoing contracts for CLEAR data use.

Even if you believe immigrants shouldn't be given constitutional protections, you have to be concerned that a private company is amassing data from private citizens and granting access to government agencies. This isn't how America is supposed to work. But that's the way it actually works, thanks to opportunistic data brokers and the hundreds of utility companies willing to sell customers' data to whoever will buy it.

ICE is paying Thomson Reuters $21 million a year for access. Reuters -- a company that needs to answer to shareholders -- has zero interest in terminating this working relationship. On the public sector side, ICE needs to continue to justify its existence. So it has no interest in terminating contracts that enable it to apprehend and eject immigrants. The legality of its efforts is unsettled. Since no one above ICE in the governmental org chart has yet determined this is unacceptable, it will continue. And private databases like this still lie beyond the minimal protections afforded by federal privacy laws.

Until someone's willing to step in and curb ICE's all-access pass to everything that sits just outside current Fourth Amendment case law, ICE will continue to hoover up everything it can, no matter how negligible its effect on immigration enforcement. And, as long as companies can continue to demand a wealth of info in exchange for services, there will always be an endless supply of third parties only a subpoena away from handing over personal information to federal law enforcement.

]]>
whatever-isn't-nailed-down-by-legislation-or-precedent https://beta.techdirt.com/comment_rss.php?sid=20210228/09111246334
Tue, 2 Mar 2021 09:35:58 PST The Most Important Part Of The Facebook / Oversight Board Interaction Happened Last Week And Almost No One Cared Mike Masnick https://beta.techdirt.com/articles/20210301/12045946346/most-important-part-facebook-oversight-board-interaction-happened-last-week-almost-no-one-cared.shtml https://beta.techdirt.com/articles/20210301/12045946346/most-important-part-facebook-oversight-board-interaction-happened-last-week-almost-no-one-cared.shtml The whole dynamic between Facebook and the Oversight Board has received lots of attention -- with many people insisting that the Board's lack of official power makes it effectively useless. The specifics, again, for most of you not deep in the weeds on this: Facebook has only agreed to be bound by the Oversight Board's decisions on one very narrow question: whether a specific piece of content that was taken down should have been left up. Beyond that, the Oversight Board can make recommendations on policy issues, but the company doesn't need to follow them. I think this is a legitimate criticism and concern, but it's also a case where if Facebook itself actually does follow through on the policy recommendations, and everybody involved acts as if the Board has real power... then the norms around it might mean that it does have that power (at least until there's a conflict, and you end up in the equivalent of a Constitutional crisis).

And while there's been a tremendous amount of attention paid to the Oversight Board's first set of rulings, and to the fact that Facebook asked it to review the Trump suspension, last week something potentially much more important and interesting happened. With those initial rulings on the "up/down" question, the Oversight Board also suggested a pretty long list of policy recommendations for Facebook. Again, under the setup of the Board, Facebook only needed to consider these, but was not bound to enact them.

Last week Facebook officially responded to those recommendations, saying that it agreed to take action on 11 of the 17 recommendations, is assessing the feasibility of another five, and is taking no action on just one. The company summarized those decisions in that link above, and put out a much more detailed PDF exploring the recommendations and Facebook's response. It's actually interesting reading (at least for someone like me who likes to dig deep into the nuances of content moderation).

Since I'm sure it's most people's first question: the one "no further action" was in response to a policy recommendation regarding COVID-19 misinformation. The Board had recommended that when a user posts information that disagrees with advice from health authorities, but where the "potential for physical harm is identified but is not imminent," "Facebook should adopt a range of less intrusive measures." Basically, removing such information may not always make sense, especially when it's not clear that the information disagreeing with health authorities is actively harmful. As per usual, there's a lot of nuance here. As we discussed, early in the pandemic, the suggestions from "health authorities" later turned out to be inaccurate (like the WHO and CDC telling people not to wear masks in many cases). That makes relying on those health authorities as the be-all, end-all of content moderation for disinformation inherently difficult.

The Oversight Board's response on this issue more or less tried to walk that line, recognizing that health authorities' advice may adapt over time as more information becomes clear, and that automatically silencing those who push back on the official suggestions from health officials may lead to over-blocking. But, obviously, this is a hellishly nuanced and complex topic. Part of the issue is that -- especially in a rapidly changing situation, where our knowledge base starts out with little information and is constantly correcting -- it's difficult to tell whether someone is pushing back against official advice for good reasons or for conspiracy-theory nonsense reasons (and there's a very wide spectrum between those two things). That creates (yet again) an impossible situation. The Oversight Board was suggesting that Facebook should be at least somewhat more forgiving in such situations, as long as it doesn't see any "imminent" harm from those disagreeing with health officials.

Facebook's response isn't so much pushing back against the Board's recommendation as arguing that it already takes a "less intrusive" approach. It also argued that Facebook and the Oversight Board basically disagree on the definition of "imminent danger" from bad medical advice (the specific issue came up in the context of someone in France recommending hydroxychloroquine as a treatment for COVID). Facebook said that, contrary to the Board's finding, it did think this represented imminent danger:

Our global expert stakeholder consultations have made it clear that, in the context of a health emergency, the harm from certain types of health misinformation does lead to imminent physical harm. That is why we remove this content from the platform. We use a wide variety of proportionate measures to support the distribution of authoritative health information. We also partner with independent third-party fact-checkers and label other kinds of health misinformation.

We know from our work with the World Health Organization (WHO) and other public health authorities that if people think there is a cure for COVID-19 they are less likely to follow safe health practices, like social distancing or mask-wearing. Exponential viral replication rates mean one person’s behavior can transmit the virus to thousands of others within a few days.

We also note that one reason the board decided to allow this content was that the person who posted the content was based in France, and in France, it is not possible to obtain hydroxychloroquine without a prescription. However, readers of French content may be anywhere in the world, and cross-border flows for medication are well established. The fact that a particular pharmaceutical item is only available via prescription in France should not be a determinative element in decision-making.
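Facebook's "exponential viral replication" point is, at bottom, compounding arithmetic. A minimal sketch of that compounding, using a reproduction number and generation count that are purely illustrative assumptions on my part (not figures from Facebook or any health authority):

```python
# Illustrative only: how quickly transmission chains compound.
# The reproduction number and generation count below are assumptions,
# not figures from Facebook, the WHO, or any other health authority.
def cumulative_infections(reproduction_number: float, generations: int) -> int:
    """Total infections traceable to one index case after n transmission generations."""
    total, current = 1, 1.0
    for _ in range(generations):
        current *= reproduction_number
        total += current
    return round(total)

# If each case infects roughly three others, one unchecked case is linked to
# thousands of downstream infections within a handful of generations.
print(cumulative_infections(3, 7))  # -> 3280
```

Whether that plays out "within a few days" depends on how long each transmission generation takes, but the shape of that curve is the basis for Facebook's "imminent harm" argument.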

As a bit of a tangent, I'll just note the interesting dynamic here: despite "the narrative," which claims that Facebook has no incentive to moderate content due to things like Section 230, here the company is arguing for the ability to be more heavy-handed in its moderation to protect the public from danger, and against the Oversight Board, which is asking the company to be more permissive.

As for the items that Facebook "took action" on, a lot of them are sort of bland commitments to do "something" rather than concrete changes. For example, at the top of the list were recommendations to clear up confusion between the Instagram Community Guidelines and the Facebook Community Standards, and to be more transparent about how those are enforced. Facebook says it's "committed to action" on this, but I'm not sure I can tell you what actions it has actually taken.

We’ll continue to explore how best to provide transparency to people about enforcement actions, within the limits of what is technologically feasible. We’ll start with ensuring consistent communication across Facebook and Instagram to build on our commitment above to clarify the overall relationship between Facebook’s Community Standards and Instagram’s Community Guidelines.

Um... great? But what does that actually mean? I have no idea.

Evelyn Douek, who studies this issue basically more than anyone else, notes that many of these commitments from Facebook are kind of weak:

Some of the “commitments” are likely things that Facebook had in train already; others are broad and vague. And while the dialogue between the FOB and Facebook has shed some light on previously opaque parts of Facebook’s content moderation processes, Facebook can do much better.

As Douek notes, some of the answers do reveal some pretty interesting things that weren't publicly known before -- such as how Facebook's AI deals with nudity, and how it tries to distinguish the nudity it doesn't want from things like nudity around breast cancer awareness:

Facebook explained the error choice calculation it has to make when using automated tools to detect adult nudity while trying to avoid taking down images raising awareness about breast cancer (something at issue in one of the initial FOB cases). Facebook detailed that its tools can recognize the words “breast cancer” but users have used these words to evade nudity detection systems, so Facebook can’t rely on just leaving up every post that says “breast cancer.” Facebook has committed to providing its models with more negative samples to decrease error rates.
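To make "more negative samples" concrete: in classifier training, that generally means adding labeled examples the model currently gets wrong (posts that mention "breast cancer" but should still come down, and posts that mention it with nothing objectionable at all), so the phrase alone stops being a reliable signal. Here's a minimal, purely hypothetical sketch of that kind of data augmentation; the field names and examples are mine and have no connection to Facebook's actual systems:

```python
# Hypothetical sketch of "hard negative" augmentation for a moderation classifier.
# Nothing here reflects Facebook's real pipeline, data, or labels.
from dataclasses import dataclass

@dataclass
class TrainingExample:
    caption: str           # text accompanying the image
    nudity_detected: bool  # assumed output of a separate image model
    label: str             # desired moderation outcome: "allow" or "remove"

training_set = [
    # The case the exception is meant to protect:
    TrainingExample("breast cancer self-exam awareness post", True, "allow"),
    # Hard negative: the magic words appear, but the post should still come down:
    TrainingExample("breast cancer ;) check my profile", True, "remove"),
    # And a case where the phrase appears with no nudity at all:
    TrainingExample("donate to breast cancer research today", False, "allow"),
]

for example in training_set:
    print(f"{example.label:6} | {example.caption}")
```

The point is simply that once the training data contains both outcomes for the same keyword, the model has to learn from more than the keyword.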

Douek also notes that some of Facebook's claims to be implementing the Board's recommendations are... misleading. In some cases, Facebook is actually rejecting the Board's full recommendation:

In response to the FOB’s request for a specific transparency report about Community Standard enforcement during the COVID-19 pandemic, Facebook said it was “committed to action.” Great! What “action,” you might ask? It says that it had already been sharing metrics throughout the pandemic and would continue to do so. Oh. This is actually a rejection of the FOB’s recommendation. The FOB knows about Facebook’s ongoing reporting and found it inadequate. It recommended a specific report, with a range of details, about how the pandemic had affected Facebook’s content moderation. The pandemic provided a natural experiment and a learning opportunity: Because of remote work restrictions, Facebook had to rely on automated moderation more than normal. The FOB was not the first to note that Facebook’s current transparency reporting is not sufficient to meaningfully assess the results of this experiment.

Still, what's amazing to me is that these issues, which might actually change key aspects of Facebook's moderation, got next to zero public attention last week compared to the simple decisions on specific takedowns (and the massive flood of attention the Trump account suspension decision will inevitably get).

]]>
pay-attention https://beta.techdirt.com/comment_rss.php?sid=20210301/12045946346
Tue, 2 Mar 2021 06:23:13 PST The New York Times (Falsely) Informs Its 7 Million Readers Net Neutrality Is 'Pointless' Karl Bode https://beta.techdirt.com/articles/20210226/08093046323/new-york-times-falsely-informs-7-million-readers-net-neutrality-is-pointless.shtml https://beta.techdirt.com/articles/20210226/08093046323/new-york-times-falsely-informs-7-million-readers-net-neutrality-is-pointless.shtml Let's be clear about something: the net neutrality fight has always really been about monopolization and a lack of broadband competition. Net neutrality violations, whether it's wireless carriers blocking competing mobile payment services or an ISP blocking competing voice services, are just symptoms of a lack of competition. If we had meaningful competition in broadband, we wouldn't need net neutrality rules because consumers would vote with their wallets and leave an ISP that behaved like an asshole.

But American broadband is dominated by just a handful of very politically powerful telecom giants fused to our national security infrastructure. Because of this, lawmakers and regulators routinely don't try very hard to fix the problem lest they upset a trusted partner of the FBI/NSA/CIA, or lose out on campaign contributions. As a result, US broadband is heavily monopolized, and in turn, mediocre in nearly every major metric that matters. US ISPs routinely, repeatedly engage in dodgy behavior that sees zero real penalty from our utterly captured regulators.

The net neutrality fight has always really been a proxy fight about whether we want functional government oversight of these monopolies. The monopolies, it should be said, would prefer it if there were absolutely none. It's why for the last 20 years or so they've been on a relentless tear to strip away all state and federal regulatory oversight of their broken business sector, culminating in 2018's repeal of net neutrality -- which not only (and this part is important) killed net neutrality rules, but gutted the FCC's consumer protection authority (right before a pandemic, as it turned out).

The repeal even attempted to ban states from being able to protect consumers from things like billing fraud, an effort the courts haven't looked kindly upon so far. But again, the goal here is clear: zero meaningful oversight of telecom monopolies.

So with that as background, imagine my surprise when New York Times columnist Shira Ovide, whose tech coverage is usually quite insightful, informed the paper's 7.5 million subscribers that this entire decades-long quest to thwart corruption and monopolization is "pointless":

"People may scream at me for saying this, but net neutrality is one of America’s longest and now most pointless fights over technology."

Yeah I'm not going to scream (too worn out), but I will politely note that the paper of record has absolutely no idea what it's talking about.

Again, the net neutrality repeal didn't just kill net neutrality! It effectively gutted the FCC's consumer protection authority, shoveling what remained over to an FTC the broadband industry knew lacked the resources, authority, or staff to do a good job. That was the entire point. The repeal also tried to ban states from being able to stand up to companies like AT&T and Comcast. The goal: little to no real oversight of one of the more broken, monopolized markets in America. And that's during a pandemic in which broadband is being showcased as essential to survival, healthcare, education, and employment. Anybody calling a fight on this subject "pointless" hasn't taken the time to understand what's actually at stake.

The whole story paints the effort to have some modest oversight of telecom monopolies as droll and pointless. At one point, the story (which is really just the New York Times interviewing itself) even oddly implies the debate over what to do about "big telecom" is irrelevant and that "big tech" is all we really need to worry about:

"However, the debate feels much less urgent now that we’re talking about threats of online disinformation about vaccine deployment and elections. The net neutrality debate focused on internet service providers as powerful gatekeepers of internet information. That term now seems better applied to Facebook, Google and Amazon."

This idea that "big tech" is the root of all of our problems, and that "big telecom" is not worth worrying about, is a message AT&T and Comcast have been sending out for the better part of the last several years. Given how often I see this concept parroted by the press and lawmakers, it's been fairly effective. It's certainly been effective on GOP mainstays like Josh Hawley, who performatively insists he's an anti-monopolist but has never had a single bad word to say about the nation's most obvious monopolistic market (telecom). This isn't some errant coincidence.

But here's the thing: the US is dominated by monopolies. They're everywhere (banking, airlines, telecom, advertising). Here's the crazy part: we can tackle the monopolization impacting numerous industries simultaneously. It's not some either-or proposition where you forget about telecom monopolization because Amazon or Google are also behaving badly. You can make sure the FCC has the authority and resources it needs to police telecom while also working to loosen Facebook and Google's dominance over the advertising market.

There are several other instances where the Times demonstrates it really doesn't understand the subject it's covering. Like here, where Ovide suggests that having net neutrality rules is pointless because... Google has undersea cables?:

"When Google has its own undersea internet cables, isn’t the reality that some internet services reach us faster no matter what the law says?"

People who don't understand net neutrality often over-simplify it down to something about how the rules "prevented ISPs from offering faster speeds for some services." But that's never been true. The rules only really care about whether an ISP uses network management to harm competitors. The rules also had components requiring ISPs to be transparent about what kind of broadband connection customers are buying, so they could avoid getting ripped off by connections that promise 30 Mbps but come with all manner of hidden restrictions, throttling, or caveats.

There's one part the Times gets (sort of) right, and it's here:

"There probably isn’t much of a middle ground. There are either net neutrality rules or there aren’t. And the internet service providers see net neutrality as a slippery slope that leads to broader regulation of high-speed internet services or government-imposed limits on prices they can charge. They will fight any regulation. And that’s true, too, of the lobbyists who are hired to argue against anything."

The reason we can't find a middle ground is that the broadband industry refuses to meet anyone even a quarter of the way onto the playing field. When the FCC, in 2010, passed some utterly flimsy, loophole-filled net neutrality rules that didn't even cover wireless, Verizon sued anyway (Google, falsely cited as an advocate of net neutrality by many, even lent a hand, if you recall). Meanwhile, the reason Congress can't pass even a modest net neutrality law isn't that there's no desire for a "middle ground"; it's that lawmakers are utterly awash in AT&T, Comcast, Verizon, CenturyLink, Charter, and T-Mobile campaign contributions.

In reality, the broadband industry wants no state or federal oversight of its businesses. Particularly not any oversight that could meaningfully harm its regional monopolies, drive competition to market, and lower consumer rates. And while there are some free-market policy folks who still like to pretend that removing regulation of natural monopolies like Comcast, Verizon, and AT&T somehow results in Utopia (a line they've been feeding the American public for 35 straight years now), that's not the case. When you kill oversight of natural monopolies, while simultaneously refusing to adopt pro-competition policies that seriously challenge them, they only double down on the same bad behaviors. It's a lesson the US seems bizarrely unwilling to learn.

Let's also be clear about something. To feebly justify this handout to industry (something surveys showed an overwhelming bipartisan majority of Americans opposed), the broadband sector used completely fabricated data. It hired firms that resorted to using dead and fake people to pretend its policy proposal was a good idea. All to effectively lobotomize the nation's top telecom regulator, leaving it incapable of meaningfully holding telecom giants responsible for fraud and anticompetitive behavior right before a pandemic.

Imagine, for just a second, thinking that the quest for accountability, justice, and common sense on this subject is "pointless."

]]>
maybe do some research first https://beta.techdirt.com/comment_rss.php?sid=20210226/08093046323