Techdirt. Stories filed under "aws"
Easily digestible tech news...
https://beta.techdirt.com/

Fri, 5 Mar 2021 09:38:47 PST
Parler Drops Its Loser Of A Lawsuit Against Amazon In Federal Court, Files Equally Dumb New Lawsuit In State Court
Mike Masnick
https://beta.techdirt.com/articles/20210304/16492046368/parler-drops-loser-lawsuit-against-amazon-federal-court-files-equally-dumb-new-lawsuit-state-court.shtml

As you may recall, Parler had filed a ridiculously weak antitrust lawsuit against Amazon the day after it had its AWS account suspended. A judge easily rejected Parler's request for an injunction, and made it pretty clear Parler's chances of succeeding were slim to none. Parler, which has since found a new host, had indicated it would file an amended complaint, but instead it chose to drop that lawsuit in federal court and file an equally laughable lawsuit in state court in Washington (though with some additional lawyers).

Rather than claiming antitrust (which was never going to work) the new complaint claims breach of contract, defamation and deceptive and unfair practices. The complaint makes a big deal over the fact that in December Twitter and Amazon signed an agreement to use AWS for hosting some Twitter content, and hints repeatedly that Amazon's move a month later was to help Twitter stomp out a competitor. But this is all just random conspiracy theory nonsense, and not at all how any of this actually works.

The defamation claim is particularly silly.

On January 9, 2021, AWS sent an email to Parler declaring that AWS would indefinitely suspend Parler’s service, claiming that Parler was unable or unwilling “to remove content that encourages or incites violence against others.” AWS or one of its employees publicly leaked that email in bad faith to BuzzFeed at or around the same time AWS sent the email to Parler.

AWS’s email was false and AWS knew it was false. Parler was willing and able to remove such content and AWS knew that, because there was a lengthy history between the parties of Parler removing such content as quickly as AWS brought it to Parler’s attention. What is more, AWS was well aware that Parler was testing a new AI-based system to remove such content before it was even posted, that Parler had success with initial testing of the program, and that Parler had in fact shared those testing results with AWS.

The AWS email received wide play in the media.

This itself is kind of fascinating. Because even though we had highlighted that Parler takes down content, publicly it had claimed over and over again that it did not take down content. In fact, nearly all of Parler's brand was built on its (misleading) claim to not do content moderation. So, how the hell could Amazon claiming that Parler wasn't doing content moderation be defamatory when that's the very reputation that Parler itself tried to highlight for itself?!?

Also, uh, this isn't going to fly in any court:

Parler is not a public figure and the success of its “content moderation” policies was not a matter of public concern until Google and AWS decided to make it one, but the defendant cannot by a defamatory statement turn a private matter into a public one or all matters would be public in nature.

No, sorry, Parler was very much a public figure way before Google and AWS's decisions.

The complaint also argues, repeatedly, that AWS had always been happy with Parler until Amazon told it that it was suspending the account, but the filings in the original lawsuit in federal court clearly indicated otherwise, and noted that Amazon had reached out to Parler months earlier. Does Parler's lawyer really think that Amazon won't immediately point that out? Also, it seems decently likely that Amazon is going to try to get the case removed right back to federal court, so it's not clear why Parler thinks it can avoid federal court with this case. The whole thing, once again, seems performative and stands just as much of a chance as the original.

Thu, 21 Jan 2021 15:51:16 PST
Judge Easily Rejects Parler's Demands To Have Amazon Reinstate Parler
Mike Masnick
https://beta.techdirt.com/articles/20210121/14045546097/judge-easily-rejects-parlers-demands-to-have-amazon-reinstate-parler.shtml

As was totally expected, US District Court Judge Barbara Jacobs Rothstein has handily rejected Parler's motion to force Amazon to turn Parler's digital lights back on. The order is pretty short and sweet, basically saying that Parler hasn't even remotely shown a likelihood of success in the case that would lead the court to order Amazon to take the social media site back.

On the antitrust claims, the judge points out that these appear to be a figment of Parler's imagination:

At this stage in the proceedings, Parler has failed to demonstrate that it is likely to succeed on the merits of its Sherman Act claim. While Parler has not yet had an opportunity to conduct discovery, the evidence it has submitted in support of the claim is both dwindlingly slight, and disputed by AWS. Importantly, Parler has submitted no evidence that AWS and Twitter acted together intentionally—or even at all—in restraint of trade....

Indeed, Parler has failed to do more than raise the specter of preferential treatment of Twitter by AWS. The sum of its allegation is that “by pulling the plug on Parler but leaving Twitter alone despite identical conduct by users on both sites, AWS reveals that its expressed reasons for suspending Parler’s account are but pretext.”... But Parler and Twitter are not similarly situated, because AWS does not provide online hosting services to Twitter. Parler’s unsupported allegation that “AWS provides online hosting services to both Parler and Twitter” is explicitly denied in a sworn declaration by an AWS executive.... (“Twitter’s principal social-media service (the “Twitter Feed”) does not run on AWS. . . . On December 15, 2020, AWS announced that it signed an agreement with Twitter for AWS to begin servicing the Twitter Feed for the first time. . . . We do not yet service the Twitter Feed, and I am not aware of any particular timeline for doing so.”). Thus, as AWS asserts, “it could not have suspended access to Twitter’s content” because “it does not host Twitter.”

For what it's worth, the judge doesn't even note the other huge weakness in Parler's "antitrust claims." I had intended to write a post about this, but now that this order is out, that post may be moot: Parler's CEO in his own declaration undermined the entirety of the antitrust claim by admitting that there were at least half a dozen other "large" cloud providers beyond Amazon. It's true that none of them wanted to do business with Parler, but it sort of highlights that there's competition in the market:

Parler reached out to at least six extremely large potential providers— all of which refused to host Parler for one of two reasons.

The "strongest" (and I use that term in the sense of the "tallest of the ants" meaning) of the claims was probably the breach of contract claim, in which Parler said AWS's terms require 30 days notice for termination. As we wrote, however, the terms also allow for a suspension of service in much less time, and Amazon insists that Parler's service was suspended rather than terminated. The judge, not surprisingly, did read the whole of the terms of service, rather than just the convenient bit Parler's lawyer wanted her to read:

Parler has not denied that content posted on its platform violated the terms of the CSA and the AUP; it claims only that AWS failed to provide notice to Parler that Parler was in breach, and to give Parler 30 days to cure, as Parler claims is required per Section 7.2(b)(i). However, Parler fails to acknowledge, let alone dispute, that Section 7.2(b)(ii)—the provision immediately following—authorizes AWS to terminate the Agreement “immediately upon notice” and without providing any opportunity to cure “if [AWS has] the right to suspend under Section 6.” And Section 6 provides, in turn, that AWS may “suspend [Parler’s or its] End User’s right to access or use any portion or all of the Service Offerings immediately upon notice” for a number of reasons, including if AWS determines that Parler is “in breach of this Agreement.” In short, the CSA gives AWS the right either to suspend or to terminate, immediately upon notice, in the event Parler is in breach.

Parler has not denied that at the time AWS invoked its termination or suspension rights under Sections 4, 6 and 7, Parler was in violation of the Agreement and the AUP. It has therefore failed, at this stage in the proceedings, to demonstrate a likelihood of success on its breach of contract claim.
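The judge's reading of the contract can be boiled down to a simple piece of decision logic. Here's a toy sketch of it in Python; the section numbers are as quoted in the order, but the function is purely illustrative (it is not the actual CSA, and certainly not legal advice):

```python
# Toy sketch of the contractual logic described in the ruling. Section
# numbers follow the order's quotes from the CSA; this is illustrative only.
def aws_remedies(customer_in_breach: bool) -> list[str]:
    """Return the remedies the CSA gives AWS, per the judge's reading."""
    if customer_in_breach:
        # A breach triggers the Section 6 right to suspend, and Section
        # 7.2(b)(ii) allows immediate termination whenever that Section 6
        # right exists -- so no 30-day cure period applies.
        return ["suspend immediately upon notice",
                "terminate immediately upon notice"]
    # Absent a breach, ordinary termination under Section 7.2(b)(i)
    # requires 30 days' notice and an opportunity to cure.
    return ["terminate with 30 days' notice"]

print(aws_remedies(True))
```

In other words, the 30-day notice provision Parler leaned on only governs the no-breach branch, which is exactly why quoting Section 7.2(b)(i) in isolation didn't work.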

Then there's the intentional interference claim, which almost never flies, because it's almost always just an attempt to repeat earlier claims with an added "and this is serious." Here, it's just pathetic. And the judge knows that.

Parler has failed to allege basic facts that would support several elements of this claim. Most fatally, as discussed above, it has failed to raise more than the scantest speculation that AWS’s actions were taken for an improper purpose or by improper means. Conversely, AWS has denied it acted improperly, justifying its actions as a lawful exercise of rights it had pursuant to either the suspension or the termination provisions of the CSA. Further, for the reasons outlined supra, §§ III.B.(1) & (2), Parler has failed to demonstrate the likelihood that AWS breached the CSA. To the contrary, the evidence at this point suggests that AWS’s termination of the CSA was in response to Parler’s material breach. Parler has therefore not demonstrated a likelihood of success on this claim.

The judge does admit that Parler may be right that there are irreparable harms here, but its failure to plead a winnable case means that doesn't much matter. Finally, there's an interesting paragraph on the public interest arguments in the case:

The Court explicitly rejects any suggestion that the balance of equities or the public interest favors obligating AWS to host the kind of abusive, violent content at issue in this case, particularly in light of the recent riots at the U.S. Capitol. That event was a tragic reminder that inflammatory rhetoric can—more swiftly and easily than many of us would have hoped—turn a lawful protest into a violent insurrection. The Court rejects any suggestion that the public interest favors requiring AWS to host the incendiary speech that the record shows some of Parler’s users have engaged in. At this stage, on the showing made thus far, neither the public interest nor the balance of equities favors granting an injunction in this case.

Separately, it's worth noting that the judge called out the fact that this is not a case about free speech or the 1st Amendment, as some have tried to frame it:

It is important to note what this case is not about. Parler is not asserting a violation of any First Amendment rights, which exist only against a governmental entity, and not against a private company like AWS. And indeed, Parler has not disputed that at least some of the abusive and violent posts that gave rise to the issues in this case violate AWS’s Acceptable Use Policy.

Overall, the ruling was basically exactly what most people were expecting. The case still moves on, for now, as this was just rejecting the request for a temporary restraining order (which would have effectively forced Amazon to rehost Parler). But I would imagine this does not bode well for the next step, which is likely a motion from Amazon to dismiss the entire lawsuit, which the judge seems likely to grant on grounds similar to those used in this ruling.

Fri, 15 Jan 2021 17:39:18 PST
A Few More Thoughts On The Total Deplatforming Of Parler & Infrastructure Content Moderation
Mike Masnick
https://beta.techdirt.com/articles/20210115/00240746061/few-more-thoughts-total-deplatforming-parler-infrastructure-content-moderation.shtml

I've delayed writing deeper thoughts on the total deplatforming of Parler, in part because there was so much else happening (including some more timely posts about Parler's lawsuit over it), but more importantly because for years I've been calling for people to think more deeply about content moderation at the infrastructure layer, rather than at the edge. Because those issues are much more complicated than the usual content moderation debates.

And once again I'm going to make the mistake of offering a nuanced argument on the internet. I urge you to read through this entire post, resist any kneejerk responses, and consider the larger issues. In fact, when I started to write this post, I thought it was going to argue that the moves against Parler, while legal, were actually a mistake and something to be concerned about. But as I explored the arguments, I simply couldn't justify any of them. Upon inspection, they all fell apart. And so I think I'll return to my initial stance that the companies are free to make decisions here. There should be concern, however, when regulators and policymakers start talking about content moderation at the infrastructure layer.

The "too long, didn't read" version of this argument (and again, please try to understand the nuance) is that even though Parler is currently down, it's not due to a single company having total control over the market. There are alternatives. And while it appears that Parler is having difficulty finding any such alternative to work with it, that's the nature of a free market. If you are so toxic that companies don't want to do business with you, that's on you. Not them.

It is possible to feel somewhat conflicted over this. I initially felt uncomfortable with Amazon removing Parler from AWS hosting, effectively shutting down the service, and with Apple removing its app from the app store, effectively barring it from iPhones. In both cases, those seemed like very big guns that weren't narrowly targeted. I was less concerned about Google's similar removal, because that didn't block Parler from Android phones, since you don't have to go through Google to get on an Android phone. But (and this is important) I think all three moves are clearly legal and reasonable steps for the companies to take. As I explored each issue, I kept coming back to a simple point: the problems Parler is currently facing are due to its own actions and the unwillingness of companies to associate with an operation so toxic. That's the free market.

If Parler's situation had been caused by government pressure, or if there were truly no other options for the company, then I would be a lot more concerned. But that does not appear to be the case.

The internet infrastructure stack is represented in different ways, and there's no one definitive model. But an easy way to think of it is that there are "edge" providers -- the websites you interact with directly -- and then there's everything beneath them: the Content Delivery Networks (CDNs) that help route traffic, the hosting companies/data centers/cloud providers that host the actual content, the broadband/network/access providers, and the domain registries and registrars that help handle the naming and routing setup. And there are lots of other players in there as well, some (like advertising and certain communications providers) with elements on the edge and elements deeper in the stack.

But a key thing to understand is the level of granularity with which different players can moderate, and the overall impact their moderation can have. It's one thing for Twitter to remove a tweet. It's another thing for Comcast to say "you can't access the internet at all." The consequences of moderation get much more severe the deeper you go into the stack. In this case, AWS's only real option for Parler was to remove the entire service, because it couldn't just target the problematic content (of which there was quite a lot). As for the app stores, it's a tricky question. Are app stores infrastructure, or edge? Perhaps they are a little of both, but they had the same limited options: remove the app entirely, or leave it up with all its content intact.
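The granularity point can be made concrete with a toy model. The layer names and actions below are illustrative shorthand for the examples above, not any real API:

```python
# Toy model of the point above: the deeper an intermediary sits in the
# stack, the blunter its only available moderation tool becomes.
# Layers and actions are illustrative shorthand, not a real taxonomy.
STACK = [
    ("edge service (e.g. Twitter)", "remove a single post"),
    ("app store", "remove the app entirely"),
    ("cloud host / CDN (e.g. AWS)", "take the whole site down"),
    ("access provider (e.g. Comcast)", "cut off internet access entirely"),
]

def finest_tool(layer: str) -> str:
    """Return the most granular moderation action available at a layer."""
    return dict(STACK)[layer]

for layer, tool in STACK:
    print(f"{layer}: {tool}")
```

Nothing in the list gets more precise as you go down; that asymmetry is the whole governance problem with pushing moderation duties deeper into the stack.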

For many years, we've talked about the risks of saying that players deeper in the infrastructure stack should be responsible for content moderation. I was concerned, back in 2014, when there was talk of putting liability on domain registrars if domains they had registered were used for websites that broke the law. There have been a few efforts to hold such players responsible as if they were the actual lawbreakers, and that obviously creates all sorts of problems, especially at the 1st Amendment level. As you move deeper into the stack, the moderation options look less like scalpels and more like sledgehammers that remove entire websites from existence.

Almost exactly a decade ago, in a situation that has some parallels to what's happened now, I highlighted concerns about Amazon deciding to deplatform Wikileaks in response to angry demands from then Senator Joe Lieberman. I found that to be highly problematic, and likely unconstitutional -- though Wikileaks, without a US presence, had little standing to challenge it at the time. My concern was less with Amazon's decision, and more with Lieberman's pressure.

But it's important to go back to first principles in thinking through these issues. It's quite clear that companies like Amazon, Apple, and Google have every legal right to remove services they don't want to associate with, and there are a ton of reasons why people and companies might not want to associate with Parler. But many people are concerned about the takedowns based on the idea that Parler might be "totally" deplatformed, and that one company saying "we don't want you here" could leave them with no other options. That's not so much a content moderation question, as a competition one.

If it's a competition question, then I don't see why Amazon's decision is really a problem either. AWS has only around 32% market share. There are many other options out there -- including the Trump-friendly cloud services of Oracle, which promotes on its own website how easy it is to switch from AWS. Oracle's cloud already hosts Zoom (and now TikTok's US services). There's no reason it can't also host Parler.*

But, at least according to Parler, it has been having trouble finding an alternative that will host it. And on that front it's difficult to feel sympathy. Any business has to build relationships with other businesses to survive, and if no other businesses want to work with you, you might go out of business. Landlords might not want to rent to troublesome tenants. Fashion houses might choose not to buy from factories with exploitative labor practices. Businesses police each other's business practices all the time, and if you're so toxic that no one wants to touch you... at some point, maybe that's on you, Parler.

The situation with Apple and Google is slightly different, and again, there are lots of nuances to consider. With Apple, obviously, it is controlling access to its own hardware, the iPhone. And there's a reasonable argument to be made that Apple offers the complete package, and part of that deal is that you can only add apps through its app store. Apple has long argued that it does this to keep the phone secure, though it could raise some anti-competitive concerns as well. But Apple has banned plenty of apps in the past (including Parler competitor Gab). And that's part of the nature of iPhone ownership. And, really, there is a way to route around Apple's app store: you can still create web apps that will work on iOS without going through the store. This does limit functionality and the ability to reach deeper into the iPhone for certain features, but those are the tradeoffs.

With Google, it seems like there should be even less concern. Not only could Parler work as a web app, but Google also allows you to sideload apps without using the Google Play store. So the limitation was simply that Google didn't want the app in its own store. Indeed, before Amazon took all of Parler down, the company was promoting its own APK to sideload on Android phones.

In the end, it's tough to argue that this is as worrisome as my initial gut reaction said. I am still concerned about content moderation when it reaches the infrastructure layer. I am quite concerned that people aren't thinking through the kind of governance questions raised by these sledgehammer-not-scalpel decisions. But when exploring each of the issues as it relates to Parler specifically, it's hard to find anything to be that directly concerned about. There are, mostly, alternatives available for Parler. And in the one area where there apparently aren't any (cloud hosting), it seems to be less because AWS has market power, and more because lots of companies just don't want to associate with Parler.

And that is basically the free market telling Parler to get its act together.

* It's noteworthy that AWS customers can easily migrate to Oracle Cloud only because Oracle copied AWS's API without permission, which, according to Oracle's own lawyers, is copyright infringement. Never expect Oracle not to be hypocritical.

Thu, 14 Jan 2021 15:28:43 PST
Judge Not Impressed By Parler's Attempt To Force Amazon To Put It Back Online
Mike Masnick
https://beta.techdirt.com/articles/20210114/14013746058/judge-not-impressed-parlers-attempt-to-force-amazon-to-put-it-back-online.shtml

It appears that Parler's antitrust lawsuit against Amazon for suspending its AWS account isn't off to a very good start. At an emergency hearing on Thursday over whether the court would order Amazon to turn AWS back on for Parler, the judge declined to do so:

U.S. District Judge Barbara J. Rothstein in Seattle said during a hearing Thursday she’s not inclined to order Amazon to immediately put Parler back online. Instead, she expressed interest in taking a more measured approach to deciding whether she should order a permanent injunction to restore web-services to Parler.

Having spoken to two people who followed the hearing, it sounds like the judge did not make an official ruling yet, but said she will quickly. Another comment I heard from people who listened to the hearing was that Parler's lawyer did not seem to understand some fairly basic concepts regarding how all of this works, which does not bode well for his client. Also, Amazon's lawyer said that they told Parler they would allow the site to return to AWS if it put in place a real content moderation strategy -- which again leans into the fact that they suspended, rather than terminated, Parler's account (this has become a key point in the lawsuit, as Parler argues that termination violates its contract, while Amazon says the account was merely suspended, which is different from terminated).

One other point: Parler's lawyer apparently told the judge that Parler could not afford to litigate this case all the way to judgment (in the context of arguing that there would be irreparable harm in not turning the site back on immediately, when asked why any harm couldn't later be dealt with by an award of damages). I find this amusing, because just last week (which feels like a century ago, of course), Parler insisted that it didn't need Section 230 at all and CEO John Matze was saying that Parler was big enough to fight off any lawsuits that would come about without 230. At the time, I pointed out to him that while his backers, the Mercer family, are wealthy, they're not that wealthy.

Still, it's pretty stunning to go from "eh, we can handle such lawsuits if we're liable for our users postings" to "uh, we can't afford this lawsuit we filed to keep our site alive" in just one week.

Wed, 13 Jan 2021 12:03:00 PST
Parler's Laughably Bad Antitrust Lawsuit Against Amazon
Mike Masnick
https://beta.techdirt.com/articles/20210113/11333746046/parlers-laughably-bad-antitrust-lawsuit-against-amazon.shtml

As you may have heard, over the weekend Amazon removed Parler from its AWS cloud hosting services, causing the website to shut down. I've been working on a longer piece about all of this, but in the meantime, I did want to write about the laughably bad antitrust lawsuit that Parler filed against Amazon in response. Notably, this came just days after Parler's CEO claimed that his own lawyers quit (would these be the same "lawyers" who stupidly advised that the company doesn't need Section 230?). Instead, Parler found a small-time independent practitioner who doesn't even have a website* to file what may be the silliest antitrust lawsuit I've seen in a long time. It's so bad that by the end of it, Parler may very well be paying Amazon a lot of money.

There are so many other things I'd rather be writing about, so I'll just highlight a few of the problems with Parler's very bad, no good, horrible, stupidly ridiculous lawsuit. If you want more, I recommend reading Twitter threads by Akiva Cohen or Neil Chilson or Berin Szoka or basically any lawyer with any amount of basic knowledge of antitrust law. The lawsuit is dumb and bad and it's going to do more harm to Parler than good.

The key part of the lawsuit is that Parler, without evidence, claims that Amazon had "political animus" against it, and that it conspired with Twitter to shut down a competitor. It provides no proof of either thing, and... even if it did show proof of political animus, that's... not against the law. And that's kind of a big deal. They're basically saying it's an antitrust violation to dislike Parler. Which it's not. But even if it were, they are simply making up false reasons for why AWS booted Parler.

AWS’s decision to effectively terminate Parler’s account is apparently motivated by political animus. It is also apparently designed to reduce competition in the microblogging services market to the benefit of Twitter.

I mean, even just this paragraph makes no sense. You may have noticed that Amazon and Twitter are different companies. The complaint is against Amazon. Amazon doesn't compete with Parler. None of this makes any sense. The next paragraph demonstrates how rushed and stupid and bad this lawsuit is:

Thus, AWS is violating Section 1 of the Sherman Antitrust Act in combination with Defendant Twitter. AWS is also breaching it contract with Parler, which requires AWS to provide Parler with a thirty-day notice before terminating service, rather than the less than thirty-hour notice AWS actually provided. Finally, AWS is committing intentional interference with prospective economic advantage given the millions of users expected to sign up in the near future

With Defendant Twitter? Let's scroll back up and look at the caption again:

There's only one defendant. And it's not Twitter.

The complaint goes on and on about how there's also bad stuff on Twitter, as if somehow that makes it wrong for AWS to be upset about Parler. But... Parler's whole entire claim to fame is that it moderates differently than Twitter, so claiming that there's the same stuff on Twitter is meaningless. Even worse, the example that Parler uses of how Twitter and Parler have similar content is around people suggesting that political officials, including Congressional Representatives, Senators, and VP Mike Pence, should be hanged. But the evidence that Parler itself provides undermines its own case, and in some places directly contradicts its own claims. That's not just bad lawyering, that's legal malpractice.

Here is what Parler says:

What is more, by pulling the plug on Parler but leaving Twitter alone despite identical conduct by users on both sites, AWS reveals that its expressed reasons for suspending Parler’s account are but pretext. In its note announcing the pending termination of Parler’s service, AWS alleged that “[o]ver the past several weeks, we’ve reported 98 examples to Parler of posts that clearly encourage and incite violence.” Exhibit A. AWS provide a few examples, including one that stated, “How bout make them hang?”, followed by a series of hashtags, including “#fu-- mikepence.”...

AWS further stated to Parler that the “violent content on your website . . . violates our terms.” Id. Because, AWS declared, “we cannot provide services to a customer that is unable to effectively identify and remove content that encourages or incites violence against others,” AWS announced the pending termination of Parler’s account.

However, the day before, on Friday, one of the top trends on Twitter was “Hang Mike Pence,” with over 14,000 tweets. See Peter Aitken, ‘Hang Mike Pence’ Trends on Twitter After Platform Suspends Trump for Risk of ‘Incitement of Violence’, Fox News (Jan. 9, 2021), https://www.foxnews.com/politics/twittertrending-hang-mike-pence. And earlier last week, a Los Angeles Times columnist observed that Twitter and other social media platforms are partly culpable for the Capital Hill riot, by allowing rioters to communicate and rile each other up. See Erika D. Smith, How Twitter, Facebook are Partly Culpable for Trump DC Riot, LA Times (Jan. 6, 2021), https://www.latimes.com/california/story/2021-01-06/howtwitter-facebook-partly-culpable-trump-dc-riot-capitol. Yet these equivalent, if not greater, violations of AWS’s terms of service by Twitter have apparently been ignored by AWS

This leaves out some fairly important context. For one, the "Hang Mike Pence" trend was driven mainly by people calling out the insurrectionists who were saying that -- content which Twitter very quickly removed under its content moderation practices. Parler, on the other hand, made it clear that it was still trying to figure out how to moderate, and hoped to rely on volunteers. That's in Parler's evidence. That it didn't have a real plan in place yet. And that is why Amazon kicked it off.

On top of that, Parler's lawsuit claims that AWS needed to give it 30 days notice, but really only gave it a couple of days. Yet, in the evidence that Parler itself provides, Amazon mentions to Parler's policy chief that it has been sending dozens of examples of content that violate its policy for several weeks.

Amazon, for its part, appears to have not even waited to be served by Parler, but hit back hard with a very damning response to Parler that just dismantles Parler's argument bit by bit in fairly explicit terms.

This case is not about suppressing speech or stifling viewpoints. It is not about a conspiracy to restrain trade. Instead, this case is about Parler’s demonstrated unwillingness and inability to remove from the servers of Amazon Web Services (“AWS”) content that threatens the public safety, such as by inciting and planning the rape, torture, and assassination of named public officials and private citizens. There is no legal basis in AWS’s customer agreements or otherwise to compel AWS to host content of this nature. AWS notified Parler repeatedly that its content violated the parties’ agreement, requested removal, and reviewed Parler’s plan to address the problem, only to determine that Parler was both unwilling and unable to do so. AWS suspended Parler’s account as a last resort to prevent further access to such content, including plans for violence to disrupt the impending Presidential transition.

As Amazon says, the antitrust claims are obviously silly, but even the breach of contract claims are ridiculous because if anyone breached the contract, it was Parler:

Despite Parler’s rhetoric, its lawsuit is no more than a meritless claim for breach of contract. But the facts are unequivocal: If there is any breach, it is Parler’s demonstrated failure and inability to identify and remove such content. AWS was well within its rights to suspend Parler immediately for those failures. Parler also cannot hold AWS liable in tort for enforcing the agreement’s express terms. And there is no antitrust claim where, as here, Parler cannot plausibly plead an agreement to cause it harm and the complained-of conduct is undeniably compatible with a legitimate purpose.

Compelling AWS to host content that plans, encourages, and incites violence would be unprecedented. Parler has no likelihood of prevailing on the merits, and the balance of equities and public interest strongly tip against an injunction. The motion for a temporary restraining order should be denied.

In the Amazon filing, the company notes that it began sending breach reports to Parler in November of last year, detailing the nature of the content it was concerned about, much of which directly called for violence. The filing includes a ton of screenshots of the kind of violent speech that was on Parler, speech that goes way beyond what you'd see on other platforms, and that other platforms would remove.

Amazon also notes that the exhibit is only a small sampling:

The content AWS provided to Parler is merely representative of volumes of content that poses a security risk and harms others, in direct violation of the AUP. See id. Exs. E-F (examples). That content includes, but is not limited to, calls for violence against a wide range of individuals, including elected officials, law enforcement officers, and teachers. People have acted on these calls: Parler was used to incite, organize, and coordinate the January 6 attack on the U.S. Capitol. See Doran Decl. Exs. F-G. AWS reported to Parler, over many weeks, dozens of examples of content that encouraged violence, including calls to hang public officials, kill Black and Jewish people, and shoot police officers in the head. Executive 2 Decl. Exs. D-F. Parler systematically failed to “suspend access” to this content, much less to do so immediately, and demonstrated that it has no effective process in place to ensure future compliance.3 Executive 2 Decl. 7. Parler itself has admitted it has a backlog of 26,000 reports of content that violates its (minimal) community standards that it had not yet reviewed. Id. Parler’s own failures left AWS little choice but to suspend Parler’s account.

As for Amazon treating Twitter differently? Turns out (beyond everything I mentioned above) there's a bigger problem: Twitter doesn't use AWS:

Parler’s Complaint is replete with insinuations that AWS had equal grounds to suspend Twitter’s account and thus discriminated against Parler. For example, Parler cites the hashtag “#hangmikepence,” which briefly trended on Twitter. ... But AWS does not host Twitter’s feed, so of course it could not have suspended access to Twitter’s content.

Finally, Amazon notes that Section 230 also protects its practices here:

In addition to their facial deficiencies, Parler’s interference and antitrust claims also fail under Section 230(c)(2) of the Communications Decency Act. Under that statute, the provider of an “interactive computer service” is immune for acting in good faith to restrict access to material that is excessively violent, harassing, or otherwise objectionable.

This is actually interesting: rather than relying on 230(c)(1) as nearly every such case does, Amazon recognizes this is one of those rare (c)(2) cases, which give a provider the right to restrict access to violent, harassing, or otherwise objectionable content. That part of the law is rarely tested, since (c)(1) handles most moderation claims, but (c)(1) probably doesn't fit here, given that Amazon was denying service to Parler entirely, not just moderating specific speech.

Parler's going to lose this lawsuit. And it's going to lose badly.

* In a very, very strange set of circumstances, there is another lawyer with the identical name, David J. Groesbeck (middle initial included), who is a patent lawyer, also based in Washington state and registered to practice in NY. They are different people, and the patent lawyer (who does have a website) had to put a notice on his site saying he's not the David J. Groesbeck who has Parler for a client, and giving that lawyer's phone number, since he's been inundated with calls yelling at him for representing Parler.

Mon, 11 Jan 2021 09:41:14 PST The Slope Gets More Slippery As You Expect Content Moderation To Happen At The Infrastructure Layer Konstantinos Komaitis https://beta.techdirt.com/articles/20210111/09253546032/slope-gets-more-slippery-as-you-expect-content-moderation-to-happen-infrastructure-layer.shtml https://beta.techdirt.com/articles/20210111/09253546032/slope-gets-more-slippery-as-you-expect-content-moderation-to-happen-infrastructure-layer.shtml What a week the first week of January has been! As democracy and its institutions were tested in the United States, so were the Internet and its actors.

Following the storming of Capitol Hill by rioters, social media companies started taking action in what appeared to be a ripple effect: first, Twitter permanently suspended the account of the President of the United States, while Facebook and Instagram blocked his account indefinitely and, at least, through the end of his term; Snapchat followed by cutting access to the President's account, and Amazon's video-streaming platform Twitch took a similar action; YouTube announced that it would tighten its election fraud misinformation policy in a way that would allow it to take immediate action against the President if he posted misleading or false information. In the meantime, Apple also announced that it would kick Parler, the social network favored by conservatives and extremists, off its app store on the basis that it was promoting violence that threatened the integrity of US institutions.

It is the decision of Amazon, however, to kick Parler off its web hosting service that I want to turn to. Let me first make clear that if you are Amazon, this decision makes total sense from a business and public relations perspective – why would anyone want to be associated with anything that even borders on extremism? The decision also falls within Amazon's permissible scope given that, under its terms of service, Amazon reserves the right to terminate users from its network at its sole discretion. Similarly, from a societal point of view, Amazon may be seen as upholding most people's values. But I want to offer another perspective here. What about the Internet? What sort of a message does Amazon's decision send to the Internet and everyone who is watching?

There are several actors participating in the way a message – whether an email, cat video, voice call, or web page – travels through the Internet. Each one of them might be considered an “intermediary” in the transmission of the message. Examples of Internet infrastructure intermediaries include Content Delivery Networks (CDNs), cloud hosting services, domain name registries, and registrars. These infrastructure actors are responsible for a bunch of different things, from managing network infrastructure, to providing access to users, and ensuring the delivery of content. These – mostly – private sector companies provide investment as well as reliability and upkeep of the services we all use.

In the broadcasting world, a carrier also controls the content that is being broadcast; with the Internet, however, actors responsible for the delivery of infrastructure services (e.g., an Internet Service Provider or a cloud hosting provider) are neither likely nor expected to be aware of the content of the messages they are carrying. They simply do not care about the content; it is not their job to care. Their one and only responsibility is to relay packets on the Internet to other destinations. Even if, for the sake of the argument, they were to care, at the end of the day, they are not the producers of the content. Like postal and telephone services, they have the essential role of carrying the underlying message efficiently.

Over the past year, the role and responsibility of intermediaries has been placed under the policy microscope. The focus is currently on user-generated content platforms, including Facebook, Twitter and YouTube. In the United States, policy makers on both sides of the aisle have been considering anew the role of intermediaries in disseminating dis- and mis-information. Section 230, the law that has systematically, consistently and predictably shielded online platforms from liability over the content their users post, has become highly politicized, and change now seems almost inevitable. In Europe, after a year of intense debate, the newly released Digital Services Act has largely upheld the long-standing intermediary liability regime, but there are still implementation details that could see some change (e.g., the provisions on 'trusted flaggers').

It is actions like the one that Amazon took against Parler, however, that go beyond issues of just speech and can set a precedent that could have an adverse effect on the Internet and its architecture. By denying cloud hosting services, Amazon is essentially taking Parler offline and denying its ability to operate, unless the platform can find another hosting service. This might be seen as a good thing, prima facie; at the end of the day, who wants such content to even exist, let alone circulate online? But it does send a quite dangerous message: since infrastructure intermediaries can take action that cuts the problem off at the root (i.e., taking a service completely offline), regulators might start looking to them to "police" the Internet. In such a scenario, infrastructure intermediaries would have to deploy content-blocking measures, including IP and protocol-based blocking, deep packet inspection (i.e., viewing content of "packets" as they move across the network), and URL and DNS-based blocking. Such measures 'over-block', imposing collateral damage on legal content and communications. They also interfere with the functioning of critical Internet systems, including the DNS, and compromise Internet security, integrity, and performance.
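To make the over-blocking point concrete, here is a minimal sketch (all hostnames and addresses are made up, and the resolver table is a toy stand-in for real DNS) of why blocking at the IP layer takes bystanders down with the target:

```python
# Sketch: why IP-level blocking "over-blocks" (hypothetical hostnames).
# On shared hosting or behind a CDN, many unrelated sites resolve to
# the same address, so an IP blocklist removes them all at once.

# A toy resolver table: hostname -> IP (illustrative, not real data)
DNS_TABLE = {
    "bad-actor.example": "203.0.113.10",
    "community-bakery.example": "203.0.113.10",   # same shared host
    "local-library.example": "203.0.113.10",      # same shared host
    "unrelated-blog.example": "198.51.100.7",
}

def blocked_by_ip(blocklist_ips, table):
    """Return every hostname an IP blocklist would take offline."""
    return sorted(h for h, ip in table.items() if ip in blocklist_ips)

# Blocking only the offending site's address...
collateral = blocked_by_ip({"203.0.113.10"}, DNS_TABLE)
print(collateral)
# ...also takes down the two innocent sites sharing that address.
```

DNS- and URL-based blocking have analogous failure modes: a blocked domain may front thousands of independent sub-sites, and encrypted traffic makes URL-level precision impossible without deep packet inspection.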

What Amazon did is not unprecedented. In 2017, Cloudflare took a similar action against the Daily Stormer website when it stopped answering DNS requests for their sites. At the time, Cloudflare said: “The rules and responsibilities for each of the organizations [participating in Internet] in regulating content are and should be different.” A few days later, in an op-ed published in the Wall Street Journal, Cloudflare’s CEO, Matthew Prince, said: “I helped kick a group of neo-Nazis off the internet last week, but since then I’ve wondered whether I made the right decision.[…] Did we meet the standard of due process in this case? I worry we didn’t. And at some level I’m not sure we ever could. It doesn’t sit right to have a private company, invisible but ubiquitous, making editorial decisions about what can and cannot be online. The pre-internet analogy would be if Ma Bell listened in on phone calls and could terminate your line if it didn’t like what you were talking about.”

Most likely Amazon faced the same dilemma; or it might not have. One thing, however, is certain: so far, none of these actors appears to be considering the Internet and how some of their actions may affect its future and the way we all may end up experiencing it. It is becoming increasingly important that we start looking into the subtle, yet extremely significant, differences between moderation by user-generated content platforms and moderation by infrastructure providers.

It is about time we make an attempt to understand how the Internet works. From where I am sitting, this past year has been less lonely and semi-normal because of the Internet. I want it to continue to function in a way that is effective; I want to continue seeing the networks interconnecting and infrastructure providers focusing on what they are supposed to be focusing on: providing reliable and consistent infrastructure services.

It is about time we show the Internet we care!

Dr. Konstantinos Komaitis is the Senior Director, Policy Strategy and Development at the Internet Society.

Wed, 29 Nov 2017 13:35:36 PST Government Exposes Documents Detailing Sensitive NSA Software, Surveillance Programs Tim Cushing https://beta.techdirt.com/articles/20171129/11211738697/government-exposes-documents-detailing-sensitive-nsa-software-surveillance-programs.shtml https://beta.techdirt.com/articles/20171129/11211738697/government-exposes-documents-detailing-sensitive-nsa-software-surveillance-programs.shtml Another leak is causing some headaches for the NSA. Still reeling from the worldwide exposure of one of its exploit hoards, along with documents handed over to journalists by Ed Snowden (and unnamed others), the NSA's latest embarrassment is an unsecured intelligence system the NSA shares with the military.

The exposed data was discovered by security researcher Chris Vickery, who informed the government about the leak back in October.

On September 27th, 2017, UpGuard Director of Cyber Risk Research Chris Vickery discovered an Amazon Web Services S3 cloud storage bucket configured for public access. Set to allow anyone entering the URL to see the exposed bucket’s contents, the repository, located at the AWS subdomain “inscom,” contained 47 viewable files and folders in the main repository, three of which were also downloadable. The subdomain name provides some indication as to the provenance of the data: INSCOM, an intelligence command overseen by both the US Army and the NSA.

The three downloadable files contained in the bucket confirm the highly sensitive nature of the contents, exposing national security data, some of it explicitly classified.

The largest file is an Oracle Virtual Appliance (.ova) file titled “ssdev,” which, when loaded into VirtualBox, is revealed to contain a virtual hard drive and Linux-based operating system likely used for receiving Defense Department data from a remote location. While the virtual OS and HD can be browsed in their functional states, most of the data cannot be accessed without connecting to Pentagon systems - an intrusion that malicious actors could have attempted, had they found this bucket.
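As an aside, an .ova file like the "ssdev" appliance described above is just a tar archive bundling an .ovf descriptor and virtual disk images, which is why VirtualBox (or plain tar) can open it. A minimal sketch of peeking inside one, using a tiny synthetic archive since the real file obviously isn't available (member names are illustrative only):

```python
import io
import tarfile

def list_ova_members(path):
    """An OVA is a plain tar archive; list what it bundles."""
    with tarfile.open(path) as ova:
        return ova.getnames()

# Build a tiny stand-in archive (the real "ssdev" appliance held a
# Linux OS and a virtual hard drive; these names are just examples).
with tarfile.open("demo.ova", "w") as ova:
    for name in ("ssdev.ovf", "ssdev-disk1.vmdk"):
        data = b"placeholder"
        info = tarfile.TarInfo(name)
        info.size = len(data)
        ova.addfile(info, io.BytesIO(data))

print(list_ova_members("demo.ova"))  # ['ssdev.ovf', 'ssdev-disk1.vmdk']
```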

Included in the exposed data were files marked "Top Secret" and "NOFORN," the latter denoting information considered too sensitive to even be shared with foreign allies. Some of the exposed software could conceivably allow malicious actors to access sensitive (and live) Pentagon systems. Considering the sensitivity of this information, one has to wonder why no attempt was made to secure it.

Regrettably, this cloud leak was entirely avoidable, the likely result of process errors within an IT environment that lacked the procedures needed to ensure something as impactful as a data repository containing classified information not be left publicly accessible. Given how simple the immediate solution to such an ill-conceived configuration is - simply updated the S3 bucket’s permission settings to only allow authorized administrators access - the real question is, how can government agencies keep track of all their data and ensure they are correctly configured and secured?
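The "simple fix" the quote describes is a one-call permissions change. Today the blunt instrument for this is S3's Block Public Access feature (added by AWS after incidents like this one); the sketch below shows the configuration shape that boto3's `put_public_access_block` expects. The bucket name is taken from the report, and the live call is left commented out since it needs AWS credentials:

```python
# All four flags must be on to close every public-access escape hatch.
LOCK_DOWN = {
    "BlockPublicAcls": True,        # reject new public ACLs
    "IgnorePublicAcls": True,       # neutralize existing public ACLs
    "BlockPublicPolicy": True,      # reject public bucket policies
    "RestrictPublicBuckets": True,  # cut off existing public policies
}

def public_access_fully_blocked(config):
    """True only if every public-access setting is enabled."""
    return all(config.get(key, False) for key in (
        "BlockPublicAcls", "IgnorePublicAcls",
        "BlockPublicPolicy", "RestrictPublicBuckets",
    ))

# The actual lock-down (requires credentials, so shown commented out):
# import boto3
# boto3.client("s3").put_public_access_block(
#     Bucket="inscom",
#     PublicAccessBlockConfiguration=LOCK_DOWN,
# )

print(public_access_fully_blocked(LOCK_DOWN))  # True
```

That a misconfiguration this trivial to reverse sat unnoticed is exactly the process failure UpGuard is pointing at.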

Perhaps part of the reason this was overlooked was the software's relative uselessness. The military spent $93 million attempting to build a scalable solution for shared intelligence, but a 2014 memo called the software (known as "Red Disk") "a major hindrance to operations." Even though the software may be all but abandoned, the other files left exposed contained plenty of sensitive information.

Vickery noted that the disk image also contains other sensitive files, including private keys used for the system to access other servers on the intelligence community's network. The keys belong to a third-party firm, Invertix, a working partner of INSCOM and a key developer of Red Disk.

On top of that, the exposed files provided more information about NSA collection program Ragtime, which allowed (allows?) the agency to collect info on US persons.

The document seen by ZDNet, dated November 2011, shows the Ragtime program has eleven variants, including the four that were already known. The document alludes to Ragtime-BQ, F, N, PQ, S, and T.

The eleventh version refers to Ragtime-USP. "USP" is a common term used across the intelligence community to refer to "US person," like a US citizen or lawful permanent resident.

Ragtime is more than a decade old, but apparently still in use. It was part of the Stellar Wind warrantless surveillance bundle put together by the agency and the Bush administration shortly after the 9/11 attacks in 2001. While Stellar Wind is no longer in use thanks to domestic surveillance concerns (it's actually just been offshored to dodge FISA obligations), Ragtime appears to still be running, although there's little publicly-available information discussing its use in surveilling American citizens. An undated document leaked by Snowden in 2013 discusses Ragtime collection in the context of thwarting Congressional oversight.

What is known is Ragtime's super-secret status. It's a "need to know" program that only certain analysts can access. Collections from this program are considered so sensitive they aren't shared with foreign allies, with the exception of the Ragtime-C variant, which allows UK intelligence agency access.

With the Section 702 renewal deadline fast approaching, another leak showing possible domestic surveillance can't be helpful. Then again, serious reform of the expiring collection authorities doesn't seem to be in the cards this year, what with both House and Senate committees offering uninspiring legislation that won't do much to rein in surveillance abuses.

Fri, 17 Nov 2017 10:45:17 PST Defense Department Spied On Social Media, Left All Its Collected Data Exposed To Anyone Mike Masnick https://beta.techdirt.com/articles/20171117/10330438637/defense-department-spied-social-media-left-all-collected-data-exposed-to-anyone.shtml https://beta.techdirt.com/articles/20171117/10330438637/defense-department-spied-social-media-left-all-collected-data-exposed-to-anyone.shtml There are two big WTFs in this story. First, the Defense Departments Central Command (Centcom) was collecting tons of data on social media posts... and then the bigger one, they somehow left all the data they collected open on an Amazon AWS server. This was discovered -- as so many examples of careless data exposure on Amazon servers -- by Chris Vickery and UpGuard, who have their own post about the mess. You may recall Vickery from such previous stories as when the GOP left personal data on 200 million voters on an open Amazon server. Or when Verizon left private data available on millions of customers. Or when a terrorist watch list was left (you guessed it) on an open server. Or when he discovered that Hollywood studios were leaving their own screeners available on an open server. In short, this is what Vickery seems particularly good at: finding large organizations leaving sensitive data exposed on a server.

You would think (wouldn't you?) that Centcom would be better about these things than, say, Verizon or the GOP or Hollywood. But, nope.

"[It's] a pretty serious leak when you're talking about intelligence information being stored in an Amazon cloud service and not properly safeguarded," said Timothy Edgar, a former White House official in the Obama administration and former U.S. intelligence official.

Centcom's response is... sketchy. It uses the important term "unauthorized access," which suggests that it may be pushing for CFAA charges against Vickery/Upguard, since "unauthorized access" is a key part of the CFAA:

"We determined that the data was accessed via unauthorized means by employing methods to circumvent security protocols," said Maj. Josh Jacques, a spokesperson for U.S. Central Command. "Once alerted to the unauthorized access, Centcom implemented additional security measures to prevent unauthorized access."

But if it was truly left open, then the access was not "unauthorized." Indeed, it appears that Centcom went for convenience over security by making its Amazon S3 bucket open for access, and hoping obscurity would hide it.

Amazon servers where data is stored, called S3 buckets, are private by default. Private means only authorized users can access them. For one to be made more widely accessible, someone would have to configure it to be available to all Amazon Web Services users, but users would need to know or find the name of the bucket in order to access it.

By searching specific keywords, Vickery identifies information that companies and organizations inadvertently expose. In this case, he looked for buckets containing the word "com."

Three S3 buckets were configured to allow anyone with an Amazon Web Services account to access them. They were labeled "centcom-backup," "centcom-archive" and "pacom-archive," Vickery said.
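The keyword search described above works because candidate S3 bucket names are guessable. A hedged sketch of the idea: combine a keyword (an org name, or "com" as Vickery did) with common suffixes, then probe each name for anonymous access. Only the name generation is shown as runnable code; the suffix list is my own illustrative guess, and the network probe is left as a comment:

```python
# Common suffixes people bolt onto bucket names (illustrative list).
COMMON_SUFFIXES = ("backup", "archive", "data", "logs")

def candidate_buckets(keyword):
    """Enumerate plausible S3 bucket names for a keyword."""
    names = [keyword]
    names += [f"{keyword}-{suffix}" for suffix in COMMON_SUFFIXES]
    return names

print(candidate_buckets("centcom"))
# ['centcom', 'centcom-backup', 'centcom-archive',
#  'centcom-data', 'centcom-logs']

# A probe would then issue an unauthenticated listing request, e.g.
#   GET https://<name>.s3.amazonaws.com/?list-type=2
# and treat an HTTP 200 object listing (rather than 403) as "open".
```

Note that two of the real bucket names in this story, "centcom-backup" and "centcom-archive", fall straight out of this trivial pattern, which is why obscurity was never going to hide them.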

As for just what Centcom was doing here -- it does appear that it was publicly available social media content, so that's less of a direct concern, but it still does make you wonder why Centcom was storing all of this social media info. There are also, of course, related concerns about the US Defense Department conducting surveillance on Americans. This is from Upguard's post on the matter (linked above):

The data exposed in one of the three buckets is estimated to contain at least 1.8 billion posts of scraped internet content over the past 8 years, including content captured from news sites, comment sections, web forums, and social media sites like Facebook, featuring multiple languages and originating from countries around the world. Among those are many apparently benign public internet and social media posts by Americans, collected in an apparent Pentagon intelligence-gathering operation, raising serious questions of privacy and civil liberties.

While a cursory examination of the data reveals loose correlations of some of the scraped data to regional US security concerns, such as with posts concerning Iraqi and Pakistani politics, the apparently benign nature of the vast number of captured global posts, as well as the origination of many of them from within the US, raises serious concerns about the extent and legality of known Pentagon surveillance against US citizens. In addition, it remains unclear why and for what reasons the data was accumulated, presenting the overwhelming likelihood that the majority of posts captured originate from law-abiding civilians across the world.

I know that the US government still has this "collect it all" mentality, but as we've discussed over and over again, adding more hay to the haystack doesn't make it easier to find the needles.

Fri, 18 Aug 2017 13:39:00 PDT Contractor Exposes Personal Information Of 1.8 Million Chicago Voters On AWS Timothy Geigner https://beta.techdirt.com/articles/20170818/09560938025/contractor-exposes-personal-information-18-million-chicago-voters-aws.shtml https://beta.techdirt.com/articles/20170818/09560938025/contractor-exposes-personal-information-18-million-chicago-voters-aws.shtml At some point, it seems clear that if Chris Vickery comes a-callin', you've screwed up when it comes to keeping the private information of customers/voters secure. Vickery works for Upguard, a cyber-security consulting firm that regularly seeks out insecure sites and works with their owners to secure them. Vickery's fingerprints have been on discoveries such as Verizon's exposure of the personal information of 6 million of its customers and a firm contracted by the GOP exposing the personal data of roughly every American voter everywhere.

And now Vickery and Upguard have found that a contractor managing the city of Chicago's voter rolls appears to have exposed more personal information on an AWS server.

The acknowledgment came days after a data security researcher alerted officials to the existence of the unsecured files. The researcher found the files while conducting a search of items uploaded to Amazon Web Services, a cloud system that allows users to rent storage space and share files with certain people or the general public. The files had been uploaded by Election Systems & Software, a contractor that helps maintain Chicago's electronic poll books.

Election Systems said in a statement that the files "did not include any ballot information or vote totals and were not in any way connected to Chicago's voting or tabulation systems." The company said it had "promptly secured" the files on Saturday evening and had launched "a full investigation, with the assistance of a third-party firm, to perform thorough forensic analyses of the AWS server."

So, a couple of things to note here. First, while it's true no voting information was exposed, a good deal of personal information certainly was: names, addresses, the last four digits of Social Security numbers; you know, all of the things one would need to wreak havoc on a person using their identifying information. Second, it appears that "promptly securing" the files mostly meant putting a password on them at all. No hacking was required for Vickery to get to these files, because there was no password protecting them. Great.

Now, where I will give ES&S credit is that they are working with Upguard, rather than trying to vilify it, as we've seen done to so many other security researchers. That's a good thing. Still, Chicago officials are pretty pissed off.

"We were deeply troubled to learn of this incident, and very relieved to have it contained quickly," Chicago Election Board Chairwoman Marisel A. Hernandez said in a statement. "We have been in steady contact with ES&S to order and review the steps that must be taken, including the investigation of ES&S' AWS server. We will continue reviewing our contract, policies and practices with ES&S. We are taking steps to make certain this can never happen again."

Allen added that the board is considering how to notify and potentially offer remedies to those whose information was exposed.

"The expense for that is going to be borne by ES&S," Allen said. "This was a violation of the contract terms that explicitly lay out the requirement to safeguard the voters' data."

It's a wonder to this writer that the constant calls for things like e-voting machines continue when those in charge of securing voter data can't even do that right.

Tue, 7 Mar 2017 13:15:00 PST Techdirt Podcast Episode 112: When A Typo Breaks The Internet Leigh Beadon https://beta.techdirt.com/articles/20170307/12225136863/techdirt-podcast-episode-112-when-typo-breaks-internet.shtml https://beta.techdirt.com/articles/20170307/12225136863/techdirt-podcast-episode-112-when-typo-breaks-internet.shtml

From its humble origins as an online bookseller that many people worried might not survive, Amazon has grown into a critical piece of the web's backbone via its Amazon Web Services platform. Last week's S3 outage made this painfully clear, and understandably raised lots of concerns — especially after it was revealed that the whole thing was caused by a typo. So this week we're discussing whether something needs to be done, and what that might be.

Follow the Techdirt Podcast on Soundcloud, subscribe via iTunes or Google Play, or grab the RSS feed. You can also keep up with all the latest episodes right here on Techdirt.
