How The Internet Association's Support For SESTA Just Hurt Facebook And Its Users

from the with-friends-like-these dept

The Internet Association’s support for SESTA is truly bizarre. Should that support help the bill pass, it will damage every one of its members. Perhaps some members feel otherwise, but it is hopelessly naïve for any of them to believe that they will have the resources to stave off all the potential liability, including criminal liability, that SESTA invites for their companies generally and for their management teams specifically, or that they will be able to deploy those resources in a way that won’t destroy their user communities by over-censoring the very creativity and expression they are in the business of providing forums for.

But that’s only part of the problem, because what no one seems to be remembering is that Section 230 does not just protect the Internet Association’s platform members (and their management teams) from crippling liability; it also protects its platform members’ users, and if SESTA passes that protection will be gone.

Naturally, Section 230 does not insulate users from liability for the things they themselves use the platforms to communicate. It never has. That’s part of the essential futility of SESTA: it is trying to solve a problem that was never a problem. People who publish legally wrongful content have always been subject to liability, even federal criminal liability, and SESTA does not change that.

But what everyone seems to forget is that on certain platforms users are not just users; in their use of these systems, they actually become platforms themselves. Facebook users are a prime example of this dynamic, because when users post status updates that are open for commenting, they become intermediary platforms for all those comments. Just as Facebook provides the space for third-party content in the form of status updates, users who post updates are now providing the space for third parties to provide content in the form of comments. And just as Section 230 protects platforms like Facebook from liability in how people use the space it provides, it equally protects its users for the space that they provide. Without Section 230 they would all be equally unprotected.

True, in theory, SESTA doesn’t get rid of Section 230 altogether. It supposedly only introduces the risk of certain types of liability for any company or person dependent on its statutory protection. But as I’ve noted, the hole SESTA pokes through Section 230’s general protection against liability is enormous. Whether SESTA’s supporters want to recognize it or not, it so substantially undermines Section 230’s essential protective function as to make the statute a virtual nullity.

And it eviscerates it for everyone, corporate platforms and individual people alike, even the very people whose discussion-hosting activity has made platforms like Facebook so popular. Every single platform, whether a current member of the Internet Association, an unaffiliated or smaller platform, or a platform that has yet to be invented, will be harmed by SESTA, but the particular character of Facebook, as a platform hosting the platforms of individual users, means it will be hit extra hard. Maintaining these dynamic user communities becomes substantially more difficult when a key law enabling them is taken away, because in its absence it becomes significantly riskier for any individual user to continue hosting conversation on the material they post. Regardless of whether that material is political commentary, silly memes, vacation pictures, or anything else people enjoy sharing, without Section 230’s critical protection insulating them from liability for whatever other people happen to say about it, there are no comments these users can confidently allow on their posts without fear of an unexpectedly harsh consequence should they let the wrong ones remain.

Companies: facebook, internet association


Comments on “How The Internet Association's Support For SESTA Just Hurt Facebook And Its Users”

35 Comments
Anonymous Coward says:

“Just as Facebook provides the space for third-party content in the form of status updates, users who post updates are now providing the space for third parties to provide content in the form of comments. And just as Section 230 protects platforms like Facebook from liability in how people use the space it provides, it equally protects its users for the space that they provide. Without Section 230 they would all be equally unprotected.”

Do you have an actual case where Section 230 protected a user from liability?

Anonymous Coward says:

The Internet Association’s official site has this to say about their position on "Intermediary Liability":

The Internet has flourished in part because Internet platforms permit users to post and share information without fear that those platforms will be held liable for third-party content. The threat of liability can transform ISPs and websites into gatekeepers and enforcement agents, incentivizing them to block user generated content, even if legal – making the web less free, innovative, and collaborative. Consequently, the Internet Association supports the federal policy codified in Section 230 of the Communications Decency Act and Section 512 of the Copyright Act, which explicitly protects against liability for content posted by third party users.

That seems to directly conflict with any support for SESTA in any form. So is the IA’s leadership acting independently of the membership’s wants and needs?

ShadowNinja (profile) says:

Re: Vague FUD.

… Seriously?

Print publishers like newspapers actually hire the writers who write their content, and they have editors to review everything. Even the letters to the editor that papers publish had to be reviewed and pre-approved before appearing in print.

Web sites do not. Any discussion forum, even Techdirt’s, lets people post whatever the hell they want without oversight to prevent bad content from ever appearing. It’s been explained in countless previous Techdirt articles why making websites review and approve posts before they show up won’t work. It’s too expensive, and many sites, especially the giant big-name ones, get WAY too much content for it ever to be viable to have someone personally read and review every single piece of new content submitted.

Vapewhore says:

Re: Re: Vague FUD.

So you are saying print publications used their advertising $ to pay people to read and vet all the letters to the editor, etc. We now have a disruptive tech where print is becoming irrelevant; Facebook is raking in the advertising $ but doesn’t want to pay anyone to do this new (old) job created by the new tech?
I would think click-and-read would be much easier and more efficient than tear open envelope, unfold paper, orient paper, try to read printing/writing. I’m thinking someone with a quick left-clicker finger and a laser mouse could read 5-10 posts in the time it takes to do one on paper.
Something tells me that a filter could be put in place, we’ll call it a spam filter, so that only posts caught in the filter need be seen by human eyes. Maybe TD can tell us just how prohibitively expensive this is? How much of every dollar goes to clearing the spam filter?
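The commenter’s proposal — automated filtering so that only flagged posts need human eyes — can be sketched in a few lines. This is a minimal illustration, not any platform’s actual system: the term list and threshold are hypothetical placeholders.

```python
# Minimal sketch of the proposed pre-filter: score each post against a
# rule list and queue only suspicious posts for human review. The terms
# and threshold below are illustrative assumptions, not real product rules.

SUSPICIOUS_TERMS = {"escort", "viagra", "wire transfer", "click here"}

def needs_human_review(post: str, threshold: int = 1) -> bool:
    """Return True if the post trips enough filter rules to be queued."""
    text = post.lower()
    hits = sum(term in text for term in SUSPICIOUS_TERMS)
    return hits >= threshold

posts = [
    "Great article, thanks for the analysis!",
    "click here for cheap viagra",
]
review_queue = [p for p in posts if needs_human_review(p)]
print(len(review_queue))  # 1: only the spammy post reaches human eyes
```

Whether such a filter stays cheap at scale — and how many legal posts it wrongly catches — is exactly what the later replies in this thread dispute.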

Anonymous Coward says:

Re: Vague FUD.

Would you be happy if every web site you commented on looked at and decided whether they will publish your comment?

Also, if you run your own blog, Facebook page, YouTube channel, or other space where you can control comments, will you risk prosecution, block comments, or look at every comment and any embedded links before letting it become visible?

I ask because that is what you are asking for. Note, as soon as any comment could expose you to criminal charges, you have to look at all comments.

Anonymous Coward says:

Re: Re: Vague FUD.

I’m not the one you were asking, but: yes, I’d be completely happy with that.

It can be done, you know. We’ve done it with moderated mailing lists and Usenet newsgroups for decades. It takes attention and effort and diligence, of course, but it CAN be done — and there are all kinds of effective, efficient techniques for handling it gracefully, including collaborative ones.

The quality of conversation on these forums is incredibly higher than on those which are unmoderated and — increasingly — overrun by bots. Moreover, high quality begets the same, as we’ve seen over and over again: selecting for literate, intelligent, well-written comments encourages others to invest effort in crafting their own and discourages those who won’t. It’s not an accident that some of these forums are still around, and that some of them are VERY popular, despite the shiny new glitzy interfaces offered up by social media and their unfulfilled promises of community. They persist because they’re a pleasure to read and to participate in. They’re high-functioning communities that are worth seeking out.

If Facebook, Twitter, Reddit et al. don’t want to do this, then let them shut down. Problem solved. (Actually: MANY problems solved.) They will be replaced by myriad operations which WILL do this: some will do well, some won’t, but eventually there will emerge some which are run by people who’ve done their homework and learned from those of us who’ve been handling this adeptly all these years. They’ll create platforms that are largely immune to trolls and bots, stalkers and doxxers, and they’ll do well. The rest will be consigned to the scrap heap, and good riddance to them.

Anonymous Coward says:

Re: Re: Re: Vague FUD.

It can be done, you know. We’ve done it with moderated mailing lists and Usenet newsgroups for decades.

That works when a fraction of a percent of the people online participate in those forums. For larger platforms, it will be like print papers: a fraction of a percent of submissions will actually be published.

So what you are advocating is actually turning the Internet into a communications medium for a self-selected elite.

Anonymous Coward says:

Re: Re: Re:2 Vague FUD.

Yes…and no.

First, don’t underestimate how much online traffic is bot-generated (or bot-assisted). It’s completely overrun a number of platforms, mostly due to the incompetence of the people running them. Add to that the “shit-posting”, the trolling, and all the other forms of abuse. Sensible moderation takes a huge bite out of all of these — and, as a highly desirable side effect, discourages repeated, future efforts.

Second, it’s not clear that “large” == “good”. Oh sure, it’s all idealistic to think of giving everyone an equal voice, but handing a global soapbox to hundreds of millions of illiterate, uneducated morons doesn’t serve anyone well.

Third, it’s not elitism so much as it is meritocracy. Those who are worthy, and can prove it by demonstrating so, are worth reading. Those who can’t even compose a single cogent sentence are not. They bring nothing to the table: why should they be allowed a seat at it? What value do they add to discourse? How do they justify their presence?

(Keep in mind that “speaking via someone else’s Internet operation” is not a right. It’s a privilege.)

Fourth, it’s worth noting that all platforms (including this one) already screen/censor traffic, to varying degrees of effectiveness. Thus what I suggest is not the imposition of moderation where none currently exists: it’s merely a change in the criteria.

Part of the problem here, and this touches on all these points, is that ignorant newbies like the ones at Twitter built platforms far beyond their meager abilities to operate responsibly. They were so arrogant, so proud of their ability to create enormous operations, that they didn’t stop to consider if they should. (Yes, I’m channeling Jeff Goldblum.) They’ve lost control of their own creation.

The same has happened at Facebook and Reddit and Google and elsewhere. And their excuse is that they’re “too big” — which is not only bullshit, but disingenuous bullshit. Nobody made them do that: they CHOSE to do it. And now they’re trying to duck responsibility for the consequences, while of course continuing to profit massively from them.

Anonymous Coward says:

Re: Re: Re:3 Vague FUD.

Oh sure, it’s all idealistic to think of giving everyone an equal voice, but handing a global soapbox to hundreds of millions of illiterate, uneducated morons doesn’t serve anyone well.

As I said, you would reserve communications for a self-selected elite; and therefore you have the soul of a censor. I agree that parts of Reddit and other social media are crawling in the sewer, but that is what those groups decide to do. Other parts are polite and useful. The same can be said for newsgroups.

Select the company that you keep, but do not force your standards on everybody else, lest you find somebody with more power, whose standards you disagree with, forcing you to follow theirs.

Anonymous Coward says:

Re: Re: Re:4 Vague FUD.

I have the soul of a moderator. (And have put my job on the line, twice, to fight censorship.)

The distinction between the two often gets lost, but they really are quite different. A moderator’s task is in fact the opposite of censorship: it is to facilitate discussion, not to stop it, or to stop a portion of it based on viewpoint/opinion. Good moderators make this happen, and by their actions greatly enhance the quality of discourse.

A censor’s task is to suppress a viewpoint/opinion and thus to impair discourse. A censor doesn’t care how well written an essay is, or how well-sourced a news report is: they care whether or not it advances their agenda(s).

The two thus sit at opposite ends of the spectrum — even if some of the actions they take appear to be similar. (Like, let’s say, blocking “shit-posting”.)

It’s very nice to think of completely unfettered forums, and you know, Once Upon A Time, we had some of those. It was fun, in some ways, and frustrating in others. But that was during a time when the total population of the nets was only a tiny fraction of what it is now, during a time when most of that population was affiliated with universities/research institutions/large corporations, during a time when system/network admins paid attention to what their operations were doing, during a time when there were, thankfully, precious few trolls and bots and abusers.

That was then. This is now. That model doesn’t work any more. Pity. But no point pretending otherwise. Anyone who truly wants their forum to be useful and not merely another target to be exploited must take steps to defend it. (as TD does here) And that means that we’re not arguing moderated vs. unmoderated: we’re just arguing about how that moderation should work.

Anonymous Coward says:

Re: Re: Re:5 Vague FUD.

The distinction between the two often gets lost, but they really are quite different. A moderator’s task is in fact the opposite of censorship: it is to facilitate discussion, not to stop it, or to stop a portion of it based on viewpoint/opinion. Good moderators make this happen, and by their actions greatly enhance the quality of discourse.

Reverse-false-equivalency in a nutshell. Censorship is censorship no matter what you name it. If you prevent someone’s opinion (shit-post, trolling, whatever) from being seen, then you are censoring that person. “Moderation” is just a dumbed-down synonym.

Anonymous Coward says:

Re: Re: Re:6 Vague FUD.

I’m sorry that your inferior mind is incapable of grasping nuance. I’m sorry that the distinction between “moderator” and “censor” is lost on you. I’m sorry that you lack the intelligence, the vocabulary, the education to understand this rather simple point.

Perhaps you should not attempt to participate in discussions that are quite clearly far beyond your feeble intellect.

Anonymous Coward says:

Re: Re: Re:5 Vague FUD.

Parts of the large sites are well moderated, others not, but then that is the decision of the groups that form on those sites. What you are proposing is that the site impose its standards on all groups, rather than letting the groups that form decide their own standards.

As I said, people can select the company that they keep, and it is fairly easy to find and follow worthwhile conversations on all platforms while ignoring that which would offend you.

Anonymous Coward says:

Re: Re: Re:6 Vague FUD.

This has nothing to do with what “offends” me. (Or for that matter, what offends anyone else.)

It has to do with abuse, which has gotten so bad that “completely out of control” barely covers it. And yet it continues to get still worse: every day brings another report or paper demonstrating that however bad we thought it was, it’s actually worse, and that it’s proliferating along another axis or via another vector. Even those of us who’ve studied this for many years are shocked at what’s turning up.

Here’s an example of what I mean: https://medium.com/@jamesbridle/something-is-wrong-on-the-internet-c39c471271d2

Note: just an example. I could paste in several hundred more links from just the past month. And while our discussions here tend to focus on Facebook, Twitter, and Google, the problem isn’t confined to just those platforms: it’s on Instagram and Snapchat, Reddit and many, many others. None of them have yet demonstrated the professionalism required to deal with it — which is one reason why it’s increasing. Abusers know a soft target when they see one.

This has consequences — far-reaching consequences. We’re now at the point where some services (e.g. Twitter) can be fairly said to be more under the control of third parties than under the company running them. That should be setting off alarm bells in everyone’s head.

Anonymous Coward says:

Re: Re: Re:7 Vague FUD.

There is good and bad on all the large platforms, which is a natural result of opening them for anybody to use. You would throw out the good to try to eliminate the bad. Demanding that companies moderate their services is asking them to decide who will have a voice on the Internet. The problem with moderation is that it does not scale well, and algorithms are extremely error-prone. Let the groups that form on those platforms decide how to moderate their little patch of the Internet.

Instead of demanding moderation to your standards for all, give your attention to what you find good, and ignore what you find bad. That way you will find that your experience of the Internet improves dramatically.

Anonymous Coward says:

Re: Re: Re:3 Vague FUD.

Second, it’s not clear that "large" == "good". Oh sure, it’s all idealistic to think of giving everyone an equal voice, but handing a global soapbox to hundreds of millions of illiterate, uneducated morons doesn’t serve anyone well.

True or false, for better or worse, that’s what the 1st Amendment guarantees is permissible. And protecting that is what protects your ability to post your own opinions. Don’t be so quick to censor the internet lest ye find yourself censored.

zarprime (profile) says:

Re: Re: Re: Vague FUD.

Actually, those sites won’t shut down. They’ll move, base themselves in a country that’s more accepting of what they’re doing, and stop making themselves available in the US. And the US will become a backwater for discussion. This is no different from when the US tried to regulate cryptography: all the best work was being done outside the US, and a programmer in the US trying to participate could be prosecuted as an arms dealer. Eventually someone realized that if they couldn’t regulate the world, then they’d better make sure they could participate. They’re about to make the same mistake all over again. Do you think at this point Facebook, Twitter and Reddit need the US? They don’t. They are there because that’s where they started. I’m sure a disproportionate number of their users are there too, but sometimes you need to cut off a leg to save the body. Killing off these companies would be bad, but driving them away is even worse.

Anonymous Coward says:

Re: Re: Re:2 Vague FUD.

Interesting point. Perhaps that would/will happen.

It’s not clear that this is bad for the US. Perhaps you didn’t see the news, but in the last few days Facebook upped its public estimate of fake/bot profiles to 200M. That is, of course, still a HUGE underestimate — like all security disclosures always are. (See “Yahoo email” for a reference case.) The same is true elsewhere: this situation is the rule, not the exception.

So let’s say they move. What has been lost? An operation that’s clearly out of control? One with massive privacy and security issues? One that is readily manipulated by third parties? One that’s hopelessly overrun by bots? Why, exactly, should we care if this hot mess goes away?

But if we stipulate, for the purpose of argument, that A and B and C have been lost in this hypothetical Facebook-less future, and that these are valuable things that we want to have, then SOMEONE will step up, move into the vacuum that’s been created, and provide them. Facebook is expendable: it is as unimportant and transient as MySpace. (Remember it? Once upon a time we were breathlessly told, with much hype, that MySpace was important. It wasn’t.) So if Facebook leaves then there will certainly be some temporary disruption — but it’ll pass. And we can hope that whatever replaces it will be run by people who’ve learned from its failures.

Anonymous Coward says:

Re: Re: Re:3 Vague FUD.

https://en.wikipedia.org/wiki/Hitchens%27s_razor

Who said MySpace was important? Who said Facebook is? Your arguments are largely if not entirely baseless and certainly run afoul of the constitution. If you really want what you appear to want, move to China and that’s what you’ll get. Report back on how awesome their system is.

zarprime (profile) says:

Re: Re: Re:3 Vague FUD.

Let’s say that Facebook steps up. What’s it going to cost? Let’s postulate that they have only 100M users, posting only twice a day, and every post must be reviewed. Given this volume of posts, reviewers can’t take a lot of time to review – let’s give them only 10 seconds. That works out to over 555000 hours of reviewing, per day. With a normal 8-hour day, they need almost 70000 employees JUST FOR REVIEWING. I asked Google, and apparently Facebook currently employs something in the neighborhood of 17000 workers.
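The staffing estimate above can be reproduced directly. All inputs are the commenter’s own stated assumptions (100M users, two posts per user per day, ten seconds per review, an eight-hour workday), not real Facebook figures:

```python
# Back-of-envelope reproduction of the review-staffing arithmetic above.
# Every input is an assumption taken from the comment, not measured data.

users = 100_000_000        # assumed active users
posts_per_day = 2          # assumed posts per user per day
seconds_per_review = 10    # assumed review time per post
workday_hours = 8          # assumed reviewer shift length

review_hours_per_day = users * posts_per_day * seconds_per_review / 3600
reviewers_needed = review_hours_per_day / workday_hours

print(round(review_hours_per_day))  # 555556 hours of review per day
print(round(reviewers_needed))      # 69444 full-time reviewers
```

So the comment’s figures check out: roughly 555,000+ review-hours per day, requiring nearly 70,000 dedicated reviewers under these assumptions.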

How do we pay for these workers? Mandatory ads all over people’s pages? How did that work out for MySpace? Chances are, you now have to pay to be on Facebook.

But that’s just the cost to Facebook. Once your post has been approved, you’re now a publisher, and responsible for any comments on your post. Now you have to spend time reviewing comments. This starts to sound like a job and not a fun place to meet new people or find old friends. Screw comments – just let people apply a couple of emoticons to your post, nothing controversial there.

How is this a place you want to be? How often will something interesting happen there? Yes, Facebook isn’t perfect. NOTHING IS. Public (physical) bulletin boards have the same problem, but the good they serve generally outweighs the bad.

Mike Masnick (profile) says:

Re: Vague FUD.

Tell me EXACTLY why treating Internet corporations a little more like print publishers is bad.

Because an internet platform that enables anyone to communicate is vastly different from a print publication, which is a limited resource and involves careful review of everything.

Just YOUR OWN COMMENT proves the point. If we were to be treated as a print publication, we’d need to review every comment before it went on the site. And that would be ridiculous: we’d just get rid of comments.

You can’t, so just wrote a rant, then got out your thesaurus and made it look learned.

No, we can, and did. As did others responding to you as well. The differences are fairly obvious, so I have to wonder why you pretend not to understand them.

The whole nature of the internet is that it’s a communications medium — meaning open communications — rather than a broadcast medium involving gatekeepers reviewing every bit of content and only allowing some through.

I recognize that some whose business models thrived under a broadcast/gatekeeper model would like to turn the internet into such a system, but most of us think that would be devastating for communication and innovation. You move from a world of permissionless discussion, to only those with the stamp of approval can discuss. That’s a big, big difference. Why would you support such a move?

Anonymous Coward says:

Re: Re: Vague FUD.

The biggest irony is that out_of_the_blue supports such a permission-based system, which, given the quality of his drivel, would never be allowed should his wishes come to fruition.

I’ve learned that copyright advocates genuinely suck at thinking through the consequences of their desires, and have yet to see any exceptions to the rule.

MyNameHere (profile) says:

Re: Re: Vague FUD.

“we’d need to review every comment before it went on the site.”

That’s part of the FUD. Print publications have the time to do so and limited space, so they review before publishing to get varied points of view (or to support what they like).

Online doesn’t work that way only because it’s a choice in how you have set your site up. I know a number of sites that have 100% moderated comments, and still get plenty of action.

SESTA would also not make you have to moderate every comment. That is very, very misleading.

“The whole nature of the internet is that it’s a communications medium — meaning open communications — rather than a broadcast medium involving gatekeepers reviewing every bit of content and only allowing some through.”

I think the problem you have, however, is that you have mixed the publishers and the communications part of things together in your mind, and you seem unable to separate them.

We all know that Techdirt has moderation. You already filter out pretty much anything that would be a problem, such as posts with too many links, posts from IPs you think are TOR nodes, and so on. I doubt that Techdirt would have to do a single thing in the face of SESTA.

That level of moderation isn’t an unreasonable expectation of a site. After all, it is your site, your name on the door, and your name above every post.

“You move from a world of permissionless discussion, to only those with the stamp of approval can discuss.”

Another dishonest statement. With very basic moderation to avoid linking to bad things, the average site is still wide open and there is no requirement to “approve” discussions. What do you think would be stopped, exactly? Do you think the comments on this site would be much different? I don’t think so.

“Why would you support such a move?”

I don’t support strawmen. Creating a monster under the bed scenario to scare the kids isn’t helping your stand much at all.

Anonymous Coward says:

Re: Re: Re: Vague FUD.

It is still a bad law, because it makes people criminally responsible for other people’s actions. Also, it only takes one bad post/link slipping through to put the site in trouble. Haven’t you noticed that spam sometimes gets through on this site, and may stay up for a whole weekend?

MyNameHere (profile) says:

Re: Re: Re:2 Vague FUD.

I think the point here is that people’s bad actions, without the amplifier of the publishing website, would be about the same as a frog farting in a swamp.

“Also, it only takes one bad post/link to slip through to put the site in trouble.”

That isn’t at all true. You have to consider what the case would look like in court. “Yes Judge, we found this site all about parts for Ford Mustangs, and on the 100,000 posts and comments, we found a single spammer that linked to an escort site!”. Do you honestly think that it would even get into court? It would be a very poor case to try to fight.

Most importantly, a single post would most certainly not rise to the level of facilitating anything, except perhaps spam.

Mike doesn’t want to tell you that, because outrage at SESTA only stands up when you deal with such scary scenarios that just don’t play out in real life.

The Wanderer (profile) says:

Re: Re: Re: Vague FUD.

Online doesn’t work that way only because it’s a choice in how you have set your site up.

And the sites are set up that way because that’s the only possible way to achieve the scale of "content throughput" which is needed to serve that large of an audience without pricing yourself out of existence, at least barring the development of strong AI (which would bring with it ethical concerns related to slavery, but that’s another conversation).

I know a number of sites that have 100% moderated comments, and still get plenty of action.

At what scale of "content throughput", i.e., posts-per-second et cetera?

A quick Google search indicates that Twitter, to choose one example, gets about 6,000 posts per second on average. Would the kind of moderation used on the sites you’re thinking of scale to that level of throughput?
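At the quoted rate the scale problem is stark. Reusing the thread’s assumed ten seconds per human review (a hypothetical figure, like the ~6,000 posts/second average itself), the number of reviewers who must be working at every instant, around the clock, is:

```python
# Rough scale check on the Twitter figure quoted above. Both inputs are
# assumptions from this thread, not measured operational data.

posts_per_second = 6_000   # quoted average posting rate
seconds_per_review = 10    # assumed human review time per post

# Little's law: items in flight = arrival rate * time each item is held.
concurrent_reviewers = posts_per_second * seconds_per_review
print(concurrent_reviewers)  # 60000 reviewers busy at every moment, 24/7
```

Sixty thousand people reviewing simultaneously, every hour of every day, is the bar that pre-publication moderation would have to clear under these assumptions.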

"You move from a world of permissionless discussion, to only those with the stamp of approval can discuss."

With very basic moderation to avoid linking to bad things, the average site is still wide open and there is no requirement to "approve" discussions.

Do you not see the contradiction here?

The very act of deciding what is and is not "bad things", for the purpose of blocking them under such a system, is itself a granting or denying of permission.

MyNameHere (profile) says:

Re: Re: Re:2 Vague FUD.

IMHO, Twitter has a business model problem.

The scale of the site creates a legal nightmare for them – and only section 230 has kept them from massive legal problems.

The problem here is that section 230 has created a special situation where a company can effectively ignore pretty much all the laws by saying “user generated content”, but at the same time have no obligation to know who the users are who are generating the content. It creates a situation where the site isn’t liable (section 230 to the rescue) and at the same time the user isn’t liable because there is no way to track them down. All you need for a Twitter account is a disposable email address (yahoo, gmail, or any list of others) and away you go. Use TOR or VPNs, and you are effectively 100% anonymous, so you can libel, slander, offer illegal services and nobody is responsible.

That’s just f–ked up, plain and simple.

It’s one of the reasons why Facebook is actually a superior platform in many ways. The actual user is most often clearly identified (there are fake accounts, but at about 10% they are pretty good with it and working to improve). Facebook also has quite a reactive user flagging and reporting system that helps to keep things clean. They are committing more resources and people to it, and they understand that their business model in no small way depends on it.

So a move to repeal / remove protection for a very small part of section 230 to specifically prohibit what is already illegal everywhere else seems normal and reasonable. It’s something that needs to be done. Taking money out of the sex trade ecosystem is a good step towards making it less profitable and thus, perhaps fewer victims.

Mattwo says:

Among the IA's members, Netflix is definitely not on our side...

“Perhaps some members feel otherwise, but it is hopelessly naïve for any of them to believe that they will have the resources to stave off all the potential liability, including criminal liability,”

Netflix sold out to Comcast in the past and probably doesn’t even support net neutrality; they probably have other reasons to support SESTA.
