Senator Mark Warner Lays Out Ideas For Regulating Internet Platforms

from the be-careful dept

For over a year now, Senator Mark Warner has been among the most vocal in saying that it’s looking like Congress may need to regulate internet platforms. So it came as little surprise on Monday when he released a draft white paper listing out “potential policy proposals for [the] regulation of social media and technology firms.” Unlike much of what comes out of Congress, it does appear that whoever put together this paper spent a fair bit of time thinking through a wide variety of ideas, recognizing that every option has potential consequences — both positive and negative. That is, while there’s a lot in the paper I don’t agree with, it is (mostly) free of the hysterical moral panic that has characterized debates like the one around FOSTA/SESTA.

The paper lays out three major issues that it hopes to deal with:

  1. Disinformation that undermines trust in our institutions, democracy, free press, and markets.
  2. Consumer protection in the digital age
  3. Antitrust issues around large platforms and the impact they may have on competition and innovation.

All of these are issues worth discussing and thinking about carefully, though I fear that bad policy-making around any of them could actually serve to make other problems even worse. Indeed, it seems that most ideas around solving the first problem might create problems for the other two. Or solving the third problem could create problems for the first one. And so on. That is not to say that we should throw up our hands and automatically say “do nothing.” But, we should tread carefully, because there are also an awful lot of special interests (a la FOSTA, and Articles 11 and 13 in the EU) who are looking at any regulation of the internet as an opportunity to remake the internet in a way that brings back gatekeeper power.

On a related note, we should also think carefully about how much of a problem each of the three items listed above actually is. I know that there are good reasons to be concerned about all three, and there are clear examples of how each one is a problem. But just how big each problem is, and whether or not that will remain the case, is important to examine. Mike Godwin has been writing an important series for us over the last few months (part 1, part 2 and part 3) which makes a compelling case that many of the problems everyone is focused on may be the result of a bit of moral panic: overreacting to a problem without realizing how small it actually is.

We’ll likely analyze the various policy proposals in the white paper over time, but let’s focus in on the big one that everyone is talking about: the idea of opening up Section 230 again.

Make platforms liable for state-law torts (defamation, false light, public disclosure of private facts) for failure to take down deep fake or other manipulated audio/video content — Due to Section 230 of the Communications Decency Act, internet intermediaries like social media platforms are immunized from state tort and criminal liability. However, the rise of technology like DeepFakes — sophisticated image and audio tools that can generate fake audio or video files falsely depicting someone saying or doing something — is poised to usher in an unprecedented wave of false and defamatory content, with state law-based torts (dignitary torts) potentially offering the only effective redress to victims. Dignitary torts such as defamation, invasion of privacy, false light, and public disclosure of private facts represent key mechanisms for victims to enjoin and deter sharing of this kind of content.

Currently the onus is on victims to exhaustively search for, and report, this content to platforms who frequently take months to respond and who are under no obligation thereafter to proactively prevent the same content from being re-uploaded in the future. Many victims describe a “whack-a-mole” situation. Even if a victim has successfully secured a judgment against the user who created the offending content, the content in question in many cases will be re-uploaded by other users. In economic terms, platforms represent “least-cost avoiders” of these harms; they are in the best place to identify and prevent this kind of content from being propagated on their platforms. Thus, a revision to Section 230 could provide the ability for users who have successfully proved that sharing of particular content by another user constituted a dignitary tort to give notice of this judgment to a platform; with this notice, platforms would be liable in instances where they did not prevent the content in question from being re-uploaded in the future, a process made possible by existing perceptual hashing technology (e.g. the technology they use to identify and automatically take down child pornography). Any effort on this front would need to address the challenge of distinguishing true DeepFakes aimed at spreading disinformation from satire or other legitimate forms of entertainment and parody.
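The white paper invokes perceptual hashing only by name, but the core idea is simple enough to sketch. Unlike a cryptographic hash, a perceptual hash is designed so that near-identical media produce near-identical fingerprints, which is what makes re-upload matching feasible. Here is a minimal difference-hash (“dHash”) sketch in Python, using a tiny synthetic grayscale image as a 2D list of brightness values; this is an illustration of the general technique, not any platform’s actual implementation (real systems like PhotoDNA are far more sophisticated):

```python
def dhash(pixels):
    """Difference hash: for each pixel, record whether it is
    brighter than its right-hand neighbor. A lightly edited copy
    flips only a few bits; unrelated content differs in many."""
    bits = []
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits.append(1 if left > right else 0)
    return bits

def hamming(a, b):
    """Count the positions where two bit-vectors disagree."""
    return sum(x != y for x, y in zip(a, b))

# A 4x5 brightness gradient, a lightly edited copy, and its mirror image
original  = [[0, 10, 20, 30, 40] for _ in range(4)]
reupload  = [[0, 10, 35, 30, 40]] + [[0, 10, 20, 30, 40] for _ in range(3)]
unrelated = [[40, 30, 20, 10, 0] for _ in range(4)]

print(hamming(dhash(original), dhash(reupload)))   # small distance: likely a re-upload
print(hamming(dhash(original), dhash(unrelated)))  # large distance: different content
```

Note that the sketch also illustrates the proposal’s limitation discussed below: the hash can tell a platform that the content matches a court-adjudicated file, but nothing about whether this particular upload is the tort or the parody.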

So this seems very carefully worded and structured. Specifically, it would appear to require first a judicial ruling on the legality of the content itself, and then would require platforms to avoid having that content re-uploaded, or face liability if it were. The good part of this proposal is the requirement that the content go through a full legal adjudication before a takedown would actually happen.

That said, there are some serious concerns about this. First of all, as we’ve documented many times here on Techdirt, there have been many, many examples of sketchy lawsuits filed solely to get a ruling on the books in order to take down perfectly legitimate content. If you don’t remember the details, there were a few different variants on this, but the standard one was to file a John Doe lawsuit, then (almost immediately) claim to have identified the “John Doe,” who admits to everything and agrees to a “settlement” admitting defamation. The “plaintiff” then sends this to the platforms as “proof” that the content should be taken down. If Warner’s proposal goes through as is, you could see how that could become a lot more common, and you could see a series of similar tricks as well. Separately, it could potentially increase the number of sketchy and problematic defamation lawsuits filed in the hopes of getting content deleted.

One would hope that if Warner did push down this road, he would only do so in combination with a very strong federal anti-SLAPP law that would help deal with the inevitable flood of questionable defamation lawsuits that would come with it.

To his credit, Warner’s white paper acknowledges at least some of the concerns that would come with this proposal:

Reforms to Section 230 are bound to elicit vigorous opposition, including from digital liberties groups and online technology providers. Opponents of revisions to Section 230 have claimed that the threat of liability will encourage online service providers to err on the side of content takedown, even in non-meritorious instances. Attempting to distinguish between true disinformation and legitimate satire could prove difficult. However, the requirement that plaintiffs successfully obtain court judgements that the content in question constitutes a dignitary tort (which provides significantly more process than something like the Digital Millennium Copyright Act (DMCA) notice and takedown regime for copyright-infringing works) may limit the potential for frivolous or adversarial reporting. Further, courts already must make distinctions between satire and defamation/libel.

This is all true, but it does not take into account how these bogus defamation cases may come into play. It also fails to recognize that some of this stuff is extremely context specific. The paper points to hashing technology like that used in spotting child pornography. But such content involves strict liability — there are no circumstances under which it is considered legal. Broader speech is not like that. As the paper acknowledges in discussing whether or not a “deepfake” is satire, much of this is likely to be context specific. And so, even if certain content may represent a tort in one context, it might not in others. Yet under this hashing proposal, the content would be barred in all contexts.

As a separate concern, this might also make it that much harder to study content like deepfakes in ways that might prove useful in recognizing and identifying faked content.

Again, this paper is not presented in the hysterical manner found in other attempts to regulate internet platforms, but it also does very little beyond a perfunctory “digital liberties groups might not like it” to explore the potential harms, risks and downsides of this kind of approach. One hopes that if Warner and others continue to pursue such a regulatory path, much more caution will go into the process.

Companies: facebook, google, twitter


Comments on “Senator Mark Warner Lays Out Ideas For Regulating Internet Platforms”

56 Comments
Anonymous Coward says:

Remove your assumption that teh internets works as should now.

[Much repeated from a prior, but close enough. — ANY opposition here is just censored away, so accuracy and relevance don’t really matter!]

That IS just your arbitrary assertion for premise — because it empowers corporations as YOU wish, not that it serves The Public so much.

CDA Section 230 made an EXCEPTION to prior law. — IF NOT an exception (as "Gwiz" asserted), then WHY is it necessary? HMM? — I’m sure Masnick just fainted at the thought of removing IMMUNITY to "platforms", making them JUST LIKE PRINT PUBLISHING. — And the resulting "gatekeeping" meaning editorial control would be FINE with me when within the limits of common law.

Anonymous Coward says:

Re: Remove your assumption that teh internets works as should now.

Section 230 is actually new and provides IMMUNITY. Indeed, Masnick’s notion is that it provides not just immunity, but authorizes "platforms" to arbitrarily control The Public’s speech! — That is NEW, "Gwiz", and clear EXCEPTION to all prior law. Section 230 is supposed to empower The Public, NOT makes us subject to corporations!

Corporations of course monetized that "freedom" to gain money indirectly from the works of others. No one paid any heed to growing problems for too long. — It’s only been lately that FOSTA stopped blatant advertising for prostitution!

BUT OF COURSE you "libertarians" who believe that the norms against drug use, prostitution, and numerous scams enabled by anonymity, that all societies have found necessary for basically three thousand years, ALL THAT CAN JUST BE THROWN OUT, right?

The Public does NOT have to allow corporations to run totally wild in order to get most of the advantages! It’s only masnicks and googles and pirates who skim off the margins without contributing who are worried by this.

Anonymous Coward says:

Re: Re: Remove your assumption that teh internets works as should now.

Claiming every society in the last 3 millennia has condemned sex work is laughable. Have you heard of this obscure profession called ‘geisha’?

Shariah law, which forbid alcohol, is about 1400 years old, not 3000. The first non-alcohol drug regulations are younger still. And they definitely weren’t across every society in the entire world.

And ‘scams enabled by’ anonymity isn’t the same as ‘anonymity,’ so that’s just a non sequitur. Where is the pro-scam constituency?

Stephen T. Stone (profile) says:

Re: Re:

Oooh, been a while since I went toe-to-toe with you via a longer comment. Let’s have some fun.

That IS just your arbitrary assertion for premise — because it empowers corporations as YOU wish, not that it serves The Public so much.

Section 230 empowers anyone who runs any sort of platform that allows for third-party speech. That platform can be YouTube, one lone person running a self-hosted blog with a comments section, or anything in-between.

CDA Section 230 made an EXCEPTION to prior law. — IF NOT an exception […] then WHY is it necessary?

Speech on the Internet would be chilled without 230. No “open to the public” platform would dare exist if the owners/operators of that platform could be held liable for third-party speech it had no hand in making or publishing. 230 gives platform ops the necessary legal leeway to moderate the platform as they see fit while keeping it open to the public.

I’m sure Masnick just fainted at the thought of removing IMMUNITY to "platforms", making them JUST LIKE PRINT PUBLISHING.

Print publishing does not carry the same risk of third-party liability as speech on the Internet precisely because material submitted by non-employees of a print publication is typically vetted and edited before the publication goes to print. Twitter cannot hold back every single tweet that goes out on the service every second to vet and edit them for “publication”—and it should not have to. To force Twitter into doing so would kill the usefulness of the service; the service itself would fall next. The same goes for Tumblr, YouTube, Pinterest, and virtually every other “open to the public” platform for speech.

And the resulting "gatekeeping" meaning editorial control would be FINE with me when within the limits of common law.

You have never once explained your definition of “common law”, how common law pertains to the matter of free speech, and what specific part of common law restricts a platform’s owners and operators from moderating the site as they deem fit. If you want that SovCit lingo to mean anything, we must come to an understanding in re: the meaning of that lingo and how it pertains to the United States legal system.

Masnick’s notion is that it provides not just immunity, but authorizes "platforms" to arbitrarily control The Public’s speech!

230 allows the owners and operators of “open to the public” platforms to moderate the platform as they deem fit (within the existing limits of the law). It does not allow those platform owners/operators to moderate the entirety of the Internet in re: one person’s speech. I can make a 280-character post on Twitter, Tumblr, and Mastodon at virtually the same time, and if the moderators of just one of those platforms decides my post violates that platform’s TOS, they can delete that post only on that platform.

Twitter mods have no control over what I post outside of Twitter, including anything I post to, say, a fully-self-hosted website where I republish anything I put on Twitter. Please learn the difference between a platform saying “we don’t do that here” and the government saying “you can’t do that anywhere”.

Section 230 is supposed to empower The Public, NOT makes us subject to corporations!

People who run corporations and moderate platforms for corporations are part of the general public, too, you know.

It’s only been lately that FOSTA stopped blatant advertising for prostitution!

Yes, and look at the wonderful job that piece of (shit) legislation has done of driving illegal human trafficking operations away from the platforms where police could find them to underground platforms where the police cannot. FOSTA did not stop prostitution or curb human trafficking—it only pushed those things out of sight so politicians do not have to think about them.

OF COURSE you "libertarians" who believe that the norms against drug use, prostitution, and numerous scams enabled by anonymity, that all societies have found necessary for basically three thousand years, ALL THAT CAN JUST BE THROWN OUT, right?

What I believe is this: Illegal drugs should be legalized and regulated like alcohol, prostitution should at least be decriminalized, and scammers should be punished for defrauding people. Getting rid of Section 230 would only do what FOSTA has done: Drive the issue out of the spotlight and make the job of law enforcement that much harder to do.

The Public does NOT have to allow corporations to run totally wild in order to get most of the advantages!

Corporations do not run “totally wild”. They must still work within existing laws; the legal issues that Backpage faces stand as proof. Even so, members of “the Public” who operate their own platforms receive the same advantages from 230 that major corporations do: General immunity from legal liability for the actions of a third party. As with the Backpage situation, that immunity can be rescinded if the platform owner violates the law.

It’s only masnicks and googles and pirates who skim off the margins without contributing who are worried by this.

Funny thing: If Techdirt were to ever lose 230 protections via 230 being rescinded, you would no longer have this platform to which you could post your speech. And since you have never once indicated or proven that you post on any other platform, you would lose what is apparently the only platform for your speech. If’n you wanna keep yelling at clouds that look like Mike Masnick, I suggest you start yelling at Mark Warner instead.

That One Guy (profile) says:

Re: Re: Re: Re:

Print publishing does not carry the same risk of third-party liability as speech on the Internet precisely because material submitted by non-employees of a print publication is typically vetted and edited before the publication goes to print.

You missed the best part of that argument: 230 protections are not giving online platforms extra rights, they’re making clear that those platforms have the same rights as offline publishers, namely not holding them liable for what other people say and do.

A newspaper is not liable for what someone might scribble into a paper left lying around, even if those scribbles are illegal in some fashion. A book publisher is not liable for someone slipping illegal photos in a book and then letting other people find it and them. 230 merely takes the idea of holding the one who posts illegal content liable rather than whatever they use to do so and applies it online.

Funny thing: If Techdirt were to ever lose 230 protections via 230 being rescinded, you would no longer have this platform to which you could post your speech. And since you have never once indicated or proven that you post on any other platform, you would lose what is apparently the only platform for your speech.

Out of their many quirks that one is probably the funniest. Watching someone attack the very thing that allows them to use a platform they obsess over is like watching someone threatening to demolish a bridge they’re currently standing on. If by some chance they actually succeed then they’ll have done damage to what they hate so much to be sure, but they’ll also have screwed themselves over in a big way.

Mike Masnick (profile) says:

Re: Remove your assumption that teh internets works as should now.

CDA Section 230 made an EXCEPTION to prior law. — IF NOT an exception (as "Gwiz" asserted), then WHY is it necessary? HMM? — I’m sure Masnick just fainted at the thought of removing IMMUNITY to "platforms", making them JUST LIKE PRINT PUBLISHING.

While you make these amusing assertions, to be clear, for many years I did in fact argue that CDA 230 was unnecessary, because it should be OBVIOUS that the platform is not responsible for the speech of users, and thus courts should throw out such lawsuits.

However, it became clear that CDA 230 IS necessary because many people — including people like you — bizarrely believe that platforms should be liable for the speech of others, creating a huge mess of censorship.

I mean, if we had the world you wanted, I would need to ban you from speaking on this site because you so regularly post nonsense. Would you be okay with that?

Anonymous Coward says:

Re: Re: Remove your assumption that teh internets works as should now.

for many years I did in fact argue that CDA 230 was unnecessary, because it should be OBVIOUS that the platform is not responsible for the speech of users … However, it became clear that CDA 230 IS necessary because many people — including people like you — bizarrely believe that platforms should be liable for the speech of others,

Statements made by uninformed people are not sufficient reason to create a law saying that a legal thing is legal.

Courts basically agreed with you that service providers were not responsible… except in one specific case, which is where the CDA is useful: when they used editorial control to remove content. Courts had decided they were responsible for everything if they were moderating, and nothing if they weren’t, and CDA 230 overrode that to say that moderation doesn’t make someone a publisher.

Anonymous Coward says:

Re: Re: Re:2 Remove your assumption that teh internets works as should now.

Yes, that’s why 230 is useful, though it’s not so clear-cut that we can say 99.9% of sites need it. The Stratton ruling was sufficiently bad that it would have likely been overturned by a higher court, had 230 not made it moot. (And those "defamatory" statements? Turned out Stratton Oakmont were scamming people; the founders later admitted it, and "The Wolf of Wall Street" is based on this story.)

Mike Masnick (profile) says:

Re: Re: Re: Remove your assumption that teh internets works as should now.

Courts basically agreed with you that service providers were not responsible… except in one specific case, which is where the CDA is useful: when they used editorial control to remove content. Courts had decided they were responsible for everything if they were moderating, and nothing if they weren’t, and CDA 230 overrode that to say that moderation doesn’t make someone a publisher.

Yes. But one other reason why CDA 230 became useful is not just in preventing a repeat of Stratton, but in creating pretty clear certainty for platforms AND in getting nearly all (with a few edge case exceptions) cases dismissed at the earliest stage of litigation. The fear, certainly, is that without CDA 230 having such a clear line, plaintiffs could get past a 12(b)(6) Motion to Dismiss and drag platforms into a longer, much costlier, process.

aerinai (profile) says:

Deepfake today -- Dynamic Video Option Tomorrow

The thing I am upset about this deepfake controversy is how a couple bad actors did some bad things with it and now everyone is in a moral uproar over it. This is a new technology that has a lot of potential LEGITIMATE uses. For example:
– A movie where you can cast yourself and friends in a role
– Recast a movie with specific actors/actresses (who doesn’t want to see Christopher Walken as Han Solo!)
– Shooting pilots and pitching ideas using cheap talent and augmenting your preferred candidates instead
– Instead of reshooting scenes after an actor either dies/does something stupid and gets fired, just use this technology in its place

TECHNICALLY… as it is worded, any of the above would count as “failure to take down deep fake or other manipulated audio/video content.”

Quit demonizing a SPECIFIC technology just because a few people did something bad with it…

John Smith says:

Re: Re: Deepfake today -- Dynamic Video Option Tomorrow

Don’t worry about copyright holders. They were just ambushed by the lack of legal protection they thought they had. Smart creators just go where the money is and abandon that which cannot be protected. The result is the audience gets what it pays for after it destroys a market through piracy, while the artists themselves develop business models which are immune to piracy, even if they have to create slightly different content to adapt. It’s really no big deal as long as the rules don’t change ex post facto like they did with the DMCA.

Thad (user link) says:

Re: Deepfake today -- Dynamic Video Option Tomorrow

  • Shooting pilots and pitching ideas using cheap talent and augmenting your preferred candidates instead
  • Instead of reshooting scenes after an actor either dies/does something stupid and gets fired, just use this technology in its place

Never mind Congress, I can’t imagine SAG going for either of those.

Anonymous Coward says:

a small site operator says

We’ve run a community news site for over 15 years. We deal with any requests from users within minutes.

Black and white issues are easy. It’s the gray areas, the ones that require human thinking to assess whether an insult is defamation, that cause the headaches.

Is a local politician a public figure? Sort of, but not quite in a small town.

An observation – the worst offenders always quote our policies back at us pointing out the loopholes they’ve found to support their damage. No one else notices policies.

And finally, Section 230 saved us when one organization’s board member wrote truthful embarrassing things on our site, and another board member sued us all. We had NO WAY to know the truth of what someone was posting to our platform until after the lawsuit. We weren’t there when the alleged, (and ultimately true and newsworthy) activities took place.

Ninja (profile) says:

If you are going to regulate internet platforms then focus. There isn’t any technical solution to *perfectly* filter user content that doesn’t involve bankruptcy or dumbing down the web into some broadcaster, so don’t go that way. Focus on what’s within reach, like how they deal with information supplied to them, especially if it’s not explicitly and actively done, for instance.

Most of these guys wanting to regulate the internet are actually aiming at the end user while hitting the platform. It will inevitably go wrong like the FOSTAs and SESTAs of the world.

Berenerd (profile) says:

There are issues that I don't think they are looking at...

No one country can think about enforcing anything on the internet without going the way of the great firewall of China and spying on individuals for no reason. The internet is not just a US thing. It is a WORLD WIDE communication network. Every country would need a say on these laws; otherwise laws are nothing more than SESTA/FOSTA, just some words on a document that someone flips around in the air to make it look like they are doing god’s work. Even with the Great Firewall, there are ways to get around it, to the point that enough of the population knows how to do it.

Christenson says:

Core problems....core questions

Here we are, with the greatest copying machine ever built (the internet), on the verge of anyone being able to create anything (deepfakes, photoshop, etc) and say it to almost anyone.

It’s completely a-social…the internet, and even small platforms don’t know what speech it is they host, and lots of awful things can be libel or not depending only on context, the specific example being saying something that starts a kerfuffle, which might be libel, or reporting that the very same thing started a kerfuffle, which never is.

Sites control misinformation of all kinds when there are social consequences for it. The old barriers to entry which created those consequences have fallen away…and have not yet been replaced.

Example: For sale, 1 bridge, in Brooklyn, Famous Designer. Must move quickly, price is right!

OK, how, (besides the laugh test in this case), is someone supposed to figure out if the above “want ad” is credible? Used to be, it cost a few bucks to get that printed, but not anymore….

How can Techdirt and its readers know that this anonymous comment is really from the same guy that posted all the other posts under my name, and not a Russian disinformation officer or a bot or a Remote-access Trojan?

Christenson says:

Re: Re: Core problems....core questions

Nothing new? It’s beside my point, which is that conditions and relationships have changed and are changing in deep ways.

I think even our little chat here would not exist without the internet, much less things like the explosion of available content we are experiencing. Look at Slenderman — would that be possible without the internet?

It’s also very hard to disentangle stuff like photoshop, facebook, and deep fakes from the internet. These all depend on both the internet (so enough people will pay for it), and enough computing power in enough hands to make them happen.

Anonymous Coward says:

Re: Re: Re: Core problems....core questions

I think even our little chat here would not exist without the internet, much less things like the explosion of available content we are experiencing. Look at Slenderman — would that be possible without the internet?

Our specific chat needs the Internet, but chats like this happened before wherever people gathered to relax or work.

The explosion we are witnessing is not in the creation of new works, as for some people the drive to tell stories is strong. What has happened is those story tellers have a means of reaching an audience via the Internet. Computers have changed the media used to tell those stories, and sometime the way they are told, but similar stories, built up in similar fashion, predate the printing press, they are the fairy tales of old.

The deep change being brought about by the Internet is at an organizational level of society, and the groups it threatens are the bureaucracies of government and industrial organizations. This is due to the many to many communications enabled by the Internet.

It is seen in its earliest form in Linux and the free software movement, where a large volume of useful software has been built with no centralized control. It also enables ad hoc groups to form to deal with natural disasters, like translation and remapping to support rescue efforts, or citizens helping each other out, like in the aftermath of Sandy. It enabled the monster trucks to roll into Houston, and be effective in rendering aid. Also, the responses are not limited by knowledge in a coordinating system, but rather include any available responder and their resources.

ECA (profile) says:

God help us from the ones that THINK they know it all.

Disinformation that undermines trust in our institutions, democracy, free press, and markets.
Consumer protection in the digital age
Antitrust issues around large platforms and the impact it may have on competition and innovation.

#1..
You cant tell the SITES who is liable for idiots..
Institutions?? WHO is this??
Democracy?? WE ARNT ONE..and every one that WE have Forced on others has failed.
FREE press?? WHO owns our press?? WHOSE opinions are we listening to??
Markets?? Then dont read the STARS on amazon or newegg..on how good a product is. how many of them are just Stealing our money?? Where did QUALITY GO??

#2 Protection?? those laws are almost gone.. Go look at the superfund sites, and all the money spent to clean up Corp Garbage.. corps that denied and ran away..

#3…Antitrust? where do you want to start? And the regulations have been STOMPED ON, and removed..

Practice what I preach and NOT what I do??

Disinformation is easy to deal with. Its called you ask for proof, and an explanation of the persons SIDE OF THE PROBLEM..
I dont mind a fair debate, but both sides must TRY to understand the opinions expressed. And the NET is a great place to debate allot of things.

Protecting our corps/companies is Kinda stupid. If they are doing something WRONG, LETS FIND OUT/KNOW what is going on. Lets discuss it, and HOW they can improve.. AND NOT run away from the subject.
i love the thought that CORPS ARE KINGS in certain nations, but other nations regulate and control the corps, ALLOT BETTER..

Anonymous Coward says:

Hashes

I am no expert on the subject, but isn’t a hash value supposed to change if just bits of data are changed? It seems that it would be simple to change a frame, add a frame at the end, or similar things that could be easily automated before upload, would render any hash-check-and-ban system useless.
I know there are other methods as well, but this one stood out to me.
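The commenter's intuition is correct for cryptographic hashes: flipping even a single bit produces a completely different digest (the "avalanche effect"). A minimal Python sketch using the standard hashlib module illustrates this; the byte string standing in for a video file is, of course, a made-up example:

```python
import hashlib

# Pretend these bytes are a video file's raw contents.
original = b"frame1frame2frame3"

# Flip a single bit in the last byte, the kind of tweak
# that could be trivially automated before upload.
modified = original[:-1] + bytes([original[-1] ^ 0x01])

h1 = hashlib.sha256(original).hexdigest()
h2 = hashlib.sha256(modified).hexdigest()

print(h1 == h2)  # False: a one-bit change yields an entirely different digest
```

This is why matching systems in practice tend to rely on perceptual hashes, which are designed so that visually similar content produces similar hash values, rather than exact cryptographic digests.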

fairuse (profile) says:

Regulate? Social Platform is the Standard Oil of 2018

What happened to the oil barons? Bang you are dead via tax and regulate.

Ma Bell? Cut into bits and forced out of the telephone set monopoly.

It’s now time for politicians to get their fur up at something they don’t control and have little knowledge of, backed by the weight of Hollywood and centralized cable networks with content libraries to make money on. Look real close at what passes for premium content in the cable market: a pitiful amount of no-cost TV series and movies, and much to rent/buy at prices higher than Amazon Prime Video's ($5 for one episode of a popular series).

This goes way beyond just Google, Facebook, Twitter, and 24h news sites. Comcast is silent because it has much to gain. Other cable services are going to get rolled up in this: ISP = content supplier and access. Hell, mobile is just waiting to add content nobody wants. Hip G90 or whatever that was died; the solution is to offer access to the Netflix, Amazon Prime Video, and Universal catalogs.

YouTube is not the issue. DeepFake tech is just a symbol to wave in consumers' faces. I have all the audio and video tools and know how to use them. Why did Hollywood beat all the hardware-level access the iMac had out of user control? Duh. Make computers into TVs.

Tablets, mobile, laptops are just 1960 TV sets now. My old iMac can make short films; it can also grab a segment of any video I see. With that power plus DeepFake Pro I am starring in Doctor Who.

This is the middle of a long game by movie/TV LABELS.

Everything else is dinner theater. The mob wants bread.

Anonymous Coward says:

We're No. 1!

“1. Disinformation that undermines trust in our institutions, democracy, free press, and markets.”

“Disinformation” being in the eye of the beholder, how could this possibly go wrong for sites such as TD?! Don’t even get me started on the undermining of trust.

All those Warner brothers are so wacky.

Anonymous Coward says:

I just read the paper, or at least most of it.

Warner sets out a premise that essentially dittoes the view of the unholy trinity of cabal news, then regurgitates the summaries of a bunch of high-level reports in a circular argument.

His premise is essentially a product of the very problem he presumes to want to mitigate. And he doesn’t seem to have a problem with that, or, as far as I can tell, even be aware of it. Clearly introspection is not the guy’s strong point.

There are a lot of popular but unsubstantiated claims. What I found most disturbing is that he seems to propose formalizing a new industry of civil rights brokering. Which is in fact what is already going on. The problem is not that this needs to be commoditized responsibly. The problem is that it is commoditized AT ALL.

Though he probably doesn’t realize it, there are a lot of famous arguments that sound very much like this paper. In fact they were made by a fellow Democrat… John C. Calhoun.
