I can find occasional items in this post on which I might disagree and might be interested in a reasoned debate (which is not my goal for this comment). But, with the view from 10,000 feet at least, I hear a legitimate, good faith argument. If people want to advocate different positions on net neutrality or governance, they are entitled to do that but I reject any generalized ad hominem attack.
(Or, did I read it wrong and the previous commenters were just being sarcastic?)
Re: Re: Re: Re: Re: A Fair Question
The Golden Toilet of the former Prez (https://toilet-guru.com/trump.php) may be mega-flush capable. No-knock surely justified at that address.
Re: Nondiscrimination Aspect
Doubtful. I found a draft version of the law from last year online (not the current version). It has major penalties for trying to circumvent the operation of the law, and I am guessing that part has not changed. If Facebook were willing to close down in-person operations in Australia and access that market solely via the Internet, Australia might struggle to enforce the sanctions in the law but, if you are Facebook, that's playing with fire.
Re: Re: Re: Question about Australian Law
I get that answer in the weird corner case where the arbitrator imposes a large one-time fee (or maybe a large fee covering the next N years). At that point, Anonymous Coward's analysis seems right--they'd have to pay or they'd have to exit Australia entirely. Whether the fee would still be enforceable via some international agreement is beyond my expertise.
But suppose the arbitrator imposes a pay-per-use fee, measured, somehow, by the amount of content to which they link. Unless that fee has some obnoxious minimum that applies even when they link to zero content, it seems the nuclear option would remain available.
I assume Facebook has some significant legal talent gaming out these scenarios.
Question about Australian Law
I haven't tried to read this pending Australian law and I have a question for anyone who has. Consider the scenario where Facebook tries to negotiate a fee, the negotiation fails, and an arbitrator then determines the fee. Maybe the ultimate number is one that, as a business matter, Facebook decides it can live with. But suppose the number is one that Facebook finds intolerable. Why can it not, at that later point, return to the nuclear option, e.g., refusing to post or allow posting of anything that would make it subject to the fee?
Re: Re: Re: Re: Re: Hoops
Yes, I agree.
Re: Re: Re: Hoops
This is a legitimate concern. Figuring out how to balance the burden is challenging. I do not have the skills to propose a system for which I want to advocate, but neither am I convinced that the current system is the best we can do.
Re: Hoops
Mr. Stone says:
Your entire idea hinges on the premise that content should be taken down before it is determined to be defamatory. That is a gross violation of the First Amendment — in spirit, if not in letter.
No, I don't believe I said that (and I certainly did not intend it). You and I seem to agree it would be a problem to have a system where Babcock's mere allegation of defamation results in a takedown unless the counter-party jumps through hoops to prove the negative (i.e., no defamation).
I am hypothesizing a different system where Babcock has to get a court to confirm there was defamation. In that hypothetical system, the original poster would have an option to contest the allegation of defamation. Maybe the platforms would have that option too. Maybe an industry organization, in which the platforms could participate, would have that option too. Only when Babcock won his case would potential liability be triggered for platforms.
To whom, exactly, would that pose an unfair burden?
Re: Re: Is it really binary?
I agree with lots of what you say, e.g., re better approaches they could have taken. Good points.
On intermediary liability, let me elaborate. I would NOT propose a simple DMCA takedown system expanded to cover allegations of defamation just as the current DMCA system covers allegations of infringement. The abuses of the current system have been well explained on this site and an expansion of this kind would make it much worse. However, suppose we invent some additional hoops through which the party seeking takedown must jump. For example, if I go into court seeking a preliminary injunction, the judge can, as a condition of granting the injunction, require me to post a bond that I forfeit if I cannot prove my case. (Further explanation may be found at https://www.performancesuretybonds.com/blog/what-is-the-purpose-of-a-preliminary-injunction-bond/.) It's up to the judge to decide whether I am likely to prevail on the merits (if not, my request for an injunction is denied) and also how big a bond is appropriate to balance the equities.
A system like that is not perfect and, like almost any system, is subject to abuse. But at least there is a control built in to moderate the potential for abuse. A problem with the current DMCA system is that it puts too much burden on the party opposing takedown. This is one way (maybe there are others) to shift the balance of the burdens.
By the way, I do very much appreciate the civility of your comment.
Re: Re: Is it really binary?
A fair comment and concern.
Is it really binary?
If the choice is between no intermediary liability ever and 100% intermediary liability always, I guess I might reluctantly choose the former. But I am not convinced there are no other choices.
The NYTimes article (whose factual recitations I have not attempted to verify) says that the victims spent a lot of resources getting RipOffReport to take down the defamatory content. It also says that multiple other websites with overlapping business models refused to take down the content and asserted Sec. 230 as a defense. So, on one interpretation of this data, various bad actors (RipOffReport and its ilk) have profited at the expense of innocent victims, the extended Babcock family.
Some commenters dismiss this as a non-issue by pointing out that the victims include Canadian citizens and that Sec. 230 is not a Canadian law. I don't follow that logic: the NYTimes article asserts that the Babcock family is spread across the U.K., Canada, and the U.S. and that they have been defamed globally. If we assume (though I cannot prove it) that some of the defamation comes from U.S.-hosted websites, U.S. law is relevant, and it is not constructive to note that some of the victims are Canadian.
Back to the binary issue. The common law legal system (spanning multiple countries and multiple centuries) often imposes intermediary liability, e.g., in cases where there is no effective and practical remedy against the "real" villain. I acknowledge the risks associated with that approach and am not so confident in the trade-offs (e.g., vis-à-vis the consequences for free speech) that I want immediately to argue for imposing that liability. However, I think it is worthy of reasoned debate.
I am not particularly inclined to select Google or other search engines as the optimal targets of intermediary liability, as market forces and their own self-interest might suffice. But what about platforms like RipOffReport? Might subjecting them to intermediary liability be justified?
Some of the audience will, of course, note that I raised a similar concern in comments on another post a few days ago. Some of the respondents to those earlier comments decided that obscenities made a good counter-argument. Perhaps those respondents will be a little more thoughtful this time.
Re: Re: leather products
Also, on your first question, I return to the NYTimes Babcock/Atas article that got me started on this thread. The Babcocks were damaged, and continue to be damaged, online by the activities of a person against whom there is little or no effective remedy. In a perfect world, they'd have a remedy against someone. One source of the damage to them seems to be outfits like "RipOffReport" and other "complaint" sites. So, maybe a remedy against them? But maybe it's a slippery slope and better to leave things alone. I don't have a perfect answer--just wanted to ask the question.
Re: Re: leather products
I have the most difficulty with the latter question and agree that, if politicians start fiddling with Section 230, there is clearly a very great risk of undesirable consequences. That might be the best argument I've heard for not touching it at all. (As to the former question, we've seen multiple cases and essays where the judge or the author manages to misunderstand Section 230. With 20-20 hindsight, one can imagine drafting the words to avoid those misunderstandings.)
Re: More of the same
I apologize for the (gentle) flattery. It was not designed to have anyone "go easy" but, rather, to invite reasoned discourse. I do not want to destroy Sec. 230, but neither do I think it so perfect as to be beyond improvement. When I looked a little while ago, the NYTimes article had several hundred comments, the large majority from uninformed people who do not understand the issue, many of whom think the Babcock story is a good reason to destroy Sec. 230 and/or take various steps antithetical to traditional First Amendment rights. To argue that Sec. 230 is beyond improvement runs the risk that those uninformed people and their uninformed political representatives will prevail on these issues.
Sec. 230
Mr. Stone, whose thoughtful commentary I admire, says "if the system still works on notice-and-takedown, it will still suck." He may be right, but I want to keep trying. Let's invent a system where takedowns have to be "certified" before they take effect. We invent a certification system involving independent arbitrators whom the issuer of the takedown request has to pay (pick a number) to do their job. The system can identify someone as "vexatious" and increase the fee due on subsequent takedowns. There is a suitable appellate process to deal with situations where the first level gets it wrong.
Once again, I repeat, this is just brainstorming. I am a fan of Sec. 230 vs. repealing it or vs. any of the public proposals for change. But surely it is not perfect. Can we not do better?
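Purely to make the brainstorm above concrete, here is a minimal sketch of the "certified takedown" idea in Python. Everything in it is a hypothetical assumption of mine (the fee amounts, the vexatious threshold, the class and function names); nothing here comes from Sec. 230 or any actual proposal.

```python
# Toy model of a certification system where a takedown filer pays an
# arbitration fee, and filers flagged as "vexatious" (too many requests
# that fail certification) pay an escalated fee on later requests.
# All constants below are hypothetical illustrations.

from dataclasses import dataclass

BASE_FEE = 100            # hypothetical fee per arbitration
VEXATIOUS_MULTIPLIER = 5  # hypothetical escalation for repeat abusers
VEXATIOUS_THRESHOLD = 3   # failed requests before a filer is flagged

@dataclass
class Requester:
    name: str
    failed_requests: int = 0

    @property
    def is_vexatious(self) -> bool:
        return self.failed_requests >= VEXATIOUS_THRESHOLD

def arbitration_fee(requester: Requester) -> int:
    """Fee due before an arbitrator will review a takedown request."""
    if requester.is_vexatious:
        return BASE_FEE * VEXATIOUS_MULTIPLIER
    return BASE_FEE

def process_request(requester: Requester, certified: bool) -> int:
    """Charge the fee; track failures so repeat abusers pay more."""
    fee = arbitration_fee(requester)
    if not certified:
        requester.failed_requests += 1
    return fee

# A filer whose requests keep failing certification pays escalating fees.
filer = Requester("example-filer")
fees = [process_request(filer, certified=False) for _ in range(4)]
print(fees)  # [100, 100, 100, 500]
```

The point of the sketch is only that the incentive structure is easy to state precisely: the fee schedule, not a platform's moderation budget, absorbs the cost of abusive filings.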
Re: Re: Reworking Section 230
One cannot prove anything via weird corner cases which, perhaps, this is. But the victims described in this article seem to have deployed multiple legal resources to address their problem without success. Is it impossible to think of anything to improve the situation? Just to throw out an idea, suppose new legislation created a DMCA-style take down system. Without more, it would surely be abused but, again, suppose we try to fix that. E.g., before I can send a take down notice, I have to deposit a $1,000 bond with an independent escrow agent. If a court ends up concluding my take down was in bad faith, I forfeit the bond. This is just off the top of my head so it undoubtedly has bugs which the Techdirt community will soon point out. But the Techdirt community contains a lot of smart people--let's see if we can debug the idea or come up with a better idea.
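To make the bond idea above concrete, here is a minimal sketch in Python. The $1,000 figure is the one floated in the comment; everything else (the class, method names, and escrow mechanics) is a hypothetical assumption of mine, not a description of any real system.

```python
# Toy model of the bond-escrow idea: a takedown notice is only accepted
# if accompanied by a full bond, which a court later refunds or, on a
# finding of bad faith, causes to be forfeited.

BOND_AMOUNT = 1000  # deposit required before a notice is accepted

class EscrowAgent:
    """Holds bonds posted by takedown senders until a court rules."""

    def __init__(self):
        self.held = {}      # notice_id -> bond amount
        self.forfeited = 0  # total forfeited by bad-faith senders

    def accept_notice(self, notice_id: str, deposit: int) -> bool:
        """A notice is only valid if the full bond accompanies it."""
        if deposit < BOND_AMOUNT:
            return False
        self.held[notice_id] = deposit
        return True

    def resolve(self, notice_id: str, bad_faith: bool) -> int:
        """On a court ruling: refund the bond, or forfeit it if bad faith."""
        bond = self.held.pop(notice_id)
        if bad_faith:
            self.forfeited += bond
            return 0
        return bond  # refunded to the sender

agent = EscrowAgent()
agent.accept_notice("notice-1", 1000)
refund = agent.resolve("notice-1", bad_faith=True)
print(refund, agent.forfeited)  # 0 1000
```

The design choice worth debating is exactly the one raised above: the bond shifts the cost of a wrongful takedown onto the sender at filing time, rather than onto the target after the fact.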
Reworking Section 230
As an intro to this comment, I am a fan of Sec. 230 and get the arguments about the difficulties with content moderation at scale. But it strikes me that there is an alternative to articles explaining, every time someone has a proposal to amend Sec. 230, why the proposal is defective (as, indeed, they generally are). The alternative would be to propose one or more ways to improve Sec. 230. I do not have an immediate proposal in mind but this comment is triggered by this article in the NYTimes: www.nytimes.com/2021/01/30/technology/change-my-google-results.html. It describes a situation where the true villain is essentially "judgment proof" and the only way to remedy the situation may be to force the platforms hosting the villain's posts to take some constructive action. I would like to hear what everyone thinks about this.
Editorial Discretion
This is from a Fox News article about the views of Brit Hume:
Fox News Senior Political Analyst Brit Hume is calling out big social media companies for banning President Trump's accounts based on what he says are "pure editorial judgments." "These social media companies have a legal right to do this, but they should not then pose as open platforms entitled to legal protections from the legal risks faced by publishers," he wrote in a tweet Friday night.
There is more in the article that I have not excerpted above. It is funny/sad that Mr. Hume seems to think editorial judgments might be a bad thing and that someone who makes editorial judgments ought to be treated as a publisher. As to whether he understands but does not like Section 230, or just does not understand it, I cannot say.
Another Reason Sec. 230 is Needed
Check out the insane (should be in all caps) lawsuit described here: https://www.theverge.com/2020/8/18/21373530/robert-kennedy-childrens-health-defense-facebook-zuckerberg-fact-checking-lawsuit
Big Guns
Epic hired Cravath, among the most prestigious law firms in the known universe, with a litigation team led by a former Assistant Attorney General for the Antitrust Division of DOJ. Hard to imagine a more intense opening salvo.