Top German Court Says Facebook Must Inform Users About Deleting Their Posts Or Suspending Their Account, Explain Why, And Allow Them To Respond

from the hating-the-hate-speech-hate dept

We’ve just written about Germany’s constitutional court grappling with the issue of whether government users of zero-days for surveillance have a responsibility to report the flaws they use to the relevant developers. Another senior court in the country has been pondering an even thornier question that is occupying judges and lawmakers around the world: how should social media police so-called “hate speech” on their services in a way that respects fundamental rights on all sides?

Germany’s Federal Court of Justice issued its judgment regarding two similar cases (pointed out by Matthias C. Kettemann on Twitter). Both involved posts that Facebook removed because it said they went against the social network’s community standards governing hate speech. In addition, Facebook temporarily blocked the accounts of the users who wrote the posts. When the lower German courts refused to overturn Facebook’s moves completely, the users appealed to the Federal Court of Justice, which not only ordered Facebook to reactivate the two accounts, but also told it to refrain from blocking the re-posting of the deleted comments. The court ruled that Facebook’s rules governing the removal of posts and the blocking of user accounts were “invalid”, because “they unreasonably disadvantage the users of the network contrary to the requirements of good faith.” The court went on to explain its reasoning (translation by DeepL of original in German):

In this case, the conflicting fundamental rights of the parties — on the side of the users the freedom of expression from [Article 5 (1) sentence 1 of Germany’s Basic Law], on the side of the defendant [Facebook] above all the freedom to exercise a profession from [Article 12 (1) sentence 1 of Germany’s Basic Law] — must be considered and balanced according to the principle of practical concordance in such a way that they become as effective as possible for all parties. This balancing shows that the defendant is in principle entitled to require the users of its network to comply with certain communication standards that go beyond the requirements of criminal law (e.g. insult, defamation or incitement of the people). It may reserve the right to remove posts and block the user account concerned in the event of a breach of the communication standards. However, in order to strike a balance between the conflicting fundamental rights in a manner that is in line with the interests of the parties, and thus to maintain reasonableness within the meaning of [Section 307 (1) sentence 1 of the Civil Code of Germany], it is necessary that the defendant undertakes in its terms and conditions to inform the user concerned about the removal of a post at least subsequently and about an intended blocking of his or her user account in advance, to inform him or her of the reason for this and to grant him or her an opportunity to respond, followed by a new decision.

Germany’s Federal Court of Justice is trying to balance two conflicting rights — freedom of speech, and freedom to exercise a profession. Its solution is to require companies like Facebook to inform users about the removal of a post — at least retrospectively — to tell them in advance about the blocking of an account, explain why, and to allow users to respond so that the decision can be reconsidered. That’s a new, general approach that can be applied to a wide range of online services. However, as Matthias C. Kettemann pointed out on Twitter, it leaves important questions unanswered, including the issue of spam accounts, and of account suspensions, rather than deletions. Given their importance, we can probably expect future judgments to tackle these points in due course.

Follow me @glynmoody on Twitter, Diaspora, or Mastodon.

Companies: facebook


Comments on “Top German Court Says Facebook Must Inform Users About Deleting Their Posts Or Suspending Their Account, Explain Why, And Allow Them To Respond”

16 Comments
That One Guy says:

'I said the color of trees, not green, give my account back.'

Well, that’s nice of the German courts. Now every other country will get to see what a monumentally stupid idea that is, as spam and troll accounts swamp Facebook with demands to explain why they were suspended, and with appeal after appeal — either because they weren’t “really” violating the rules, or because their next violation wasn’t in specifically the way they were told was a violation the last time.

MathFox says:

Re: Re:

This ruling allows the platform provider to remove any post, provided that it informs the user of the platform about the reason. "Government says so" is quite simple, add a copy of the deletion request. "Moderator thinks post is bad" requires a reference to platform rules.

I do find this ruling interesting because it says that a platform cannot hide behind unfair terms of service. (This is in line with EU law, which puts a limit on how unfair a TOS is allowed to be.) A company is expected to behave reasonably and to consider basic human rights when making decisions.

sumgai says:

Re: Re: Re: Re:

"Government says so" [is a good excuse to block a user]

I hope that you’re not advocating for government-controlled speech. While that won’t even get to first base in the US, thanks to the First Amendment, it’s a different story in the EU. In some countries you do have some (limited) government-controlled speech; for example, in France promoting Nazism is 100% interdit (forbidden). Elsewhere I’m not qualified to comment, but I do think that any government control over speech is always a bad idea.

sumgai says:

la

The court has made a couple of tactical errors here.

a) Fb can still block/hide/suspend/etc. an account by simply saying "Hey, we gave notice, and they didn’t respond in a timely manner, so….." There should’ve been a minimum time limit between the two events. As it stands, a single minute might seem reasonable to an AI that’s going to do the blocking automatically.

b) There’s nothing in there that compels Fb to change its mind. The intended "reconsideration and new determination" doesn’t have to come out in favor of the policy offender. After all, Fb isn’t going to (nor can it be required to) rewrite its TOS, TOU, AUP, and other policy statements mid-stream for any one given offender. More telling, it is required to apply its policies equally across the board, so it literally can’t show favoritism — it simply must not change its mind, provided the offense is real and not a mistake. (AIs rarely understand context, so mistakes can be rectified.)

But I do give the court credit for one thing: users have complained since Day One that notices of deficiency have been woefully… deficient in explaining why an action was taken. It’s exactly like your parents telling you that you can’t do something for the reason of "Because I said so!" That doesn’t sit too well with most adults, I’m certain. But the explanation needn’t be detailed, it can be nothing more than a list of checkboxes with short-and-sweet generalized descriptions. For example:

a) Bullying another user;
b) Threats of violence towards elected officials;
c) Espousing medical advice contrary to current medical practice;
d) Promoting discrimination contrary to established law;
e) yadda yadda, so on and so forth, blah blah, woof woof….

I don’t use Fb (or any other socially acceptable social media), so I’m a fine one to talk. But I do keep my ears open, and they get filled with this complaint vis-à-vis Fb and other platforms far too often for me not to notice.

Anonymous says:

Re: Re: At Least

Please explain to me how Facebook is fascist.

And to help you out with your explanation, here is what wikipedia has to say:

Fascism
Fascism (/ˈfæʃɪzəm/) is a form of far-right, authoritarian ultranationalism[1][2] characterized by dictatorial power, forcible suppression of opposition, and strong regimentation of society and of the economy …

So, again, please explain how Facebook is fascist?

Also, who is forcing you to use Facebook? If you don’t like the way it moderates its own private property, then quit using Facebook. Problem solved.

This isn’t rocket science people!

Ninja says:

I’m not sure if this is good or bad overall, but I am sure social media platforms are monolithic, inaccessible messes when it comes to giving people the chance to challenge a block/suspension/ban. I recently had my account suspended on Twitter because I used a homophobic slur to satirize those who use it seriously. The suspension was clearly automated, and I had two options: delete the tweet and wait 12 hours to have my account restored (I couldn’t interact, just read other tweets, for the duration), or challenge the ban and remain fully blocked for several days until someone hopefully reviewed my case, so that at the very least I wouldn’t have a strike registered on my account. The process was convoluted, so I decided the least painful route was to delete the tweet and accept the 12-hour restrictions.

Now, I do understand the scale involved — Twitter and Facebook are massive — and I wouldn’t expect any review to be fast. However, there are a few things that should be considered before restricting an account automatically or in response to people flagging it. Accounts with a clean history should be given more leeway and kept unblocked while somebody evaluates the user’s challenge, for instance. The same goes for older accounts, and accounts with a lot of activity, followers, and interactions (I know, bots — but if Bot Sentinel can identify bots on Twitter, then the companies themselves can too), as well as accounts that have previously been targets of mass flagging (i.e. left-leaning people attacked by right-leaning mobs, or vice versa). I could go on.

My point is, there have to be ways to give the user some protection, even if that too is done via algorithms.
