Cali Lawmakers Pushing For 72-Hour Bot Removal Requirements For Social Media Companies

from the bad-idea,-worse-implementation dept

Following in the footsteps of misguided European lawmakers, California legislators have introduced a time-sensitive “remove speech or else” law targeting social media sites.

They’ve introduced a bill that would give online platforms such as Facebook and Twitter three days to investigate whether a given account is a bot, to disclose that it’s a bot if it is in fact auto-generated, or to remove the bot outright.

The bill would make it illegal for anyone to use an automated account to mislead the citizens of California or to interact with them without disclosing that they’re dealing with a bot. Once somebody reports an illegally undisclosed bot, the clock would start ticking for the social media platform on which it’s found. The platforms would also be required to submit a bimonthly report to the state’s Attorney General detailing bot activity and what corrective actions were taken.

This is ridiculous for a number of reasons. First, it assumes the purpose of most bots is to mislead, hence the “need” for upfront disclosure. The ridiculousness of this premise, one of the law’s many faulty ones, is only further underscored by a bot created by the legislator behind the bill, Bob Hertzberg. His bot’s bio says [emphasis added]:

I am a bot. Automated accounts like mine are made to misinform & exploit users. But unlike most bots, I’m transparent about being a bot! #SB1001 #BotHertzberg

Hertzberg’s bot must have been made to “misinform and exploit users,” at least according to its own Twitter bio. And yet the account’s tweets appear to disseminate accurate, useful information, like subcommittee webcasts and community-oriented announcements. It’s good the bot is transparent. But it’s terrible that the disclosure immediately follows a line claiming automated accounts are made, apparently, solely to misinform people.

Plenty of automated accounts never misinform or exploit users. Techdirt’s account automatically tweets each newly-published post, as do countless other bots tied into content-management systems. But the bill, and its creator’s own words, paint bots as evil, even as that creator deploys a bot in an abortive attempt to make a point.
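For the sake of concreteness, here is a minimal sketch of what that kind of harmless bot looks like: it polls a site’s RSS feed and posts a link to anything new. The feed URL and the post_to_twitter() stub are placeholders for illustration, not Techdirt’s (or anyone’s) actual setup.

import time
import urllib.request
import xml.etree.ElementTree as ET

FEED_URL = "https://example.com/feed/"   # placeholder: the site's RSS feed
seen_links = set()

def post_to_twitter(text):
    # Stand-in for whatever posting API the site actually uses.
    print("Would tweet:", text)

def check_feed():
    # Fetch the feed and tweet any items we have not announced yet.
    with urllib.request.urlopen(FEED_URL) as resp:
        root = ET.fromstring(resp.read())
    for item in root.iter("item"):
        title = item.findtext("title", default="")
        link = item.findtext("link", default="")
        if link and link not in seen_links:
            seen_links.add(link)
            post_to_twitter("New post: " + title + " " + link)

while True:
    check_feed()
    time.sleep(600)   # poll every ten minutes

Nothing here is deceptive; the account exists purely to announce new posts. Under the bill’s framing, though, it is lumped in with the accounts actually built to mislead.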

Going on from there, the bill demands sites create a portal for bot reporting and starts the removal clock when a report is made. User reporting may work better than algorithmic detection (which would amount to putting bots in charge of bot removal), but it still puts social media companies in the uncomfortable position of being arbiters of truth. And if they make the “wrong” decision and leave a bot up, the government is free to punish them for noncompliance.

The bill also provides no avenue for those targeted to challenge a bot report or removal. (And no option for sites to challenge the government’s determination that they’ve failed to remove bots.) This is a key omission which will lead to unchecked abuse.

Finally, there’s the motivation for the bill. Some of it stems from a desire to punish “fake news,” a term no government has ever clearly defined. Some of it comes from evidence of Russian interference in the last presidential election. But much of the bill’s impetus is tied to vague notions of “rightness.” Hertzberg himself exhumes a long-dead catchphrase to justify his bill’s existence.

“We need to know if we are having debates with real people or if we’re being manipulated,” said Democratic State Senator Bob Hertzberg, who introduced the bill. “Right now we have no law and it’s just the Wild West.”

So, summary executions of bots by social media posse members? Is that the “Wild West” you mean, one historically notorious for its lack of due process and violent overreactions?

Here’s the other excuse for bad lawmaking, via an advocate for terrible legislation.

“California feels a bit guilty about how our hometown companies have had a negative impact on society as a whole,” said Shum Preston, the national director of advocacy and communications at Common Sense Media, a major supporter of Hertzberg’s bill. “We are looking to regulate in the absence of the federal government. We don’t think anything is coming from Washington.”

So, secondhand guilt justifies the direct regulation of third-party service providers? That’s almost worse than no reason at all.

And this isn’t the only bad bot bill being considered. Assemblymember Marc Levine wants all bots to be tied to verified human beings. The same goes for any online advertising purchases. Levine feels his bill will help fight the bot problem, but his belief is predicated on a profound misunderstanding of human behavior.

By identifying bots, users will be better informed and able to identify whether or not the power of a group’s influence is legitimate. This will mitigate the promulgation of misinformation and influence of unauthentic social media campaigns.

Yes, telling people the stuff they think is legitimate isn’t legitimate always results in people ditching “illegitimate” news sources. Especially when that info is coming from a government they don’t like presiding over a state many wish would just fall into the ocean. Trying to fight a bot problem largely associated with alt-right groups with legislation from coastal elites is sure to win hearts and minds.

A bot-reporting portal with no recourse provisions — and a possible “real name” requirement added into the mix — will become little more than a handy tool for harassment and hecklers. The cost of these efforts will be borne entirely by social media companies, which will also be held responsible for the mere existence of bots the California government feels might be misleading its residents. It’s bad lawmaking all around, propelled by misplaced guilt and overstated fears about the democratic process.



Comments on “Cali Lawmakers Pushing For 72-Hour Bot Removal Requirements For Social Media Companies”

Mason Wheeler (profile) says:

This is ridiculous for a number of reasons. First, it assumes the purpose of most bots is to mislead, hence the "need" for upfront disclosure.

Remove a tiny bit of oversimplification and it becomes a whole lot less ridiculous:

It assumes the purpose of most bots that pretend to be people rather than bots is to mislead

Not only is this not ridiculous, it’s trivially true.

Mark Murphy (profile) says:

Re: Re:

Quoting the legislation:

“Bot” means a machine, device, computer program, or other computer software that is designed to mimic or behave like a natural person such that a reasonable natural person is unable to discern its artificial identity.

Nobody is hand-flipping bits in a drive when they post to online platforms. They post via software (or, on occasion, butterflies).

So, when you posted your comment, most likely you used a Web browser. Are you a bot? After all, you did not hand-flip bits in a drive at a Techdirt server. You used a "computer program".

If you wish to claim that using a Web browser does not make one a bot, then the implication is that the source of the material typed into the Web browser is what determines "bot-ness" (bot-osity? bot-itude?). But I doubt that many of the Russian trolls used artificial intelligence to generate their posts from whole cloth. Rather, most likely, the origin of the posts was human, with software doing things like making mild random alterations, such as word substitution, to help defeat anti-spam measures, along with bulk posting.

So, where is the dividing line? Does the use of a spell-checker make one a bot? After all, by definition, that spell-checker auto-generated part of the post, substituting words that appear to come from a "natural person". What about mobile social network clients that offer suggested basic replies? Does that make their users bots, if they choose a canned reply, if that canned reply appears to come from a "natural person"? Does retweeting make one a bot?

Anonymous Coward says:

Re: Re:

99.9% of the bots I see are there to advertise to me, redirect me to another site, sell me something, or apply trackers.

A lie? It's worthless to a REASONABLY smart person.

MOST of the lies we get are based on one fact: NO INPUT. Our government isn't telling us ANYTHING. There is so much BS flying around that it's hard to tell who, what, where, when, how, or why, or whether any of it has anything to do with them HOLDING A JOB.

Anonymous Coward says:

Going on from there, the bill demands sites create a portal for bot reporting and starts the removal clock when a report is made.

And that opens up a new vector of attack on social media sites, possibly forcing them into auto-disabling accounts they cannot examine within the time limit due to the sheer number of reports. It could become a new form of denial-of-service attack, delivered via botnets.
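A back-of-the-envelope sketch (every number here is invented) of how a report flood outruns a fixed review capacity inside the 72-hour window:

# Toy model of the reporting portal under a report flood.
# All numbers are made up for illustration.
REPORTS_PER_HOUR = 50_000   # botnet filing bogus bot reports
REVIEWS_PER_HOUR = 2_000    # platform's review capacity
DEADLINE_HOURS = 72         # statutory clock on each report

backlog = 0
for hour in range(DEADLINE_HOURS):
    backlog += REPORTS_PER_HOUR                 # new reports arrive
    backlog -= min(backlog, REVIEWS_PER_HOUR)   # as many as possible get reviewed

# Anything still queued has been waiting up to 72 hours; the oldest
# unreviewed reports have already blown the statutory deadline.
print(f"Unreviewed reports after {DEADLINE_HOURS} hours: {backlog:,}")

With those assumed rates the backlog grows by tens of thousands of reports an hour, and the only ways out are mass auto-disabling or missing the deadline.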

AdvertisersWin says:

Now design, not bots, will drive usage

This is good for the industry. They just don’t know it yet.

Bots drove fake ad clicks, bots drove fake viewership numbers which relate to ad revenue and how much is paid out.

This is going to remove a liability for sites and make them the sole owners of, and wholly responsible for, their content or collected works.

If it’s on a site, that site is now entirely responsible. It removes layers of accountability and makes requests or removals the responsibility of the site, not a third party.

Now the real value propositions come: the clicks are reliable, and the revenues are based on real value, not fake bot accounts clicking links or falsely driving conversations.

If any social media company is worth using, it will be because actual users like the platform's design. Companies that want to sell will have a level playing field, and advertisers can better gauge actual sales.

It's a win for companies that want to use social media to sell, and a win-win for advertisers.

Anonymous Coward says:

Stupid and the problem is not new

This isn't even new. Spam has been around for over forty years. Troll farming has been a thing for well over a decade if you count the Fifty Cent Party alone, to say nothing of review writing. Turing-testing everyone won't stop the propaganda, much less the utility. The problem is human.

A slightly less dumb idea would be to meet with industry groups and suggest making bot vs. non-bot flagging on accounts transparent – especially for services that promote API posting for legitimate usage. Say, Twitter could put blue borders around bot-posted tweets – but that is very much a fig leaf.
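Purely as an illustration, a minimal sketch of that kind of flagging, assuming a made-up "source" field on each post (not any platform's real metadata):

# Label posts made through the API differently from posts made
# through the official web/mobile clients. Client names are invented.
OFFICIAL_CLIENTS = {"web", "mobile_ios", "mobile_android"}

def render_post(post):
    # Anything not posted via an official client gets a visible marker.
    label = "" if post["source"] in OFFICIAL_CLIENTS else "[BOT/API] "
    return label + post["author"] + ": " + post["text"]

print(render_post({"author": "newsbot", "source": "api_v2", "text": "New post is up"}))
print(render_post({"author": "alice", "source": "web", "text": "hello"}))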

Even putting aside all of the cynical reasons not to do so (advertisers, inflating apparent user counts), securing it would be problematic. Computers can simulate all of a user's input, down to mouse clicks and typing. Throwing captchas everywhere would deter users and make the interface clunkier. Even worse, it wouldn't work, given that captcha-solving services are all over the place, often handing out some e-currency or game asset in exchange for solving them.

Really, if you want to fight propaganda, fund critical-thinking courses for everyone and offer tax credits or subsidies for taking them; that is much harder to circumvent.

Anonymous Coward says:

reasons largely political

Because of the widely-held belief that pro-Trump bots on social media sites swung the 2016 presidential election, it should not surprise anyone that the vehemently anti-Trump state of California would spearhead this effort to crack down on bots of all kinds (as it would look awfully bad to pass a law that only applied to one political party).

Facebook and Twitter engaged in a massive search-and-destroy mission for pro-Trump (as well as pro-Le Pen) bots, and even bragged to the press about it — one of the only times these normally secretive (and widely despised) pogroms were ever admitted. As with selective “leaks” to the press, selective rule enforcement by partisan business owners is of course nothing new, as Amazon’s Jeff Bezos has demonstrated numerous times when user-posted reviews are selectively tampered with.

Had Trump lost the election — and had that loss been blamed on anti-Trump bots — then there's little doubt that California's politicians would be quite happy with the situation, just as they were when Obama's 2008 win was attributed in part to social media dominance (bots included).

Anonymous Coward says:

use an automated account to mislead the citizens

So CA just banned advertising?

No, really. From a technical standpoint, profile-driven advertising is not easy to distinguish from bots. And advertising is by definition deceptive: “building value” is not distinguishable from “lying to convince some dumb shit that the market value is different than the utility value.”

So there is to be no more advertising in CA? Cool. Moving there. Sounds great!

Pretty much all IT law passed in the past decade says the same thing:

“We don’t understand WTF is going on, but WHAAA! Us good! Not-us bad! Here is a bunch of arbitrary bullshit that means nothing, that we will summarily use to persecute anyone we don’t like.”

fairuse (profile) says:

State Will Get $ Upfront - Speed Camera Trap Mod 4 Bots

“Once somebody reports an illegally undisclosed bot,(..)”, please find clothes for it.

(begin simulation)
Bot was declared misleading. It wanted to lead.

Its job was helpful not damaging to public. Its function was to notify taxpayers when state lawmakers meet; Also, update schedule, alert writers, and return estimate of citizens attending.

The 72 hour starts … 0 hour. Now what can be done for this homeless bot?

Why is the press absent?
No comments by taxpayers because they were not notified.
Bills for zoning, water management and council pay raise pass without objection.

A City Hall con game drives this kind of bill, not the general public's well-being.
(end simulation)
