Senator Feinstein Brings Back Horrible Bill Forcing Internet Companies To Report On Your 'Suspicious' Behavior
from the it-was-a-bad-idea,-drop-it dept
Earlier this year, Senator Dianne Feinstein, who seems to be an endless well of bad ideas around surveillance, started pushing a bill that would require internet companies to report to the government any content they suspected was posted by terrorists. This bill has all sorts of problems, not the least of which is that most of the major internet companies already alert the government to any terrorist-related content that they come across. But mandating such reporting will only lead to these companies filing a bunch more reports — many of them bogus — flooding the government with useless information, just to avoid running afoul of the law.
Back in September, Senator Wyden successfully forced Feinstein to drop the bill…
But, of course, in the wake of the Paris and San Bernardino attacks all bad ideas are back on the table, and Feinstein is bringing this one back as well. She’s teaming up with the intelligence community’s other biggest cheerleader in the Senate, Intelligence Committee boss Senator Richard Burr, to reintroduce the idea, and they put out a completely bogus statement that plays up the fearmongering angle as much as possible, about those darn ISIS people using social media.
“We’re in a new age where terrorist groups like ISIL are using social media to reinvent how they recruit and plot attacks,” Senator Feinstein said. “That information can be the key to identifying and stopping terrorist recruitment or a terrorist attack, but we need help from technology companies. This bill doesn’t require companies to take any additional actions to discover terrorist activity, it merely requires them to report such activity to law enforcement when they come across it. Congress needs to do everything we can to help intelligence and law enforcement agencies identify and prevent terrorist attacks, and this bill is a step in the right direction.”
“Terror groups have become adept at taking advantage of social media platforms to spread their message,” Senator Burr said. “Social media is one part of a large puzzle that law enforcement and intelligence officials must piece together to prevent future attacks. It’s critical that Congress works together to ensure that law enforcement and intelligence officials have the tools available to keep Americans safe. The stakes have never been higher and having cooperation with these outlets will help save lives here and abroad.”
Neither of those quotes makes any sense. Again, most companies already report this stuff, and mandating it will only lead to more bogus reports filed by companies trying to avoid liability — while potentially leading to less active monitoring, since they only have to report content if they come across it. As for Burr’s assertion that this is necessary to give law enforcement “the tools” to find this information — that’s a totally different issue. Doesn’t law enforcement have computers? Can’t they go to Twitter and Facebook and YouTube themselves and do searches?
Senator Wyden has already spoken out on what a bad idea this is, and how it would do the exact opposite of what Feinstein and Burr are claiming:
Let’s make sure the record is clear: The Director of the FBI testified a few months ago that social media companies are ‘pretty good about telling us what they see.’ Social media companies must continue to do everything they can to quickly remove terrorist content and report it to law enforcement.
I’m opposed to this proposal because I believe it will undermine that collaboration and lead to less reporting of terrorist activity, not more. It would create a perverse incentive for companies to avoid looking for terrorist content on their own networks, because if they saw something and failed to report it they would be breaking the law, but if they stuck their heads in the sand and avoided looking for terrorist content they would be absolved of responsibility.
I’m for smart security policies. If law enforcement agencies decide that terrorist content is not being identified quickly enough, then the solution should be to give those agencies more resources and personnel so they know where to look for terrorist content online and who to watch, and can ensure terrorist activity is quickly reported and acted upon.
Meanwhile, CDT has gone much further in explaining why this is such an astoundingly dumb idea:
Why is this proposal such a bad idea? As we described in July, it would create a requirement for all electronic communication services – social media companies, as well as Internet service providers, web hosts, cloud services, and public libraries or coffee shops that offer WiFi access – to make reports about their users’ activity based on a completely opaque set of criteria. Creating such an obligation, with its vague parameters, would drive Internet companies to one of several likely responses. Some would decide to significantly over-report their customers’ information and private communications to the US government to ensure that the company stays on the right side of the law. Others would refuse to review any content that was flagged to them, for fear that doing so would mean they obtain the “actual knowledge of any terrorist activity” that triggers the reporting obligation.
Either of these outcomes poses major problems for the free expression and privacy of Internet users. It’s also far from clear that this would generate actionable information for law enforcement or intelligence agencies. Further, this type of reporting obligation would undermine any sense of trust between Internet users and the service providers that enable them to access information, conduct transactions, and share their perspectives online. The proposal would essentially deputize US-based Internet companies to act as agents of the government, including potentially requiring entities such as email services to turn over the contents of private communications if they are part of the “facts and circumstances” of alleged terrorist activities – for their users both in the US and abroad.
It’s a bad idea and Feinstein knows it’s a bad idea, because all of this has been explained to her multiple times in the past. So why is she still proposing it?