However, battling misinformation is not always so easy -- as Facebook discovered yesterday. Yesterday afternoon a bunch of folks started noticing that Facebook was blocking all sorts of perfectly normal content, including NY Times stories about Covid-19. Now, we can joke all we want about some of the poor NY Times reporting, but to argue that its reporting on Covid-19 is misinformation would be, well, misinformation itself. There was some speculation -- à la YouTube's warning -- that this could be due to content moderators being sent home and, over privacy concerns, not being allowed to do their content moderation duties remotely. But the company said that it was "a bug in an anti-spam system" and was "unrelated to any changes in our content moderation workforce." Whether you buy that or not is your choice.
Still, it's a reminder that any effort to block misinformation is going to be fraught with problems and mistakes, and trying to adapt rapidly, especially on a big (the biggest) news story with rapidly changing factors and new information (and misinformation) all the time, is going to run into some problems sooner or later.
So Pichai's comments to CNN shouldn't be seen as controversial, so much as they are explaining how large numbers work:
"It's one of those things in which let's say we are getting it right over 99% of the time. You'll still be able to find examples. Our goal is to take that to a very, very small percentage, well below 1%," he added.
This shouldn't be that complex. YouTube's most recent stats say that over 500 hours of content are uploaded to YouTube every minute. Assuming, conservatively, that the average YouTube video is 5 minutes (Comscore recently put the number at 4.4 minutes per video) that means around 6,000 videos uploaded every minute. That means about 8.6 million videos per day. And somewhere in the range of 250 million new videos in a month. Now, let's say that Google is actually 99.99% "accurate" (again, a non-existent and impossible standard) in its content moderation efforts. That would still mean ~26,000 "mistakes" in a month. And, I'm sure, eventually some people could come along and find 100 to 200 of those mistakes and make a big story out of how "bad" Google/YouTube are at moderating. But, the issue is not so much the quality of moderation, but the large numbers.
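The arithmetic above is easy to check for yourself. A quick sketch, using the figures from the text (500 hours uploaded per minute, a conservative 5-minute average video, and a hypothetical 99.99% accuracy rate):

```python
# Back-of-the-envelope math for YouTube moderation at scale,
# using the figures cited in the text.
HOURS_PER_MINUTE = 500    # hours of video uploaded per minute
AVG_VIDEO_MINUTES = 5     # conservative average video length
ERROR_RATE = 0.0001       # i.e., a hypothetical 99.99% "accuracy"

videos_per_minute = HOURS_PER_MINUTE * 60 // AVG_VIDEO_MINUTES  # 6,000
videos_per_day = videos_per_minute * 60 * 24                    # 8,640,000
videos_per_month = videos_per_day * 30                          # ~259 million
mistakes_per_month = int(videos_per_month * ERROR_RATE)         # ~26,000

print(videos_per_minute, videos_per_day, videos_per_month, mistakes_per_month)
```

Even at an accuracy level no moderation system actually achieves, the sheer volume guarantees tens of thousands of "mistakes" a month for critics to find.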
Anyway, that all seems fairly straightforward, but of course, because it's Google, nothing is straightforward, and CNBC decided to take this story and spin it hyperbolically as Google CEO Sundar Pichai: YouTube is too big to fix. That, of course, is not what he's saying at all. But, naturally, it's already being picked up on by various folks to prove that Google is obviously too big and needs to be broken up.
Of course, what no one will actually discuss is how you would solve this problem of the law of large numbers. You can break up Google, sure, but unless you think that consumers will suddenly shift so that not too many of them use any particular video platform, whatever leading video platforms there are will always have this general challenge. The issue is not that YouTube is "too big to fix," but simply that any platform with that much content is going to make some moderation mistakes -- and, with so much content, in absolute terms, even if the moderation efforts are pretty "accurate" you'll still find a ton of those mistakes.
I've long argued that a better solution is for these companies to open up their platforms to allow user empowerment and competition at the filtering level, so that various 3rd parties could effectively "compete" to see who's better at moderating (and to allow end users to opt-in to what kind of moderation they want), but that's got nothing to do with a platform being "too big" or needing "fixing." It's a recognition that -- as stated at the outset -- there is no "right" way to moderate content, and no one will agree on what's proper. In such a world, having a single standard will never make sense, so we might as well have many competing ones. But it's hard to see how that's a problem of being "too big."
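The "competition at the filtering level" idea can be made concrete with a small sketch. This is purely illustrative (the filter names and interface are hypothetical, not any platform's actual API): the platform exposes the raw feed, and each user opts into whichever third-party filter they prefer.

```python
# Hypothetical sketch of user-selected, third-party moderation filters.
# The platform serves raw posts; filtering competes at the edge.
from typing import Callable

Post = str
Filter = Callable[[Post], bool]  # returns True if the post should be shown

def strict_filter(post: Post) -> bool:
    """An illustrative third-party filter that hides spammy posts."""
    return "spam" not in post.lower()

def permissive_filter(post: Post) -> bool:
    """An illustrative filter for users who want everything."""
    return True

def timeline(posts: list[Post], chosen: Filter) -> list[Post]:
    """Build a user's feed using whichever filter they opted into."""
    return [p for p in posts if chosen(p)]

posts = ["hello world", "BUY SPAM NOW", "local news"]
print(timeline(posts, strict_filter))      # spam hidden
print(timeline(posts, permissive_filter))  # everything shown
```

The design point is that no single filter has to be "right": users dissatisfied with one filter switch to another, and filter authors compete on quality rather than the platform imposing one standard.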
The real issue -- as we've been trying to explain for quite some time now -- is that basic content moderation at scale is nearly impossible to do well. That doesn't mean sites can't do better, but the failures are not because of some institutional bias. Will Oremus, over at Slate, has a good article up detailing why this narrative is nonsense, and he points to the episode of Radiolab we recently wrote about, which digs deep into how Facebook moderation choices happen, where you quickly begin to get a sense of why it's impossible to do well. I would add to that a recent piece from Motherboard, accurately titled The Impossible Job: Inside Facebook's Struggle to Moderate Two Billion People.
These all highlight a few simple facts that lots of angry people (on all sides of political debates) are having trouble grasping.
“This is the difference between having 100 million people and a few billion people on your platform,” Kate Klonick... told Motherboard. “If you moderate posts 40 million times a day, the chance of one of those wrong decisions blowing up in your face is so much higher.”
Later in the piece, Klonick again makes an important point:
“The really easy answer is outrage, and that reaction is so useless,” Klonick said. “The other easy thing is to create an evil corporate narrative, and that is also not right. I’m not letting them off the hook, but these are mind-bending problems and I think sometimes they don’t get enough credit for how hard these problems are.”
This is why I've been advocating loudly for platforms to move the moderation decisions further out to the ends of the network, rather than doing it in a centralized fashion. Let end users create their own moderation system, or adapt ones put together by third parties. But, of course, even that has problems as well.
No matter what choices are made, there are significant tradeoffs. As the Motherboard article also highlights, what seems like a "simple" rule gets hellishly complex quickly when applied to other situations, and then you've suddenly increased the "error" rate and people get angry all over again and the whole mess gets blown out of proportion again.
“There's always a balance between: do we add this exception, this nuance, this regional trade-off, and maybe incur lower accuracy and more errors,” Guy Rosen, VP of product management at Facebook, said. "[Or] do we keep it simple, but maybe not quite as nuanced in some of the edge cases? [It's a] balance that's really hard to strike at the end of the day.”
As the Oremus piece notes, the "bias" of platforms when it comes to moderation is not "liberal" or "conservative," it's Capitalist. Having a platform overrun with spam and trolls is bad for business. Hiring enough people who can adequately review content within the correct context is somewhere between insanely cost prohibitive and impossible. So the platforms muddle by with imperfect review processes. Making moderation mistakes is also bad for business, and the platforms would love to minimize them, but "mistakes" are often in the eye of the beholder as well, again reinforcing that this is an impossible task. For everyone screaming about how Alex Jones should be kicked off platforms, there's a similar number of people screaming about how awful the platforms are that do kick him off. There is no "right" way to do this, and that's what every platform struggles with.
And, if you think that these platforms are unfairly silencing "conservatives" (which is the prevailing narrative right now), it's probably because you're not paying enough attention elsewhere. Black Lives Matter and other civil rights groups have complained about "racially biased" moderation in the opposite direction, saying that minority groups are regularly silenced on these platforms. Indeed, it's not hard to find a ton of reports about black activists having content removed from social media platforms. And for all the talk of Infowars being taken off these platforms, how many people noticed that the Facebook page of the Venezuelan socialist TV station Telesur was recently taken down as well?
Yes, it's fine to point out that these platforms (mainly Facebook, Twitter and YouTube) are really bad at moderating. But, unless you're willing to actually understand the scale at play, recognize how many mistakes are going to be made (and recognize how trolls are going to go nuts over correct decisions), you're playing into a false narrative to argue that any of these platforms are "targeting" anyone. It's not true.
How does an agency with the technical capabilities the FBI has miscount physical items? Apparently, you let software do the counting and hope for the best.
“The FBI’s initial assessment is that programming errors resulted in significant over-counting of mobile devices reported,’’ the FBI said in a statement Tuesday. The bureau said the problem stemmed from the use of three distinct databases that led to repeated counting of phones. Tests of the methodology conducted in April 2016 failed to detect the flaw, according to people familiar with the work.
This inflated the count from somewhere between 1,000-2,000 to nearly 8,000. That was the number used by Director Christopher Wray and AG Jeff Sessions in testimony to Congress and speeches to law enforcement. That was the center of the narrative: a number that kept growing exponentially with no end in sight.
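The FBI's stated explanation -- three distinct databases leading to repeated counting -- is a textbook deduplication failure. A minimal sketch with hypothetical device IDs shows how summing per-database tallies inflates the count versus deduplicating on a unique identifier first:

```python
# Illustrative sketch (hypothetical data): the same phones logged in
# three separate databases inflate a naive tally.
db_a = {"dev-001", "dev-002", "dev-003"}
db_b = {"dev-002", "dev-003", "dev-004"}
db_c = {"dev-001", "dev-003", "dev-004"}

# Summing each database's count tallies duplicates multiple times.
naive_count = len(db_a) + len(db_b) + len(db_c)  # 9

# Taking the set union counts each unique device exactly once.
true_count = len(db_a | db_b | db_c)  # 4

print(naive_count, true_count)
```

At the FBI's scale, the same pattern took a real figure of 1,000-2,000 devices to a reported figure of nearly 8,000.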
You'd think the agency would track devices better, what with officials constantly claiming each and every phone was "tied to a threat to the American people." Something that important shouldn't be carelessly handled. But the phones "tied to threats" were overcounted, suggesting a severe problem in the FBI's tracking system that might make it difficult to figure out which "threats" each phone is "tied" to if and when it ever gets around to cracking the devices.
Now, the DOJ is in damage control mode. Robyn Greene was the first to notice the corrections made to officials' statements using the FBI's bullshit ~8,000 number. Edits to AG Sessions' recent comments at a law enforcement conference have replaced this:
Last year the FBI was unable to access investigation-related content on more than 7,700 devices…
with this:

Last year the FBI was unable to access investigation-related content on more than ** devices…
The footnote reads:
** Due to an error in the FBI's methodology, an earlier version of this speech incorrectly stated that the FBI had been unable to access 7,800 devices. The correct number will be substantially lower.
The FBI's been doing a bit of cleanup at its own site. Here's one from May 3, 2017 that has since been corrected.
In just the first half of this fiscal year, the FBI was unable to access the content of more than 3,000* mobile devices...
The FBI's footnote is far more concise and vague.
* Due to an error in methodology, this number is incorrect. A review is ongoing to determine an updated number.
It helpfully doesn't mention the FBI screwed up its own count. Nor does it state the new estimate will be "substantially lower." Cowards.
Here's another one, from September 27, 2017:
In the first 10 months of this fiscal year, the FBI was unable to access the content of more than 6,000* mobile devices…
How did this jump -- 3,000 phones in five months -- escape notice? As of November 2016, the number was only around 880 devices. Then it jumped to 3,000 six months later. Then it doubled to 6,000 in less than five months. By the end of that fiscal year two months later, the FBI had supposedly added another 1,775 uncrackable devices.
In fiscal year 2017, we were unable to access the content of 7,775* devices…
Even if the count had been accurate (which it wasn't), the numbers were delivered dishonestly. What appears to be a cumulative total of all devices the FBI had collected over the years is presented in testimony as the total number of devices the FBI couldn't unlock during a single fiscal year. This vastly overstates the severity of the issue. FBI officials may have been unaware that the bureau's software was delivering bogus counts, but that didn't stop them from inflating the problem by presenting historical cumulative numbers as a single year's total.
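The cumulative-versus-annual sleight of hand is worth spelling out. With entirely made-up yearly figures (these are illustrative, not the FBI's actual numbers), the difference looks like this:

```python
# Hypothetical yearly counts of newly seized uncrackable devices,
# to show how a running cumulative total dwarfs any single year.
yearly_new_devices = {2014: 500, 2015: 700, 2016: 900, 2017: 1000}

per_year_2017 = yearly_new_devices[2017]                    # honest: 1000
cumulative_through_2017 = sum(yearly_new_devices.values())  # 3100

print(per_year_2017, cumulative_through_2017)
```

Presenting the cumulative figure as "devices we couldn't access this year" makes the problem look like it's growing roughly three times faster than it is.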
Whatever number the FBI finally delivers will be decidedly underwhelming. Considering it still hasn't provided Congress with details of its attempts to access supposedly uncrackable devices, the FBI has managed to decimate its own forces in the new War on Encryption. It overplayed its hand -- perhaps inadvertently -- and weakened its argument severely. Something this important to the FBI was handled carelessly -- not because the FBI doesn't really want weakened encryption, but because the FBI will not do anything to weaken its anti-encryption position. It never bothered to check the phone count because the number given to officials was sufficiently impressive. That mattered far more than accuracy or honesty. "Fidelity, Bravery, Integrity" my ass.
So real, in fact, have these simulations become that they can occasionally create real-world mishaps, as happened with a French soccer player named Ruben Aguilar.
As discussed in this interview with Goal (in French), there's a mistake in last year's version of the life-destroying management game in which Aguilar is incorrectly given dual citizenship of both France and Bolivia.
With tens of thousands of players in the game, mistakes are bound to happen from time to time, but the difference here is that this one turned into an international incident. Bolivian players of Football Manager noticed his supposed South American heritage last season, but a string of strong real-world performances this year (especially against French giants PSG) blew the story up to such an extent that he made television in Bolivia, with the country's national team management contacting him to inquire about the possibility of his playing for them.
The error indicated that Aguilar was available for signing to the Bolivian team under the rules of international football. How the error came about is an open question, but the fact is that Aguilar's parents are French and Spanish, and he holds no citizenship, or indeed even a passport, for Bolivia. Still, the Bolivian team heeded the calls from its fans to inquire about signing Aguilar, only to learn he was not eligible to play for the team. It's worth repeating that this entire episode came about because of a single error in a single popular sports simulation game. Aguilar himself was forced to respond to all of this on his Facebook page.
For the past few weeks, we have received dozens of messages concerning Ruben's nationality. A lot of information has also been circulating on Facebook and Twitter. In order to remove any doubt, we affirm with this communiqué that Ruben was born in Grenoble (France), to a Spanish father and a French mother. As a result, he does not hold a Bolivian passport. In any case, we thank you for all the messages of support and the enthusiasm at the prospect of seeing him wear the jersey of ‘La Verde’. ¡Muchas Gracias!
It's a funny little story, with many folks now poking fun at both the Bolivian team for not doing its homework and Football Manager for making the seemingly inconsequential error to begin with. More interesting to me, however, is how this serves as an indication of how far video games have come in terms of the realism we expect from them. So sure was the public, and even a professional soccer team, that the information in the game was accurate that all of the above acted on that information.
That's actually kind of cool.
Regulators, rather than treating blockchain platforms (such as Bitcoin or Ethereum) and other "distributed ledgers" merely as tools of illicit dark markets, are beginning to look at frameworks to regulate and incorporate this important technology into traditional commerce.
That progress was challenged recently, when more than $54 million was stolen from The DAO (short for "decentralized autonomous organization") — an experimental and unregulated investment fund built on the blockchain platform Ethereum. As people realized The DAO was being drained, the ensuing panic also crashed the price of Ether (or ETH), Ethereum's cryptocurrency.
Beyond potentially making a lot of people poorer – who probably should have known better than to invest in an experimental "robotic corporation" — the theft has created a massive political rift within the blockchain community, and threatens to undermine trust in a technology described as the "trust machine". In addition, this event raises serious questions about the cybersecurity risks of distributed applications, the (lack of) enforcement of existing securities laws and the potential for increased scrutiny by regulators looking to protect unwary investors.
Prior to last week, The DAO was widely considered a phenomenal success. It enjoyed the largest crowdfunding in history, raising the equivalent of more than $150 million, or about a tenth of the value of the Ethereum blockchain platform on which it was built. While you could conceivably build a DAO for anything, since it was a piece of software, The DAO was created for the purpose of developing the Ethereum platform and other decentralized software projects. According to its "manifesto" on daohub.org:
The goal of The DAO is to diligently use the ETH it controls to support projects that will:
• Provide a return on investment or benefit to the DAO and its members.
• Benefit the decentralized ecosystem as a whole.
In short, it was developed as a venture-capital fund and, importantly, its investors expected returns.
@steve_somers Personally I think it will be spent more smartly than if it was just as pure ETH. Now falls under governance of the many.
— Stephan Tual (@stephantual) May 14, 2016
What is a DAO, anyway? And how does it work? Christoph Jentzsch — founder of the German company Slock.it, which helped create The DAO — explained the concept in his white paper as "organizations in which (1) participants maintain direct real-time control of contributed funds and (2) governance rules are formalized, automated and enforced using software."
As American Banker's Tanaya Macheel writes, DAOs and the smart contracts on which they are built could have a lot to offer traditional financial institutions:
In theory, distributed autonomous organizations (of which the DAO is one of the first examples) are a hardcoded solution to the age-old principal-agent problem. Simply put, backers shouldn't have to worry about a third party mismanaging their funds when that third party is a computer program that no one party controls.
At a time when the financial services industry is trying to automate old processes to cut costs, errors and friction, DAOs represent perhaps the most extreme attempt to take people out of the picture.
DAOs can be deployed on the distributed global computer of the Ethereum platform or other suitable blockchains, including private ones. One mechanism to fund them is through a "crowdsale" of DAO tokens that act like shares of stock, which is what The DAO did. Token-holders can vote on new proposals (weighted by the number of tokens a user controls) to change the structure of the DAO and alter its code. Tokens also can be traded and have an exchange-value. As The DAO's "official website" daohub.org describes it:
The DAO is borne from immutable, unstoppable, and irrefutable computer code, operated entirely by its members.

How exactly does an immutable decentralized computer get "hacked"? According to DAO developer Felix Albert, it wasn't. Unlike the failed bitcoin exchange Mt. Gox — where nearly $500 million of bitcoins were lost due to a combination of breach and fraud — the theft exploited a bug in The DAO's code that had previously gone undiscovered (or, more accurately, unfixed).
A quirk of robotic corporations is that they take their bylaws literally. Like Asimov's robots, DAOs are built with rules to govern their behavior that cannot easily be revised or overwritten once they are set in motion. Inevitably, these sometimes conflict with our preconceived ideas of how they ought to operate.
Technical analysis of the DAO theft revealed the attacker exploited a function originally designed to protect users:
The attack [on The DAO] is a recursive calling vulnerability, where an attacker called the "split" function, and then calls the split function recursively inside of the split, thereby collecting ether many times over in a single transaction.
It wasn't really a hack at all. It was human error. Making matters worse, The DAO's promoters (in this case, Slock.it Chief Operating Officer Stephan Tual) had said this kind of bug wouldn't be an issue just a few days before the theft (whoops).
“No DAO funds at risk following the Ethereum smart contract ‘recursive call’ bug discovery” @stephantual https://t.co/7EtlWZ8m6m
— DAOhub (@DAOhubORG) June 12, 2016
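The recursive-call pattern described in the technical analysis can be sketched in plain Python (The DAO itself was written in Solidity; this is a simplified illustration, not the actual contract code). The essential flaw: the contract sends funds before zeroing the caller's balance, so a malicious receiver can re-enter the withdrawal function while its balance still looks intact.

```python
# Simplified Python sketch of a reentrancy (recursive-call) flaw,
# modeled loosely on The DAO's "split" bug. Illustrative only.

class VulnerableDAO:
    def __init__(self, balances):
        self.balances = dict(balances)
        self.pool = sum(balances.values())  # total ether held

    def withdraw(self, user, receive_callback):
        amount = self.balances[user]
        if amount > 0 and self.pool >= amount:
            self.pool -= amount
            receive_callback(amount)   # external call happens FIRST...
            self.balances[user] = 0    # ...balance is zeroed too late

class Attacker:
    def __init__(self, dao, name):
        self.dao, self.name, self.stolen = dao, name, 0

    def receive(self, amount):
        self.stolen += amount
        # Re-enter withdraw() before our balance has been zeroed.
        if self.dao.pool >= self.dao.balances[self.name]:
            self.dao.withdraw(self.name, self.receive)

dao = VulnerableDAO({"attacker": 100, "victims": 900})
thief = Attacker(dao, "attacker")
dao.withdraw("attacker", thief.receive)
print(thief.stolen)  # the attacker drains the whole 1000-unit pool
```

The standard fix is the "checks-effects-interactions" pattern: update the balance before making the external call, so a re-entrant call sees a zeroed balance and stops.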
Lots of potential vulnerabilities in The DAO had been discussed, and a moratorium on proposals had even been suggested. Meanwhile, its promoters confidently asserted everything was fine:
We are assuming that the base contract is secure. This assumption is justified due to the community verification and a private security audit.
Additionally, Slock.it's blog claimed that the generic DAO framework code had been audited by a leading security firm:
We're pleased to announce that one of the world's leading security audit companies, Deja Vu Security, has performed a security review of the generic DAO framework smart contracts.
On close inspection, the only report they linked in their blog was three pages long. It's unclear whether a rigorous formal audit had ever been conducted. After the attack, people started asking for the audit report and wondering why Slock.it hadn't shared it. The security firm, Deja Vu, even responded on Reddit.
Hi Everyone, Adam Cecchetti CEO of Deja vu Security here. For legal and professional reasons Deja vu Security does not discuss details of any customer interaction, engagement, or audit without written consent from said customer. Please contact representatives from Slock.it for additional details.
Whoever was in charge of auditing the code screwed up big-time. As former Ethereum release coordinator Vinay Gupta explained on YouTube, The DAO was an experiment that was never built to handle this much risk:
We all knew as we watched this happening that this was an emperor's clothes scenario ... there was no way that that smart contract had undergone an appropriate amount of scrutiny for something that was a container for $160 million.
Sure, everyone involved should have stopped it from getting carried away. But what are the actual consequences when a decentralized extralegal robot corporation doesn't do what it's expected to? Is anyone really "in charge" of making sure it works? Is anyone on the hook if the whole thing goes down the tubes because of its creators' (or proposal authors') lack of due diligence?
For one thing, as Coin Center's Peter Van Valkenburgh explains, DAOs are likely to run afoul of existing securities law – potentially implicating their developers, promoters and investors:
The Securities Act intentionally defines "promoter" broadly: "any person that, alone or together with others, directly or indirectly, takes initiative in founding the business or enterprise of the issuer." Given the breadth of this language, developers should carefully weigh the risks of being visibly associated with the release and sale of [DAO] tokens.
Individuals deemed to be promoters of a [DAO] may be found to be in violation of Section 5(a) and 5(c) of the Securities Act. Under these sections it is unlawful to directly or indirectly offer to sell or buy unregistered securities, or to "carry" for sale or delivery after the sale an unregistered security or a prospectus detailing that security. Even if a [DAO] is deemed to be an unregistered security, it remains very unclear how promoting that [DAO] would or would not equate to these unlawful activities, and who—if anyone—would be found to have violated the law. Nonetheless, broad interpretation of these laws may potentially implicate any participant or visibly affiliated developer or advocate.
So DAO evangelists could soon be in hot water, regardless of any disclaimers they put up.
— Stephan Tual (@stephantual) May 19, 2016
To the Securities and Exchange Commission's credit, they have thus far been relatively open to innovations like crowdfunding, as well as the potential for blockchain technology. As SEC Chairwoman Mary Jo White recently said in an address at Stanford University:
Blockchain technology has the potential to modernize, simplify, or even potentially replace, current trading and clearing and settlement operations ... We are closely monitoring the proliferation of this technology and already addressing it in certain contexts ... One key regulatory issue is whether blockchain applications require registration under existing Commission regulatory regimes, such as those for transfer agents or clearing agencies. We are actively exploring these issues and their implications.
Beyond financial regulation, the broader legal treatment of DAOs is a murky subject. With applications running on Ethereum, it's not always clear what the point of enforcement is. You can't exactly sue a DAO in court and then seize its assets. And, while The DAO's creators were in the public eye, that doesn't necessarily have to be the case; it could be deployed anonymously.
Maybe the next DAOs should be anonymous. Avoids the blame game and force us to use tools to build trust despite not trusting the creators.
— alex van de sande (@avsa) June 21, 2016
Even if DAOs are created without a formal legal status, governments may impose legal status on them. As business lawyer Stephen Palley writes at CoinDesk:
If you don't formalize a legal structure for a human-created entity, courts will impose one for you. As most lawyers will tell you: a general partnership, unless properly formalized or a deliberately created structure, is a Very Bad Thing ... [T]he members of a general partnership can end up jointly and severally liable on a personal basis for partnership obligations.
For instance, I don't think this is how the law works:
@SamirPatelLaw @vidal007 @slockitproject Customer protection on blockchain is insured via smart contracts, not legal systems. Code is law.
— Stephan Tual (@stephantual) March 21, 2016
Even if the SEC or other government entity decides to crack down on DAOs, it might be easier said than done. Because they operate on pseudonymous distributed computers, those parties may not be easy to track down (notably, we still don't know who Satoshi Nakamoto is). Even if you did, they might not have any control over it or know what it was doing. Its code also may have been radically altered from its original programming/intent.
But as far as The DAO is concerned, are we in for a slew of lawsuits or calls for SEC action by disgruntled investors? Not so fast. Investors in The DAO may yet be able to recover their losses.
Various prominent stakeholders in the Ethereum community, from Ethereum inventor Vitalik Buterin to Slock.it's Christopher Jentzsch, have suggested that the only sensible solution is to create a "fork" of the Ethereum network that could freeze the attacker's stolen funds and shut down The DAO, with the option to create a “hard fork” to fully reverse the theft and return investors' funds. Some have criticized this approach as a “bailout” or “asserting centralized control.” But it's worth noting that it would require a plurality of miners to adopt it voluntarily; whether they will remains to be seen.
Either way, Ethereum's credibility may be adversely affected. On the one hand, people need to trust that smart-contracts do what they are supposed to — particularly where millions of dollars are on the line. On the other hand, the credibility of the platform is also tied to its immutability. If developers and miners collude to reverse transactions they don't like, that sets a bad precedent.
Additionally, if the community decides The DAO's investors need to take a haircut, it could open up a Pandora's box of legal troubles for its developers and promoters (and maybe even miners and investors), potentially stifling advancement of this important technology.
But wait a minute. Why didn't the attacker see this coming? Surely if he was sufficiently sophisticated to find a "recursive call" bug, he would have known that split funds would be locked away for 27 days — giving the community time to get wise to his activities and find a solution like the fork.
As previously mentioned, The DAO theft also crashed ETH prices. Savvy readers will note that a DAO vulnerability doesn't mean the Ethereum platform itself was compromised (any more than a nasty bug in Photoshop means that everyone with Windows 10 is at risk).
Was it possible this whole event was a ruse to pull off a "big short", as one user suggests on Reddit? As of now, there's no proof of that, but it's an interesting theory.
But was this even a theft at all? As Slock.it's representative said, "code is law!" If the code doesn't do what you think it does — that's your fault. At least, that's the theory behind an anonymous letter uploaded to Pastebin and purportedly authored by The DAO's attacker:
I have carefully examined the code of The DAO and decided to participate after finding the feature where splitting is rewarded with additional ether. I have made use of this feature and have rightfully claimed 3,641,694 ether, and would like to thank the DAO for this reward. It is my understanding that the DAO code contains this feature to promote decentralization and encourage the creation of "child DAOs".
I am disappointed by those who are characterizing the use of this intentional feature as "theft". I am making use of this explicitly coded feature as per the smart contract terms and my law firm has advised me that my action is fully compliant with United States criminal and tort law.
Adding that:
I reserve all rights to take any and all legal action against any accomplices of illegitimate theft, freezing, or seizure of my legitimate ether, and am actively working with my law firm. Those accomplices will be receiving Cease and Desist notices in the mail shortly.
If the fork moves forward to freeze or seize the attacker's digital assets, could that open up the broader Ethereum community and its miners to legal liability? We'll have to wait and see what happens.
Regardless of how The DAO "theft" is resolved, regulators shouldn't be in a rush to impose stricter regulations on Ethereum, which is just a platform, or on DAOs in general, or even on The DAO specifically, should it be reincarnated with better security practices.
While The DAO attack raises serious questions about the viability of creating this "DAO 2.0", that doesn't mean we should stop it from happening. Whether or not you believe all the hype about Ethereum being as important as the invention of the internet, it's an exciting technology that's worth giving the opportunity to grow.
Unlike Bitcoin, which has been around for eight years, Ethereum is only a year old. It officially launched in July 2015, but is already the second-largest cryptocurrency by market capitalization. It's vastly more complex than Bitcoin and still in its infancy; it will have inevitable growing pains on the way to maturity.
Just as the internet wasn't built in a day, smart-contract technology won't come to fruition overnight. It needs a permissive regulatory environment in which to grow, much like the one the Clinton administration's Framework for Global Electronic Commerce provided for the internet.
Certainly, vetting DAO code (particularly new proposals) is a big problem. More fundamentally, smart-contract security is an emerging area that people are rightly starting to focus on, following the lessons of The DAO attack. As Ethereum developer Peter Borah writes:
In his response to the bug, Slock's COO expressed shock, referring to it as "unthinkable", and pointing to the "thousands of pairs of eyes" that somehow missed this. It's certainly hard to blame anyone for being shaken by the sudden disappearance of tens of millions of dollars. However, this natural reaction hides the simple truth that anyone who has dabbled in programming knows: bugs in programs are far from unthinkable — they are inevitable.
Making code open-source is not enough. We need mechanisms to create smarter (i.e., fault-tolerant) smart contracts. This could mean more rigorous independent testing, strategies to implement better development practices or, at least, more time to develop through trial-and-error in a lower-risk context. Stakeholder interests also must be aligned to make sure appropriate vetting happens, particularly where voting on code alterations is involved and particularly if we want to develop more complex autonomous programs.
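The DAO drain itself was a textbook reentrancy bug: the contract sent funds out before updating its own books, so the attacker's receiving code could call back in and withdraw again. Real contracts are written in Solidity; as a language-neutral illustration, here is a minimal Python simulation of the pattern (all names and numbers here are made up):

```python
class VulnerableVault:
    """Pays out before updating its books -- the classic reentrancy mistake."""
    def __init__(self, balances):
        self.balances = dict(balances)

    def withdraw(self, caller):
        amount = self.balances.get(caller.name, 0)
        if amount > 0:
            caller.receive(self, amount)    # external call first...
            self.balances[caller.name] = 0  # ...bookkeeping second (too late)

class Attacker:
    """Its receive() hook re-enters withdraw() before the balance is zeroed."""
    def __init__(self, name, max_reentries):
        self.name = name
        self.stolen = 0
        self.reentries = max_reentries

    def receive(self, vault, amount):
        self.stolen += amount
        if self.reentries > 0:
            self.reentries -= 1
            vault.withdraw(self)  # balance still reads 10, so this pays again

vault = VulnerableVault({"attacker": 10, "honest": 90})
thief = Attacker("attacker", max_reentries=4)
vault.withdraw(thief)
print(thief.stolen)  # 50 -- a 10-unit balance, drained five times over
```

The fix, in any language, is the "checks-effects-interactions" discipline: update your own state before making the external call.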
The DAO is an instance of people getting carried away with an exciting new technology, while not effectively managing the new cybersecurity risks that come with it. But just because a group of people screwed up The DAO, it doesn't mean all DAOs are DOA.
While there's an overabundance of utopian thinking in this space, blockchain-based experiments in decentralized governance and peer-to-peer commerce could have immense benefits that offer truly revolutionary potential. Regulators should continue to take a wait-and-see approach and not use this as an invitation to try to shut them down or impose harsh new regulations.
]]>
Right about now you're thinking that you just witnessed a French how-to manual on having a horse be all that it can be inside of you. But it isn't! Honest! It's actually an attempt by the illustrator to show how similar the bone structures of human beings and horses are by aligning their respective physiologies this way. A rep from the publisher told BuzzFeed:
"Obviously, we never wanted to shock our readers with that drawing," a Fleurus spokesperson told BuzzFeed. "We publish educational books and make realistic or explanatory illustrations. In that case, our goal was to make the child visually comprehend that the bone structure of the horse and the human being are similar," they said. "Putting them in the same position makes the likening more understandable and concrete."
So, we have an unfortunately designed illustration, meant to be educational, going viral entirely because of the context our own dirty minds add to the image. BuzzFeed wrote a post on the image, only to find that -- you guessed it -- Facebook had begun flagging the article as it was being shared on the site.
Well, things got even weirder with this horse thing. Facebook seems to be flagging this article — the one you're reading right now — as pornography. And then, after Facebook removes this article from your feed, it makes you go through your photos and verify that none of them are pornographic. In fact, Facebook's moderators seem to find this horse picture so inappropriate, a member of BuzzFeed's social media team received a 24-hour ban from posting on BuzzFeed's Facebook page.
And this is why Facebook should get out of the morality business to every last degree possible. An article about a hilarious, but innocent, educational illustration is being flagged, users are being hassled about their other photos on the site, and some folks are even getting banned. Because? Well, because it appears that Facebook moderators have the same perverse baseline psyche as the rest of us, resulting in an image of a man and a horse being compared physiologically becoming suspected horse-man-porn. And the article pointing out what it actually is is the one that got flagged. That's as much of a failure of this sort of thing as we could hope for.
]]>Most people aren't terribly technically minded. Give them a tool, tell them it CAN produce an output, and they'll assume that whatever output it gives them IS the best one available. It's extremely common with 'forensic evidence' and jurors in court cases, where such evidence is given weight well beyond its actual evidentiary value (to the point that jurors now distrust cases without it). There's even a name for it, "the CSI effect", named after one of the TV shows that uses it as a cornerstone.
One of the latest tools to get the blind trust of morons is IP Geolocation. At its basic level, it's a database of IP addresses with latitude and longitude listed, so when you look up an IP address, you get a pair of coordinates you can associate as an 'origin' for that.
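At that basic level, a lookup is nothing more than a sorted-range search. Here is a toy sketch of the mechanism; the ranges and coordinates are invented for illustration, and real databases (MaxMind's, IP2Location's, etc.) carry far more fields than this:

```python
import bisect
import ipaddress

# Toy table of (range_start_as_int, lat, lon), sorted by range start.
# These ranges and coordinates are made up for illustration.
RANGES = [
    (int(ipaddress.ip_address("8.0.0.0")),  37.75, -97.82),
    (int(ipaddress.ip_address("24.0.0.0")), 33.95, -83.99),
    (int(ipaddress.ip_address("66.0.0.0")), 45.42, -84.32),
]
STARTS = [start for start, _, _ in RANGES]

def lookup(ip):
    """Return the (lat, lon) recorded for the range an address falls in."""
    n = int(ipaddress.ip_address(ip))
    i = max(bisect.bisect_right(STARTS, n) - 1, 0)
    return RANGES[i][1], RANGES[i][2]

print(lookup("24.30.1.7"))  # (33.95, -83.99)
```

The coordinates you get back are only as good as whoever filled in that row of the table, which is the whole problem.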
However, there are a number of problems with that:
So let's quickly address them.
First, what about the addresses that don't have a lat/long listed?
Well, there are a few ways to handle it, but the way some choose is just to guess. The article that started me on this points out that the company MaxMind decided to guess with the closest average place it could – the geographical center of the US. Except 39°50'N 98°35'W is a messy decimal (39.8333333N, 98.585522W), so it rounded the numbers to 38N, 97W. That's the front yard of a farm in Kansas.
Other times they just guess and get a town and put it somewhere there, although even that can be off a bit. It can be a lot off, as you'll see shortly.
How often are they updated?
There's no telling. With the great shortage of IPv4 addresses, and an ever-expanding list of devices, from cell phones to thermostats and even fridges, IP addresses are shifting around constantly. There are also mergers and splits of companies, bankruptcies and so on. So unless the database is frequently updated, there's no chance that anything it has to say will be accurate – again, we'll see that directly.
Finally, how does it deal with cellular devices?
Simply put, it doesn't. The handoff mechanism means that you'll often carry one IP address from one tower to the next (otherwise you'd have to terminate and restart any data transfer as you shifted between towers). In addition, most cellular providers hide their customers behind NAT, precisely because of the lack of discrete IPv4 addresses to give out (and their… slowness in migrating to IPv6).
Odds are you're going to get a local network control center or regional corporate office instead, which means it's of practically no use at all.
Oh dear....
This all assumes as well that entries are made in good faith. One of the more common uses of geolocation is targeted adverts, especially on 'adult websites', where they promise there's a horny woman (or man, if your browsing is detected as such, or the 'content' suggests you may be female) close by. Or you may have seen it in the scam adverts on news sites that should know better than to accept low-rate advertising based on scams (with their easy-to-spot clickbait headlines about insurance 'tricks' or similar).
This means that if you can 'rig' the database, you can expose the stupidity in parts of it, as was best demonstrated by Randall Munroe in his XKCD comic series.
So just how inaccurate are these systems? The easiest way to tell, by far, is to run some IP addresses whose locations you know through these systems and see how far off they can be. So I did.
The most obvious one to start with is my own home connection's IP address. So I tried the link in the story, and boy was it off! Just for the record, I live on the south side of Atlanta's metro area, near Macon – Walking Dead country, in fact.
That's right, it put me in Ottawa, capital of Canada, roughly 1900km (1180 miles) and one whole country off. Part of that comes from the second question: how current the data is. It lists my IP as belonging to Nortel Networks. Problem is, I'm not a Nortel subscriber – no one is, the company was wound down years ago. Yet some databases still have it listed.
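For anyone wanting to check "how far off" figures like these for themselves, straight-line error is just the great-circle distance between the reported and actual coordinates. A quick haversine sketch (the coordinates below are approximate city centers for Macon, GA and Ottawa, ON – not my actual address):

```python
from math import asin, cos, radians, sin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in km."""
    p1, p2 = radians(lat1), radians(lat2)
    dp, dl = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dp / 2) ** 2 + cos(p1) * cos(p2) * sin(dl / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

# Approximate: Macon, GA vs. Ottawa, ON
print(f"{haversine_km(32.84, -83.63, 45.42, -75.70):.0f} km")
```

Depending on the exact endpoints you pick, and whether you measure straight-line or driving distance, you'll get figures in the 1,500–1,900 km range – either way, nowhere close.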
Cellphones don't fare much better. I used the same service on a 4G Verizon phone sitting at my computer. Its location: San Diego. That's 1900 miles (3050km) off. Other services gave locations of New York, Atlanta, and Macon.
Wondering if it was just my semi-rural system that's messed up, I called a few friends who live in the Atlanta suburbs (a few streets from each other) and asked for their IP addresses; one uses Comcast, the other AT&T. Maybe things would be better and more accurate in a big-city environment?
I ran a number of different GeoIP services, and got a very mixed bag of results. One thing's certain, though: none of the four sets of coordinates gave an accurate location for the person. (For obvious reasons I'm not going to give you their addresses, or mine for that matter.)
Of them all, only one service – IPCIM.com – gave an error circle with a location (a twenty-five-mile radius), but it didn't do so for every lookup. To me, its presence indicates knowledge of the data's inaccuracy, while its absence at other times seems to show the service just doesn't care.
The second and third locations are the same coordinates, but they're less certain of the third than the second, despite both being off.
There's also something specific to note. There are four providers covered here. Two lookups were done from the exact same location, yet their reported locations came nowhere near matching. Two more were IP addresses just streets apart, but they didn't match well either, although many went to the same default locations, including two that went to the 'lazy US center' investigated in the Fusion piece.
More importantly, of the 30+ geolocating attempts made here, not a single one managed to be within a mile of the actual location (although one location was within a mile and a half, while another was within 3 miles – again, I'm not going to give out specifics). So for those who want to rely on them as being a source of where something is, the simple answer is "don't". This applies as much to those tracking down people who are leaving spammy comments, as it does to police officers and lawyers seeking to use them for court actions criminal or civil.
In fact lawyers and the police have absolutely NO excuse to use these kinds of databases in litigation at all as there are better, more accurate tools at their disposal – the courts themselves. In criminal cases a warrant is the preferred method, obtaining subscriber information from the ISP (fixed or cellular) which is far more accurate than any geolocation service because it's data coming from the entity actually providing the connection. In a civil trial you have a discovery subpoena to do pretty much the same thing and for the same reasons.
If you're doing it 'on your own', remember that these tools are about as accurate as taking a dart and throwing it, not at a map on the wall, but at a Google Maps display on your computer screen. Sure, you'll be out a display, but you won't be potentially facing criminal charges when you go to act on what is basically bullshit data. At the very best, geolocation can be used as advisory information, but it can be INCREDIBLY off, sometimes by thousands of miles.
Data
The following services were used
There were four IP addresses used, three residential and one cellular, covering four of the biggest ISPs in the US.
IP addresses
The first two were located in south metro Atlanta, near Macon. David and James are located approximately half a mile apart in north Cobb county, Georgia.
Raw coordinates
| Service | Charter | Verizon | Comcast | AT&T |
| --- | --- | --- | --- | --- |
| checkIP.org | 45.4167, -84.3246 | 32.7977, -117.1322 | NOT TESTED | BLANK RESULT |
| IP2Location | 33.95621, -83.98796 | 32.55376, -83.88741 | 34.02342, -84.61549 | 34.02342, -84.61549 |
| IPinfo.io | 32.8685, -84.3246 | 32.8975, -83.7536 | 34.0247, -84.5033 | 38.0000, -97.0000 |
| EurekAPI | 32.8685, -84.3246 | 33.7981, -84.3877 | 34.1015, -84.5194 | 34.0247, -84.5033 |
| DB-IP | 33.9562, -83.988 | 40.7128, -74.0059 | 33.9413, -84.5177 ("Marietta (bedroom)") | 33.8545, -84.2171 |
| IPCIM.com | 32.8685, -84.3246 (± 25 mile) | NOT TESTED | 34.0247, -84.5033 | 34.0247, -84.5033 (± 25 mile) |
| MaxMind (geoLiteCity) | 32.8685, -84.3246 | 32.8975, -83.7536 | 34.0247, -84.5033 | 38, -97 |
| MaxMind (GeoIP2) | 32.8685, -84.3246 | 33.7844, -84.2135 | 34.0247, -84.5033 | 34.0247, -84.5033 |
If you'd rather see them on a map, they're here. (Legend Charter in green, Verizon in red, Comcast in blue, AT&T in yellow)
NOTE: One data source was extremely interesting in providing 11+ decimal places in its results. While this might seem to imply accuracy, it actually underscores how inaccurate the data is. Eight decimal places gives a resolution of about 1.1 millimeters – roughly the thickness of a CD/DVD – and 11 decimal places, as given in all of that source's results, takes things to extremes, with locations pinned down to less than a hair's thickness. Those results have been rounded in the table above.
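The arithmetic behind that note is simple: a degree of latitude spans roughly 111.32 km, and every additional decimal place in a coordinate divides the resolution by ten. A quick sketch:

```python
# One degree of latitude spans roughly 111.32 km; each extra decimal
# place in a coordinate divides the resolution by ten.
DEG_M = 111_320  # meters per degree of latitude, approximately

for places in (1, 4, 8, 11):
    print(f"{places:2d} decimal places ~ {DEG_M / 10 ** places:.9f} m")
```

Eight places already resolves to about a millimeter; eleven is around a micrometer – precision no IP database could possibly possess.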
The "Marietta (bedroom)" label was actually on the output from their database.
I would like to thank David and James for their help with this. And for obvious reasons, we have forced changes in IP addresses for all our connections (and the release of this article was delayed to ensure that).
This is a repost from Andrew Norton's Politics & P2P blog
]]>Use of one of the research community's most valuable and extensively applied tools for manipulation of genomic data can introduce erroneous names. A default date conversion feature in Excel (Microsoft Corp., Redmond, WA) was altering gene names that it considered to look like dates. For example, the tumor suppressor DEC1 [Deleted in Esophageal Cancer 1] was being converted to '1-DEC.'

Here we have the interesting interaction of two very different fields, where the name of a gene involved in esophageal cancer, DEC1, was interpreted by Excel to mean the date, 1 December. As the paper points out, these kinds of substitution errors are already to be found in key public databases:
DEC1, a possible target for cancer therapy, was incorrectly rendered, and it could potentially be missed in downstream data analysis. The same type of error can infect, and propagate through, the major public data resources. For example, this type of error occurs several times in even the immaculately curated LocusLink database.

As that notes, a gene that might be relevant for treating cancer could well be missed because of this incorrect conversion to a date by Excel. Although it is unlikely that any serious harm has been caused by this -- yet -- it's a useful reminder of the dangers of depending a little too heavily on the results of software without checking for corruption of this kind.
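A rough mimic of the coercion makes it easy to audit a gene list before it ever touches a spreadsheet. This is an illustrative approximation of Excel's behavior, not its exact parsing rules:

```python
# Rough mimic of Excel's date coercion (illustrative only): names that
# read as <month><day>, like DEC1 or MARCH5, get rewritten as dates.
MONTHS = ["JANUARY", "FEBRUARY", "MARCH", "APRIL", "MAY", "JUNE", "JULY",
          "AUGUST", "SEPTEMBER", "OCTOBER", "NOVEMBER", "DECEMBER"]

def excel_would_mangle(name):
    """Return the date-style rewrite Excel would likely apply, or None."""
    up = name.upper()
    for month in MONTHS:
        for prefix in (month, month[:3]):
            rest = up[len(prefix):]
            if up.startswith(prefix) and rest.isdigit() and 1 <= int(rest) <= 31:
                return f"{int(rest)}-{month[:3].title()}"
    return None

for gene in ["DEC1", "MARCH5", "OCT4", "TP53"]:
    # DEC1 -> 1-Dec, MARCH5 -> 5-Mar, OCT4 -> 4-Oct, TP53 -> None
    print(gene, "->", excel_would_mangle(gene))
```

The practical fix is to import the offending columns as text rather than letting the spreadsheet guess their type.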
Follow me @glynmoody on Twitter or identi.ca, and +glynmoody on Google+
]]>According to the Prairie Village Post, earlier this month lawyer Mark Molner was driving through a Kansas City suburb on his way home from his wife’s sonogram. All of a sudden, his BMW was blocked in front by a police car as another officer on a motorcycle pulled up behind him. (His pregnant wife witnessed the incident from a nearby parked car.)

The mistake prompting this guns-drawn approach of Molner's vehicle could have been made by anybody. The ALPR read a "7" as a "2" and returned a hit for a stolen vehicle. The hit also returned info for a stolen Oldsmobile, which clearly wasn't what Molner was driving. But that could mean the plates were on the wrong vehicle, which is also an indication of Something Not Quite Right.
According to what Molner told the Post, one of the officers then approached his car with his gun out.
“He did not point it at me, but it was definitely out of the holster,” Molner told the Post. “I am guessing that he saw the shock and horror on my face, and realized that I was unlikely to make (more of) a scene.”
“The officer has discretion on whether or not to unholster his weapon depending on the severity of the crime. In this case he did not point it at the driver, rather kept it down to his side because he thought the vehicle could possibly be stolen. If he was 100 percent sure it was stolen, then he would have conducted a felony car stop which means both officers would have been pointing guns at him while they gave him commands to exit the vehicle.”

That makes sense, but there's still a chance this situation could have been averted. Molner's plate triggered the hit several miles before he was pulled over as pursuing police were unable to verify the plate due to traffic density. But it appears the officers made a last-minute decision to perform the unverified stop shortly before Molner would have driven out of the PD's jurisdiction. The stop occurred on the city/state boundary between Kansas and Missouri.
“I’m armchair quarterbacking the police, which is not a good position to be in,” Molner told the Post. “But before you unholster your gun, you might want to confirm that you’ve got the people you’re looking for.”

So, when the plate reader kicked back a bad hit, the cops did attempt to verify the plate, but it looks very much like they overrode procedural safeguards in order to prevent possibly losing a collar.
As soon as the article was posted, someone from or associated with a popular cryptography website claims to have downloaded a pdf of the Snowden document from The New York Times and discovered that three of the redactions that were intended to obscure sensitive national security information were easily accessible by highlighting, copying and pasting the text. The poorly-redacted file was subsequently posted to the cryptography website, then promoted via Twitter. (We’re not going to post the name of the website that posted the file to protect the information contained within.)

Cesca somehow feels the privacy of a single NSA agent trumps the public's interest in infringements on their own privacy -- not just here in the US but all over the world. Certainly, the New York Times should have made sure its redactions were actually redactions before publishing the document, but Cesca's hyperbolic attack isn't doing his side any favors.
…
So, the identity of an NSA agent is out there in public view within the same document in which a target of this program is named. All of this is due to the incompetence of whoever failed to properly redact the pdf before publishing it for the world to see — as well as for the aforementioned cryptography site to nab and republish it.'
…
This was bound to happen at some point in this ongoing saga: the name of an American agent has been leaked to the public via a document stolen by Edward Snowden. To add to the irresponsibility of how Snowden went about this operation, he distributed untold thousands of documents to a gaggle of technological neophytes who barely understand how to use Adobe Acrobat, much less the phenomenally complicated details of top secret NSA operations.
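The technical failure here is worth spelling out: drawing a black rectangle over text in a PDF just adds an opaque shape on top of the text operators; it doesn't remove them, so select/copy/paste (or any text extractor) still sees the original string. A minimal hand-written content stream shows the idea (this is a toy stream, not the actual NYT document):

```python
import re

# A toy, uncompressed PDF content stream: one line of text (Tj), then a
# black rectangle (re ... f) painted over it. The text is still there.
stream = b"""
BT /F1 12 Tf 72 700 Td (Agent name: NOT-ACTUALLY-REDACTED) Tj ET
0 0 0 rg 70 690 220 20 re f
"""

# Any text extractor simply reads the string operators back out:
hidden = re.findall(rb"\((.*?)\) Tj", stream)[0].decode()
print(hidden)  # Agent name: NOT-ACTUALLY-REDACTED
```

Proper redaction tools delete the text from the content stream itself before flattening the file; a drawn box is cosmetic.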
Finally, DCMS demand ISPs give them magic beans (“We want industry to continue to refine and improve their filters to ensure they do not – even unintentionally – filter out legitimate content”) and threaten them with regulation if they do not answer to future demands, or “maintain momentum”.

There's nothing quite like a faith-based technological platform crafted by a crack team of professional busybodies and bureaucrats, especially one that assumes the only fuel needed is good intentions and the "momentum" will sustain itself into perpetuity. OR ELSE.
And while Government looks to the industry to deliver, through the self-regulatory mechanisms already established under UKCCIS, we are clear that if momentum is not maintained, we will consider whether alternative regulatory powers can deliver a culture of universally-available, family-friendly internet access that is easy to use.

Jesus. That's frightening. If ISPs don't march in lockstep with Cameron's orders, they'll simply be beaten into shape by restrictive government mandates that ensure "a culture of universally-available, family-friendly internet access." If that doesn't sound like a slightly kinder, gentler version of any totalitarian regime's homegrown "internet," then I didn't just throw up a little in my mouth while typing out that quote.
This should be underpinned by a basic, common set of media standards, building on existing standards that already apply in many places. We would expect this to include:

Well, Cameron might want to contact the Daily Mail and ask if it's willing to stop sexualizing minors, something it's never been shy about doing even if the front page is making all sorts of noise about rampant child pornography. I'm sure Cameron will also be clamping down on advertisers who push products pretty much anywhere they can aimed at the wide open wallets of teens and tweens (or ultimately, their parents). (P.S. Have the cast of Jackass shot.)
• Protection of minors: including protecting children’s exposure to material that seeks to sexualise them, strong sexual content, violence, imitable and dangerous behaviour, any specific health priorities, safety of children in content and protecting against commercial influence.
Given the economic, political and social importance of that finding, many have tried to reproduce it, but failed. A post by Mike Konczal on The Next New Deal blog explains how three researchers finally succeeded -- with surprising consequences:
In a new paper, "Does High Public Debt Consistently Stifle Economic Growth? A Critique of Reinhart and Rogoff," Thomas Herndon, Michael Ash, and Robert Pollin of the University of Massachusetts, Amherst successfully replicate the results. After trying to replicate the Reinhart-Rogoff results and failing, they reached out to Reinhart and Rogoff, who were willing to share their data spreadsheet. This allowed Herndon et al. to see how Reinhart and Rogoff's data was constructed.
In his post, Konczal goes on to give a good explanation of just what went wrong. Correcting those three major errors produces the following result:
They find that three main issues stand out. First, Reinhart and Rogoff selectively exclude years of high debt and average growth. Second, they use a debatable method to weight the countries. Third, there also appears to be a coding error that excludes high-debt and average-growth countries. All three bias in favor of their result, and without them you don't get their controversial result.

So what do Herndon-Ash-Pollin conclude? They find "the average real GDP growth rate for countries carrying a public debt-to-GDP ratio of over 90 percent is actually 2.2 percent, not -0.1 percent as [Reinhart-Rogoff claim]." [UPDATE: To clarify, they find 2.2 percent if they include all the years, weigh by number of years, and avoid the Excel error.] Going further into the data, they are unable to find a breakpoint where growth falls quickly and significantly.
That is, not only is there no significant difference between countries whose public debt-to-GDP ratio is over 90% and those with much lower values, but there is apparently no critical number above which growth falls catastrophically. Put another way, from the corrected research, there does not seem to be any reason why the public debt-to-GDP ratio cannot keep on rising while preserving normal levels of growth.
That clearly runs entirely contrary to the current dogma that public debt must be reduced at all costs in order to keep growth at a healthy level. As the authors of the new paper conclude (pdf):
RR's [Reinhart and Rogoff's] findings have served as an intellectual bulwark in support of austerity politics. The fact that RR's findings are wrong should therefore lead us to reassess the austerity agenda itself in both Europe and the United States.
That debate about public debt reduction and the need for austerity measures certainly won't stop just because a key justification for the approach has been found to be completely wrong. But it's worth noting that alongside the major political ramifications of this new finding, there is another, rather less contentious, conclusion to be drawn.
The three errors in the original work by Reinhart and Rogoff finally came to light when they allowed other researchers to examine their model and the data they employed in it. It then became clear that the model was flawed, and that not all the relevant data had been included in the calculation. Neither was obvious from the result alone.
This reinforces a point we have made before. Alongside the results of their work, academics also need to release the datasets and any mathematical/computational models that they have used to derive them. Without those additional resources, it is not possible for other researchers to reproduce the results, which may -- as turns out to be the case for Reinhart and Rogoff's famous paper -- contain fundamental errors that completely undermine the conclusions drawn from them.
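One of the disputed choices, the country-weighting method, is easy to see in miniature. With made-up numbers (four countries, each contributing an average growth rate over some number of high-debt years), weighting each country equally lets a single one-year outlier dominate, while weighting by country-years counts each observation once:

```python
# Toy data: (country, years_observed, avg_growth_while_debt_over_90pct).
# The numbers are invented purely to illustrate the weighting issue.
data = [("A", 19, 2.4), ("B", 1, -7.9), ("C", 10, 2.6), ("D", 4, 2.3)]

# Equal-country weighting: country B's single bad year counts as much
# as country A's nineteen ordinary ones.
by_country = sum(g for _, _, g in data) / len(data)

# Year-weighted mean: every country-year counts once.
total_years = sum(y for _, y, _ in data)
by_year = sum(y * g for _, y, g in data) / total_years

print(f"equal-country mean: {by_country:.2f}%")  # -0.15%
print(f"year-weighted mean: {by_year:.2f}%")     #  2.14%
```

With these toy figures the two choices give -0.15% versus 2.14% -- the same flavor of gap as the -0.1% versus 2.2% in the actual dispute, from identical underlying data.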
Follow me @glynmoody on Twitter or identi.ca, and on Google+
]]>To fix some of the errors in the AIA, Congress rushed through a "technical corrections" bill. During all the fiscal cliff mess, with some back and forth between the House and Senate, they approved this bill, which will be signed any moment, if it hasn't been already.

By all accounts, in the AIA, Congress intended to remove the "could have been raised" language and provide a narrower estoppel for PGR proceedings. As the Congressional committee report explains, the PGR was designed to "remove current disincentives to current administrative processes." But something funny happened on the way to the Congressional floor, and the problematic "could have been raised" language was inadvertently inserted into the bill.
We're not the only ones to recognize the error. House Judiciary Chairman Lamar Smith referred to the AIA's PGR estoppel standard as "an inadvertent scrivener's error." Senate Judiciary Chairman Patrick Leahy, in advocating that the Senate adopt the technical corrections bill, said the PGR estoppel standard in AIA was "unintentional," and it was "regrettable" the technical corrections bill doesn't address the issue. Sen. Leahy expressed "hope we will soon address this issue so that the law accurately reflects Congress's intent." The PTO also thinks Congress made a mistake, saying "Clarity is needed to ensure that the [PGR] provision functions as Congress intended."
You see, the internet is like magic. And like most magic, it can be used for entertainment purposes. All the do-gooding in the world doesn't amount to much if you forget to register your URL. While you're busy enjoying that "new ink" smell of freshly printed Voter's Guides, someone quicker on the draw is undermining your "marijuana is bad" propaganda proselytizing information with hilariously over-the-top headlines.
The good news is that the online voters' guide sports the corrected URL: mavotenoonquestion3.com
The bad news is that the paper version will carry the old URL permanently. Of course, very few people are willing to type in a URL by hand, but as news of this blunder spreads, the fake site with the real URL will be receiving much more attention, voters' guide correction or no.
Here's the official reaction from No on Question 3 spokesman, Kevin Sabet:
"It's funny and upsetting, I guess, at the same time."Yeah. Largely the first part. And to think, the committee can't even blame a late afternoon smokeout for the mental slip.
The group sent out a press release saying proponents of medical marijuana were tampering with the democratic process through “underhanded efforts.”

Sabet admits the committee made a mistake and yet, the press release attempts to paint No on Question 3 as the victim of villainous pot smokers rather than treating it like the self-inflicted wound it is.
The Globe notes that the No on Question 3 campaign has managed to collect all of $600 so far, compared to the $1 million or so that supporters of the initiative have received from Peter Lewis, a longtime patron of drug policy reform.

Maybe it's time to admit your fears of a weed-loaded America are overblown, especially when you've just been outmaneuvered (and outspent) by a bunch of stoners. ]]>
She said the search warrants were invalid because they were general warrants which lacked specificity about the offence and the scope of the items to be searched for.

In other words, it's not only entirely possible that the government won't even be able to use anything from what they seized in a case, but they may, themselves, be in trouble for breaking the law and violating Dotcom's privacy rights.
Without a valid warrant, police were trespassing and exceeded what they were lawfully authorised to do.
Justice Winkelmann said no one had addressed whether police conduct also amounted to unreasonable search and seizure, but her preliminary view was that it did.
Somewhere between 2500 feet and 2000 feet, the captain's mobile phone started beeping with incoming text messages, and the captain twice did not respond to the co-pilot's requests.

There followed a series of errors, with the pilot and the co-pilot not communicating with each other -- the pilot trying to drop the wheels as the co-pilot prepared to abort the landing -- and then both pilots becoming confused about their actual altitude. Oh, and then there was the fact that the flaps were set incorrectly.
The co-pilot looked over and saw the captain "preoccupied with his mobile phone", investigators said. The captain told investigators he was trying to unlock the phone to turn it off, after having forgotten to do so before take-off.
At 1000 feet, the co-pilot scanned the instruments and felt "something was not quite right" but could not spot what it was.
One recent estimate placed annual direct consumer losses at $114 billion worldwide. It turns out, however, that such widely circulated cybercrime estimates are generated using absurdly bad statistical methods, making them wholly unreliable.

This is pretty common. In the first link above, we wrote about how a single $7,500 "loss" was extrapolated into $1.5 billion in losses. The simple fact is that, while such things can make some people lose some money, the size of the problem has been massively exaggerated. As these researchers note, this kind of thing happens all the time. They point to an FTC report, where two respondents alone provided answers that effectively would have added $37 billion in total "losses" to the estimate.
Most cybercrime estimates are based on surveys of consumers and companies. They borrow credibility from election polls, which we have learned to trust. However, when extrapolating from a surveyed group to the overall population, there is an enormous difference between preference questions (which are used in election polls) and numerical questions (as in cybercrime surveys).
For one thing, in numeric surveys, errors are almost always upward: since the amounts of estimated losses must be positive, there’s no limit on the upside, but zero is a hard limit on the downside. As a consequence, respondent errors — or outright lies — cannot be canceled out. Even worse, errors get amplified when researchers scale between the survey group and the overall population.
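The amplification is easy to demonstrate with made-up numbers: survey 1,000 people out of a population of 200 million, and a single exaggerated (or mis-keyed) answer scales into billions:

```python
# Sketch of how one exaggerated survey answer blows up a scaled-up
# estimate. Sample size and population are invented for illustration.
population = 200_000_000
sample = [0] * 999 + [50_000]  # 999 honest zeros, one $50k claim

mean_loss = sum(sample) / len(sample)        # $50 per respondent
national_estimate = mean_loss * population   # scaled to the population

print(f"${national_estimate:,.0f}")  # $10,000,000,000
```

One respondent's $50,000 claim becomes a $10 billion "national loss", and because reported losses can't go below zero, there is no symmetric error on the other side pulling the estimate back down.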
How quickly things change, especially for entities who find themselves staring down an angry internet. At first, ECAD seemed disturbingly untroubled by the uproar, including the memeification of its intention to stretch the definition of "public performance" to include all audible sound. But it suddenly changed its prohibitively expensive tune when hundreds of thousands of dollars were at stake.
None other than Google Brazil itself issued a blog post stating that ECAD's existing agreement with Youtube did not allow the agency to collect fees from bloggers, pointing out the obvious to ECAD's wilfully obtuse representatives:
These sites don't host or transmit any content when they associate a YouTube video to their site, and as such, the fact of embedding videos from YouTube can't be treated as a ‘retransmission'. As these sites aren't performing any music, ECAD can't, within the law, collect any payment from these.

Having been smacked down by its main benefactor, ECAD issued a statement of its own, claiming the whole thing was just an "error" and that it had no intention of setting up tollbooths on every website with embedded video:
1- Ecad has never had the intention to curtail the freedom on the internet, known to be a space devoted to information, dissemination of music and other creative works, and propagation of ideas. The institution also lacks a copyright billing strategy geared to embedded videos. Royalties collections for webcasting have been under re-evaluation since February 29th, and the case reported in recent days took place before then. Nevertheless, it resulted from an operational error of interpretation, which represents an isolated fact in this segment. (...)

Note that ECAD has left itself a bit of an opening for pursuing these fees in the future. Supposedly it can still go after blogs, but only if it informs Google/YouTube of its intention to do so. It seems the only error it feels it made was getting caught. Everything else was simply a clerical screw-up, and if all its ducks had been properly ordered, it would have been free to bill websites for linking to YouTube.
2- Two years ago, Ecad and Google signed a letter of intent that guides the relationship between both organizations. The document details that Ecad can collect copyright fees for music coming from embedded videos, as long as it gives advance notice to Google/YouTube. As Ecad did not send such a notification, it becomes clear that this is not its goal. If it were the case, it would have sent the notification the letter of intent requires. (...)
The letter continues with a statement that claims that Mr. Steele's office has been unable to get in touch with the recipient.

This is incredibly sloppy on Steele's part, and with errors abounding in his letters, that doesn't bode well for his lawsuits:
Odd thing is, the e-mail address that is listed under the mailing address on the letter is not the e-mail address associated with the recipient's ISP. The only way Mr. Steele's firm could obtain the address would be by asking for it during a phone call. One of the five calls which Mr. Steele's firm would like to pretend never happened.
Personally, I believe that the implications of this letter are extremely disturbing. For one, Mr. Steele's firm appears to not bother proof-reading any of its letters. Mr. Steele is comfortable with asking for thousands of dollars from people, but he can't take 10 seconds to at least review the first sentence of his settlement letters.

There's a suggestion that some of the date errors may be due to whatever software Steele is using, but that also raises questions: if the software for creating these letters is so filled with errors, is the software he uses to track what IP addresses are sharing files also riddled with errors?
Are there errors in the affidavit? If so, do they even matter? The answer is no.

Hart's reasoning is that since Homeland Security only has to show "probable cause" in its affidavit, the various errors don't matter. Now, without a doubt, the standard for probable cause is different than for guilt in a trial. But that does not mean there are no standards. He quotes various Supreme Court rulings, which grant law enforcement leeway in filing the affidavits and reaching the probable cause barriers, specifically noting that some level of mistakes is allowed. In particular, he quotes Brinegar v. United States, where the court gives law enforcement some leeway for errors:
These long-prevailing standards seek to safeguard citizens from rash and unreasonable interferences with privacy and from unfounded charges of crime. They also seek to give fair leeway for enforcing the law in the community's protection. Because many situations which confront officers in the course of executing their duties are more or less ambiguous, room must be allowed for some mistakes on their part. But the mistakes must be those of reasonable men, acting on facts leading sensibly to their conclusions of probability. The rule of probable cause is a practical, nontechnical conception affording the best compromise that has been found for accommodating these often opposing interests. Requiring more would unduly hamper law enforcement. To allow less would be to leave law-abiding citizens at the mercy of the officers' whim or caprice.

With all due respect to Hart, I believe his analysis falls short in several respects. First, I believe he greatly simplifies the overall ruling in Brinegar to a level the Court almost certainly did not intend. It does allow for some mistakes (in Brinegar it was a small one). It does not allow for massive mistakes that undermine the entire probability calculus that makes up probable cause. Obviously, it's expected that errors will sometimes be made. But, given the vast number of errors in this affidavit, combined with the seriousness of those errors, and the fact that (especially with dajaz1) they made up the very core of the probable cause argument, it would seem that the "balance" would shift against this affidavit having been properly executed.
Thus, while the general rule under the Fourth Amendment is that any and all contraband, instrumentalities, and evidence of crimes may be seized on probable cause (and even without a warrant in various circumstances), it is otherwise when materials presumptively protected by the First Amendment are involved... It is "[t]he risk of prior restraint, which is the underlying basis for the special Fourth Amendment protections accorded searches for and seizure of First Amendment materials" that motivates this rule.

This line of thinking goes back through a long, long, long line of cases, many of which repeat the famous line: "Any system of prior restraints of expression comes to this Court bearing a heavy presumption against its constitutional validity." In seizure cases where expressive speech is part of what is removed from circulation, the bar is higher than your average probable cause. That's why those errors are incredibly important, and the lack of any attempt to avoid First Amendment issues is glaring. Hart doesn't mention any of this, which I find surprising.