Techdirt. Stories filed under "errors"
Easily digestible tech news...
https://beta.techdirt.com/

Wed, 18 Mar 2020 12:50:00 PDT
Social Media Promised To Block Covid-19 Misinformation; But They're Also Blocking Legit Info Too
by Mike Masnick
https://beta.techdirt.com/articles/20200317/17521544122/social-media-promised-to-block-covid-19-misinformation-theyre-also-blocking-legit-info-too.shtml

Sing it with me, folks: content moderation is impossible to do well at scale. Over the last few weeks, all of the big social media platforms have talked about their intense efforts to block misinformation about Covid-19. It appeared to be something of an all-hands-on-deck situation for employees (mostly working from home) at these companies. Indeed, earlier this week, Facebook, Google, LinkedIn, Microsoft, Reddit, Twitter, and YouTube all released a joint statement about how they're working together to fight Covid-19 misinformation, and hoping other platforms would join in.

However, battling misinformation is not always so easy -- as Facebook discovered yesterday, when a bunch of folks started noticing that it was blocking all sorts of perfectly normal content, including NY Times stories about Covid-19. Now, we can joke all we want about some of the poor NY Times reporting, but to argue that its reporting on Covid-19 is misinformation would be, well, misinformation itself. There was some speculation -- a la YouTube's warning -- that this could be due to content moderators being sent home and not being allowed to do their content moderation duties from home over privacy concerns. But the company said that it was "a bug in an anti-spam system" and "unrelated to any changes in our content moderation workforce." Whether you buy that or not is your choice.

Still, it's a reminder that any effort to block misinformation is going to be fraught with problems and mistakes, and trying to adapt rapidly -- especially on a big (the biggest) news story with fast-changing facts and new information (and misinformation) arriving all the time -- is going to run into problems sooner or later.

from the content-moderation-is-impossible-at-scale dept
Thu, 20 Jun 2019 14:12:33 PDT
Google CEO Admits That It's Impossible To Moderate YouTube Perfectly; CNBC Blasts Him
by Mike Masnick
https://beta.techdirt.com/articles/20190618/17362542426/google-ceo-admits-that-impossible-to-moderate-youtube-perfectly-cnbc-blasts-him.shtml

Over the weekend, Google CEO Sundar Pichai gave an interview to CNN in which he admitted exactly what we've been screaming over and over again for a few years now: it's literally impossible to do content moderation at scale perfectly. This is for a variety of reasons: first off, no one agrees on what the "correct" level of moderation is. Ask 100 people and you will likely get 100 different answers (I know this, because we did this). What many people think must be mostly "black and white" choices actually involves a tremendous amount of gray. Second, even if there were clear and easy choices to make (which there are not), at the scale of most major platforms, even a tiny error rate (of either false positives or false negatives) will still be a very large absolute number of mistakes.

So Pichai's comments to CNN shouldn't be seen as controversial, so much as they are explaining how large numbers work:

"It's one of those things in which let's say we are getting it right over 99% of the time. You'll still be able to find examples. Our goal is to take that to a very, very small percentage, well below 1%," he added.

This shouldn't be that complex. YouTube's most recent stats say that over 500 hours of content are uploaded to YouTube every minute. Assuming, conservatively, that the average YouTube video is 5 minutes (Comscore recently put the number at 4.4 minutes per video) that means around 6,000 videos uploaded every minute. That means about 8.6 million videos per day. And somewhere in the range of 250 million new videos in a month. Now, let's say that Google is actually 99.99% "accurate" (again, a non-existent and impossible standard) in its content moderation efforts. That would still mean ~26,000 "mistakes" in a month. And, I'm sure, eventually some people could come along and find 100 to 200 of those mistakes and make a big story out of how "bad" Google/YouTube are at moderating. But, the issue is not so much the quality of moderation, but the large numbers.
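To make the scale concrete, here's the back-of-the-envelope arithmetic from the paragraph above as a quick script (the 500 hours/minute figure is YouTube's own stat; the 5-minute average and the 99.99% accuracy rate are the assumptions stated above):

```python
# Back-of-the-envelope math for moderation mistakes at YouTube's scale.
HOURS_UPLOADED_PER_MINUTE = 500   # YouTube's own stat
AVG_VIDEO_MINUTES = 5             # conservative; Comscore says ~4.4
ERROR_RATE = 0.0001               # a (non-existent) 99.99% accuracy

videos_per_minute = HOURS_UPLOADED_PER_MINUTE * 60 / AVG_VIDEO_MINUTES
videos_per_day = videos_per_minute * 60 * 24
videos_per_month = videos_per_day * 30
mistakes_per_month = videos_per_month * ERROR_RATE

print(f"{videos_per_minute:,.0f} videos/minute")    # 6,000
print(f"{videos_per_day:,.0f} videos/day")          # 8,640,000
print(f"{videos_per_month:,.0f} videos/month")      # 259,200,000
print(f"{mistakes_per_month:,.0f} mistakes/month")  # ~26,000
```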

Anyway, that all seems fairly straightforward, but of course, because it's Google, nothing is straightforward, and CNBC decided to take this story and spin it hyperbolically as Google CEO Sundar Pichai: YouTube is too big to fix. That, of course, is not what he's saying at all. But it's already being picked up by various folks as proof that Google is obviously too big and needs to be broken up.

Of course, what no one will actually discuss is how you would solve this problem of the law of large numbers. You can break up Google, sure, but unless you think that consumers will suddenly shift so that not too many of them use any particular video platform, whatever leading video platforms there are will always have this general challenge. The issue is not that YouTube is "too big to fix," but simply that any platform with that much content is going to make some moderation mistakes -- and, with so much content, in absolute terms, even if the moderation efforts are pretty "accurate" you'll still find a ton of those mistakes.

I've long argued that a better solution is for these companies to open up their platforms to allow user empowerment and competition at the filtering level, so that various 3rd parties could effectively "compete" to see who's better at moderating (and to allow end users to opt-in to what kind of moderation they want), but that's got nothing to do with a platform being "too big" or needing "fixing." It's a recognition that -- as stated at the outset -- there is no "right" way to moderate content, and no one will agree on what's proper. In such a world, having a single standard will never make sense, so we might as well have many competing ones. But it's hard to see how that's a problem of being "too big."

from the wait,-but-why? dept
Mon, 27 Aug 2018 09:39:50 PDT
Internet Content Moderation Isn't Politically Biased, It's Just Impossible To Do Well At Scale
by Mike Masnick
https://beta.techdirt.com/articles/20180825/23572940509/internet-content-moderation-isnt-politically-biased-just-impossible-to-do-well-scale.shtml

The narrative making the political rounds recently is that the big social media platforms are somehow "biased against conservatives" and deliberately trying to silence them (meanwhile, there are some in the liberal camp who are complaining that sites like Twitter have not killed off certain accounts, arguing -- incorrectly -- that they're now overcompensating in trying not to kick off angry ideologues). This has been a stupid narrative from the beginning, but the refrain on it has only been getting louder and louder, especially as Donald Trump has gone off on one of his ill-informed rants claiming that "Social Media Giants are silencing millions of people." Let's be clear: this is all nonsense.

The real issue -- as we've been trying to explain for quite some time now -- is that basic content moderation at scale is nearly impossible to do well. That doesn't mean sites can't do better, but the failures are not because of some institutional bias. Will Oremus, over at Slate, has a good article up detailing why this narrative is nonsense, and he points to the episode of Radiolab we recently wrote about, which digs deep into how Facebook's moderation choices happen, and where you quickly begin to get a sense of why it's impossible to do this well. I would add to that a recent piece from Motherboard, accurately titled The Impossible Job: Inside Facebook's Struggle to Moderate Two Billion People.

These all highlight a few simple facts that lots of angry people (on all sides of political debates) are having trouble grasping.

  1. If you leave a platform completely unmoderated, it will fill up with junk, spam, trolling and the like, thereby decreasing its overall utility and pushing people away.
  2. If you do decide to moderate, you have a set of impossible choices. So much content requires understanding context, and context may be very different, even for the same content when viewed by different people.
  3. If you're going to moderate at scale, you're going to need a set of "rules" that thousands of generally low-paid individuals can put into practice, reviewing each piece of content for just a few seconds (a recent report said that Facebook reviewers were expected to review 5,000 pieces of content per day; see the quick arithmetic after this list).
  4. It is impossible to write rules like that so they can easily be applied to all content. A significant percentage of content falls into gray areas, where it then becomes a judgment call by people in a cubicle in the middle of reviewing 5,000 pieces of content.
  5. At that rate, many mistakes are made. It is collateral damage of moderation at scale.
  6. People caught in the crossfire of collateral damage will rightly make a big stink about it and the social media companies will look bad.
  7. Meanwhile, some of the reasonable moderation decisions will hit trolls hard (see point 1 above) and those trolls will then take to other platforms and make a huge stink about how unfair it all is, and the social media companies will look bad.
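That "few seconds" figure in point 3 follows directly from the 5,000-pieces-per-day workload. Here's the quick arithmetic, assuming an uninterrupted eight-hour shift (my assumption, not the report's):

```python
# Seconds available per item at 5,000 pieces of content per shift.
ITEMS_PER_DAY = 5_000
SHIFT_SECONDS = 8 * 60 * 60  # optimistic: no breaks, no meetings

print(f"{SHIFT_SECONDS / ITEMS_PER_DAY:.1f} seconds per item")  # 5.8
```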
Put this all together and it is a no-win situation. You can't leave the platform completely unmoderated. But any attempt at moderation at scale is going to have problems. The "scale" part of this is what's most difficult for most people to grasp. As Kate Klonick (again, author of an incredible paper on content moderation that you should read, as well as a guest post here on Techdirt) notes in the Motherboard piece:

“This is the difference between having 100 million people and a few billion people on your platform,” Kate Klonick... told Motherboard. “If you moderate posts 40 million times a day, the chance of one of those wrong decisions blowing up in your face is so much higher.”

Later in the piece, Klonick again makes an important point:

“The really easy answer is outrage, and that reaction is so useless,” Klonick said. “The other easy thing is to create an evil corporate narrative, and that is also not right. I’m not letting them off the hook, but these are mind-bending problems and I think sometimes they don’t get enough credit for how hard these problems are.”

This is why I've been advocating loudly for platforms to move the moderation decisions further out to the ends of the network, rather than doing it in a centralized fashion. Let end users create their own moderation system, or adapt ones put together by third parties. But, of course, even that has problems as well.

No matter what choices are made, there are significant tradeoffs. As the Motherboard article also highlights, what seems like a "simple" rule gets hellishly complex quickly when applied to other situations, and then you've suddenly increased the "error" rate and people get angry all over again and the whole mess gets blown out of proportion again.

“There's always a balance between: do we add this exception, this nuance, this regional trade-off, and maybe incur lower accuracy and more errors,” Guy Rosen, VP of product management at Facebook, said. "[Or] do we keep it simple, but maybe not quite as nuanced in some of the edge cases? Balance that's really hard to strike at the end of the day.”

As the Oremus piece notes, the "bias" of platforms when it comes to moderation is not "liberal" or "conservative," it's Capitalist. Having a platform overrun with spam and trolls is bad for business. Hiring enough people who can adequately review content within the correct context is somewhere between insanely cost prohibitive and impossible. So the platforms muddle by with imperfect review processes. Making moderation mistakes is also bad for business, and the platforms would love to minimize them, but "mistakes" are often in the eye of the beholder as well, again reinforcing that this is an impossible task. For everyone screaming about how Alex Jones should be kicked off platforms, there's a similar number of people screaming about how awful the platforms are that do kick him off. There is no "right" way to do this, and that's what every platform struggles with.

And, if you think that these platforms are unfairly silencing "conservatives" (which is the prevailing narrative right now), it's probably because you're not paying enough attention elsewhere. Black Lives Matter and other civil rights groups have complained about "racially biased" moderation in the opposite direction, saying that minority groups are regularly silenced on these platforms. Indeed, it's not hard to find a ton of reports about black activists having content removed from social media platforms. And for all the talk of Infowars being taken off these platforms, how many people noticed that the Facebook page of the Venezuelan socialist TV station Telesur was recently taken down as well?

Yes, it's fine to point out that these platforms (mainly Facebook, Twitter and YouTube) are really bad at moderating. But, unless you're willing to actually understand the scale at play, recognize how many mistakes are going to be made (and recognize how trolls are going to go nuts over correct decisions), you're playing into a false narrative to argue that any of these platforms are "targeting" anyone. It's not true.

from the stop-this-dumb-narrative dept
Tue, 29 May 2018 09:29:05 PDT
DOJ, FBI Issuing Corrections To Statements, Testimony Containing Bogus Uncracked Device Numbers
by Tim Cushing
https://beta.techdirt.com/articles/20180527/18521139929/doj-fbi-issuing-corrections-to-statements-testimony-containing-bogus-uncracked-device-numbers.shtml

The FBI's push for encryption backdoors relied on ever-skyrocketing numbers of uncracked devices the agency's best and brightest just couldn't seem to access. "Look!" DOJ and FBI officials said, pointing lawmakers at charts showing an explosion in the number of locked devices over the last couple of years. Unsustainable, it seemed to say. But it was all a lie. Not a deliberate lie, maybe, but a lie nonetheless. A convenient misrepresentation of the problem caused by a software error.

How does an agency with the technical capabilities the FBI has miscount physical items? Apparently, you let software do the counting and hope for the best.

“The FBI’s initial assessment is that programming errors resulted in significant over-counting of mobile devices reported,’’ the FBI said in a statement Tuesday. The bureau said the problem stemmed from the use of three distinct databases that led to repeated counting of phones. Tests of the methodology conducted in April 2016 failed to detect the flaw, according to people familiar with the work.
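We don't know anything about the FBI's actual systems beyond "three distinct databases," but a toy sketch shows how summing rows across overlapping databases inflates a device total that deduplicating by device ID would catch (all IDs and counts here are hypothetical):

```python
# Hypothetical illustration: the same devices logged in several databases.
# Summing per-database rows counts devices repeatedly; a union does not.
case_db      = {"dev-001", "dev-002", "dev-003"}
forensics_db = {"dev-002", "dev-003", "dev-004"}
field_db     = {"dev-001", "dev-003", "dev-004"}

naive_count = len(case_db) + len(forensics_db) + len(field_db)  # 9
true_count = len(case_db | forensics_db | field_db)             # 4

print(f"naive: {naive_count}, deduplicated: {true_count}")
```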

This inflated the count from somewhere between 1,000-2,000 to nearly 8,000. That was the number used by Director Christopher Wray and AG Jeff Sessions in testimony to Congress and speeches to law enforcement. That was the center of the narrative: a number that kept growing exponentially with no end in sight.

You'd think the agency would track devices better, what with officials constantly claiming each and every phone was "tied to a threat to the American people." Something that important shouldn't be carelessly handled. But the phones "tied to threats" were overcounted, suggesting a severe problem in the FBI's tracking system that might make it difficult to figure out which "threats" each phone is "tied" to if and when it ever gets around to cracking the devices.

Now, the DOJ is in damage control mode. Robyn Greene was the first to notice the corrections made to officials' statements using the FBI's bullshit ~8,000 number. Edits to AG Sessions' recent comments at a law enforcement conference have replaced this:

Last year the FBI was unable to access investigation-related content on more than 7,700 devices…

With this:

Last year the FBI was unable to access investigation-related content on more than ** devices…

The footnote reads:

** Due to an error in the FBI's methodology, an earlier version of this speech incorrectly stated that the FBI had been unable to access 7,800 devices. The correct number will be substantially lower.

The FBI's been doing a bit of cleanup at its own site. Here's one from May 3, 2017 that has since been corrected.

In just the first half of this fiscal year, the FBI was unable to access the content of more than 3,000* mobile devices...

The FBI's footnote is far more concise and vague.

* Due to an error in methodology, this number is incorrect. A review is ongoing to determine an updated number.

It helpfully doesn't mention the FBI screwed up its own count. Nor does it state the new estimate will be "substantially lower." Cowards.

Here's another one, from September 27, 2017:

In the first 10 months of this fiscal year, the FBI was unable to access the content of more than 6,000* mobile devices…

How did this jump -- 3,000 phones in five months -- escape notice? As of November 2016, the number was only around 880 devices. Then it jumped to 3,000 six months later. Then it doubled to 6,000 in less than five months. By the end of that fiscal year four months later, the FBI had supposedly added another 1,775 uncrackable devices.

In fiscal year 2017, we were unable to access the content of 7,775* devices…

Even if the count had been accurate (which it wasn't), the numbers were delivered dishonestly. What appears to be a cumulative total of all devices the FBI had collected over the years is presented in testimony as the total number of devices the FBI couldn't unlock during a single fiscal year. This vastly overstates the severity of the issue, and while FBI officials may have been unaware the FBI's software was delivering bogus counts, that didn't stop them from inflating the problem by presenting historical cumulative numbers as a single year's total.

Whatever number the FBI finally delivers will be decidedly underwhelming. Considering it still hasn't provided Congress with details of its attempts to access supposedly uncrackable devices, the FBI has managed to decimate its own forces in the new War on Encryption. It overplayed its hand -- perhaps inadvertently -- and weakened its argument severely. Something this important to the FBI was handled carelessly -- not because the FBI doesn't really want weakened encryption, but because it will not do anything that might weaken its anti-encryption position. It never bothered to check the phone count because the number given to officials was sufficiently impressive. That mattered far more than accuracy or honesty. "Fidelity, Bravery, Integrity" my ass.

from the cleanup-on-aisle-7800 dept
Thu, 12 Oct 2017 20:04:55 PDT
Real Life Soccer Player Besieged By Requests To Play For Foreign Team Due To Video Game Error
by Timothy Geigner
https://beta.techdirt.com/articles/20171011/09261538388/real-life-soccer-player-besieged-requests-to-play-foreign-team-due-to-video-game-error.shtml

Video games have been steadily becoming more realistic since their creation. Conversations about this progress have mostly centered around graphical enhancements and tech such as virtual reality that strive to better immerse the player in the fictional world in which they play. But graphical and visual enhancements aren't the only form of realism in which video games have progressed. More unsung have been the enhancements in pure data and detail in these games. For this type of progress, one need only look to management-style simulation games, such as those of the sports realm. In games centered on managing sports franchises, the depth of detail that has emerged is somewhat breathtaking. Baseball sims, such as the excellent Out of the Park series, are an example of this, as is the equally deep Football Manager series for soccer fans.

So real, in fact, have these simulations become, that they can occasionally create real-world mishaps, as happened with a French soccer player named Ruben Aguilar.

As discussed in this interview with Goal (in French), there's a mistake in last year's version of the life-destroying management game in which Aguilar is incorrectly given dual citizenship of both France and Bolivia.

With tens of thousands of players in the game, mistakes are bound to happen from time to time, but the difference here is that it's turned into an international thing. Bolivian players of Football Manager noticed his supposed South American heritage last season, but a string of strong performances in the real world (especially against French giants PSG) this year have blown up to such an extent that he made the TV in Bolivia, with the country's national team management contacting him to inquire about the possibility of him playing for them.

The error indicated that Aguilar was available to sign with the Bolivian team under the rules of international football. How the error came about is an open question, but the fact is that Aguilar's parents are French and Spanish and he holds no citizenship, or indeed even a passport, for Bolivia. Still, the Bolivian team heeded the calls from its fans to inquire about signing Aguilar, only to learn he was not eligible to play for the team. It's worth repeating that this entire episode came about because of a single error in a single popular sports simulation game. Aguilar himself was forced to respond to all of this on his Facebook page.

For the past few weeks, we have received dozens of messages concerning the nationality of Ruben. On Facebook and Twitter many information also circulate. In order to remove doubts; by this communiqué, we affirm that Ruben was born in Grenoble (France), of Spanish father and French mother. As a result, he does not have a Bolivian passport. In any case, we thank you for all messages of support and the enthusiasm aroused to see him wearing the jersey of ‘La Verde’. ¡Muchas Gracias!

It's a funny little story, with many folks now poking fun at both the Bolivian team for not doing its homework and Football Manager for making the seemingly inconsequential error to begin with. More interesting to me, however, is how this serves as an indication of how far video games have come in terms of the realism we expect from them. So sure was the public, and even a professional soccer team, that the information in the game was accurate that all of the above acted on that information.

That's actually kind of cool.

from the irl dept
Wed, 29 Jun 2016 11:54:49 PDT
Lessons From The Downfall Of A $150M Crowdfunded Experiment In Decentralized Governance
by Zach Graves
https://beta.techdirt.com/articles/20160624/13312834815/lessons-downfall-150m-crowdfunded-experiment-decentralized-governance.shtml

Hype around blockchain has risen to an all-time high. A technology once perceived to be the realm of crypto-anarchists and drug dealers has gained increasing popular recognition for its revolutionary potential, drawing billions in venture-capital investment by the world's leading financial institutions and technology companies.

Regulators, rather than treating blockchain platforms (such as Bitcoin or Ethereum) and other "distributed ledgers" merely as tools of illicit dark markets, are beginning to look at frameworks to regulate and incorporate this important technology into traditional commerce.

That progress was challenged recently, when more than $54 million was stolen from The DAO (short for "decentralized autonomous organization") — an experimental and unregulated investment fund built on the blockchain platform Ethereum. As people realized The DAO was being drained, the ensuing panic also crashed the price of Ether (or ETH), Ethereum's cryptocurrency.

Beyond potentially making a lot of people poorer – who probably should have known better than to invest in an experimental "robotic corporation" — the theft has created a massive political rift within the blockchain community, and threatens to undermine trust in a technology described as the "trust machine". In addition, this event raises serious questions about the cybersecurity risks of distributed applications, the (lack of) enforcement of existing securities laws and the potential for increased scrutiny by regulators looking to protect unwary investors.

Prior to last week, The DAO was widely considered a phenomenal success. It enjoyed the largest crowdfunding campaign in history, raising the equivalent of more than $150 million, or about a tenth of the value of the Ethereum blockchain platform on which it was built. While you could conceivably build a DAO for anything (it is, after all, just a piece of software), The DAO was created for the purpose of funding development of the Ethereum platform and other decentralized software projects. According to its "manifesto" on daohub.org:

The goal of The DAO is to diligently use the ETH it controls to support projects that will:

• Provide a return on investment or benefit to the DAO and its members.
• Benefit the decentralized ecosystem as a whole.

In short, it was developed as a venture-capital fund and, importantly, its investors expected returns.

What is a DAO, anyway? And how does it work? Christoph Jentzsch — founder of the German company Slock.it, which helped create The DAO — explained the concept in his white paper as "organizations in which (1) participants maintain direct real-time control of contributed funds and (2) governance rules are formalized, automated and enforced using software."

As American Banker's Tanaya Macheel writes, DAOs and the smart contracts on which they are built could have a lot to offer traditional financial institutions:

In theory, distributed autonomous organizations (of which the DAO is one of the first examples) are a hardcoded solution to the age-old principal-agent problem. Simply put, backers shouldn't have to worry about a third party mismanaging their funds when that third party is a computer program that no one party controls.

At a time when the financial services industry is trying to automate old processes to cut costs, errors and friction, DAOs represent perhaps the most extreme attempt to take people out of the picture.

DAOs can be deployed on the distributed global computer of the Ethereum platform or other suitable blockchains, including private ones. One mechanism to fund them is through a "crowdsale" of DAO tokens that act like shares of stock, which is what The DAO did. Token-holders can vote on new proposals (weighted by the number of tokens a user controls) to change the structure of the DAO and alter its code. Tokens also can be traded and have an exchange-value. As The DAO's "official website" daohub.org describes it:

The DAO is borne from immutable, unstoppable, and irrefutable computer code, operated entirely by its members.
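Token-weighted voting of the kind described above is straightforward to model. A minimal sketch (the holders, balances, and votes are invented for illustration):

```python
# Each vote counts in proportion to the tokens the voter controls,
# not one-person-one-vote. All names and balances are made up.
holdings = {"alice": 400, "bob": 250, "carol": 350}
votes = {"alice": "yes", "bob": "no", "carol": "yes"}

yes_weight = sum(holdings[v] for v, choice in votes.items() if choice == "yes")
total_weight = sum(holdings.values())

print(f"yes: {yes_weight}/{total_weight} tokens")  # 750/1000 -> passes
```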
How exactly does an immutable decentralized computer get "hacked"? According to DAO developer Felix Albert, it wasn't. Unlike the failed bitcoin exchange Mt. Gox — where nearly $500 million of bitcoins were lost due to a combination of breach and fraud — the theft exploited a bug in The DAO's code that had previously gone undiscovered (or, more accurately, unfixed).

A quirk of robotic corporations is that they take their bylaws literally. Like Asimov's robots, DAOs are built with rules to govern their behavior that cannot easily be revised or overwritten once they are set in motion. Inevitably, these sometimes conflict with our preconceived ideas of how they ought to operate.

Technical analysis of the DAO theft revealed the attacker exploited a function originally designed to protect users:

The attack [on The DAO] is a recursive calling vulnerability, where an attacker called the "split" function, and then calls the split function recursively inside of the split, thereby collecting ether many times over in a single transaction.
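The actual exploit was written against The DAO's Solidity code; as a language-agnostic sketch, here's the same pattern modeled in Python: a fund that pays out before zeroing the caller's balance, and a caller whose payment handler immediately re-enters split. This is a simplified model, not The DAO's real code:

```python
# Simplified reentrancy model: funds are sent (handing control to the
# recipient's code) before the recipient's balance is zeroed out.
class VulnerableFund:
    def __init__(self, balances):
        self.balances = dict(balances)

    def split(self, member):
        amount = self.balances.get(member.name, 0)
        if amount > 0:
            member.receive(self, amount)    # recipient's code runs here...
            self.balances[member.name] = 0  # ...ledger is updated too late

class Attacker:
    def __init__(self, name, max_rounds=3):
        self.name, self.stolen, self.rounds = name, 0, max_rounds

    def receive(self, fund, amount):
        self.stolen += amount
        self.rounds -= 1
        if self.rounds > 0:
            fund.split(self)  # re-enter before the balance is zeroed

attacker = Attacker("mallory")
fund = VulnerableFund({"mallory": 100})
fund.split(attacker)
print(attacker.stolen)  # 300 -- paid three times on a 100-token balance
```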

It wasn't really a hack at all. It was human error. Making matters worse, The DAO's promoters (in this case, Slock.it Chief Operating Officer Stephan Tual) had said this kind of bug wouldn't be an issue just a few days before the theft (whoops).

Lots of potential vulnerabilities for The DAO had been discussed and it was even suggested to place a moratorium on proposals. Meanwhile, its promoters confidently asserted everything was fine:

We are assuming that the base contract is secure. This assumption is justified due to the community verification and a private security audit.

Additionally, Slock.it's blog claimed that the generic DAO framework code had been audited by a leading security firm:

We're pleased to announce that one of the world's leading security audit companies, Deja Vu Security, has performed a security review of the generic DAO framework smart contracts.

On close inspection, the only report they linked in their blog was three pages long. It's unclear whether a rigorous formal audit had ever been conducted. After the attack, people started asking for the audit report and wondering why Slock.it hadn't shared it. The security firm, Deja Vu, even responded on Reddit.

Hi Everyone, Adam Cecchetti CEO of Deja vu Security here. For legal and professional reasons Deja vu Security does not discuss details of any customer interaction, engagement, or audit without written consent from said customer. Please contact representatives from Slock.it for additional details.

Whoever was in charge of auditing the code screwed up big-time. As former Ethereum release coordinator Vinay Gupta explained on YouTube, The DAO was an experiment that was never built to handle this much risk:

We all knew as we watched this happening that this was an emperor's clothes scenario ... there was no way that that smart contract had undergone an appropriate amount of scrutiny for something that was a container for $160 million.

Sure, everyone involved should have stopped it from getting carried away. But what are the actual consequences when a decentralized extralegal robot corporation doesn't do what it's expected to? Is anyone really "in charge" of making sure it works? Is anyone on the hook if the whole thing goes down the tubes because of its creators' (or proposal authors') lack of due diligence?

For one thing, as Coin Center's Peter Van Valkenburgh explains, DAOs are likely to run afoul of existing securities law – potentially implicating their developers, promoters and investors:

The Securities Act intentionally defines "promoter" broadly: "any person that, alone or together with others, directly or indirectly, takes initiative in founding the business or enterprise of the issuer." Given the breadth of this language, developers should carefully weigh the risks of being visibly associated with the release and sale of [DAO] tokens.

Individuals deemed to be promoters of a [DAO] may be found to be in violation of Section 5(a) and 5(c) of the Securities Act. Under these sections it is unlawful to directly or indirectly offer to sell or buy unregistered securities, or to "carry" for sale or delivery after the sale an unregistered security or a prospectus detailing that security. Even if a [DAO] is deemed to be an unregistered security, it remains very unclear how promoting that [DAO] would or would not equate to these unlawful activities, and who—if anyone—would be found to have violated the law. Nonetheless, broad interpretation of these laws may potentially implicate any participant or visibly affiliated developer or advocate.

So DAO evangelists could soon be in hot water, regardless of any disclaimers they put up.

To the Securities and Exchange Commission's credit, they have thus far been relatively open to innovations like crowdfunding, as well as the potential for blockchain technology. As SEC Chairwoman Mary Jo White recently said in an address at Stanford University:

Blockchain technology has the potential to modernize, simplify, or even potentially replace, current trading and clearing and settlement operations ... We are closely monitoring the proliferation of this technology and already addressing it in certain contexts ... One key regulatory issue is whether blockchain applications require registration under existing Commission regulatory regimes, such as those for transfer agents or clearing agencies. We are actively exploring these issues and their implications.

Beyond financial regulation, the broader legal treatment of DAOs is a murky subject. With applications running on Ethereum, it's not always clear what the point of enforcement is. You can't exactly sue a DAO in court and then seize its assets. And, while The DAO's creators were in the public eye, that doesn't necessarily have to be the case; it could be deployed anonymously.

Even if DAOs are created without a formal legal status, governments may impose legal status on them. As business lawyer Stephen Palley writes at CoinDesk:

If you don't formalize a legal structure for a human-created entity, courts will impose one for you. As most lawyers will tell you: a general partnership, unless properly formalized or a deliberately created structure, is a Very Bad Thing ... [T]he members of a general partnership can end up jointly and severally liable on a personal basis for partnership obligations.

For instance, I don't think this is how the law works:

Even if the SEC or other government entity decides to crack down on DAOs, it might be easier said than done. Because they operate on pseudonymous distributed computers, the parties behind a DAO may not be easy to track down (notably, we still don't know who Satoshi Nakamoto is). Even if you did track them down, they might not have any control over the DAO or know what it was doing. Its code also may have been radically altered from its original programming/intent.

But as far as The DAO is concerned, are we in for a slew of lawsuits or calls for SEC action by disgruntled investors? Not so fast. Investors in The DAO may yet be able to recover their losses.

Various prominent stakeholders in the Ethereum community, from Ethereum inventor Vitalik Buterin to Slock.it's Christopher Jentzsch, have suggested that the only sensible solution is to create a "fork" of the Ethereum network that could freeze the attacker's stolen funds and shut down The DAO, with the option to create a “hard fork” to fully reverse the theft and return investors' funds. Some have criticized this approach as a “bailout” or “asserting centralized control.” But it's worth noting that it would require a plurality of miners to adopt it voluntarily; whether they will remains to be seen.

Either way, Ethereum's credibility may be adversely affected. On the one hand, people need to trust that smart-contracts do what they are supposed to — particularly where millions of dollars are on the line. On the other hand, the credibility of the platform is also tied to its immutability. If developers and miners collude to reverse transactions they don't like, that sets a bad precedent.

Additionally, if the community decides The DAO's investors need to take a haircut, it could open up a Pandora's box of legal troubles for its developers and promoters (and maybe even miners and investors), potentially stifling advancement of this important technology.

But wait a minute. Why didn't the attacker see this coming? Surely if he was sufficiently sophisticated to find a "recursive call" bug, he would have known that split funds would be locked away for 27 days — giving the community time to get wise to his activities and find a solution like the fork.

As previously mentioned, The DAO theft also crashed ETH prices. Savvy readers will note that a DAO vulnerability doesn't mean the Ethereum platform itself was compromised (any more than a nasty bug in Photoshop means that everyone with Windows 10 is at risk).

Was it possible this whole event was a ruse to pull off a "big short", as one user suggests on Reddit? As of now, there's no proof of that, but it's an interesting theory.

But was this even a theft at all? As Slock.it's representative said, "code is law!" If the code doesn't do what you think it does — that's your fault. At least, that's the theory behind an anonymous letter uploaded to Pastebin and purportedly authored by The DAO's attacker:

I have carefully examined the code of The DAO and decided to participate after finding the feature where splitting is rewarded with additional ether. I have made use of this feature and have rightfully claimed 3,641,694 ether, and would like to thank the DAO for this reward. It is my understanding that the DAO code contains this feature to promote decentralization and encourage the creation of "child DAOs".

I am disappointed by those who are characterizing the use of this intentional feature as "theft". I am making use of this explicitly coded feature as per the smart contract terms and my law firm has advised me that my action is fully compliant with United States criminal and tort law.

Adding that:

I reserve all rights to take any and all legal action against any accomplices of illegitimate theft, freezing, or seizure of my legitimate ether, and am actively working with my law firm. Those accomplices will be receiving Cease and Desist notices in the mail shortly.

If the fork moves forward to freeze or seize the attacker's digital assets, could that open up the broader Ethereum community and its miners to legal liability? We'll have to wait and see what happens.

Regardless of how The DAO "theft" is resolved, regulators shouldn't be in a rush to impose stricter regulations on Ethereum (which is just a platform), on DAOs in general, or even on The DAO specifically, should it be reincarnated with better security practices.

While The DAO attack raises serious questions about the viability of creating this "DAO 2.0", that doesn't mean we should stop it from happening. Whether or not you believe all the hype about Ethereum being as important as the invention of the internet, it's an exciting technology that's worth giving the opportunity to grow.

Unlike Bitcoin, which has been around for eight years, Ethereum is only a year old. It officially launched in July 2015, but is already the second-largest cryptocurrency by market capitalization. It's vastly more complex than Bitcoin and still in its infancy; it will have inevitable growing pains on the way to maturity.

The internet wasn't built in a day, and neither will smart-contract technology come to fruition overnight; it needs a permissive regulatory environment in which to grow, much like the one the Clinton administration's Framework for Global Electronic Commerce provided for the internet.

Certainly, vetting DAO code (particularly new proposals) is a big problem. More fundamentally, smart-contract security is an emerging area to which people are rightly turning their attention, following the lessons of The DAO attack. As Ethereum developer Peter Borah writes:

In his response to the bug, Slock's COO expressed shock, referring to it as "unthinkable", and pointing to the "thousands of pairs of eyes" that somehow missed this. It's certainly hard to blame anyone for being shaken by the sudden disappearance of tens of millions of dollars. However, this natural reaction hides the simple truth that anyone who has dabbled in programming knows: bugs in programs are far from unthinkable — they are inevitable.

Making code open-source is not enough. We need mechanisms to create smarter (i.e., fault-tolerant) smart contracts. This could mean more rigorous independent testing, strategies to implement better development practices or, at least, more time to develop through trial-and-error in a lower-risk context. Stakeholder interests also must be aligned to make sure appropriate vetting happens, particularly where voting on code alterations is involved and particularly if we want to develop more complex autonomous programs.

The DAO is an instance of people getting carried away with an exciting new technology, while not effectively managing the new cybersecurity risks that come with it. But just because a group of people screwed up The DAO, it doesn't mean all DAOs are DOA.

While there's an overabundance of utopian thinking in this space, blockchain-based experiments in decentralized governance and peer-to-peer commerce could have immense benefits that offer truly revolutionary potential. Regulators should continue to take a wait-and-see approach and not use this as an invitation to try to shut them down or impose harsh new regulations.

from the in-the-ether dept
Wed, 8 Jun 2016 12:46:00 PDT
Facebook Is Flagging/Banning Accounts For Posting An Admittedly Strange Children's Book Illustration
by Timothy Geigner
https://beta.techdirt.com/articles/20160606/05432734633/facebook-is-flagging-banning-accounts-posting-admittedly-strange-childrens-book-illustration.shtml

I'll admit that very few things in this existence we all share give me as much pleasure as poking at the prudish censorship employed by Facebook. The overly broad puritanical guidelines, theoretically designed to save our sensitive eyes from anything as horrible as a breast or a penis, often instead result in the censorship of parody, renowned artwork, and bronze statues. That sincere but misguided attempt to keep things PG on its site is inherently funny, but nearly as funny is the fact that the following image was (rather innocently) included in one of a collection of children's books in France, entitled Images of Ponies and Horses.


Right about now you're thinking that you just witnessed a French how-to manual on having a horse be all that it can be inside of you. But it isn't! Honest! What this actually is is an attempt by the illustrator to show how similar the bone structures of human beings and horses are, by aligning their respective physiologies in this way. A rep from the publisher told BuzzFeed:

"Obviously, we never wanted to shock our readers with that drawing," a Fleurus spokesperson told BuzzFeed. "We publish educational books and make realistic or explanatory illustrations. In that case, our goal was to make the child visually comprehend that the bone structure of the horse and the human being are similar," they said. "Putting them in the same position makes the likening more understandable and concrete."

So, we have an unfortunately designed illustration that was supposed to be educational now going viral entirely because of the context our own dirty minds add to the image. BuzzFeed wrote a post on the image, only to find that -- you guessed it -- Facebook had begun flagging the article as it was being shared on the site.

Well, things got even weirder with this horse thing. Facebook seems to be flagging this article — the one you're reading right now — as pornography. And then, after Facebook removes this article from your feed, it makes you go through your photos and verify that none of them are pornographic. In fact, Facebook's moderators seem to find this horse picture so inappropriate, a member of BuzzFeed's social media team received a 24-hour ban from posting on BuzzFeed's Facebook page.

And this is why Facebook should get out of the morality business to every last degree possible. An article about a hilarious, but innocent, educational illustration is being flagged, users are being hassled about their other photos on the site, and some folks are even getting banned. Because? Well, because it appears that Facebook moderators have the same perverse baseline psyche as the rest of us, resulting in an image comparing the physiology of a man and a horse becoming suspected horse-man-porn. And the article pointing out what the image actually is is the one that got flagged. That's as much of a failure of this sort of thing as we could hope for.

from the no-leg-to-stand-on dept
Fri, 15 Apr 2016 10:37:00 PDT
How Bad Are Geolocation Tools? Really, Really Bad
by Andrew "K'Tetch" Norton
https://beta.techdirt.com/articles/20160413/12012834171/how-bad-are-geolocation-tools-really-really-bad.shtml

Geolocation is one of those tools that the less technically minded like to use to feel smart. At its core it's a database, showing locations for IP addresses, but like most database-based tools, the old maxim of GIGO [Garbage In, Garbage Out] applies. Over the weekend Fusion's Kashmir Hill wrote a great story about how one geolocation company has sent hundreds of people to one farm in Kansas for no reason other than laziness. And yes, it's exactly as bad as it sounds.

Most people aren't especially technically minded: give them a tool, tell them it CAN produce an output, and they'll assume that whatever output it produces IS the best one available. It's extremely common with 'forensic evidence' and jurors in court cases, where such evidence is given weight well beyond its actual evidentiary value (to the point that jurors now distrust cases without it) – there's even a name for it, "the CSI effect", named after one of the TV shows that uses forensics as a cornerstone.

One of the latest tools to get the blind trust of morons is IP Geolocation. At its basic level, it's a database of IP addresses with latitude and longitude listed, so when you look up an IP address, you get a pair of coordinates you can associate as an 'origin' for that.

However, there are a number of problems with that:

  • First, what about those that don't have a lat/long listed?
  • Second, how often are they updated?
  • Third, how do they deal with cellular or 'mobile' devices?

So let's quickly address them.

Those that don't have a lat/long listed.

Well, there are a few ways to handle it, but the way some choose is just to guess. The article that started me on this points out that the company MaxMind decided to guess at the closest average place it could – the geographical center of the US. Except 39°50'N 98°35'W is a messy decimal (39.8333333 N, 98.585522 W), so MaxMind rounded it to 38N, 97W. That's the front yard of a farm in Kansas.

Other times they just guess and get a town and put it somewhere there, although even that can be off a bit. It can be a lot off, as you'll see shortly.
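For a sense of how much damage that tidying-up does, here's a quick haversine check of the distance between the true geographic center and the rounded default (coordinates are the ones from the paragraph above):

```python
from math import asin, cos, radians, sin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 \
        + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

center = (39.8333333, -98.585522)  # geographic center of the contiguous US
default = (38.0, -97.0)            # MaxMind's rounded-off default

print(f"{haversine_km(*center, *default):.0f} km")  # ~246 km of rounding error
```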

How often are they updated?

There's no telling. With the great shortage of IPv4 addresses now, but with an ever-expanding list of devices, from cell phones to thermostats and even fridges, IP addresses are shifting around everywhere. There's also mergers and splits of companies, bankruptcies and so on. So unless the database is frequently updated, there's no chance that anything it has to say will be accurate – again we'll see that directly.

Finally, how does it deal with cellular devices?

Simply put, they don't. The handoff mechanism means that you'll often carry one IP address from one tower to the next (otherwise you'd have to terminate and restart any data transfer as you shifted between towers). In addition, most cellular providers hide their cell customers behind NAT, precisely because of the lack of discrete IPv4 addresses to give out (and their… slowness in migrating to IPv6).

Odds are you're going to get a local network control center, or regional corporate office instead, which means it's practically no use at all.

Oh dear....

This all assumes as well that entries are made in good faith. One of the more common uses of geolocation is for targeted adverts, especially with 'adult websites', where they promise there's a horny woman (or man, if your browsing is detected as such, or the 'content' suggests you may be female) close by. Or you may have seen it in the scam adverts on news sites that should know better than to accept low-rate advertising based on scams (with easy-to-spot clickbait headlines about insurance 'tricks' or similar).

This means that if you can 'rig' the database, you can expose the stupidity in parts of it, as was best demonstrated by Randall Munroe in his XKCD comic series.

So just how inaccurate are these systems? The easiest way to tell by far is to run some IP addresses where you know the location through these systems and see how far off they can be. So I did.

The most obvious one to start with is my own home connection's IP address. So I tried the link in the story, and boy was it off! Just for the record, I live on the south side of Atlanta's metro area, near Macon – Walking Dead country, in fact.

That's right, it put me in Ottawa, the capital of Canada, roughly 1,900 km (1,180 miles) and one whole country off. Part of that comes from the second question: how current the data is. It's listing my IP as belonging to Nortel Networks. Problem is, I'm not a subscriber to Nortel – no one is; the company was wound down years ago. Yet some databases still have it listed.

Cellphones don't fare much better either. I used the same service on a 4G Verizon phone sitting at my computer. Its location: San Diego. That's 1,900 miles (3,050 km) off. Other services gave locations of New York, Atlanta, and Macon.

Wondering if it's just my semi-rural system that's messed up, I called a few friends who live in the Atlanta suburbs (a few streets from each other) and asked for their IP addresses; one uses Comcast, the other AT&T. Maybe things would be better and more accurate in a big-city environment?

I ran the addresses through a number of different GeoIP services, and it was a very mixed bag of results. One thing's certain though: none of the four sets of coordinates gave an accurate location for the person (for obvious reasons I'm not going to give you their addresses, or mine for that matter).

Of them all, only one service – IPCIM.com – gave an error circle with its location (a twenty-five-mile radius), but it didn't do so for every lookup. To me, that indicates the service knows how inaccurate it is; the circle's absence at other times suggests it just doesn't care.

The second and third locations are the same coordinates, but they're less certain of the third than the second, despite both being off.

There's also something specific to note. There are four providers covered here. Two were tested from the exact same location, yet their reported locations came nowhere near matching. Two more were IP addresses just streets apart, but they also didn't match that well, although many went to the same default locations, including two which went to the 'lazy US center' investigated in the Fusion piece.

More importantly, of the 30+ geolocation attempts made here, not a single one managed to be within a mile of the actual location (although one was within a mile and a half, and another within 3 miles – again, I'm not going to give out specifics). So for those who want to rely on them as a source of where something is, the simple answer is "don't". This applies as much to those tracking down people leaving spammy comments as it does to police officers and lawyers seeking to use them in court actions, criminal or civil.

In fact lawyers and the police have absolutely NO excuse to use these kinds of databases in litigation at all as there are better, more accurate tools at their disposal – the courts themselves. In criminal cases a warrant is the preferred method, obtaining subscriber information from the ISP (fixed or cellular) which is far more accurate than any geolocation service because it's data coming from the entity actually providing the connection. In a civil trial you have a discovery subpoena to do pretty much the same thing and for the same reasons.

If you're doing it 'on your own', remember that these tools are as accurate as taking a dart and throwing it not at a map on the wall, but at a Google Maps display on your computer screen. Sure, you'll be out a display, but you won't be potentially facing criminal charges when you go to act on what is basically bullshit data. At the very best, geolocation can be used as advisory information, but it can be INCREDIBLY off, sometimes by thousands of miles.

Data

The services used are listed in the first column of the results table below.

There were four IP addresses used -- three residential and one cellular -- covering four of the biggest ISPs in the US.

IP addresses

  • 32.99.122 (Charter fixed line cable internet connection – K`Tetch)
  • 193.166.88 (Verizon 4G cellular connection – K`Tetch )
  • 137.147.28 (Comcast fixed line cable internet connection – James)
  • 172.126.144.9 (AT&T gigapower fixed line internet connection, less than 6 months old – David)

The first two were located in south metro Atlanta, near Macon. David and James are located approximately half a mile apart in north Cobb county, Georgia.

Raw coordinates

Service | Charter | Verizon | Comcast | AT&T
checkIP.org | 45.4167, -84.3246 | 32.7977, -117.1322 | NOT TESTED | BLANK RESULT
IP2Location | 33.95621, -83.98796 | 32.55376, -83.88741 | 34.02342, -84.61549 | 34.02342, -84.61549
IPinfo.io | 32.8685, -84.3246 | 32.8975, -83.7536 | 34.0247, -84.5033 | 38.0000, -97.0000
EurekAPI | 32.8685, -84.3246 | 33.7981, -84.3877 | 34.1015, -84.5194 | 34.0247, -84.5033
DB-IP | 33.9562, -83.988 | 40.7128, -74.0059 | 33.9413, -84.5177 ("Marietta (bedroom)") | 33.8545, -84.2171
IPCIM.com | 32.8685, -84.3246 (± 25 mile) | NOT TESTED | 34.0247, -84.5033 | 34.0247, -84.5033 (± 25 mile)
MaxMind (geoLiteCity) | 32.8685, -84.3246 | 32.8975, -83.7536 | 34.0247, -84.5033 | 38, -97
MaxMind (GeoIP2) | 32.8685, -84.3246 | 33.7844, -84.2135 | 34.0247, -84.5033 | 34.0247, -84.5033

If you'd rather see them on a map, they're here. (Legend: Charter in green, Verizon in red, Comcast in blue, AT&T in yellow)

NOTE: One data source was extremely interesting in its provision of 11+ decimal places in its results. While this might seem to imply accuracy, it actually underscores how inaccurate the service is. Eight decimal places gives a resolution of 1.1 millimeters – about the thickness of a CD/DVD. The 11 decimal places given in all their results goes to extremes, with locations specified to less than a hair's thickness. The figures have been rounded down.
The "Marietta (bedroom)" label was actually on the output from their database.

I would like to thank David and James for their help with this. And for obvious reasons, we have forced changes in IP addresses for all our connections (and the release of this article was delayed to ensure that).

This is a repost from Andrew Norton's Politics & P2P blog

from the what-a-mess dept
Wed, 30 Jul 2014 20:37:00 PDT
Using Spreadsheets In Bioinformatics Can Corrupt Data, Changing Gene Names Into Dates
by Glyn Moody
https://beta.techdirt.com/articles/20140727/03133828025/using-spreadsheets-bioinformatics-can-corrupt-data-changing-gene-names-into-dates.shtml

A few years back, people were rather disturbed to find out about the famous Excel bug, whereby the multiplication of two numbers in Microsoft's spreadsheet gave the wrong number (Excel 2007 displayed the result of calculations like 850 × 77.1 as 100,000 rather than the correct 65,535). It turns out there are other circumstances in which Excel (and, to be fair, presumably other spreadsheets) can give incorrect results, but they are unlikely to be encountered in typical everyday tasks. However, in the specialized world of bioinformatics, which uses computers to analyze data about genes and related areas, careless use of spreadsheets can throw up a significant number of errors, as this paper in BMC Bioinformatics explains:
Use of one of the research community's most valuable and extensively applied tools for manipulation of genomic data can introduce erroneous names. A default date conversion feature in Excel (Microsoft Corp., Redmond, WA) was altering gene names that it considered to look like dates. For example, the tumor suppressor DEC1 [Deleted in Esophageal Cancer 1] was being converted to '1-DEC.'
Here we have the interesting interaction of two very different fields, where the name of a gene involved in esophageal cancer, DEC1, was interpreted by Excel to mean the date, 1 December. As the paper points out, these kinds of substitution errors are already to be found in key public databases:
DEC1, a possible target for cancer therapy, was incorrectly rendered, and it could potentially be missed in downstream data analysis. The same type of error can infect, and propagate through, the major public data resources. For example, this type of error occurs several times in even the immaculately curated LocusLink database.
As that notes, a gene that might be relevant for treating cancer could well be missed because of this incorrect conversion to a date by Excel. Although it is unlikely that any serious harm has been caused by this -- yet -- it's a useful reminder of the dangers of depending a little too heavily on the results of software without checking for corruption of this kind.
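
If you work with gene lists yourself, a trivial check can flag symbols that have already been mangled. Here's a minimal sketch (the file and column names are hypothetical placeholders, standard library only) that looks for values matching Excel's day-month date output:

```python
# Flag gene "names" in a CSV column that look like Excel's day-month date
# output (e.g. "1-DEC", "2-MAR"), which usually means a symbol such as
# DEC1 or MARCH2 was silently converted. File and column names here are
# hypothetical placeholders.
import csv
import re

DATE_LIKE = re.compile(
    r"^\d{1,2}-(JAN|FEB|MAR|APR|MAY|JUN|JUL|AUG|SEP|OCT|NOV|DEC)$",
    re.IGNORECASE,
)

with open("gene_list.csv", newline="") as fh:
    for row in csv.DictReader(fh):
        name = row["gene_symbol"].strip()
        if DATE_LIKE.match(name):
            print(f"Possible Excel date conversion: {name!r}")
```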

Follow me @glynmoody on Twitter or identi.ca, and +glynmoody on Google+

]]>
careful,-now https://beta.techdirt.com/comment_rss.php?sid=20140727/03133828025
Mon, 28 Apr 2014 03:35:00 PDT Driver Finds Himself Surrounded By Cops With Guns Out After Automatic License Plate Reader Misreads His Plate Tim Cushing https://beta.techdirt.com/articles/20140423/18531427012/driver-finds-himself-surrounded-cops-with-guns-out-after-automatic-license-plate-reader-misreads-his-plate.shtml https://beta.techdirt.com/articles/20140423/18531427012/driver-finds-himself-surrounded-cops-with-guns-out-after-automatic-license-plate-reader-misreads-his-plate.shtml Automatic license plate readers can scan plates at a rate of one per second. Nationwide, several hundred million plate/location records have been captured and stored by a variety of contractors. Mathematics alone says mistakes will be made. Except when mistakes are made with ALPRs, they tend to put citizens on the bad side of men with guns.
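
To make the "mathematics alone" point concrete, here's a back-of-the-envelope sketch; the scan volume echoes the "several hundred million" figure above, while the misread rate is an invented assumption purely for illustration:

```python
# Back-of-the-envelope arithmetic: even a tiny misread rate becomes a
# large absolute number of bad reads at ALPR scale. Both numbers below
# are illustrative assumptions, not measured figures.
plate_reads_per_year = 300_000_000   # "several hundred million" records
misread_rate = 0.001                 # assume 1 read in 1,000 comes back wrong

misreads = plate_reads_per_year * misread_rate
print(f"~{misreads:,.0f} misread plates per year")
# Even if only a tiny fraction of those misreads happens to match a
# stolen-vehicle hot list, that is still a steady stream of bad stops.
```
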
According to the Prairie Village Post, earlier this month lawyer Mark Molner was driving through a Kansas City suburb on his way home from his wife’s sonogram. All of a sudden, his BMW was blocked in front by a police car as another officer on a motorcycle pulled up behind him. (His pregnant wife witnessed the incident from a nearby parked car.)

According to what Molner told the Post, one of the officers then approached his car with his gun out.

“He did not point it at me, but it was definitely out of the holster,” Molner told the Post. “I am guessing that he saw the shock and horror on my face, and realized that I was unlikely to make (more of) a scene.”
The mistake prompting this guns-drawn approach of Molner's vehicle could have been made by anybody. The ALPR read a "7" as a "2" and returned a hit for a stolen vehicle. The hit also returned info for a stolen Oldsmobile, which clearly wasn't what Molner was driving. But that could mean the plates were on the wrong vehicle, which is also an indication of Something Not Quite Right.

The PD's statement on the incident is fairly sensible and measured.
“The officer has discretion on whether or not to unholster his weapon depending on the severity of the crime. In this case he did not point it at the driver, rather kept it down to his side because he thought the vehicle could possibly be stolen. If he was 100 percent sure it was stolen, then he would have conducted a felony car stop which means both officers would have been pointing guns at him while they gave him commands to exit the vehicle.”
That makes sense, but there's still a chance this situation could have been averted. Molner's plate triggered the hit several miles before he was pulled over as pursuing police were unable to verify the plate due to traffic density. But it appears the officers made a last-minute decision to perform the unverified stop shortly before Molner would have driven out of the PD's jurisdiction. The stop occurred on the city/state boundary between Kansas and Missouri.

This lack of verification is what bothers Molner.
“I’m armchair quarterbacking the police, which is not a good position to be in,” Molner told the Post. “But before you unholster your gun, you might want to confirm that you’ve got the people you’re looking for.”
So, when the plate reader kicked back a bad hit, the cops did attempt to verify the plate, but it looks very much like they overrode procedural safeguards in order to prevent possibly losing a collar.

As these plate readers become more common, the number of erroneous readings will increase. If the verification safeguards are followed, problems will be minimal. But if anyone's in a hurry... or the vehicle description is too vague... or it's night... or someone's had a bad/slow day... or if the end of the month is approaching and the definitely-not-a-quota hasn't been met… bad things will happen to good people.

Placing too much faith in an automated system can have terrible consequences. Molner came out of this without extra holes, electricity or bruises. Others may not be so lucky.

]]>
automatic-bullet-catcher-creator https://beta.techdirt.com/comment_rss.php?sid=20140423/18531427012
Tue, 28 Jan 2014 11:01:35 PST New York Times Suffers Redaction Failure, Exposes Name Of NSA Agent And Targeted Network In Uploaded PDF Tim Cushing https://beta.techdirt.com/articles/20140128/08542126021/new-york-times-suffers-redaction-failure-exposes-name-nsa-agent-targeted-network-uploaded-pdf.shtml https://beta.techdirt.com/articles/20140128/08542126021/new-york-times-suffers-redaction-failure-exposes-name-nsa-agent-targeted-network-uploaded-pdf.shtml It appears as if the New York Times, in its latest publication of leaked NSA documents, failed to properly redact the PDF it uploaded, exposing the name of the NSA agent who composed the presentation as well as the name of a targeted network.

Cryptome seems to have been the first site that noticed the redactions that actually weren't, issuing a couple of tweets that informed its followers of this fact. This led to Bob Cesca at the Daily Banter turning the NYT's error into an anti-Snowden rant (which I found via F-Secure's blog) that decried everyone involved while "virtuously" refusing to name the entity that had discovered the poorly-done redactions (but including the uncredited tweets in full for easy searching).
As soon as the article was posted, someone from or associated with a popular cryptography website claims to have downloaded a pdf of the Snowden document from The New York Times and discovered that three of the redactions that were intended to obscure sensitive national security information were easily accessible by highlighting, copying and pasting the text. The poorly-redacted file was subsequently posted to the cryptography website, then promoted via Twitter. (We’re not going to post the name of the website that posted the file to protect the information contained within.)

So, the identity of an NSA agent is out there in public view within the same document in which a target of this program is named. All of this is due to the incompetence of whoever failed to properly redact the pdf before publishing it for the world to see — as well as for the aforementioned cryptography site to nab and republish it.

This was bound to happen at some point in this ongoing saga: the name of an American agent has been leaked to the public via a document stolen by Edward Snowden. To add to the irresponsibility of how Snowden went about this operation, he distributed untold thousands of documents to a gaggle of technological neophytes who barely understand how to use Adobe Acrobat, much less the phenomenally complicated details of top secret NSA operations.
Cesca somehow feels the privacy of a single NSA agent trumps the public's interest in infringements on their own privacy -- not just here in the US but all over the world. Certainly, the New York Times should have made sure its redactions were actually redactions before publishing the document, but Cesca's hyperbolic attack isn't doing his side any favor.

One agent's name was exposed, one who may not even be employed by the agency at this point. (The documents are from 2010.) The target revealed is nothing more than Al Qaeda's "branch operation" in Mosul, Iraq. Al Qaeda has been the focus of counterterrorism efforts since before the 9/11 attacks and the revelation that the NSA is targeting mobile networks in Mosul shouldn't come as a shock to anybody, least of all Al Qaeda members.

This doesn't excuse the NYT's carelessness, however. It is disseminating some very sensitive NSA documents and should be ensuring any information it chooses to withhold stays withheld. But this error doesn't invalidate Snowden's exposure of the NSA's programs, no matter how Cesca (and those like him) spin it.

The NSA and other government agencies have suffered redaction failures as well, accidentally exposing information they would rather have withheld from the public. Does the government get held to the same standard by the NSA's booster club? Hardly. Humans make mistakes, no matter which side of this issue they're on.

[The original document uploaded by the NY Times is posted below (via Cryptome). To see the unredacted text, simply click on the Text tab.]
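
For those wondering how a "redacted" document gives up its secrets so easily: the usual failure is a black rectangle drawn on top of the text, with the text objects themselves left untouched in the file. A minimal sketch (hypothetical filename, using the pypdf library) shows how any text extractor pulls the covered text right back out:

```python
# When a "redaction" is just a black rectangle drawn over the page, the
# text objects underneath remain in the PDF and come straight back out
# of any text extractor. The filename is a hypothetical placeholder.
from pypdf import PdfReader

reader = PdfReader("redacted_document.pdf")
for page_number, page in enumerate(reader.pages, start=1):
    print(f"--- page {page_number} ---")
    print(page.extract_text())  # includes text sitting "under" the black boxes
```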

]]>
make-sure-to-dot-all-i's-and-blot-out-all-sensitive-info https://beta.techdirt.com/comment_rss.php?sid=20140128/08542126021
Thu, 1 Aug 2013 03:35:11 PDT Cameron's Anti-Porn Program Tells ISPs To Do The Impossible: Only Block Bad Content; Don't Block Good Content Tim Cushing https://beta.techdirt.com/articles/20130731/19260124033/camerons-anti-porn-program-instructs-isps-to-do-impossible-prevent-unintentionally-blocking-legitimate-content.shtml https://beta.techdirt.com/articles/20130731/19260124033/camerons-anti-porn-program-instructs-isps-to-do-impossible-prevent-unintentionally-blocking-legitimate-content.shtml The Great Internet Porn Firewall of Britain is now in full effect and, contrary to earlier reports, the no-porn filter will be mandatory even for smaller, "boutique" ISPs. How this will play with Andrews & Arnold, the ISP inviting customers seeking internet filtering to check with North Korea, remains to be seen.

All "questionable content" boxes are to be pre-ticked to provide maximum sanitization, per UK policy, and if someone wishes for a less censored internet experience, they'll have to go through the trouble of informing their ISP that they are indeed a responsible adult capable of handling NSFW material.

In addition, the UK government wants a guarantee that legitimate content won't accidentally get sucked into the filter. How it imagines this will be accomplished remains a mystery. I doubt anyone in Parliament will be staying up late trying to solve this problem as the government has decided to "allow" the ISPs to figure it out on their own.
Finally, DCMS demand ISPs give them magic beans (“We want industry to continue to refine and improve their filters to ensure they do not – even unintentionally – filter out legitimate content”) and threaten them with regulation if they do not answer to future demands, or “maintain momentum”.
There's nothing quite like a faith-based technological platform crafted by a crack team of professional busybodies and bureaucrats, especially one that assumes the only fuel needed is good intentions and the "momentum" will sustain itself into perpetuity. OR ELSE.

The not-so-veiled threat on the end really drives the point home. What happens if the ISPs fail to deliver the impossible with their inability to prevent something that is by definition unpreventable? What are the consequences of failing to "maintain momentum" or "proactiveness" or whatever term the government is using to redefine "doing what they're told?" The "strategy guide" spells it out this way.
And while Government looks to the industry to deliver, through the self-regulatory mechanisms already established under UKCCIS, we are clear that if momentum is not maintained, we will consider whether alternative regulatory powers can deliver a culture of universally-available, family-friendly internet access that is easy to use.
Jesus. That's frightening. If ISPs don't march in lockstep with Cameron's orders, they'll simply be beaten into shape by restrictive government mandates that ensure "a culture of universally-available, family-friendly internet access." If that doesn't sound like a slightly kinder, gentler version of any totalitarian regime's homegrown "internet," then I didn't just throw up a little in my mouth while typing out that quote.

Why would the government threaten to set up its own internet, one dangerously low on a.) blackjack and b.) hookers? For the children, of course. Every form of media, not just the internet, is subject to these guidelines.
This should be underpinned by a basic, common set of media standards, building on existing standards that already apply in many places. We would expect this to include:

• Protection of minors: including protecting children’s exposure to material that seeks to sexualise them, strong sexual content, violence, imitable and dangerous behaviour, any specific health priorities, safety of children in content and protecting against commercial influence.
Well, Cameron might want to contact the Daily Mail and ask if it's willing to stop sexualizing minors, something it's never been shy about doing even if the front page is making all sorts of noise about rampant child pornography. I'm sure Cameron will also be clamping down on advertisers who push products pretty much anywhere they can, aimed at the wide open wallets of teens and tweens (or ultimately, their parents). (P.S. Have the cast of Jackass shot.)

The UK government's neverending quest to turn the internet into a Disney-esque wonderland where no one sees anything they don't want to and are never even mildly insulted is pathetic. And disturbing. Cameron's plans infantilize the nation's children and adults, treating them both as precious bundles of stupidity too incompetent to make their own decisions on appropriate content.

If Cameron's ultimate goal is to govern a nation of infants, he's well on his way. But he's going to find the behavior behind the disturbing images will continue on unabated. His solutions will work about as well as slapping band-aids on someone bleeding internally. At some point down the road, he or his successors will triumphantly point at the unstained bandages as proof of their effectiveness. And if something should actually mar the surface, the call will go out for bigger bandages -- and more of them.

]]>
stop-not-controlling-things-you-can't-control! https://beta.techdirt.com/comment_rss.php?sid=20130731/19260124033
Mon, 6 May 2013 12:27:34 PDT Copyright Trolls So Sloppy They Sue The Same Guy Multiple Times Mike Masnick https://beta.techdirt.com/articles/20130502/11360322928/copyright-trolls-so-sloppy-they-sue-same-guy-multiple-times.shtml https://beta.techdirt.com/articles/20130502/11360322928/copyright-trolls-so-sloppy-they-sue-same-guy-multiple-times.shtml suing over the same IP address for sharing the same file (the animated movie, Zambezia) in multiple cases. The story focuses on one guy, who has filed a motion to quash in response, noting that the sloppiness of filing three times raises significant questions about the trolling operation. Either they're incredibly sloppy and not very careful, or they're hoping that by repeating the same IP address in multiple lawsuits, at least one judge will let the subpoena go through, leading to the inevitable demand letter. Either way, it should raise some eyebrows from the court about why anyone would file against the same IP address for the same movie in three different cases. ]]> i'm-sure-tat's-effective https://beta.techdirt.com/comment_rss.php?sid=20130502/11360322928 Fri, 19 Apr 2013 16:30:00 PDT It's Not About Whether Amateur Internet Journalism Is Good Or Bad, But That It Happens And Will Continue To Happen Mike Masnick https://beta.techdirt.com/articles/20130419/15484422771/its-not-about-whether-amateur-internet-journalism-is-good-bad-that-it-happens-will-continue-to-happen.shtml https://beta.techdirt.com/articles/20130419/15484422771/its-not-about-whether-amateur-internet-journalism-is-good-bad-that-it-happens-will-continue-to-happen.shtml the basics of the story, which has allowed the usual crew of folks who hate the concept of "citizen journalism" or whatever it's called today to whine about how awful "Reddit" journalism is. Defender of legacy newspapers, Ryan Chittum, seemed particularly gleeful in calling out that Reddit "fails again," and saying that the mainstream media did it right.

Except, that's ridiculous. Mathew Ingram points out that people attacking Reddit for this are missing the point, which is true by a wide, wide margin. First of all, as he notes, mainstream news folks also got parts of the story wrong. As we noted yesterday, the mainstream TV folks got a hell of a lot wrong. Hell, the NY Post even put the wrong two guys on the cover and falsely claimed that the feds were seeking them.

But the bigger problem is this idea that it's "Reddit" or, as some people have argued, "the internet" against the legacy media. That's not true at all. Everyone made mistakes during the rapidly changing story, but only on Reddit did you actually see the details of the process. The legacy news organizations present things as if coming from a place of authority, while Reddit is like an open newsroom where anyone can jump in. The conversation about Tripathi, for example, was about whether or not Suspect #2 was him -- it wasn't based on a declaration that it absolutely was him. Furthermore, when you look at the reason why the story actually spread, it was after some more known "press" names retweeted the initial tweet from Greg Hughes, which claimed (incorrectly) that Tripathi's name went out on the police scanner (ironically, he posted that about a minute after posting "This is the Internet's test of 'be right, not first' with the reporting of this story").

But here's the real issue: people can fret about all of this, but it doesn't change one thing: this is going to happen and continue to happen. People are naturally curious and they're going to talk to people when there's a news story going on and they'll try to figure things out. That happens all the time in newsrooms already before stuff goes on the air or is officially published. It's just that the public doesn't see the process. On Reddit, or anywhere else that the public can converse, it does happen in public. The problem is to assume the two things are the same. Furthermore, it's even more insane to blame "Reddit" or "the internet" as if those are singular entities that anyone has control over. They're not. As Karl Bode noted, they're just massive crowds of people.

An even better point was made by Charles Luzar, who noted that "the crowd doesn't implicitly profess its empirical correctness like the media does," but rather admits quite openly that it's a process in action. Further, he notes that even if the crowd presents false information before finding factual information, that's still "effective crowdsourcing" and, if anything, provides a greater role to the media to be effective curators of the actual facts.

In the end, it seems likely that this incident will actually help a lot the next time there's a big breaking news story, because (hopefully) it will give people more reason to be at least somewhat skeptical of stories coming out, but it's not going to change the fact that groups on various platforms are going to talk about things, and often try to do a little sleuthing themselves. Sometimes they'll get it right, and sometimes they won't -- just the same as many others. It seems like a much better focus looking forward is in providing more training and tools to help the world be better at it. ]]>
look-forward,-not-back https://beta.techdirt.com/comment_rss.php?sid=20130419/15484422771
Wed, 17 Apr 2013 15:59:00 PDT 'Intellectual Bulwark' Of Austerity Economics Collapses Because Of Three Major Errors Glyn Moody https://beta.techdirt.com/articles/20130417/02263922736/intellectual-bulwark-austerity-economics-collapses-because-three-major-errors.shtml https://beta.techdirt.com/articles/20130417/02263922736/intellectual-bulwark-austerity-economics-collapses-because-three-major-errors.shtml Amongst economists and those who draw on their thinking, the names Reinhart and Rogoff are well known for work published under the title "Growth in a Time of Debt," which sought to establish the relationship between public debt and GDP growth. The key result, that median growth rates for countries with public debt over 90% of GDP are about one percent lower than otherwise, and that the mean growth rate is much lower still, has been cited many times, and invoked frequently to justify austerity economics -- the idea being that if the public debt is not reduced, growth is likely to suffer badly.

Given the economic, political and social importance of that finding, many have tried to reproduce it, but failed. A post by Mike Konczal on The Next New Deal blog explains how three researchers finally succeeded -- with surprising consequences:

In a new paper, "Does High Public Debt Consistently Stifle Economic Growth? A Critique of Reinhart and Rogoff," Thomas Herndon, Michael Ash, and Robert Pollin of the University of Massachusetts, Amherst successfully replicate the results. After trying to replicate the Reinhart-Rogoff results and failing, they reached out to Reinhart and Rogoff and they were willing to share their data spreadsheet. This allowed Herndon et al. to see how Reinhart and Rogoff's data was constructed.

They find that three main issues stand out. First, Reinhart and Rogoff selectively exclude years of high debt and average growth. Second, they use a debatable method to weight the countries. Third, there also appears to be a coding error that excludes high-debt and average-growth countries. All three bias in favor of their result, and without them you don't get their controversial result.
In his post, Konczal goes on to give a good explanation of just what went wrong. Correcting those three major errors produces the following result:
So what do Herndon-Ash-Pollin conclude? They find "the average real GDP growth rate for countries carrying a public debt-to-GDP ratio of over 90 percent is actually 2.2 percent, not -0.1 percent as [Reinhart-Rogoff claim]." [UPDATE: To clarify, they find 2.2 percent if they include all the years, weigh by number of years, and avoid the Excel error.] Going further into the data, they are unable to find a breakpoint where growth falls quickly and significantly.
That is, not only is there no significant difference between countries whose public debt-to-GDP ratio is over 90%, and those with much lower values, there is apparently no critical number above which growth falls catastrophically. Put another way, from the corrected research, there does not seem to be any reason why the public debt-to-GDP ratio cannot keep on rising while preserving normal levels of growth. That clearly runs entirely contrary to the current dogma that public debt must be reduced at all costs in order to keep growth at a healthy level. As the authors of the new paper conclude (pdf):
RR's [Reinhart and Rogoff's] findings have served as an intellectual bulwark in support of austerity politics. The fact that RR's findings are wrong should therefore lead us to reassess the austerity agenda itself in both Europe and the United States.
That debate about public debt reduction and the need for austerity measures certainly won't stop just because a key justification for the approach has been found to be completely wrong. But it's worth noting that alongside the major political ramifications of this new finding, there is another, rather less contentious, conclusion to be drawn.

The three errors in the original work by Reinhart and Rogoff finally came to light when they allowed other researchers to examine their model and the data they employed in it. It then became clear that the model was flawed, and that not all the relevant data had been included in the calculation. Neither was obvious from the result alone.

This reinforces a point we have made before. Alongside the results of their work, academics also need to release the datasets and any mathematical/computational models that they have used to derive them. Without those additional resources, it is not possible for other researchers to reproduce the results, which may -- as turns out to be the case for Reinhart and Rogoff's famous paper -- contain fundamental errors that completely undermine the conclusions drawn from them.
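
As a toy illustration of how much those choices can matter, here's a sketch with entirely synthetic numbers (not Reinhart and Rogoff's actual dataset) showing how excluding rows and switching between country-weighted and year-weighted averages moves the headline figure:

```python
# Toy illustration with synthetic numbers (NOT the Reinhart-Rogoff data):
# how excluding rows and changing the weighting scheme moves an "average
# growth" figure, which is why released data and code matter.
growth_by_country = {
    "A": [2.5, 2.7, 2.6],   # three high-debt years of ordinary growth
    "B": [-1.0],            # a single bad year
    "C": [2.0, 2.2],
}

# Weight every country equally (one vote per country).
country_mean = sum(sum(v) / len(v) for v in growth_by_country.values()) / len(growth_by_country)

# Weight every country-year equally.
all_years = [g for years in growth_by_country.values() for g in years]
year_mean = sum(all_years) / len(all_years)

# "Accidentally" exclude country A's rows entirely.
excluded = [g for c, years in growth_by_country.items() if c != "A" for g in years]
excluded_mean = sum(excluded) / len(excluded)

# Three noticeably different "average growth" numbers from the same data.
print(round(country_mean, 2), round(year_mean, 2), round(excluded_mean, 2))
```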

Follow me @glynmoody on Twitter or identi.ca, and on Google+

]]>
getting-it-wrong https://beta.techdirt.com/comment_rss.php?sid=20130417/02263922736
Thu, 3 Jan 2013 08:26:53 PST Congress So Dysfunctional, It Can't Even Fix The Errors It ADMITS It Made In Patent Reform Mike Masnick https://beta.techdirt.com/articles/20130102/12173821549/congress-so-dysfunctional-it-cant-even-fix-errors-it-admits-it-made-patent-reform.shtml https://beta.techdirt.com/articles/20130102/12173821549/congress-so-dysfunctional-it-cant-even-fix-errors-it-admits-it-made-patent-reform.shtml became law. This was a "patent reform" proposal that had been debated and changed and debated some more for about seven years before finally getting approved in a greatly watered down fashion. We criticized the bill for doing almost nothing to deal with the real problems of the patent system, but there were some incredible, fundamental, blatant mistakes in the final bill. You'd think that with seven years of debate and tweaking, such mistakes would have been whittled away. The first clue to some serious problems was in an analysis by Mark Lemley soon after the bill was approved in which some drafting errors were apparent just in looking at the "effective dates" of various parts of the bill.

Over time, it became clear that Congress had left significant errors in. Recently some of the key people behind the bill admitted that there were errors in the bill, with Eli Lilly's General Counsel, Bob Armitage, stating: "There are a few minor errors in the bill and one major error in the bill." What's the "major error"? It's the part on "estoppel" in "post grant review." Basically, there's a provision in the bill which encourages people to seek "post grant review" of questionable patents in the first nine months after they've been approved. In talking about this, Congress was clear that it wanted to encourage more people to use this, and so it wanted to remove barriers. One of those was to make it clear that failing to raise issues during the post grant review shouldn't prevent those issues from being raised later. However, the actual language of the bill says that any issue that "could have been raised" can't be raised later.

As law professors Eric Goldman and Colleen Chien note, it's clear that Congress didn't mean to include this language. The committee report on the bill and direct quotes from both House and Senate sponsors of the bill (Lamar Smith and Patrick Leahy) admitted that this was a mistake:

By all accounts, in the AIA, Congress intended to remove the "could have been raised" language and provide a narrower estoppel for PGR proceedings. As the Congressional committee report explains, the PGR was designed to "remove current disincentives to current administrative processes." But something funny happened on the way to the Congressional floor, and the problematic "could have been raised" language was inadvertently inserted into the bill.

We're not the only ones to recognize the error. House Judiciary Chairman Lamar Smith referred to the AIA's PGR estoppel standard as "an inadvertent scrivener's error." Senate Judiciary Chairman Patrick Leahy, in advocating that the Senate adopt the technical corrections bill, said the PGR estoppel standard in AIA was "unintentional," and it was "regrettable" the technical corrections bill doesn't address the issue. Sen. Leahy expressed "hope we will soon address this issue so that the law accurately reflects Congress's intent." The PTO also thinks Congress made a mistake, saying "Clarity is needed to ensure that the [PGR] provision functions as Congress intended."

To fix some of the errors in the AIA, Congress rushed through a "technical corrections" bill intended to address some of those problems. During all the fiscal cliff mess, with some back and forth between the House and Senate, they approved this bill, which will be signed any moment, if it hasn't been already.

Just one problem. For a bill about technical fixes, it didn't actually address this one *admitted* major error in the original bill. Yeah, they left that one out.

Let's recap, because this is quite incredible:
  1. Congress spends seven years debating patent reform.
  2. It finally approves patent reform in late 2011, and despite seven years of debate, had a ton of clear errors in the drafting of the bill.
  3. The official sponsors of the bill flat out admit that there's a major error in a part of the bill that they did not intend to be in there.
  4. A year plus later, Congress finally introduces a bill to "fix problems" in the original bill.
  5. This "technical corrections" bill does not fix the one major problem that all admit was a flat out mistake in the original bill.
And people wonder why Congress' approval rating is so low. ]]>
incredible https://beta.techdirt.com/comment_rss.php?sid=20130102/12173821549
Tue, 18 Sep 2012 14:57:00 PDT Anti-Medical Marijuana Committee Fails To Register Published URL, Hilarity Ensues Tim Cushing https://beta.techdirt.com/articles/20120914/13581620386/anti-medical-marijuana-committee-fails-to-register-published-url-hilarity-ensues.shtml https://beta.techdirt.com/articles/20120914/13581620386/anti-medical-marijuana-committee-fails-to-register-published-url-hilarity-ensues.shtml
It's election season, a time when man's (and more recently, woman's) thoughts turn towards shutting off the TV, radio and phone until mid-November. But! Things must be voted on, including such controversial issues as legalizing medical marijuana and authorizing dispensaries. As an opponent of weed-based medicines, you vow to fight this with every ounce/gram of your being. You set your plan in action.

1. Pick a name for your committee. ("No on Question 3")
2. Pick out a suitable URL ("votenoonquestion3.org")
3. Get your committee and its pertinent information added to the official voters' guide (both print and online.)
4. Register URL.
5. Become aghast.

Can anyone point out where Vote No on Question 3 went wrong? Here are some visual aids, taken from votenoonquestion3.org:

You see, the internet is like magic. And like most magic, it can be used for entertainment purposes. All the do-gooding in the world doesn't amount to much if you forget to register your URL. While you're busy enjoying that "new ink" smell of freshly printed Voter's Guides, someone quicker on the draw is undermining your "marijuana is bad" propaganda proselytizing information with hilariously over-the-top headlines. 

The good news is that the online voters' guide sports the corrected URL: mavotenoonquestion3.com

The bad news is that the paper version will carry the old URL permanently. Of course, very few people are willing to type in a URL by hand, but as news of this blunder spreads, the fake site with the real URL will be receiving much more attention, voters' guide correction or no.

Here's the official reaction from No on Question 3 spokesman, Kevin Sabet:

"It's funny and upsetting, I guess, at the same time."
Yeah. Largely the first part. And to think, the committee can't even blame a late afternoon smokeout for the mental slip.

This statement, however, seems both more on point and more disingenuous:
The group sent out a press release saying proponents of medical marijuana were tampering with the democratic process through “underhanded efforts.”
Sabet admits the committee made a mistake and yet, the press release attempts to paint No on Question 3 as the victim of villainous pot smokers rather than treating it like the self-inflicted wound it is.

Oh, and here's more bad news for the "No" side:
The Globe notes that the No on Question 3 campaign has managed to collect all of $600 so far, compared to the $1 million or so that supporters of the initiative have received from Peter Lewis, a longtime patron of drug policy reform.
Maybe it's time to admit your fears of a weed-loaded America are overblown, especially when you've just been outmaneuvered (and outspent) by a bunch of stoners.

]]>
you-can't-like,-OWN-a-URL,-man https://beta.techdirt.com/comment_rss.php?sid=20120914/13581620386
Thu, 26 Jul 2012 12:18:57 PDT WSJ Still Hasn't Corrected Its Bogus Internet Revisionist Story, As Vint Cerf & Xerox Both Claim The Story Is Wrong Mike Masnick https://beta.techdirt.com/articles/20120726/03471619840/wsj-still-hasnt-corrected-its-bogus-internet-revisionist-story-as-vint-cerf-xerox-both-claim-story-is-wrong.shtml https://beta.techdirt.com/articles/20120726/03471619840/wsj-still-hasnt-corrected-its-bogus-internet-revisionist-story-as-vint-cerf-xerox-both-claim-story-is-wrong.shtml Wall Street Journal opinion piece by its former publisher, L. Gordon Crovitz, in which he made some fantastically false claims about the origins of the internet. What was noteworthy was that while the WSJ got the story so totally wrong, lots of others, including bloggers, leapt into the fray to explain why Crovitz was wrong. Almost everyone he sourced or credited to support his argument that the internet was invented entirely privately at Xerox PARC, rather than through the government-funded work in which Vint Cerf helped create TCP/IP, has spoken out to say he's wrong. And that list includes both Vint Cerf, himself, and Xerox. Other sources, including Robert Taylor (who was there when the internet was invented) and Michael Hiltzik, have rejected Crovitz's spinning of their own stories.

Basically, anyone and everyone is telling the WSJ that it got this story totally and completely wrong. You might think the WSJ would start making some corrections. Instead, it's made one single correction:
That was a pretty minor correction, involving Crovitz being confused about how blockquotes work in HTML. But what about all of the other factual errors, including whoppers like saying that Tim Berners-Lee invented hyperlinks? Of course, considering the very premise of the article and nearly all of its supporting factoids were in error, it raises questions about how you do such a correction, other than crossing out the whole thing and posting a note admitting to the error (none of which has yet been done). Given the widespread discussion online about these errors -- both in blogs and in traditional media -- it seems like the company's silence about the whole thing is just making the problem worse. Why won't the WSJ step up and issue a real correction on all of the errors? ]]>
how do you correct a story that's almost entirely wrong? https://beta.techdirt.com/comment_rss.php?sid=20120726/03471619840
Thu, 28 Jun 2012 03:01:00 PDT Yet Another (Yes Another) Error In Megaupload Case: Search Warrants Ruled Illegal Mike Masnick https://beta.techdirt.com/articles/20120628/00065919518/yet-another-yes-another-error-megaupload-case-search-warrants-ruled-illegal.shtml https://beta.techdirt.com/articles/20120628/00065919518/yet-another-yes-another-error-megaupload-case-search-warrants-ruled-illegal.shtml highly questionable. And since then, we've seen that it wasn't just the legal theories that were problematic, but nearly everything about the case, including a bunch of procedural issues. There's been lost evidence and plans to destroy evidence. There have been procedural errors that knocked out a restraining order for being improperly filed, as well as the failure to properly serve the company, which may lead the case against the company (but not the individuals) to be dropped entirely. On top of that, the US has been acting as if this is all pretty straightforward, but has already been surprised to discover that the New Zealand government won't simply rubberstamp the extradition.

The latest update may create an even bigger headache for the US in its crusade against Kim Dotcom and Megaupload. High Court judge Helen Winkelmann has ruled that the search warrants used to seize Kim Dotcom's property... were illegal. Yeah, that's going to present a problem for the US. She also ruled that the FBI broke the law in taking data from Dotcom's computers out of the country. But the illegal warrants are the big deal here:
She said the search warrants were invalid because they were general warrants which lacked specificity about the offence and the scope of the items to be searched for.

Without a valid warrant, police were trespassing and exceeded what they were lawfully authorised to do.

Justice Winkelmann said no one had addressed whether police conduct also amounted to unreasonable search and seizure, but her preliminary view was that it did.
In other words, it's not only entirely possible that the government won't even be able to use anything from what they seized in a case, but they may, themselves, be in trouble for breaking the law and violating Dotcom's privacy rights.

The specific problem? The warrant did not actually state what US laws were supposedly broken -- which is kind of important, especially since this was about a case in the US and a person in New Zealand. If it's not made clear that the warrant is under US laws, then it "would no doubt cause confusion to the subjects of the searches...they would likely read the warrants as authorising a search for evidence of offences as defined by New Zealand law."

So not only do we have a weak case, the whole process in the case has been a complete joke and may mean that the US is unable to use much of the evidence it collected, can't extradite Dotcom and... has little actual basis to move forward with a lawsuit. Honestly, I'm somewhat amazed at the number of mistakes by the feds in such a case. It increasingly feels like they did this because they felt the need to "do something" right after the effort to pass SOPA and PIPA stalled out -- and in their rush to make Hollywood like them again, the feds didn't bother to actually pay much attention to the details. Sometimes it's "creative" to color outside the lines. At other times, it's called cooking up a case on trumped up charges for political reasons. ]]>
keystone kops https://beta.techdirt.com/comment_rss.php?sid=20120628/00065919518
Fri, 20 Apr 2012 17:33:00 PDT Mobile Phones Might Not Interfere With Planes, But They Sure Can Interfere With Pilots Mike Masnick https://beta.techdirt.com/articles/20120419/15363418568/mobile-phones-might-not-interfere-with-planes-they-sure-can-interfere-with-pilots.shtml https://beta.techdirt.com/articles/20120419/15363418568/mobile-phones-might-not-interfere-with-planes-they-sure-can-interfere-with-pilots.shtml had to abort the landing after realizing he forgot to lower the landing gear, because he was too busy responding to text messages. For whatever reason, the pilots shut off the autopilot, but then got distracted with text messages.
Somewhere between 2500 feet and 2000 feet, the captain's mobile phone started beeping with incoming text messages, and the captain twice did not respond to the co-pilot's requests.

The co-pilot looked over and saw the captain "preoccupied with his mobile phone", investigators said. The captain told investigators he was trying to unlock the phone to turn it off, after having forgotten to do so before take-off.

At 1000 feet, the co-pilot scanned the instruments and felt "something was not quite right" but could not spot what it was.
There followed a series of errors, with the pilot and the co-pilot not communicating with each other -- the pilot trying to drop the wheels as the co-pilot prepared to abort the landing -- and then both pilots becoming confused about their actual altitude. Oh, and then there was the fact that the flaps were set incorrectly.

I'm not necessarily one to bemoan the way people get obsessed with text messaging these days, but I generally think that if you're flying a commercial airplane, and taking it in for landing... it shouldn't be that hard to know that it's a good idea to not worry about your phone for five minutes. ]]>
okay,-perhaps-pilots-should-be-barred-from-texting https://beta.techdirt.com/comment_rss.php?sid=20120419/15363418568
Wed, 18 Apr 2012 10:25:00 PDT Guess What? Most Cybercrime 'Losses' Are Massively Exaggerated As Well Mike Masnick https://beta.techdirt.com/articles/20120417/03595418520/guess-what-most-cybercrime-losses-are-massively-exaggerated-as-well.shtml https://beta.techdirt.com/articles/20120417/03595418520/guess-what-most-cybercrime-losses-are-massively-exaggerated-as-well.shtml massively inflated. It appears that others are figuring this out as well. The NY Times has an op-ed piece from two researchers, Dinei Florencio and Cormac Herley, highlighting how all the claims of massive damages from "cybercrime" appear to be exaggerated -- often by quite a bit:
One recent estimate placed annual direct consumer losses at $114 billion worldwide. It turns out, however, that such widely circulated cybercrime estimates are generated using absurdly bad statistical methods, making them wholly unreliable.

Most cybercrime estimates are based on surveys of consumers and companies. They borrow credibility from election polls, which we have learned to trust. However, when extrapolating from a surveyed group to the overall population, there is an enormous difference between preference questions (which are used in election polls) and numerical questions (as in cybercrime surveys).

For one thing, in numeric surveys, errors are almost always upward: since the amounts of estimated losses must be positive, there’s no limit on the upside, but zero is a hard limit on the downside. As a consequence, respondent errors — or outright lies — cannot be canceled out. Even worse, errors get amplified when researchers scale between the survey group and the overall population.
This is pretty common. In the first link above, we wrote about how a single $7,500 "loss" was extrapolated into $1.5 billion in losses. The simple fact is that, while such things can make some people lose some money, the size of the problem has been massively exaggerated. As these researchers note, this kind of thing happens all the time. They point to an FTC report, where two respondents alone provided answers that effectively would have added $37 billion in total "losses" to the estimate.
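
A small sketch shows the mechanism; every number here is invented purely for illustration:

```python
# Invented numbers, purely to illustrate the extrapolation problem: one
# outlier response, scaled from a small survey to the whole population,
# swamps the national loss estimate.
population = 250_000_000          # adults "represented" by the survey
survey_size = 1_000
reported_losses = [0] * 990 + [50] * 9 + [7_500]   # one respondent claims $7,500

scale = population / survey_size
estimate_with_outlier = sum(reported_losses) * scale
estimate_without = sum(reported_losses[:-1]) * scale

print(f"${estimate_with_outlier:,.0f} vs ${estimate_without:,.0f}")
# The single $7,500 answer adds 250,000 x $7,500 = $1.875 billion to the
# "national" figure on its own -- and since losses can't be negative,
# there is no offsetting error in the other direction.
```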

This doesn't mean that the problems should be ignored, just that we should have some facts and real evidence, rather than ridiculous estimates. If the problem isn't that big, the response should be proportional to that. Unfortunately, that rarely happens. In fact, combining this with the recent ridiculous stories about the need for "cybersecurity," perhaps we can start to estimate just how much of an exaggeration in FUD the prefix "cyber-" adds to things. I'm guessing it's at least an order of magnitude. Combine bad statistical methodology with the scary new interweb thing, and you've got the makings of an all-out moral panic. ]]>
because they're not losses https://beta.techdirt.com/comment_rss.php?sid=20120417/03595418520
Tue, 13 Mar 2012 06:13:22 PDT Brazilian Performance Rights Group Claims Collecting From Bloggers Was Simply An 'Operational Error' After Google Pushes Back Tim Cushing https://beta.techdirt.com/articles/20120312/12002918079/brazilian-performance-rights-group-claims-collecting-bloggers-was-simply-operational-error-after-google-pushes-back.shtml https://beta.techdirt.com/articles/20120312/12002918079/brazilian-performance-rights-group-claims-collecting-bloggers-was-simply-operational-error-after-google-pushes-back.shtml charge a non-profit blog over $200 a month for embedding Youtube and Vimeo videos, and implicitly threatening to similarly bill other blogs. ECAD claimed that not only was this allowed by Brazil's currently-standing laws but that, despite collecting hundreds of thousands of dollars from Youtube itself every year, this new set of fees would not be a double-dip.

How quickly things change, especially for entities who find themselves staring down an angry internet. At first, ECAD seemed disturbingly untroubled by the uproar, including the memeification of its intention to stretch the definition of "public performance" to include all audible sound. But it suddenly changed its prohibitively expensive tune when hundreds of thousands of dollars were at stake.

None other than Google Brazil itself issued a blog post stating that ECAD's existing agreement with Youtube did not allow the agency to collect fees from bloggers, pointing out the obvious to ECAD's wilfully obtuse representatives:

These sites don't host or transmit any content when they associate a YouTube video to their site, and as such, the fact of embedding videos from YouTube can't be treated as a ‘retransmission'. As these sites aren't performing any music, ECAD can't, within the law, collect any payment from these.
Having been smacked down by its main benefactor, ECAD issued a statement of its own, claiming the whole thing was just an "error" and that it had no intention of setting up tollbooths on every website with embedded video:
1- Ecad has never had the intention to curtail the freedom on the internet, known to be a space devoted to information, dissemination of music and other creative works, and propagation of ideas. The institution also lacks a copyright billing strategy geared to embedded videos. Royalties collections for webcasting have been under re-evaluation since February 29th, and the case reported in recent days took place before then. Nevertheless, it resulted from an operational error of interpretation, which represents an isolated fact in this segment. (...)

2- Two years ago, Ecad and Google signed a letter of intent that guides the relationship between both organizations. The document details that Ecad can collect copyright fees for music coming from embedded videos, as long as it gives advance notice to Google/YouTube. As Ecad did not send such a notification, it becomes clear that this is not its goal. If it were the case, it would have sent the notification the letter of intent requires. (...)
Note that ECAD has left itself a bit of an opening for pursuing these fees in the future. Supposedly it can still go after blogs but only if it informs Google/Youtube of its intention to do so. It seems the only error it feels it made was getting caught. Everything else was simply a clerical screw-up and if all ducks had been properly ordered, it would have been free to bill websites for linking to Youtube.

As it stands now, ECAD has backed completely away from this plan. But, once the furor dies down and recedes into the past, I wouldn't be surprised to see this sort of tactic deployed again, if not by ECAD, then certainly by another "aspirational" performance rights organization.

(Hat tip to Techdirt's own Glyn Moody and his amazing Twitter feed. He's asked you all very nicely to follow him and this post is an example of why you should. So, follow this link to do exactly that.)

 

]]>
and-i-thought-google-was-just-there-to-screw-lowly-creatives https://beta.techdirt.com/comment_rss.php?sid=20120312/12002918079
Fri, 8 Apr 2011 13:44:31 PDT Mass Infringement Lawyer: Never Mind The Facts, Just Pay Up Mike Masnick https://beta.techdirt.com/articles/20110408/03285313826/mass-infringement-lawyer-never-mind-facts-just-pay-up.shtml https://beta.techdirt.com/articles/20110408/03285313826/mass-infringement-lawyer-never-mind-facts-just-pay-up.shtml actually understand the law, seems to be at it again. Chris sends in a report of how Steele has been sending out pre-settlement letters with totally screwed up dates, talking about a correspondence allegedly from 2007 and a lawsuit from 2006, neither of which appear to be accurate.

There are also other factually dubious statements in the letter:
The letter continues with a statement that claims that Mr. Steele's office has been unable to get in touch with the recipient.

Odd thing is, the e-mail address that is listed under the mailing address on the letter is not the e-mail address associated with the recipient's ISP. The only way Mr. Steele's firm could obtain the address would be by asking for it during a phone call. One of the five calls which Mr. Steele's firm would like to pretend never happened.
This is incredibly sloppy on the part of Steele, and with errors abounding in his letters that doesn't bode well for his lawsuits:
Personally, I believe that the implications of this letter are extremely disturbing. For one, Mr. Steele's firm appears to not bother proof-reading any of its letters. Mr. Steele is comfortable with asking for thousands of dollars from people, but he can't take 10 seconds to at least review the first sentence of his settlement letters.
There's a suggestion that some of the date errors may be due to whatever software Steele is using, but that also raises questions: if the software for creating these letters is so filled with errors, is the software he uses to track what IP addresses are sharing files also riddled with errors? ]]>
it's-what-year-now? https://beta.techdirt.com/comment_rss.php?sid=20110408/03285313826
Tue, 4 Jan 2011 11:42:50 PST Yes, The Legal & Technical Errors In Homeland Security's Domain Seizure Affidavit Do Matter Mike Masnick https://beta.techdirt.com/articles/20101229/01381312444/yes-legal-technical-errors-homeland-securitys-domain-seizure-affidavit-do-matter.shtml https://beta.techdirt.com/articles/20101229/01381312444/yes-legal-technical-errors-homeland-securitys-domain-seizure-affidavit-do-matter.shtml debunk some of the posts from here on Techdirt concerning Homeland Security's seizure of domain names. Specifically, the posts that he claims to be debunking are the three posts I made highlighting the technical and legal errors in the affidavit ICE special agent Andrew Reynolds used to get a warrant to seize the domains, as well as the post which highlighted how all four songs he named to get "probable cause" for the seizure of the popular DJ blog dajaz1.com were all sent legally for the purpose of promotions.

Hart claims that if there are any errors in the affidavit they don't matter:
Are there errors in the affidavit? If so, do they even matter? The answer is no.
Hart's reasoning is that since Homeland Security only has to show "probable cause" in its affidavit, the various errors don't matter. Now, without a doubt, the standard for probable cause is different than for guilt in a trial. But that does not mean there are no standards. He quotes various Supreme Court rulings, which grant law enforcement leeway in filing the affidavits and reaching the probable cause barriers, and specifically noting that some level of mistakes are allowed. Specifically, he quotes Brinegar v. United States, where the court gives law enforcement some leeway for errors:
These long-prevailing standards seek to safeguard citizens from rash and unreasonable interferences with privacy and from unfounded charges of crime. They also seek to give fair leeway for enforcing the law in the community's protection. Because many situations which confront officers in the course of executing their duties are more or less ambiguous, room must be allowed for some mistakes on their part. But the mistakes must be those of reasonable men, acting on facts leading sensibly to their conclusions of probability. The rule of probable cause is a practical, nontechnical conception affording the best compromise that has been found for accommodating these often opposing interests. Requiring more would unduly hamper law enforcement. To allow less would be to leave law-abiding citizens at the mercy of the officer's whim or caprice.
With all due respect to Hart, I believe his analysis falls short on a variety of different factors. First, I believe he greatly simplifies the overall ruling in Brinegar to a level that the Court almost certainly did not intend. It does allow for some mistakes (in Brinegar it was a small one). It does not allow for massive mistakes that undermine the entire probability equation that makes up probable cause. Obviously, it's expected that sometimes errors will be made. But, given the vast number of errors in this affidavit, combined with the seriousness of those errors, and the fact that (especially with dajaz1) they made up the very core of the probable cause argument, it would seem that the "balance" would shift against this affidavit having been properly executed.

Furthermore, among the Supreme Court quotes that Hart uses to support his argument is the idea that mistakes are okay because the affidavits are done "in the midst of haste of a criminal investigation." There was no urgency here, however. These sites had all been operating for years, and there was no likelihood that they would suddenly disappear. There was no reason for haste, and thus, less of an excuse for the sort of errors which may be acceptable under other circumstances. Even Hart admits that the "leeway" is about "the realities of law enforcement." The realities in this case were that there was no such urgency, and thus the mistakes are less excusable than they might be elsewhere.

More serious than this is the fact that Hart seems to ignore the specifics of what was seized and why. He notes, accurately, that seizure is much like an arrest, done prior to a trial, but (conveniently) leaves out the basis for seizures, which is supposed to be about preventing the destruction of evidence. As the Court notes in Heller v. New York, the purpose of content-based seizures is "preserving it as evidence." As we have already noted, that makes little sense in this situation, as the domain names would not and could not be "destroyed," in any meaningful manner -- and it's easy to copy the contents of the site to preserve that as evidence. Agent Reynolds' explanation for why a seizure was necessary was that he was afraid that some third party might somehow get the domain name and continue the criminal copyright infringement, ignoring that an injunction could easily prevent that, and the actual likelihood of that scenario happening was close to nil.

However, the biggest flaw in Hart's argument is that he focuses solely on the issue of probable cause for warrants, and pays no attention to the key issue that we brought up: how seizing full domain names without an adversarial hearing, based on a series of legal and technical errors is almost certainly prior restraint, and a violation of the First Amendment. As was made quite clear in Fort Wayne Books, Inc. v. Indiana, when a seizure involves issues of protected speech, a higher bar is required:
Thus, while the general rule under the Fourth Amendment is that any and all contraband, instrumentalities, and evidence of crimes may be seized on probable cause (and even without a warrant in various circumstances), it is otherwise when materials presumptively protected by the First Amendment are involved... It is "[t]he risk of prior restraint, which is the underlying basis for the special Fourth Amendment protections accorded searches for and seizure of First Amendment materials" that motivates this rule.
This line of thinking goes back through a long, long, long line of cases, many of which repeat the famous line: "Any system of prior restraints of expression comes to this Court bearing a heavy presumption against its constitutional validity." In seizure cases where expressive speech is part of what is removed from circulation, the bar is higher than your average probable cause. That's why those errors are incredibly important, and the lack of any attempt to avoid First Amendment issues is glaring. Hart doesn't mention any of this, which I find surprising.

Finally, Hart closes his post (somewhat out of character for him) by suggesting our motivations for highlighting the problematic nature of the affidavit, arguing that we really don't care about the errors, and our posts are really just another way of attacking copyright law. I would suggest that Hart focus his analysis on legal issues, rather than playing amateur psychologist. My problem with the seizures is not about copyright law (though, I obviously have serious concerns about copyright law as well), but with the clear issue of a violation of the First Amendment. Separately, while Hart seems fine with it (as do the courts), I remain seriously troubled by the entire seizure process, which is widely abused, in cases where it has nothing to do with taking possession of evidence that might otherwise disappear. Playing those concerns down because there's a copyright element to this and I'm a critic of copyright law as it stands today is simply inaccurate, and seems like a cheap shot designed -- unfairly -- to attack my credibility on the situation at hand. ]]>
apologists-gone-mad https://beta.techdirt.com/comment_rss.php?sid=20101229/01381312444
Tue, 14 Dec 2010 06:16:56 PST Weighing The Benefits And Costs Of DRM: Type I & Type II Errors Mike Masnick https://beta.techdirt.com/articles/20101109/22502811788/weighing-the-benefits-and-costs-of-drm-type-i-type-ii-errors.shtml https://beta.techdirt.com/articles/20101109/22502811788/weighing-the-benefits-and-costs-of-drm-type-i-type-ii-errors.shtml excellent analysis of why DRM is "toxic to culture," it occurred to me that one of the main areas of disagreement concerning DRM is over disagreements over the types of "errors" that DRM creates. Phipps compares subways in France and Germany -- with the French subways involving a barrier which only rises if you insert a ticket. The idea here is, obviously, to prevent people from getting on the train without paying. In other words, they want to stop those who should not be in the set from riding the train. They're trying to minimize Type II errors (get on the train when they shouldn't be on the train). However, there are "costs." The barriers cost money and need to be maintained. The technology may break at times, requiring repairs and blocking legitimate ticketholders from getting on their train (a Type I error). Law enforcement is needed to monitor the barriers to watch for "gate jumpers."

In Germany, the system is much more open -- there are no barriers, and no one may ever check your ticket. However, every so often tickets do get checked (somewhat randomly), at which point you would get fined for not having the proper ticket. This minimizes a different type of error -- where someone who has paid and has a legitimate ticket has trouble getting through a gate. In other words, it's minimizing Type I errors (blocking someone from getting on the train when they should be on the train). It also lowers many of those other costs (or takes them away entirely). Of course, the "cost" to such a system is that, obviously, some number will game the system and ride without paying (a Type II error).
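
One way to make the French/German comparison concrete is to attach rough numbers to both kinds of error and to the enforcement overhead. The figures below are made-up assumptions for illustration, not data about any real transit system:

```python
# Made-up illustrative numbers: weigh a barrier system (minimizes Type II
# "free ride" errors, but adds overhead and produces Type I errors that
# block legitimate riders) against an open spot-check system.
riders = 1_000_000
fare = 2.00

# Barrier system: hardware and staffing overhead, plus the cost of
# legitimate ticketholders blocked or delayed by faulty gates (Type I).
barrier_overhead = 150_000
type1_rate = 0.01                 # 1% of legitimate riders hit a broken gate
type1_cost = 1.00                 # assumed goodwill/delay cost per blocked rider
barrier_total = barrier_overhead + riders * type1_rate * type1_cost

# Open system: fares lost to evasion (Type II), plus spot-check staff,
# minus fines recovered from the evaders who do get caught.
evasion_rate = 0.03
spot_check_staff = 30_000
fines = riders * evasion_rate * 0.10 * 25.00   # 10% caught, $25 fine
open_total = riders * evasion_rate * fare + spot_check_staff - fines

print(f"barrier system: ${barrier_total:,.0f}")
print(f"open system:    ${open_total:,.0f}")
```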

I think one of the problems that people have in discussing DRM is that they only look at one type of error, and never bother to compare the two. As a result of that, those who support strong DRM tend to focus only on the "error" of letting people get a "free ride," and ignore all of the collateral damage, as Phipps explains. Yet, when you compare the two, it's difficult to see how one can argue that the "free ride" problem is worse than the problem of collateral damage from limiting legitimate uses. And that is why so many people have such problems with DRM. It's not that we want a "free ride." It's that we worry about the costs associated with all of those collateral damage points. ]]>
letting-in-too-much-or-not-enough https://beta.techdirt.com/comment_rss.php?sid=20101109/22502811788