Timothy Lee's Techdirt Profile

Posted on Techdirt - 20 April 2009 @ 10:01am

Congress Ponders Cybersecurity Power Grab

There was a lot of attention paid last week to a new “cybersecurity” bill that would drastically expand the government’s power over the Internet. The two provisions that have probably attracted the most attention are the parts that would allow the president to “declare a cybersecurity emergency” and then seize control of “any compromised Federal government or United States critical infrastructure information system or network.” Perhaps even more troubling, the EFF notes a section that states that the government “shall have access to all relevant data concerning (critical infrastructure) networks without regard to any provision of law, regulation, rule, or policy restricting such access.” Read literally, this language would seem to give the government the power to override the privacy protections in such laws as the Electronic Communications Privacy Act and the Foreign Intelligence Surveillance Act. Thankfully, Congress can’t override the Fourth Amendment by statute, but this language poses a real threat to Fourth Amendment rights.

One clause that I haven’t seen get the attention it deserves is the provision that would require a federal license, based on criteria determined by the Secretary of Commerce, to provide cybersecurity services to any federal agency or any “information system or network” the president chooses to designate as “critical infrastructure.” It’s hard to overstate how bad an idea this is. Cybersecurity is a complex and fast-moving field. There’s no reason to think the Department of Commerce has any special expertise in certifying security professionals. Indeed, security experts tend to be a contrarian bunch, and it seems likely that some of the best cybersecurity professionals will refuse to participate. Therefore, it’s a monumentally bad idea to ban the government from soliciting security advice from people who haven’t jumped through the requisite government hoops. Even worse, the proposal leaves the definition of “critical infrastructure” to the president’s discretion, potentially allowing him to designate virtually any privately-owned network or server as “critical infrastructure,” thereby limiting the freedom of private firms to choose cybersecurity providers.

When thinking about cybersecurity, it’s important to keep in mind that an open network like the Internet is never going to be perfectly secure. Providers of genuinely critical infrastructure like power grids and financial networks should avoid connecting those systems to the Internet at all. Moreover, the most significant security threats on the Internet, including botnets and viruses, are already illegal under federal law. If Congress is going to pass cybersecurity legislation this session (and it probably shouldn’t), it should focus on providing federal law enforcement officials with the resources to enforce the cybersecurity laws we already have (and getting the government’s own house in order), not give the government sweeping and totally unnecessary new powers that are likely to be abused.

Posted on Techdirt - 25 March 2009 @ 09:24pm

Hyper-local News In The Post-Newspaper Era

Rather than simply wringing his hands about how the decline of the newspaper means that no one will report local news, Reason‘s Jesse Walker actually gives some thought to where local news coverage might come from in a post-newspaper world. He focuses on people and institutions that can provide hyper-local news: not just about a state or metropolitan area, but about a particular town or even a specific neighborhood. For example, most communities already have one or more local gadflies who regularly attend city council and school board meetings and are often the first to notice funny business by government officials. Traditionally, if a gadfly spotted something he thought the public should know about, he had to convince a reporter to cover his scoop. Now there’s no filter: the gadfly can post the story to his blog. That won’t necessarily mean that a lot of people will read his post, but it at least gives him the opportunity to be noticed by others online. Jesse notes that local activists, government insiders, and community organizations are also candidates to do much of the work that has traditionally been done by local reporters.

The striking thing about this list is how diverse it is. In the traditional, vertically integrated news business, a single institution oversees the entire news “supply chain,” from the reporter attending the local city council meeting to the paper boy who delivers the finished newspaper to readers. The technological and economic constraints of newsprint meant that the whole process had to be done by full-time employees and carefully coordinated by a single, monolithic organization. But the Internet makes possible a much more decentralized model, in which lots of different people, most of them volunteers, participate in the process of gathering and filtering the news. Rather than a handful of professional reporters writing stories and an even smaller number of professional editors deciding which ones get printed, we’re moving toward a world that Clay Shirky calls publish, then filter: anyone can write any story they want, and the stories that get the most attention are determined after publication by decentralized, community-driven processes like Digg, del.icio.us, and the blogosphere.

Decentralized news-gathering processes can incorporate small contributions from a huge number of people who aren’t primarily in the news business. You don’t need to be a professional reporter to write a blog post every couple of weeks about your local city council meeting. Nor do you need to be a professional editor to mark your favorite items in Google Reader. Yet if millions of people each contribute small amounts of time to this kind of decentralized information-gathering, they can collectively do much of the work that used to be done by professional reporters and editors.

Unfortunately, this process is hard to explain to people who don’t have extensive experience with the Internet’s infrastructure for decentralized information-gathering. Decentralized processes are counter-intuitive. Having a single institution promise to cover “all the news that’s fit to print” seems more reliable than having a bunch of random bloggers cover the news in an uncoordinated fashion. The problem is that, in reality, newspapers are neither as comprehensive nor as reliable as they like to pretend. Just as a few dozen professionals at Britannica couldn’t produce an encyclopedia that was anywhere near as comprehensive as the amateur-driven Wikipedia, so a few thousand newspaper reporters can’t possibly cover the news as thoroughly as millions of Internet-empowered individuals can. This isn’t to disparage the reporters and editors, who tend to be smart and dedicated. It’s just that they’re vastly outnumbered. As Jesse Walker points out, any news-gathering strategy that doesn’t incorporate the contributions of amateurs is going to be left in the dust by those that do.

Posted on Techdirt - 25 March 2009 @ 12:45pm

Is It A Good Thing That Computer Science Is 'Cool Again'?

Computer science is cool again. At least, that’s what the headline at Network World says. Apparently, CS enrollments are up for the first time in six years, driven by “teens’ excitement about social media and mobile technologies.” I’m a CS grad student, so you might expect me to be excited about this development, but I’m not actually sure it’s such a good sign. It’s great that there are more people considering careers in the IT industry, but I worry about people going into computer science for the wrong reasons. In my experience, if your brain works a certain way, you’ll love programming and will have a successful career in the software industry. If it doesn’t, there probably isn’t much you can do to change that. So I’d love to see more kids explore CS, but if, after taking a couple of classes, they’re not sure if CS is the right major for them, then frankly it probably isn’t. If you don’t enjoy programming, you’re almost certainly not going to be a good programmer, and you’re not going to be either successful or happy in that career. The fact that you like Facebook or your iPhone definitely isn’t enough reason to be a CS major.

I think it would be better if colleges focused on expanding the computer training that non-CS majors receive. Almost every technical field involves manipulating large datasets, and so the ability to write basic computer programs will be a big productivity boost in a wide variety of fields, from economics to biology. Most people aren’t cut out to be full-time programmers, but lots of people could benefit from a 1-semester course that focuses on practical data manipulation skills with a high-level scripting language like Perl or Python.
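
To make that concrete, here is a minimal sketch (in Python, and not drawn from any actual course) of the kind of data-manipulation exercise such a class might assign: computing a per-group average from a spreadsheet export. The file name and column names are hypothetical.

import csv
from collections import defaultdict

# Hypothetical input: a CSV exported from a spreadsheet, with "site" and "value" columns.
totals = defaultdict(float)  # running sum of values for each site
counts = defaultdict(int)    # number of rows seen for each site

with open("measurements.csv", newline="") as f:
    for row in csv.DictReader(f):
        totals[row["site"]] += float(row["value"])
        counts[row["site"]] += 1

# Report the mean value for each site, sorted by name.
for site in sorted(totals):
    print(f"{site}: mean = {totals[site] / counts[site]:.2f}")

A dozen lines like these can replace hours of manual spreadsheet work, which is exactly the productivity boost such a course would aim for.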

Posted on Techdirt - 23 March 2009 @ 10:10am

Security Researchers Shouldn't Face DMCA Liability While Protecting Users From Faulty DRM

Longtime Techdirt readers may remember Alex Halderman, who conducted influential research into the problems created by CD-based DRM during his time as a grad student here at Princeton. He’s now a professor at the University of Michigan, and he’s working on a new project: seeking a DMCA exemption for security research related to defective DRM schemes that endanger computer security. We’ve seen in the past that DRM schemes can open up security vulnerabilities in users’ computers, and Halderman argues that the public would benefit if security researchers could examine DRM schemes without being threatened with litigation under the DMCA for doing so.

The DMCA gives the Librarian of Congress the power to grant three-year exemptions for DRM circumventions that are perceived to be in the public interest, and one of the exemptions granted in the 2006 triennial review was for CD-based DRM schemes that create security problems. Alex points out in his filing that the most serious security vulnerabilities created by DRM since that rule-making have come not from CD-based DRM but from video game DRM, which has not been adequately studied by security researchers. A ton of prominent security researchers (including Alex’s and my mutual advisor, Ed Felten) have endorsed Alex’s request, arguing that the threat of DMCA liability hampers their research. We hope the Librarian of Congress is listening. If you live near Palo Alto or Washington, DC, you can sign up to testify about Alex’s proposal (or others) by filling out this form.

Posted on Techdirt - 19 March 2009 @ 11:01pm

Freeing Journalists From Newsprint's Straitjacket

One of the interesting things about the end of the Seattle Post-Intelligencer’s print edition, which Mike noted on Monday, is how much more flexibility the PI will have to adjust to changing economic conditions now that it’s an online-only publication. I don’t think it’s generally appreciated how constraining the newspaper format is. Readers expect a daily paper to be a certain size every day, and to arrive on their doorstep at a certain time every morning. Meeting those requirements involves a ton of infrastructure and personnel: typesetters, printing presses, delivery trucks, paper carriers, and so forth. To meet these infrastructure requirements, a paper has to have a minimum circulation, which in turn requires covering a wide geographical area. All of which means that once a daily paper’s circulation falls below a certain threshold, it risks a death spiral in which cost-cutting leads to lower quality, which leads to further circulation declines and still more cost-cutting. Of course, some papers manage to survive with much smaller circulations than the PI, but these tend to be either weekly papers (which tend to have a very different business model) or papers serving smaller towns where they have a de facto monopoly on local news.

These economic constraints, in turn, greatly constrain what journalists can do. They have a strict deadline every evening, and there are strict limits on the word count they can publish. Because newspapers have to target a large, general audience with limited space, reporters are often discouraged from covering niche topics where they have the greatest interest or expertise. Moreover, because many newspaper readers rely on the paper as their primary source of news, people expect their newspaper to cover a broad spectrum of topics: national and international news, movie reviews, a business section, a comics page, a sports page, and so forth. Which means that reporters frequently get dispatched to cover topics they don’t understand very well and that don’t especially interest them. The content they produce on these assignments is certainly valuable, but it’s probably not as valuable as the content they’d produce if they were given more freedom to pursue the subjects they were most passionate about.

The web is very different. Servers and bandwidth are practically free compared with printing presses and delivery trucks, so news organizations of virtually any size—from a lone blogger to hundreds of people—can thrive if they can attract an audience. And thanks to aggregation technologies such as RSS and Google News, readers don’t expect or even want every news organization to cover every topic. Here at Techdirt, we don’t try to cover sports, the weather, foreign affairs, or lots of other topics because we know there are other outlets that can cover those topics better than we could. Instead, we focus on the topics we know the most about—technology and business—and cover them in a way that (we hope) can’t be found anywhere else. In the news business, as in any other industry, greater specialization tends to lead to higher quality and productivity.

Moving online will give the PI vastly more flexibility to adapt to changing market conditions and focus on those areas where they can create the most value. The PI says they’ll have about 20 people producing content for the new web-based outlet. That’s a lot fewer than the print paper employed, but it’s enough to produce a lot of valuable content. And now that they’re freed of the costs and constraints of newsprint, and the expectation to cover every topic under the sun, it’ll be a lot easier to experiment and find a sustainable business model.

Posted on Techdirt - 18 March 2009 @ 07:54pm

Will The Internet Kill The Foreign Correspondent?

The New York Times takes a look at the changing role of foreign correspondents in the Internet age. A generation ago, journalists who covered foreign countries could send reports back home without worrying about how their coverage would be perceived by the natives. This may have allowed more candid reporting, but it also meant coverage was less accurate because reporters never got feedback from the people they were covering. Now all that has changed. On the Internet, Indian readers can read the New York Times as easily as the Times of India. When reporters make mistakes, they get instant feedback from the subjects of their stories.

One question the story doesn’t specifically discuss is whether there’s a need for foreign correspondents at all in the Internet age. In the 20th century, newspapers needed foreign correspondents because the process of gathering and transmitting news across oceans was expensive and cumbersome. Having a foreign bureau gave a newspaper a competitive advantage because it allowed it to get fresher and more complete international news than its competitors. Now, of course, transmitting information around the world is incredibly cheap and easy. My local newspaper is no longer the only—or even the best—source of information about world events. Those who understand the language can get their news directly from foreign media outlets. And for the rest of us there are a ton of people who translate, filter, and interpret the news coming out of foreign countries for domestic consumption. Given these realities, it’s not obvious how much value is added by having American newspapers send reporters to the far-flung corners of the globe.

Of course, there are still tremendous advantages to having people who can explain foreign events and put them in context for American readers. I can read India’s newspapers, but I’m not going to pick up on all the nuances of the coverage. But there are lots of ways to provide this kind of context and analysis. For example, there are undoubtedly smart Indian journalists who went to college in the United States and then returned to India. Such journalists are going to possess a much deeper understanding of Indian culture than an American journalist could. Conversely, there may be American expats living in India (perhaps with day jobs other than journalism) who could provide an American perspective on Indian news. Most importantly, there are lots of people here in the United States who can read Indian news sources and then write about developments there from an American perspective. These include Indian immigrants and Americans who have spent time in India.

One of the things people frequently cite as evidence of the dire state of the news industry is the fact that newspapers are closing their foreign bureaus and laying off their foreign correspondents. Maybe this is a sign that journalism, as a profession, is in trouble. But another interpretation is that we’ve just found more efficient ways to get news about foreign events. American readers will continue to demand coverage of overseas events. But 21st century news organizations are likely to discover that shipping American journalists overseas is not the most efficient way to meet that demand.

Posted on Techdirt - 18 March 2009 @ 03:17am

TomTom Caught Between Microsoft Rock And GPL Hard Place

Last month we covered Microsoft’s patent infringement lawsuit against GPS device maker TomTom. As Mike noted, this is a pretty clear example of abusive patent litigation. The patents in question are so broad that it’s virtually impossible to innovate in this space without first paying Microsoft for the privilege. Obviously, that prospect doesn’t bother Microsoft’s top patent lawyer very much, but it should be a serious concern for the rest of us. Since Mike wrote that post, another angle of the case has gotten a lot of attention from tech blogs: whether it’s possible for TomTom to settle the lawsuit without running afoul of the GPL, the free software license that covers the Linux code that Microsoft claims infringes at least three of those patents.

A bit of background is helpful here. When the Free Software Foundation drafted version 2 of the GPL, it included a clause saying that if a vendor is forced to place restrictions on downstream redistribution of software covered by the GPL (due to a per-unit patent licensing agreement, for example), that vendor loses the right to distribute the software at all. This clause acts as a kind of mutual defense pact, because it prevents any firm in the free software community from making a separate peace with patent holders. A firm’s only options are to either fight to invalidate the patent or stop using the software altogether. This clause of the GPL actually strengthens the hands of free software firms in their negotiations with patent holders. A company like Red Hat can credibly refuse to license patents by saying “we’d love to license your patent, but the GPL won’t let us.”

This creates a problem for a company like Microsoft that wants to extract licensing revenues from firms distributing GPLed software. Ordinarily, a patent holder sues in the hope that it will be able to get a quick settlement and a nice revenue stream from patent royalties. But the vendor of GPLed software can’t settle. And if the patent holder wins the lawsuit, the defendant will be forced to stop distributing the software, depriving the patent holder of an ongoing revenue stream. Either way, the trial will generate a ton of bad publicity for the patent holder.

In a comment at the “Open…” blog, prominent Samba developer Jeremy Allison charged that Microsoft has tried to sidestep this arrangement by basically forcing companies to sign patent licensing agreements that violate the GPL under the cover of non-disclosure agreements. Allison argues that TomTom got sued because it was the first company to refuse to participate in this fraud. It’s important to note here that Allison can’t prove the existence of these agreements, so we should take his claims with a grain of salt. But if these charges are ever conclusively proven, they would have explosive consequences. The Free Software Foundation would likely insist that such firms either cancel their agreements with Microsoft (likely triggering a patent lawsuit) or stop distributing GPLed software altogether (which could be a death sentence for a firm that relies on such software).

Regardless, TomTom is now stuck between a rock and a hard place. The GPL has left the firm with only two options. It must either fight Microsoft’s patents to the death (literally) or settle with Microsoft and immediately stop distributing GPLed software. Given how deeply entwined GPLed software apparently is in TomTom’s products, that second option may be no option at all. So expect a long and bloody fight in the courts.

One likely result will be to create a serious PR problem for Microsoft. Some people might remember the infamous GIF patent wars of the 1990s. When Unisys tried to collect patent royalties on the GIF format, the Internet community responded by switching in droves to the PNG format. In the process, Unisys earned a ton of bad press and a terrible reputation among computer geeks who care about software freedom. Microsoft risks a similar fate if it pursues this litigation campaign against Linux. And given that Microsoft is in a business where innovation is king, it’s probably not a good idea to become a pariah in a community that includes many of the world’s most talented software engineers.

Posted on Techdirt - 17 March 2009 @ 07:59pm

Does 'Cyber-Security' Mean More NSA Dragnet Surveillance?

As network infrastructure has become an increasingly important part of our economy, there’s been growing concern about the problems of cybersecurity. So far, the key debate is over whether the government should be involved in helping the private sector secure its networks or should focus on government networks. But another important question is which part of the government should be in charge of cyber-security. We’re in the midst of a bureaucratic turf war between the Department of Homeland Security and the National Security Agency over who will be in charge of government cybersecurity policy. The NSA’s head, Keith Alexander, is pushing the theory that cyber-security is a “national security issue,” and that therefore an intelligence agency like the NSA ought to be in charge of it.

The problem with this is that the NSA has a peculiar definition of cyber-security. When most of us talk about cyber-security, we mean securing our communications against intrusion by third parties, including the government. Yet the NSA has made no secret of its belief that “cyber security” means being able to spy on people more easily. Moreover, as Amit Yoran, former head of the Department of Homeland Security’s National Cyber Security Division, points out, the NSA’s penchant for secrecy, and concomitant lack of transparency, will be counterproductive in the effort to secure ordinary commercial networks. Therefore, the fight between DHS and the NSA is more than just a bureaucratic squabble. There’s plenty to criticize about the Department of Homeland Security, and reasons to doubt whether they should be helping to secure private sector networks at all. But at least DHS is relatively transparent, and (as far as we know) doesn’t engage in the kind of indiscriminate, warrantless wiretapping for which the NSA has become notorious.

Posted on Techdirt - 17 March 2009 @ 05:13pm

Mostly Toothless Video Game Bill Passes the Utah Legislature

The Utah legislature has seemed strangely obsessed with technology issues this session. Perhaps spurred on by a questionable BYU study on the problems created by video games, the Utah legislature has passed a bill promoted by disgraced lawyer and anti-videogame activist Jack Thompson to regulate the sale of video games to minors. The good news, as Ars Technica reports, is that the law was largely defanged during the legislative process. Under the final version of the bill, retailers would not be liable for selling M-rated video games to minors if they’d put their employees through a training program. They’d also not be liable if the children had gotten the games by lying about their age. With that said, there’s still plenty to object to here. For starters, the legislation punishes retailers for failing to follow their published policy on video game sales. That means that a retailer that has a strong policy against selling to minors will face more liability if it breaks that policy than a retailer that doesn’t have such a policy. This could have the perverse effect of discouraging retailers from adopting strong policies against selling violent video games to children. It will also force a lot of retailers to put their employees through “training” programs that may be completely unnecessary. But probably the most serious problem with this legislation is that it may be an opening wedge for future regulation of video game sales. Expect the same interest groups that pushed this legislation through to come back in future years with bills that would close the “loopholes” in this year’s legislation.

Posted on Techdirt - 17 March 2009 @ 08:27am

Google 'Requests' That We Not Copy Works That Are Already In The Public Domain

Computer scientist Steven Bellovin notes a troubling trend: companies that republish public domain works are increasingly trying to use contract law to place restrictions on their use. For example, Google is apparently in the habit of “requesting” that people only use the out-of-copyright works they’ve scanned for “personal, non-commercial purposes.” Even more troubling, works like this one that were produced by the US federal government—and have therefore never been subject to copyright—come with copyright-like notices stating that any use other than “individual research” requires a license. Fundamentally, this is problematic because copyright law is supposed to be a bargain between authors and the general public: we give authors a limited, temporary monopoly over their works in exchange for those works being created. But in this case, the restrictions are being imposed by parties—Google and Congressional Research Services, Inc., respectively—who had nothing to do with the creation of the works. The latter case is particularly outrageous because taxpayers have already paid for those works once.

With that said, there are a couple of reasons to think that things aren’t as bad as Bellovin suggests. It’s hardly unusual for companies to claim rights they don’t have in creative works—that doesn’t mean those claims will stand up in court. The fact that Google “requests” that users limit how works are used doesn’t mean they can stop people who ignore their requests. And especially in the case of government works, there’s a strong case to be made that copyright law’s explicit exemption of government works from legal restrictions should trump any rights that private companies might claim to limit the dissemination of such works. Moreover, a few courts have recognized the concept of copyright misuse, the attempt to extend a copyright holder’s rights beyond those that are specified in the law. So it’s not at all clear that these purported contractual restrictions would actually be binding. Companies might say that you need permission to reproduce the works, but they’re unlikely to try to enforce those requirements in court. Nevertheless, government officials and librarians should do a better job of policing these kinds of spurious claims. As Bellovin says, government agencies that hire firms to manage collections of public domain works should ensure that the private firms are contractually obligated not to place additional restrictions on downstream uses of those works.
