In fact, the word "platform" does not even show up in the statute. Instead the statute uses the term "interactive computer service provider." The idea of a "service provider" is a meaningful one, because the whole point of Section 230 is to make sure that the people who provide the services that facilitate others' use of the Internet are protected in order for them to be able to continue to provide those services. We give them immunity from the legal consequences of how people use those services because without it they wouldn't be able to – it would simply be too risky.
But saying "interactive computer service provider" is a mouthful, and it also can get a little confusing because we sometimes say "internet service provider" to mean just a certain kind of interactive computer service provider, when Section 230 is not nearly so specific. Section 230 applies to all kinds of service providers, from ISPs to email services, from search engines to social media providers, from the dial-up services we knew in the 1990s back when Section 230 was passed to whatever new services have yet to be invented. There is no limit to the kinds of services Section 230 applies to. It simply applies to anyone and everyone, including individual people, who are somehow providing someone else the ability to use online computing. (See Section 230(f)(2).)
So for shorthand people have started to colloquially refer to protected service providers as "platforms." Because statutes are technical creatures it is not generally a good idea to use shorthand terms in place of the precise ones used by the statutes; often too much important meaning can be lost in the translation. But in this case "platform" is a tolerable synonym for most of our policy discussions because it still captures the essential idea: a Section 230-protected "platform" is the service that enables someone else to use the Internet.
Which brings us to the term "publisher," which does appear in the statute. In particular it appears in the critically important provision at Section 230(c)(1), which does most of the work making Section 230 work:
No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
In this provision the term "publisher" (or "speaker") refers to the creator of the content at issue. Who created it? Was it the provider of the computer service, aka the platform itself? Or was it someone else? Because if it was someone else – if the information at issue was "provided by another information content provider" – then we don't get to treat the platform as the "publisher or speaker" of that information, and it is therefore immune from liability for it.
Where the confusion has arisen is in the use of the term "publisher" in another context as courts have interpreted Section 230. Sometimes the term "publisher" itself means "facilitator" or "distributor" of someone else's content. When courts first started thinking about Section 230 (see, e.g., Zeran v. AOL) they sometimes used the term because it helped them understand what Section 230 was trying to accomplish. It was trying to protect the facilitator or distributor of others' expression – or, in other words, the platform people used to make that expression – and using the term "publisher" from our pre-Section 230 understanding of media law helped the courts recognize the legal effect of the statute.
Using the term did not, however, change that effect, or the basic operation of the statute. The core question in any Section 230 analysis has always been: who originated the content at issue? That a platform may have "published" it by facilitating its appearance on the Internet does not make it the publisher for purposes of determining legal responsibility for it, because "publishing" is not the same as "creating." And Section 230 – and all the court cases interpreting it – has made clear that it is only the creator who can be held liable for what was created.
There are plenty of things we can still argue about regarding Section 230, but whether someone is a publisher versus a platform should not be one of them. It is only the creator v. facilitator distinction that matters.
I apologize if it feels a bit cold and rude to respond in such an impersonal way, but I've been wasting a ton of time lately responding individually to different people saying the same wrong things over and over again, and I was starting to feel like this guy:
And... I could probably use more sleep, and my blood pressure could probably use a little less time spent responding to random wrong people. And, so, for my own good you get this. Also for your own good. Because you don't want to be wrong on the internet, do you?
Also I've totally copied the idea for this from Ken "Popehat" White, who wrote Hello! You've Been Referred Here Because You're Wrong About The First Amendment a few years ago, and it's great. You should read it too. Yes, you. Because if you're wrong about 230, there's a damn good chance you're wrong about the 1st Amendment too.
While this may all feel kind of mean, it's not meant to be. Unless you're one of the people who is purposefully saying wrong things about Section 230, like Senator Ted Cruz or Rep. Nancy Pelosi (being wrong about 230 is bipartisan). For them, it's meant to be mean. For you, let's just assume you made an honest mistake -- perhaps because deliberately wrong people like Ted Cruz and Nancy Pelosi steered you wrong. So let's correct that.
Before we get into the specifics, I will suggest that you just read the law, because it seems that many people who are making these mistakes have never read it. It's short, I promise you. If you're in a rush, just jump to part (c), entitled Protection for “Good Samaritan” blocking and screening of offensive material, because that's the only part of the law that actually matters. And if you're in a real rush, just read Section (c)(1), which is only 26 words, and is the part that basically every single court decision (and there have been many) has relied on.
With that done, we can discuss the various ways you might have been wrong about Section 230.
If you said "Once a company like that starts moderating content, it's no longer a platform, but a publisher"
I regret to inform you that you are wrong. I know that you've likely heard this from someone else -- perhaps even someone respected -- but it's just not true. The law says no such thing. Again, I encourage you to read it. The law does distinguish between "interactive computer services" and "information content providers," but that is not, as some imply, a fancy legalistic way of saying "platform" or "publisher." There is no "certification" or "decision" that a website needs to make to get 230 protections. It protects all websites and all users of websites when there is content posted on the sites by someone else.
To be a bit more explicit: at no point in any court case regarding Section 230 is there a need to determine whether or not a particular website is a "platform" or a "publisher." What matters is solely the content in question. If that content is created by someone else, the website hosting it cannot be sued over it.
Really, this is the simplest, most basic understanding of Section 230: it is about placing the liability for content online on whoever created that content, and not on whoever is hosting it. If you understand that one thing, you'll understand most of the most important things about Section 230.
To reinforce this point: there is nothing any website can do to "lose" Section 230 protections. That's not how it works. There may be situations in which a court decides that those protections do not apply to a given piece of content, but it is very much fact-specific to the content in question. For example, in the lawsuit against Roommates.com for violating the Fair Housing Act, the court ruled against Roommates, but not that the site "lost" its Section 230 protections, or that it was now a "publisher." Rather, the court explicitly found that some content on Roommates.com was created by 3rd party users and thus protected by Section 230, and some content (namely pulldown menus designating racial preferences) was created by the site itself, and thus not eligible for Section 230 protections.
If you said "Because of Section 230, websites have no incentive to moderate!"
You are wrong. If you reformulated that statement to say that "Section 230 itself provides no incentives to moderate" then you'd be less wrong, but still wrong. First, though, let's dispense with the idea that thanks to Section 230, sites have no incentive to moderate. Find me a website that doesn't moderate. Go on. I'll wait. Lots of people say things like one of the "chans" or Gab or some other site like that, but all of those actually do moderate. There's a reason that all such websites do moderate, even those that strike a "free speech" pose: (1) because other laws require at least some level of moderation (e.g., copyright laws and laws against child porn), and (2) more importantly, with no moderation, a platform fills up with spam, abuse, harassment, and just all sorts of garbage that make it a very unenjoyable place to spend your internet time.
So there are many, many incentives for nearly all websites to moderate: namely to keep users happy, and (in many cases) to keep advertisers or other supporters happy. When sites are garbage, it's tough to attract a large user base, and even more difficult to attract significant advertising. So, to say that 230 means there's no incentive to moderate is wrong -- as proven by the fact that every site does some level of moderation (even the ones that claim they don't).
Now, to tackle the related argument -- that 230 by itself provides no incentive to moderate -- that is also wrong. Because courts have ruled Section (c)(1) to have immunized moderation choices, and Section (c)(2) explicitly says that sites are not liable for their moderation choices, sites actually have a very strong incentive provided by 230 to moderate. Indeed, this is one key reason why Section 230 was written in the first place. It was done in response to a ruling in the Stratton Oakmont v. Prodigy lawsuit, in which Prodigy, in an effort to provide a "family friendly" environment, did some moderation of its message boards. The judge in that case ruled that since Prodigy did moderate the boards, it would be liable for anything it left up.
If that ruling had stood and been adopted by others, it would, by itself, be a massive disincentive to moderation. Because the court was saying that moderation itself creates liability. And smart lawyers will say that the best way to avoid that kind of liability is not to moderate at all. So Section 230 explicitly overruled that judicial decision, and eliminated liability for moderation choices.
If you said "Section 230 is a massive gift to big tech!"
Once again, I must inform you that you are very, very wrong. There is nothing in Section 230 that applies solely to big tech. Indeed, it applies to every website on the internet and every user of those websites. That means it applies to you, as well, and helps to protect your speech. It's what allows you to repeat something someone else said on Facebook and not be liable for it. It's what protects every website that has comments, or any other third-party content. It applies across the entire internet to every website and every user, and not just to big tech.
The "user" protections get less attention, but they're right there in the important 26 words. "No provider or user of an interactive computer service shall be treated as the publisher or speaker...." That's why there are cases like Barrett v. Rosenthal where someone who forwarded an email to a mailing list was held to be protected by Section 230, as a user of an interactive computer service who did not write the underlying material that was forwarded.
And it's not just big tech companies that rely on Section 230 every day. Every news organization (even those that write negative articles about Section 230) that has comments on its website is protected thanks to Section 230. This very site was sued, in part, over comments, and Section 230 helped protect us as well. Section 230 fundamentally protects free speech across the internet, and thus it is more properly called out as a gift to internet users and free speech, not to big tech.
If you said "A site that has political bias is not neutral, and thus loses its Section 230 protections"
I'm sorry, but you are very, very, very wrong. Perhaps more wrong than anyone saying any of the other things above. First off, there is no "neutrality" requirement at all in Section 230. Seriously. Read it. If anything, it says the opposite. It says that sites can moderate as they see fit and face no liability. This myth is out there and persists because some politicians keep repeating it, but it's wrong and the opposite of truth. Indeed, any requirement of neutrality would likely raise significant 1st Amendment questions, as it would be involving the law in editorial decision making.
Second, as described earlier, you can't "lose" your Section 230 protections, especially not over your moderation choices (again, the law explicitly says that you cannot face liability for moderation choices, so stop trying to make it happen). If content is produced by someone else, the site is protected from lawsuit, thanks to Section 230. If the content is produced by the site, it is not. Moderating the content is not producing content, and so the mere act of moderation, whether neutral or not, does not make you lose 230 protections. That's just not how it works.
If you said "Section 230 requires all moderation to be in "good faith" and this moderation is "biased" so you don't get 230 protections"
You are, yet again, wrong. At least this time you're using a phrase that actually is in the law. The problem is that it's in the wrong section. Section (c)(2)(A) does say that:
No provider or user of an interactive computer service shall be held liable on account of any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected
However, that's just one part of the law, and as explained earlier, nearly every Section 230 case about moderation hasn't even used that part of the law, instead relying on Section (c)(1)'s separation of an interactive computer service from the content created by users. Second, the good faith clause is only in half of Section (c)(2). There's also a separate section, which has no good faith limitation, that says:
No provider or user of an interactive computer service shall be held liable on account of... any action taken to enable or make available to information content providers or others the technical means to restrict access to material....
So, again, even if (c)(2) applied, most content moderation could avoid the "good faith" question by relying on that part, (c)(2)(B), which has no good faith requirement.
However, even if you could somehow come up with a case where the specific moderation choices were somehow crafted such that (c)(1) and (c)(2)(B) did not apply, and only (c)(2)(A) were at stake, even then, the "good faith" modifier is unlikely to matter, because a court trying to determine what constitutes "good faith" in a moderation decision is making a very subjective decision regarding expression choices, which would create massive 1st Amendment issues. So, no, the "good faith" provision is of no use to you in whatever argument you're making.
If you said "Section 230 is why there's hate speech online..."
Ooof. You're either The NY Times or very confused. Maybe both. The 1st Amendment protects hate speech in the US. Elsewhere, not so much. Either way, it has little to do with Section 230.
If you said "Section 230 means these companies can never be sued!"
I regret to inform you that you are wrong. Internet companies are sued all the time. Section 230 merely protects them from a narrow set of frivolous lawsuits, in which the websites are sued either for the content created by others (in which case the actual content creators remain liable) or in cases where they're being sued for the moderation choices they make, which are mostly protected by the 1st Amendment anyway (but Section 230 helps get those frivolous lawsuits kicked out faster). The websites can and do still face lawsuits for many, many other reasons.
If you said "Section 230 is a get out of jail card for websites!"
You're wrong. Again, websites are still 100% liable for any content that they themselves create. Separately, Section 230 explicitly exempts federal criminal law -- meaning that stories that blame things like sex trafficking and opioid sales on 230 are very much missing the point as well. The Justice Department is not barred by Section 230. It says so quite clearly:
Nothing in this section shall be construed to impair the enforcement of... any other Federal criminal statute
So many of the complaints about criminal activity are not about Section 230, but about a lack of enforcement.
If you said "Section 230 is why there's piracy online"
You again may be the NY Times or someone who has not read Section 230. Section 230 explicitly exempts intellectual property law:
Nothing in this section shall be construed to limit or expand any law pertaining to intellectual property.
If you said "Section 230 gives websites blanket immunity!"
The courts have made it clear this is not the case at all. In fact, many courts have highlighted situations in which Section 230 does not apply -- from the Roommates case to the Accusearch case to Doe v. Internet Brands to Oberdorf v. Amazon -- making it clear that there are limits to Section 230 protections, and that the immunity it conveys is not as broad as people claim. At the very least, the courts seem to have little difficulty targeting what they consider to be "bad actors" with regards to the law.
If you said "Section 230 is why big internet companies are so big!"
You are, again, incorrect. As stated earlier, Section 230 is not unique to big internet companies, and indeed, it applies to the entire internet. Research shows that Section 230 actually helps incentivize competition, in part because without Section 230, the costs of running a website would be massive. Without Section 230, large websites like Google and Facebook could handle the liability, but smaller firms would likely be forced out of business, and many new competitors might never get started.
If you said "Section 230 was designed to encourage websites to be neutral common carriers"
You are exactly 100% wrong. We've already covered why it does not require neutrality above, but it was also intended as the opposite of requiring websites to be "common carriers." Specifically, as mentioned above, part of the impetus for Section 230 was to enable services to create "family friendly" spaces, in which plenty of legal speech would be blocked. A common carrier is a very specific thing that has nothing to do with websites and less than nothing to do with Section 230.
If you said "If all this stuff is actually protected by the 1st Amendment, then we can just get rid of Section 230"
You're still wrong, though perhaps not as wrong as everyone else making these bad takes. Without Section 230, and relying solely on the 1st Amendment, you still open up basically the entire internet to nuisance suits. Section 230 helps get cases dismissed early, whereas using the 1st Amendment would require lengthy and costly litigation. 230 does rely strongly on the 1st Amendment, but it provides a procedural advantage in getting vexatious, frivolous nuisance lawsuits shut down much faster than they would be otherwise.
There seems to be more and more wrong stuff being said about Section 230 nearly every day, but hopefully this covers most of the big ones. If you see someone saying something wrong about Section 230, and you don't feel like going over all of their mistakes, just point them here, and they can be educated.
Thorpe's bill [PDF] says it will turn platforms into publishers at the drop of a bias accusation:
Specifies a person who allows online users to upload publicly accessible content on the internet and who edits, deletes or makes it difficult for online users to locate and access the uploaded content in an easy or timely manner for politically biased reasons is:
a) Deemed to be a publisher;
b) Deemed to not be a platform; and
c) Liable for damages suffered by an online user because of the person's actions, including damage for violations of rights guaranteed to the online user by the Federal or State Constitutions.
To be clear, there are no enshrined rights guaranteeing unimpeded use of private companies' platforms or access to "uploaded content." Writing a bill that proclaims there are doesn't make the false assertion any more true. If platforms are perceived to be engaging in politically-motivated moderation, the bill allows the affected user (or state Attorney General) to engage in litigation that's doomed to fail.
For someone so concerned about walled gardens of the political variety, Rep. Thorpe doesn't seem all that interested in practicing what he's preaching to his voter base. Thorpe's Twitter account is protected, which means most constituents don't have access to it and it allows him to pick and choose who gets to see his tweets.
This is only Thorpe's latest attempt to adversely affect protections and rights while claiming to be very concerned about rights violations. Journalists and activists have already pointed out Thorpe's tendency to block critics, all while claiming his account (@azrepbobthorpe) is for personal use, rather than for his legislative work. His bio claims he's a "Christian Constitutional Legislator," but his account is an ongoing Constitutional violation.
In 2017, he introduced a bill that would violate the First Amendment under the theory it would somehow make public universities more protective of the rights he's undermining. Thorpe has a problem with any "divisive" speech -- especially the kind that might come from teachers and professors who might -- as the bill put it -- "promote division, resentment or social justice toward a race, gender, religion, political affiliation, social class or other class of people." As FIRE noted then, Thorpe's proposed government interference in classroom instruction would only harm the First Amendment, not save it.
Prohibiting instructors from teaching particular perspectives or topics is precisely the kind of content- and viewpoint-based restriction forbidden under the First Amendment. This does not change even if someone on campus might deem those perspectives or topics likely “to promote social justice, division, or resentment.” In practice, this legislation would confer unfettered discretion on campus administrators to shut down discussion on nearly any subject.
During the next legislative session, Bob Thorpe proposed another free speech-threatening bill that suggested free-market capitalism should be shielded from criticism.
This week, Arizona Rep. Bob Thorpe (R – Flagstaff) introduced a bill that would designate "American free-market capitalism" the state's official "political-economic system", and declares the legislature's intent "that taxpayer dollars not be used to promote or to provide material support for any political-economic system that opposes the principles of free-market capitalism."
Thorpe thinks he's a free speech warrior. But all he does is try to erect content-based restrictions on speech -- the kind of thing no court has been sympathetic to. As Reason noted, the bill was stupid political point-scoring bullshit that had zero chance of surviving even the most cursory Constitutional glance.
For starters, the bill contains a naked content-based restriction on the use of taxpayer dollars to promote something other than "free-market capitalism". From what activities Thorpe would want to yank state support is not entirely clear; his bill says only that it would include the promotion of "socialism, communism, and fascism."
Over at the Phoenix New Times, Antonia Noori Farzan notes that "taken to an extreme, it could potentially mean that state universities would be banned from providing any resources to campus chapters of the Democratic Socialists of America." Depending on your definition of free-market, the bill could be used to deny resources to college Democrat and Republican chapters too.
The good times roll on. Bob Thorpe is finally winding down his time in the state house, exiting an unstoried seven-term career with another stupid bill that ignores the Constitution in favor of letting people who don't understand Section 230 or the First Amendment feel like their grievances are being redressed.
But it's unclear what sort of reform the government actually has in mind. Australia's Attorney General Christian Porter says the country's defamation law is "unfair." It's certainly not a good law, but Porter thinks it doesn't strike a "perfect balance" between protecting journalists from being hit with bogus lawsuits and protecting individuals from being defamed.
He's right. The law doesn't strike the right balance. But he's wrong about how to fix it. Very wrong.
Attorney-General Christian Porter says social media platforms should be treated the same as traditional publishers under defamation law, a change that would present a fundamental new challenge for global companies such as Facebook and Twitter.
It appears Porter believes the playing field can only be leveled by dragging social media platforms down to the level of local journalists the law fails to protect. This is Porter's idea of "fairness," apparently. If the law is going to continue to suck, it should suck for more people.
Despite the fact that social media platforms don't actually "publish" anything, Porter wants to treat Facebook, et al like newspapers. In Porter's mind, anything posted by users apparently should be vetted and fact-checked and edited by social media platforms before it goes live. You know, like a newspaper.
Of course, this is impossible and Porter knows it. So, "reforming" the law just means making it easier for bad faith litigation to proceed, allowing actual defamers to escape punishment while judgments and fees are extracted from American social media companies. Porter is pretty sure this is the right thing to do, even as he admits he has no idea if it even can be done.
"My own view ... is that online platforms, so far as reasonably possible, should be held to essentially the same standards as other publishers," Mr Porter told an audience at the National Press Club.
"But you have to, of course, take into account, reasonable, sensible measures for how you do that ... because of the volume of what goes on in Twitter and Facebook is much larger than the volume from a standard newspaper."
Saying that "the volume of what goes on in Twitter and Facebook is much larger than the volume from a standard newspaper" is such an understatement as to suggest that Porter has absolutely no familiarity with the issue at hand. Comparing the two is like saying the volume of Niagara Falls is larger than a leaky sink. Yes, they both involve water moving downward, but that's about the extent of the comparison. Saying that the volume from one is "much larger" than the other leaves out just how much larger. Indeed, it's so much larger that there literally is no reasonable comparison. Yet, he chose to make it anyway.
The AG's defamation law "fix" appears to be a response to a NSW Supreme Court decision handed down earlier this year -- one that held Australian press outlets legally responsible for defamatory comments made by readers on the outlets' Facebook pages. But rather than improve the law to protect press outlets, AG Porter just wants to make it worse for social media companies.
How this is supposed to fix anything is anyone's guess. Maybe the Attorney General feels the country's court system just isn't seeing enough bogus litigation. Whatever the case, this reform effort by the Australian government appears poised to make things worse for Australians and everyone who provides a platform for them.
Even so, the Washington Post's decision to publish this op-ed by Charlie Kirk attacking Section 230 may be the worst we've seen. It is so full of factually false information, misleading spin, and just downright disinformation that no respectable publication should have allowed it to be published. And yet, there it is in the Washington Post -- one of the major news organizations that Donald Trump likes to declare "fake news." If you're unaware of Kirk, he's a vocal Trump supporter, who runs an organization called Turning Point USA that appears to specialize in playing the victim in all sorts of ridiculous conspiracies... all while (hypocritically) arguing that his political opponents ("the libs") are always acting as victims and are "training a generation of victims who are being trained to be offended by something." And yet, it seems that it's really Kirk who is always offended.
This Washington Post op-ed is just one example. Here, Kirk is playing the victim of (as of yet, still unproven) anti-conservative bias on social media.
By now, most conservatives are convinced that our voices are being shadow-banned, throttled, muted and outright censored online. In fact, amid protestations by groups including the Internet Association, which claims Facebook, Google and Twitter are bias free, it’s an open fact that Big Tech is run predominantly by those on the ideological left. Facebook’s founder Mark Zuckerberg and Twitter’s chief executive Jack Dorsey even admitted this before Congress, and footage of Google’s leadership consoling one another after President Trump’s victory in 2016 indicates the same is true for them.
Many on the right have complained loudly and often of anti-conservative bias online. Unfortunately, all too often this is where our efforts stop. Once we’re ignored or dismissed long enough, conservatives seem to just shrug our collective shoulders and accept defeat. It’s this type of passivity that has allowed progressives to dominate film and television, universities and large swaths of the mainstream news media. How did they accomplish that? By fighting tooth and nail for what they believe in every vertical.
While it is true that many people who work in the big internet companies probably lean towards the Democratic side of the aisle (though not nearly as far as some make it out to be), that's different than proving that they have put in place policies that are biased against "conservatives" (and I use that term loosely). Again, nearly every example that people trot out actually involves trolling, harassment, actual Nazis or other violations of terms of service. And while these companies sometimes make mistakes, they seem to do so pretty much across the board -- which is the very nature of moderating so much content.
Separately, many of the links in Kirk's opening above don't actually say what he pretends they say. Professor Matthew Boedy went through the above links and put together a great Twitter thread unpacking how he misrepresents nearly everything:
This is not the same as Kirk's claim. While Pew also found 85% of that same group believes it is likely "social media companies intentionally censor political viewpoints that those companies find objectionable," that is not specific to just conservative tweets. 4/
— Matthew Boedy (@MatthewBoedy) July 11, 2019
"Whether or not monopolies are a bad thing is for the consumer and aspiring competitors to decide." Kirk inaccurately makes it seem the article agrees with him that these companies "are unfairly stifling competition." It doesn't say that. 12/
— Matthew Boedy (@MatthewBoedy) July 11, 2019
The thread is a lot longer and covers many more examples. Either way, Charlie needs to play the victim, and he's decided that the culprit is Section 230. Because he either doesn't understand Section 230, or is deliberately misrepresenting it.
The second obstacle to the free market is Big Tech’s exploitation of preexisting laws, namely Section 230 of the Communications Decency Act that was passed by Congress in the '90s. Social media companies have leveraged Section 230 to great effect, and astounding profits, by claiming they are platforms — not publishers — thereby avoiding under the law billions of dollars in potential copyright infringement and libel lawsuits. YouTube, for example, advertises itself as an open platform “committed to fostering a community where everyone’s voice can be heard.” Facebook and Twitter make similar claims. Let’s be clear, when these companies censor or suppress conservative content, they are behaving as publishers, and they should be held legally responsible for all the content they publish. If they want to continue hiding behind Section 230 and avoid legal and financial calamity, they must reform.
And here's where an editor totally should have stepped in, because almost all of this is wrong or gibberish. First off, even a cursory glance at the text of CDA 230 shows that it excludes intellectual property, such as copyright. Section (e)(2) literally says: "Nothing in this section shall be construed to limit or expand any law pertaining to intellectual property." So what the fuck is Kirk talking about when he says that they used this law to avoid "billions of dollars in potential copyright infringement... lawsuits"? The answer is that Kirk has no idea what he's talking about, and now seems to be repeating propaganda pushed out by "liberal" Hollywood.
As for it allowing them to avoid "libel lawsuits," well, yes. But that's because Section 230 is about properly applying liability to those who make the statements. We don't blame AT&T when someone uses a phone to make a bomb threat. We don't blame Ford when someone gets into a car accident. And we don't blame Facebook when someone posts defamatory content. It's kind of straightforward.
Still, where it's really egregious is that Kirk continues to push the total myth that Section 230 allows companies to hide if they just claim they're a "platform" rather than "a publisher." That's not how the law works at all. It doesn't make any such distinction.
And here's the really crazy thing: if Kirk got his "wish" and actually got rid of CDA 230 and made internet companies liable, his own content would likely be first on the chopping block. Remember, one of Kirk's claims to fame was when he published a "Professor Watchlist" calling out allegedly "left-leaning academics" who he feels discriminate against conservatives. He can do that because that's 1st Amendment protected speech (opinion). But if 230 is amended to require "neutrality," well, such a list is anything but neutral. Furthermore, the risk of liability from hosting such a list would be high. Even though I'd argue that it's protected speech, you can bet that someone might find some of the claims on the list defamatory -- and thus there would be strong pressure for sites to pull it down to avoid liability.
As radio host Dennis Prager often says, if an airline permitted only those passengers holding the New York Times to board but then denied Wall Street Journal readers, we would all rightly call this discrimination and demand the airline change its policy.
This is dumb for a huge number of reasons. First of all, I don't think we'd all rightly call it discrimination. We'd call it a business decision. Probably a bad one. Which is why no airline would ever do such a thing. Second, where exactly is the social media platform that is banning people for subscribing to the WSJ, but not the NYT? It doesn't exist. This is such a hyperbolic, misleading example. People are being banned for harassment and trolling. Not for holding conservative viewpoints. No one's being kicked off of platforms for calling for lower taxes, less government, or other traditionally "conservative" ideas.
In the same way, conservatives cannot win the battle of ideas if we’re marginalized or removed from mainstream culture and mainstream platforms.
This, also, is laughable. Remember, "right wing" media dominates both radio and cable television. I don't see Kirk demanding that Fox News host more liberal viewpoints to balance out Hannity. And, once again, even in the supposedly "liberal" Washington Post, he's allowed to post this blatantly false nonsense.
Again, the Washington Post should absolutely be willing to post different points of view, including those of Kirk and his allies. But they shouldn't allow him to blatantly spread disinformation about what the law says and what it does. That's just... as Kirk would say, "fake news."
There was a time when there were no “platforms” as we now know them. That time was, oh, about 2007. For decades, computing (video games included) had had this term “platform.” As the 2000s began, Tim O’Reilly and John Battelle proposed “the web as a platform,” primarily focusing on the ability of different services to connect to one another.
The venture capitalist Marc Andreessen, then the CEO of the also-ran social network Ning, blasted anyone who wanted to extend the definition. “A ‘platform’ is a system that can be programmed and therefore customized by outside developers,” he wrote. “The key term in the definition of platform is ‘programmed.’ If you can program it, then it’s a platform. If you can’t, then it’s not.” My colleague Ian Bogost, who co-created an MIT book series called Platform Studies, agreed, as did most people in the technical community. Platforms were about being able to run code in someone else’s system.
This was Facebook’s original definition of its product, Facebook Platform, which allowed outside developers to build widgets and games, and extend the core service. In the years before 2016, nearly all of Mark Zuckerberg’s public references to Facebook as a platform were technical, about connecting with developers.
Amusingly, this actually reminded me of articles I had written over a decade ago, talking up why Google and Facebook needed to become a new kind of internet platform -- which I meant in the same manner as Madrigal describes above and which most people talking about "platforms" meant in the mid-aughts. It meant a system on which others could develop new applications and services. I have to admit that I don't know quite how and when the world switched to calling general internet services "platforms" instead, and I'm just as guilty of doing so as others.
I have two quick thoughts on why this may have happened before I get back to Madrigal's piece. First, many of the discussions around these big internet companies didn't really have a good descriptive term for them. When talking about the law, things like Section 230 of the Communications Decency Act refer to them as "interactive computer services," which is awkward. And the DMCA refers to them as "service providers," which is quite confusing, because "internet service provider" has an existing (and somewhat different) meaning: the company that provides you internet access. Ideally, those companies should be called "internet access providers" (IAPs) rather than ISPs, but what's done is done. And then, of course, there's the equally awkward term "intermediary," which just confuses the hell out of most non-lawyers (and some lawyers). So "platform" came out in the wash as the most useful, least awkward option.
And if Madrigal's piece had just stuck with that interesting historical shift, and maybe dug into things like I did in the previous paragraph, it might be really compelling. Unfortunately, Madrigal goes a step or two further -- right up to the line (though not quite over it) of suggesting that there's some legal significance to calling oneself a platform. This is something we've seen too many reporters do of late, spreading a false impression that internet "platforms" somehow get magic protections that internet "publishers" don't get.
As we've explained, there is literally no distinction here. Usually people are making this argument with regards to CDA 230's protections, but as we've discussed in great detail, that law makes no distinction between a "platform" and a "publisher." Instead, it applies to all "interactive computer services," including any publisher, so long as they host 3rd party content. Madrigal's piece doesn't call out CDA 230 the way others have, but, unfortunately, it absolutely can be read in a misleading way to suggest that there is some magical legal distinction here that matters. Specifically this part:
This new rhetorical device wasn’t just for press releases, but also for ginning up business and creating a legal architecture.
Uh, what "legal architecture"? Again, CDA 230, the key law in this area, makes no special distinction for "platforms." There was no need for a "rhetorical device" to consider yourself protected (and there still isn't). Nothing in calling oneself a platform set up any legal architecture, no matter how many ignorant people on Twitter claim it is so. Unfortunately, someone who has already heard that false claim is likely to read Madrigal's piece as a confirmation of that incorrect bit of info.
So let's be clear once again and state that there is no special legal distinction for "platforms," and it makes no difference in the world whether an internet company refers to itself as a platform or a publisher (or, for that matter, an instigator, an enabler, a middleman, a gatekeeper, a forum, or anything else). All that matters is whether they meet the legal definition of an interactive computer service (which, if they're online, the answer is generally "yes") and, to be protected under CDA 230, whether the legal question at issue is their liability for third party content.
Some people may want the law changed. And they may think that "internet platforms" should be subject to some specific rules and regulations -- including silly, unenforceable ideas like "being neutral" -- but that's got nothing to do with the law today, and any suggestion that it does is simply incorrect.
Effective Copyright Policy: The federal government should modernize the copyright system through reforms that facilitate access to out-of-print and orphan works, while protecting the innovation incentives in the system. It should also promote open-licensing arrangements for copyrighted material supported by federal grant funding.

Now, just the fact that a Presidential campaign mentions that there's a problem with copyright law blocking access to content is somewhat revolutionary, so kudos to whoever got that into the plan. But the weird "while protecting the innovation incentives in the system" trailing line could mean anything and is designed to be just vague enough for anyone to read anything into it. What are the "innovation incentives in the system" right now? Well, on that, people totally disagree. Some people think that fair use, user rights and DMCA safe harbors are the innovation incentives in the system. Others, of course, argue it's long copyright terms and insane statutory damages. These two groups disagree and the Clinton platform offers no further enlightenment.
Effective Copyright Policy: Copyrights encourage creativity and incentivize innovators to invest knowledge, time, and money into the generation of myriad forms of content. However, the copyright system has languished for many decades, and is in need of administrative reform to maximize its benefits in the digital age. Hillary believes the federal government should modernize the copyright system by unlocking—and facilitating access to—orphan works that languished unutilized, benefiting neither their creators nor the public. She will also promote open-licensing arrangements for copyrighted material and data supported by federal grant funding, including in education, science, and other fields. She will seek to develop technological infrastructure to support digitization, search, and repositories of such content, to facilitate its discoverability and use. And she will encourage stakeholders to work together on creative solutions that remove barriers to the seamless and efficient licensing of content in the U.S. and abroad.

Open licensing is good. Removing barriers to effective licensing is also good. But there's no plan here. People have talked about these things for ages and never gotten anywhere because entrenched interests don't want this kind of thing to happen at all.
She also maintains her opposition to policies that unnecessarily restrict the free flow of data online –such as the high profile fight over the Stop Online Piracy Act (SOPA).

The language choices here appear to have been workshopped by a committee of hundreds. What the hell does this mean? Does it mean that she would oppose the fight over SOPA? Or SOPA itself? Because it's pretty clear that she's implying that she would oppose things like SOPA (which, again, had nothing to do with net neutrality). But she also was a SOPA supporter -- at least until it was politically inconvenient. During the height of the SOPA battle, she sent a letter insisting (contrary to the statement in her new platform) that there was "no contradiction" between supporting the free flow of information and enforcing strict copyright laws:
"There is no contradiction between intellectual property rights protection and enforcement and ensuring freedom of expression on the internet."So if she believes that, then SOPA wouldn't have restricted the free flow of data. Of course, once the public tide turned against SOPA -- guess what -- so did Hillary, suddenly making it out to have been an important fight for internet freedom, even though she denied that very point just months earlier:
“The United States wants the Internet to remain a space where economic, political, and social exchanges flourish. To do that, we need to protect people who exercise their rights online, and we also need to protect the Internet itself from plans that would undermine its fundamental characteristics.”

In other words, like a standard politician, we've got vague promises and flip flops -- along with ignoring previous positions when convenient.
Improve the Patent System to Reward Innovators: Hillary will enact targeted reforms to the patent system to reduce excessive patent litigation and strengthen the capacity of the Patent and Trademark Office, so that we continue to reward innovators.

Again, vague language that can be taken in many different ways (again, obviously on purpose). The good: highlighting the problem of "excessive patent litigation" is definitely a good sign and is basically an acknowledgement of the problems with the patent system -- mainly patent trolling, but that should include excessive litigation by operating companies as well. But again, that's immediately weighed down by what follows, which could mean basically anything. Strengthening the capacity of the PTO... for what? To reject bad patents? That would be good. To grant more patents? That might be bad. And the whole "so that we continue to reward innovators." What does that mean? If you believe that the patent system itself rewards innovators, then that would mean encouraging more patenting. If you believe that the patent system is stifling innovators, then that should mean ending bad patents that are used to hinder innovation. Which is it? Who the hell knows. And I doubt Clinton herself has any real understanding of the issues here either.
The Obama Administration made critical updates to our patent system through the America Invents Act, which created the Patent Trial and Appeals Board, and through other efforts to rein in frivolous suits by patent trolls. But costly and abusive litigation remains, which is why Hillary supports additional targeted rule changes. She supports laws to curb forum shopping and ensure that patent litigants have a nexus to the venue in which they are suing; require that specific allegations be made in demand letters and pleadings; and increase transparency in ownership by making patent litigants disclose the real party in interest.

Those are good things. But then we've got the expanded explanation of strengthening the PTO and again it's a giant "huh?"
Hillary believes it is essential that the PTO have the tools and resources it needs to act expeditiously on patent applications and ensure that only valid patents are issued. That is why she supports legislation to allow the PTO to retain the fees it collects from patent applicants in a separate fund—ending the practice of fee diversion by Congress, and enabling the PTO to invest funds left over from its annual operations in new technologies, personnel, and training. Hillary also believes we should set a standard of faster review of patent applications and clear out the backlog of patent applications.

Of course, this somewhat contradicts the stuff raised earlier. Fee retention is one of those ideas that perhaps makes sense, but skews the incentives in dangerous ways, possibly pushing the PTO to encourage more patent applications and patents in order to get more fees. Similarly, "faster review" historically has meant lots more crappy patents getting approved -- leading to more patent trolling over bogus patents.
"Mr. Cicconi, who worked in the White House for Presidents Ronald Reagan and George H.W. Bush, said he has backed every GOP presidential candidate since 1976. “But this year I think it’s vital to put our country’s well being ahead of party,” he said in a statement provided by the campaign. “Hillary Clinton is experienced, qualified, and will make a fine president. The alternative, I fear, would set our nation on a very dark path."Given AT&T's threat to take the neutrality fight to the Supreme Court, Cicconi's support is curious, but may say more about Trump's unpredictability than it does about Clinton. Regardless, the 14-page "technology and innovation agenda" includes upsetting her new BFF by continuing to fight for net neutrality:
"Hillary believes that the government has an obligation to protect the open internet. The open internet is not only essential for consumer choice and civic empowerment – it is a cornerstone of start-up innovation and creative disruption in technology markets. Hillary strongly supports the FCC decision under the Obama Administration to adopt strong network neutrality rules that deemed internet service providers to be common carriers under Title II of the Communications Act. These rules now ban broadband discrimination, prohibit pay-for-play favoritism, and establish oversight of “interconnection” relationships between providers. Hillary would defend these rules in court and continue to enforce them."The plan also makes some arguably vague promises on broadband, promising to deliver ubiquitous broadband to all Americans by 2020:
"Hillary will finish the job of connecting America’s households to the internet, committing that by 2020, 100 percent of households in America will have the option of affordable broadband that delivers speeds sufficient to meet families’ needs. She will deliver on this goal with continue investments in the Connect America Fund, Rural Utilities Service program, and Broadband Technology Opportunities Program (BTOP), and by directing federal agencies to consider the full range of technologies as potential recipients—i.e., fiber, fixed wireless, and satellite—while focusing on areas that lack any fixed broadband networks currently."While some outlets were quick to call this plan ambitious, historically vague broadband coverage promises haven't meant all that much.
"Hillary rejects the false choice between privacy interests and keeping Americans safe. She was a proponent of the USA Freedom Act, and she supports Senator Mark Warner and Representative Mike McCaul’s idea for a national commission on digital security and encryption. This commission will work with the technology and public safety communities to address the needs of law enforcement, protect the privacy and security of all Americans that use technology, assess how innovation might point to new policy approaches, and advance our larger national security and global competitiveness interests."Yes, it's abundantly clear that Clinton and friends continue to struggle with the idea that encryption is simply a tool, and like any tool it can be used for a myriad of purposes. That doesn't mean you unilaterally declare war on said tool -- or work tirelessly to make that tool less useful or more dangerous via backdoors -- a conversation we'll apparently be having over and over and over again should Clinton's presidency ascend beyond the rhetorical, larval stage. ]]>
Namely, a crucial turning point in Twitter’s evolution that arguably helped put it where it is today, both in a positive sense (it is a publicly-traded $25-billion company) and a negative one (its growth potential is in question and its strategy doesn’t seem to be working). And that turning point happened about five years ago, when Twitter decided to turn its back on the third-party ecosystem that helped make it successful in the first place.

We see this sort of thing in all sorts of areas -- especially around "intellectual property." People have a very emotional "holy shit moment" pretty frequently when they see "someone else" making money by leveraging something that they feel some sort of ownership attachment to, whether or not there's any legitimate basis for that attachment. So many of the intellectual property fights we see stem from that general feeling of "Hey, that's ripping me off!" even if the actions of those third parties may not have any real impact on the originating content, service or idea.
This process began gradually, with the acquisition of Tweetie — which became Twitter’s official iOS client — and restrictions on what third parties could do with tweets, including selling advertising related to them. But it escalated quickly, and arguably became an all-out war with Twitter’s moves against Bill Gross, the Idealab founder and inventor of search-related advertising, who was busy acquiring Twitter clients and trying to build an ad model around the public Twitter stream. The idea that someone could monetize Twitter before Twitter itself got around to doing so was what one investor called a “holy shit moment” for the company.
But he lost that argument to those who wanted to keep the pie smaller, but to capture more of it for themselves. That may have helped the company go public, but it has put the company in a serious bind today. One in which Wall Street is profoundly disappointed that what Twitter is capturing for itself "isn't enough" and the innovations that the company needs to keep growing and innovating are much harder to come by. Sure, it does things like Vine and Periscope -- both of which it bought out in infancy -- but to do so it's had to hamstring other third-party developers like Meerkat.

Some time ago, I circulated a document internally with a straightforward thesis: Twitter needs to decentralize or it will die. Maybe not tomorrow, maybe not even in a decade, but it was (and, I think, remains) my belief that all communications media will inevitably be decentralized, and that all businesses who build walled gardens will eventually see them torn down. Predating Twitter, there were the wars against the centralized IM providers that ultimately yielded Jabber, the breakup of Ma Bell, etc. etc. This isn’t to say that one can’t make quite a staggeringly lot of money with a walled garden or centralized communications utility, and the investment community’s salivation over the prospect of IPOs from LinkedIn, Facebook, and Twitter itself suggests that those companies will probably do quite well with a closed-but-for-our-API approach.
The call for a decentralized Twitter speaks to deeper motives than profit: good engineering and social justice. Done right, a decentralized one-to-many communications mechanism could boast a resilience and efficiency that the current centralized Twitter does not. Decentralization isn’t just a better architecture, it’s an architecture that resists censorship and the corrupting influences of capital and marketing. At the very least, decentralization would make tweeting as fundamental and irrevocable a part of the Internet as email. Now that would be a triumph of humanity.
I would argue that what makes Twitter the company valuable is not Twitter the app or 140 characters or @names or anything else having to do with the product: rather, it’s the interest graph that is nearly priceless. More specifically, it is Twitter identities and the understanding that can be gleaned from how those identities are used and how they interact that matters.

There's a more fundamental premise at work here. In the information era, spreading more information increases the pie massively and opens up many more opportunities. The challenge is that many others can also take advantage of many of those opportunities, but as the core player in the space, a company like Twitter has a clear and natural advantage, even if it did what Payne had wanted to do many years ago and give up the underlying control altogether.
If one starts with that sort of understanding — that Twitter the company is about the graph, not the app — one would make very different decisions. For one, the clear priority would not be increasing ad inventory on the Twitter timeline (which in this understanding is but one manifestation of an interest graph) but rather ensuring as many people as possible have and use a Twitter identity. And what would be the best way to do that? Through 3rd-parties, of course! And by no means should those 3rd-parties be limited to recreating the Twitter timeline: they should build all kinds of apps that have a need to connect people with common interests: publishers would be an obvious candidate, and maybe even an app that streams live video. Heck, why not a social network that requires a minimum of 140 characters, or a killer messaging app? Try it all, anything to get more people using the Twitter identity and the interest graph.
[E]ven if every single U.S. producer is shut down, wouldn't foreign sites happily take up the slack? It's not like Americans have some great irreproducible national skills in smut-making, or like it takes a $100 million Hollywood budget to make a porn movie. Foreign porn will doubtless be quite an adequate substitute for the U.S. market. Plus the foreign distributors might even be able to make and distribute copies of the existing U.S.-produced stock — I doubt that the imprisoned American copyright owners will be suing them for infringement (unless the U.S. government seizes the copyrights, becomes the world's #1 pornography owner, starts trying to enforce the copyrights against overseas distributors, and gets foreign courts to honor those copyrights, which is far from certain and likely far from cheap).

This is an interesting conjecture. Removing the producers from the equation opens up the possibility that foreign producers would simply do the math and up their profits by reselling product they didn't create. Having the US government eliminate their competition is an added bonus. It seems unlikely that the government would act on the behalf of porn companies it's legislated or prosecuted out of existence. But would it tolerate abuse of American IP, no matter how abhorrent the subject? Probably. The porn industry isn't known for its lobbying efforts.
The U.S. spends who knows how many prosecutorial and technical resources going after U.S. pornographers. A bunch of them get imprisoned. U.S. consumers keep using the same amount of porn as before.

This tactic sounds like it would work as well as current IP enforcement measures. As it stands now, ICE is better known for its RIAA/MPAA lapdog status than for producing credible results. Sites get taken down, sat on and returned to their owners with no charges brought or apologies offered. Drawing a bead on targets like porn producers makes for some rah-rah press but will have little effect on the amount of porn available.
Nor do I think that the crackdown will somehow subtly affect consumers’ attitudes about the morality of porn — it seems highly unlikely that potential porn consumers will decide to stop getting it because they hear that some porn producers are being prosecuted.

This falls right in line with the perception of file sharing as a "moral" issue. It's all well and good to claim the high road in the fight against infringement, but if the general public doesn't share your beliefs then the battle is not winnable. Legislation and prosecution aren't going to change anyone's mindset. It just makes the punishment seem ridiculous or unduly harsh.
The government gets understandably outraged by the “foreign smut loophole.” “Given all the millions that we’ve invested in going after the domestic porn industry, how can we tolerate all our work being undone by foreign filth-peddlers?,” pornography prosecutors and their political allies would ask. So they unveil the solution, in fact pretty much the only solution that will work: Nationwide filtering.

This goes far beyond simply requiring pre-installed filtering software. Instituting any sort of a blacklist combines the futility of whack-a-mole with the "we don't have time to follow procedures/respect rights" urgency of "doing something" to make the internet a "safer" place. As these actions prove futile, enforcement will move to cutting off the money supply, targeting credit card transactions, pressuring foreign governments to play by the US's rules, etc.
It’s true: Going after cyberporn isn’t really that tough — if you require every service provider in the nation to block access to all sites that are on a constantly updated government-run “Forbidden Off-Shore Site” list. Of course, there couldn’t be any trials applying community standards and the like before a site is added to the list; that would take far too long. The government would have to be able to just order a site instantly blocked, without any hearing with an opportunity for the other side to respond, since even a quick response would take up too much time, and would let the porn sites just move from location to location every several weeks.
Finally, the government can go after the users: Set up “honeypot” sites (seriously, that would be the technically correct name for them) that would look like normal offshore pornography sites. Draw people in to buy the stuff. Figure out who the buyers are. To do that, you'd also have to ban any anonymizer Web sites that might be used to hide such transactions, by setting up some sort of mandatory filtering such as what I described in option (2).

Politicians may state that they think porn should be outlawed or controlled, and some are even willing to trample on some rights to put that in motion. But it's hard for most to jump from taking down the supply side to attacking the demand. If your aim is to make the internet "safer," it's fairly easy to see that removing users has no effect on "safety." But while this logic leap is hard, it is by no means impossible. The War on Drugs has locked up thousands of users by making possession a crime. "Possession with the intent to distribute" is simply a matter of going above an arbitrary quantity. Possession laws assume the only reason a person would be carrying [x] amount of drugs is because they're selling to others. Would a person with more than [x] megabytes of porn on their hard drive be considered a distributor, thus opening up the possibility of additional charges? I don't see why not, given the attitude surrounding the issues.
Then arrest the pornography downloaders and prosecute them for receiving obscene material over the Internet, in violation of 18 U.S.C. § 1462; see, e.g., United States v. Whorley (4th Cir. 2008) (holding that such enforcement is constitutional, and quite plausibly so holding, given the United States v. Orito Supreme Court case).
How can the government's policy possibly achieve its stated goals, without creating an unprecedentedly intrusive censorship machinery, one that's far, far beyond what any mainstream political figures are talking about right now?

The answer is: it can't. But these concerns aren't being considered, at least not during an election run. Post-election, if anyone gets around to fighting this unwinnable battle, the concerns likely won't be considered at that point, either. It's usually not until the public gets noisy enough to jeopardize politicians' careers that any sort of consideration is given to the rights of the people affected. Even more disturbing is the fact that pursuing this end affects both sides of the creative effort: the producers and the consumers. Considering the resemblance these actions bear to past overreaching legislative efforts crafted to "protect" certain industries, it's rather disconcerting to see the possibility of these same actions being used to destroy a creative industry simply because certain people don't care for the product.
Google+ is a prime example of our complete failure to understand platforms from the very highest levels of executive leadership (hi Larry, Sergey, Eric, Vic, howdy howdy) down to the very lowest leaf workers (hey yo). We all don't get it. The Golden Rule of platforms is that you Eat Your Own Dogfood. The Google+ platform is a pathetic afterthought. We had no API at all at launch, and last I checked, we had one measly API call. One of the team members marched in and told me about it when they launched, and I asked: "So is it the Stalker API?" She got all glum and said "Yeah." I mean, I was joking, but no... the only API call we offer is to get someone's stream. So I guess the joke was on me.

This part rings incredibly true. I know that when Google+ launched, I liked it as a program, but asked people about APIs, because it needed to better integrate into my workflow -- and was told that that would be coming "sometime later." And while I still mess around with Google+, it's largely become an afterthought to me, because it just lives off in its own separate world, rather than integrating well. There are still features I like, but until developers have a chance to dive in and make it useful... it just doesn't feel like a necessity.
Microsoft has known about the Dogfood rule for at least twenty years. It's been part of their culture for a whole generation now. You don't eat People Food and give your developers Dog Food. Doing that is simply robbing your long-term platform value for short-term successes. Platforms are all about long-term thinking.
Google+ is a knee-jerk reaction, a study in short-term thinking, predicated on the incorrect notion that Facebook is successful because they built a great product. But that's not why they are successful. Facebook is successful because they built an entire constellation of products by allowing other people to do the work. So Facebook is different for everyone. Some people spend all their time on Mafia Wars. Some spend all their time on Farmville. There are hundreds or maybe thousands of different high-quality time sinks available, so there's something there for everyone.
Our Google+ team took a look at the aftermarket and said: "Gosh, it looks like we need some games. Let's go contract someone to, um, write some games for us." Do you begin to see how incredibly wrong that thinking is now? The problem is that we are trying to predict what people want and deliver it for them.