Techdirt. Stories filed under "platform"
https://beta.techdirt.com/

Tue, 20 Oct 2020 09:37:05 PDT
Section 230 Basics: There Is No Such Thing As A Publisher-Or-Platform Distinction
by Cathy Gellis
https://beta.techdirt.com/articles/20201017/13051145526/section-230-basics-there-is-no-such-thing-as-publisher-or-platform-distinction.shtml

We've said it before, many times: there is no such thing as a publisher/platform distinction in Section 230. But in those posts we also said other things about how Section 230 works, and perhaps doing so obscured that basic point. So just in case we'll say it again here, simply and clearly: there is no such thing as a publisher/platform distinction in Section 230. The idea that anyone could gain or lose the immunity the statute provides depending on which one they are is completely and utterly wrong.

In fact, the word "platform" does not even show up in the statute. Instead the statute uses the term "interactive computer service provider." The idea of a "service provider" is a meaningful one, because the whole point of Section 230 is to make sure that the people who provide the services that facilitate others' use of the Internet are protected in order for them to be able to continue to provide those services. We give them immunity from the legal consequences of how people use those services because without it they wouldn't be able to keep offering them – it would simply be too risky.

But saying "interactive computer service provider" is a mouthful, and it also can get a little confusing because we sometimes say "internet service provider" to mean just a certain kind of interactive computer service provider, when Section 230 is not nearly so specific. Section 230 applies to all kinds of service providers, from ISPs to email services, from search engines to social media providers, from the dial-up services we knew in the 1990s back when Section 230 was passed to whatever new services have yet to be invented. There is no limit to the kinds of services Section 230 applies to. It simply applies to anyone and everyone, including individual people, who are somehow providing someone else the ability to use online computing. (See Section 230(f)(2).)

So for shorthand people have started to colloquially refer to protected service providers as "platforms." Because statutes are technical creatures it is not generally a good idea to use shorthand terms in place of the precise ones used by the statutes; often too much important meaning can be lost in the translation. But in this case "platform" is a tolerable synonym for most of our policy discussions because it still captures the essential idea: a Section 230-protected "platform" is the service that enables someone else to use the Internet.

Which brings us to the term "publisher," which does appear in the statute. In particular it appears in the critically important provision at Section 230(c)(1), which does most of the work making Section 230 work:

No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

In this provision the term "publisher" (or "speaker") refers to the creator of the content at issue. Who created it? Was it the provider of the computer service, aka the platform itself? Or was it someone else? Because if it had been someone else, if the information at issue had been "provided by another information content provider," then we don't get to treat the platform as the "publisher or speaker" of that information – and it is therefore immune from liability for it.

Where the confusion has arisen is in the use of the term "publisher" in another context as courts have interpreted Section 230. Sometimes the term "publisher" itself means "facilitator" or "distributor" of someone else's content. When courts first started thinking about Section 230 (see, e.g., Zeran v. AOL) they sometimes used the term because it helped them understand what Section 230 was trying to accomplish. It was trying to protect the facilitator or distributor of others' expression – or, in other words, the platform people used to make that expression – and using the term "publisher" from our pre-Section 230 understanding of media law helped the courts recognize the legal effect of the statute.

Using the term did not, however, change that effect. Or the basic operation of the statute. The core question in any Section 230 analysis has always been: who originated the content at issue? That a platform may have "published" it by facilitating its appearance on the Internet does not make it the publisher for purposes of determining legal responsibility for it, because "publishing" is not the same as "creating." And Section 230 – and all the court cases interpreting it – have made clear that it is only the creator who can be held liable for what was created.

There are plenty of things we can still argue about regarding Section 230, but whether someone is a publisher versus a platform should not be one of them. It is only the creator v. facilitator distinction that matters.

Tue, 23 Jun 2020 09:26:34 PDT
Hello! You've Been Referred Here Because You're Wrong About Section 230 Of The Communications Decency Act
by Mike Masnick
https://beta.techdirt.com/articles/20200531/23325444617/hello-youve-been-referred-here-because-youre-wrong-about-section-230-communications-decency-act.shtml

Hello! Someone has referred you to this post because you've said something quite wrong about Section 230 of the Communications Decency Act.

I apologize if it feels a bit cold and rude to respond in such an impersonal way, but I've been wasting a ton of time lately responding individually to different people saying the same wrong things over and over again, and I was starting to feel like this guy:

[image: xkcd's "Duty Calls"]

And... I could probably use more sleep, and my blood pressure could probably use a little less time spent responding to random wrong people. And, so, for my own good you get this. Also for your own good. Because you don't want to be wrong on the internet, do you?

Also I've totally copied the idea for this from Ken "Popehat" White, who wrote Hello! You've Been Referred Here Because You're Wrong About The First Amendment a few years ago, and it's great. You should read it too. Yes, you. Because if you're wrong about 230, there's a damn good chance you're wrong about the 1st Amendment too.

While this may all feel kind of mean, it's not meant to be. Unless you're one of the people who is purposefully saying wrong things about Section 230, like Senator Ted Cruz or Rep. Nancy Pelosi (being wrong about 230 is bipartisan). For them, it's meant to be mean. For you, let's just assume you made an honest mistake -- perhaps because deliberately wrong people like Ted Cruz and Nancy Pelosi steered you wrong. So let's correct that.

Before we get into the specifics, I will suggest that you just read the law, because it seems that many people who are making these mistakes seem to have never read it. It's short, I promise you. If you're in a rush, just jump to part (c), entitled Protection for “Good Samaritan” blocking and screening of offensive material, because that's the only part of the law that actually matters. And if you're in a real rush, just read Section (c)(1), which is only 26 words, and is the part that basically every single court decision (and there have been many) has relied on.

With that done, we can discuss the various ways you might have been wrong about Section 230.

If you said "Once a company like that starts moderating content, it's no longer a platform, but a publisher"

I regret to inform you that you are wrong. I know that you've likely heard this from someone else -- perhaps even someone respected -- but it's just not true. The law says no such thing. Again, I encourage you to read it. The law does distinguish between "interactive computer services" and "information content providers," but those are not, as some imply, fancy legalistic ways of saying "platform" and "publisher." There is no "certification" or "decision" that a website needs to make to get 230 protections. It protects all websites and all users of websites when there is content posted on the sites by someone else.

To be a bit more explicit: at no point in any court case regarding Section 230 is there a need to determine whether or not a particular website is a "platform" or a "publisher." What matters is solely the content in question. If that content is created by someone else, the website hosting it cannot be sued over it.

Really, this is the simplest, most basic understanding of Section 230: it is about placing the liability for content online on whoever created that content, and not on whoever is hosting it. If you understand that one thing, you'll understand most of the most important things about Section 230.

To reinforce this point: there is nothing any website can do to "lose" Section 230 protections. That's not how it works. There may be situations in which a court decides that those protections do not apply to a given piece of content, but it is very much fact-specific to the content in question. For example, in the lawsuit against Roommates.com for violating the Fair Housing Act, the court ruled against Roommates, but not that the site "lost" its Section 230 protections, or that it was now a "publisher." Rather, the court explicitly found that some content on Roommates.com was created by 3rd party users and thus protected by Section 230, and some content (namely pulldown menus designating racial preferences) was created by the site itself, and thus not eligible for Section 230 protections.

If you said "Because of Section 230, websites have no incentive to moderate!"

You are wrong. If you reformulated that statement to say that "Section 230 itself provides no incentives to moderate" then you'd be less wrong, but still wrong. First, though, let's dispense with the idea that thanks to Section 230, sites have no incentive to moderate. Find me a website that doesn't moderate. Go on. I'll wait. Lots of people say things like one of the "chans" or Gab or some other site like that, but all of those actually do moderate. There's a reason that all such websites do moderate, even those that strike a "free speech" pose: (1) because other laws require at least some level of moderation (e.g., copyright laws and laws against child porn), and (2) more importantly, with no moderation, a platform fills up with spam, abuse, harassment, and just all sorts of garbage that make it a very unenjoyable place to spend your internet time.

So there are many, many incentives for nearly all websites to moderate: namely to keep users happy, and (in many cases) to keep advertisers or other supporters happy. When sites are garbage, it's tough to attract a large user base, and even more difficult to attract significant advertising. So, to say that 230 means there's no incentive to moderate is wrong -- as proven by the fact that every site does some level of moderation (even the ones that claim they don't).

Now, to tackle the related argument -- that 230 by itself provides no incentive to moderate -- that is also wrong. Because courts have ruled that Section (c)(1) immunizes moderation choices, and Section (c)(2) explicitly says that sites are not liable for their moderation choices, sites actually have a very strong incentive provided by 230 to moderate. Indeed, this is one key reason why Section 230 was written in the first place. It was done in response to a ruling in the Stratton Oakmont v. Prodigy lawsuit, in which Prodigy, in an effort to provide a "family friendly" environment, did some moderation of its message boards. The judge in that case ruled that since Prodigy moderated the boards, it would be liable for anything it left up.

If that ruling had stood and been adopted by others, it would, by itself, be a massive disincentive to moderation. Because the court was saying that moderation itself creates liability. And smart lawyers will say that the best way to avoid that kind of liability is not to moderate at all. So Section 230 explicitly overruled that judicial decision, and eliminated liability for moderation choices.

If you said "Section 230 is a massive gift to big tech!"

Once again, I must inform you that you are very, very wrong. There is nothing in Section 230 that applies solely to big tech. Indeed, it applies to every website on the internet and every user of those websites. That means it applies to you, as well, and helps to protect your speech. It's what allows you to repeat something someone else said on Facebook and not be liable for it. It's what protects every website that has comments, or any other third-party content. It applies across the entire internet to every website and every user, and not just to big tech.

The "user" protections get less attention, but they're right there in the important 26 words. "No provider or user of an interactive computer service shall be treated as the publisher or speaker...." That's why there are cases like Barrett v. Rosenthal where someone who forwarded an email to a mailing list was held to be protected by Section 230, as a user of an interactive computer service who did not write the underlying material that was forwarded.

And it's not just big tech companies that rely on Section 230 every day. Every news organization (even those that write negative articles about Section 230) that has comments on its website is protected thanks to Section 230. This very site was sued, in part, over comments, and Section 230 helped protect us as well. Section 230 fundamentally protects free speech across the internet, and thus it is more properly called out as a gift to internet users and free speech, not to big tech.

If you said "A site that has political bias is not neutral, and thus loses its Section 230 protections"

I'm sorry, but you are very, very, very wrong. Perhaps more wrong than anyone saying any of the other things above. First off, there is no "neutrality" requirement at all in Section 230. Seriously. Read it. If anything, it says the opposite. It says that sites can moderate as they see fit and face no liability. This myth is out there and persists because some politicians keep repeating it, but it's wrong and the opposite of truth. Indeed, any requirement of neutrality would likely raise significant 1st Amendment questions, as it would be involving the law in editorial decision making.

Second, as described earlier, you can't "lose" your Section 230 protections, especially not over your moderation choices (again, the law explicitly says that you cannot face liability for moderation choices, so stop trying to make it happen). If content is produced by someone else, the site is protected from lawsuit, thanks to Section 230. If the content is produced by the site, it is not. Moderating the content is not producing content, and so the mere act of moderation, whether neutral or not, does not make you lose 230 protections. That's just not how it works.

If you said "Section 230 requires all moderation to be in 'good faith' and this moderation is 'biased' so you don't get 230 protections"

You are, yet again, wrong. At least this time you're using a phrase that actually is in the law. The problem is that it's in the wrong section. Section (c)(2)(A) does say that:

No provider or user of an interactive computer service shall be held liable on account of any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected

However, that's just one part of the law, and as explained earlier, nearly every Section 230 case about moderation hasn't even used that part of the law, instead relying on Section (c)(1)'s separation of an interactive computer service from the content created by users. Second, the good faith clause is only in half of Section (c)(2). There's also a separate section, which has no good faith limitation, that says:

No provider or user of an interactive computer service shall be held liable on account of... any action taken to enable or make available to information content providers or others the technical means to restrict access to material....

So, again, even if (c)(2) applied, most content moderation could avoid the "good faith" question by relying on that part, (c)(2)(B), which has no good faith requirement.

However, even if you could somehow come up with a case where the specific moderation choices were somehow crafted such that (c)(1) and (c)(2)(B) did not apply, and only (c)(2)(A) were at stake, even then, the "good faith" modifier is unlikely to matter, because a court trying to determine what constitutes "good faith" in a moderation decision is making a very subjective decision regarding expression choices, which would create massive 1st Amendment issues. So, no, the "good faith" provision is of no use to you in whatever argument you're making.

If you said "Section 230 is why there's hate speech online..."

Ooof. You're either The NY Times or very confused. Maybe both. The 1st Amendment protects hate speech in the US. Elsewhere, not so much. Either way, it has little to do with Section 230.

If you said "Section 230 means these companies can never be sued!"

I regret to inform you that you are wrong. Internet companies are sued all the time. Section 230 merely protects them from a narrow set of frivolous lawsuits, in which the websites are sued either for the content created by others (in which case the actual content creators remain liable) or in cases where they're being sued for the moderation choices they make, which are mostly protected by the 1st Amendment anyway (but Section 230 helps get those frivolous lawsuits kicked out faster). The websites can and do still face lawsuits for many, many other reasons.

If you said "Section 230 is a get out of jail card for websites!"

You're wrong. Again, websites are still 100% liable for any content that they themselves create. Separately, Section 230 explicitly exempts federal criminal law -- meaning that stories that blame things like sex trafficking and opioid sales on 230 are very much missing the point as well. The Justice Department is not barred by Section 230. It says so quite clearly:

Nothing in this section shall be construed to impair the enforcement of... any other Federal criminal statute

So many of the complaints about criminal activity are not about Section 230, but about a lack of enforcement.

If you said "Section 230 is why there's piracy online"

You again may be the NY Times or someone who has not read Section 230. Section 230 explicitly exempts intellectual property law:

Nothing in this section shall be construed to limit or expand any law pertaining to intellectual property.

If you said "Section 230 gives websites blanket immunity!"

The courts have made it clear this is not the case at all. In fact, many courts have highlighted situations in which Section 230 does not apply. From the Roommates case to the Accusearch case, from Doe v. Internet Brands to Oberdorf v. Amazon, judges have made it clear that there are limits to Section 230 protections, and that the immunity conveyed by Section 230 is not as broad as people claim. At the very least, the courts seem to have little difficulty targeting what they consider to be "bad actors" with regards to the law.

If you said "Section 230 is why big internet companies are so big!"

You are, again, incorrect. As stated earlier, Section 230 is not unique to big internet companies, and indeed, it applies to the entire internet. Research shows that Section 230 actually helps incentivize competition, in part because without Section 230, the costs of running a website would be massive. Without Section 230, large websites like Google and Facebook could handle the liability, but smaller firms would likely be forced out of business, and many new competitors might never get started.

If you said "Section 230 was designed to encourage websites to be neutral common carriers"

You are exactly 100% wrong. We've already covered why it does not require neutrality above, but it was also intended as the opposite of requiring websites to be "common carriers." Specifically, as mentioned above, part of the impetus for Section 230 was to enable services to create "family friendly" spaces, in which plenty of legal speech would be blocked. A common carrier is a very specific thing that has nothing to do with websites and less than nothing to do with Section 230.

If you said "If all this stuff is actually protected by the 1st Amendment, then we can just get rid of Section 230"

You're still wrong, though perhaps not as wrong as everyone else making these bad takes. Without Section 230, and relying solely on the 1st Amendment, you still open up basically the entire internet to nuisance suits. Section 230 helps get cases dismissed early, whereas using the 1st Amendment would require lengthy and costly litigation. 230 does rely strongly on the 1st Amendment, but it provides a procedural advantage in getting vexatious, frivolous nuisance lawsuits shut down much faster than they would be otherwise.

There seems to be more and more wrong stuff being said about Section 230 nearly every day, but hopefully this covers most of the big ones. If you see someone saying something wrong about Section 230, and you don't feel like going over all of their mistakes, just point them here, and they can be educated.

Wed, 12 Feb 2020 09:39:52 PST
Arizona Legislator Wants To Strip Platforms Of Section 230 Immunity If They're 'Politically Biased'
by Tim Cushing
https://beta.techdirt.com/articles/20200211/14004143906/arizona-legislator-wants-to-strip-platforms-section-230-immunity-if-theyre-politically-biased.shtml

Another bill containing some bad ideas is being floated in the Arizona legislature. Rep. Bob Thorpe thinks social media companies are biased against conservatives and feels the best way to address this is to steamroll the Constitution and Section 230. (via Eric Goldman)

Thorpe's bill [PDF] says it will turn platforms into publishers at the drop of a bias accusation:

Specifies a person who allows online users to upload publicly accessible content on the internet and who edits, deletes or makes it difficult for online users to locate and access the uploaded content in an easy or timely manner for politically biased reasons is:

a) Deemed to be a publisher;

b) Deemed to not be a platform; and

c) Liable for damages suffered by an online user because of the person's actions, including damage for violations of rights guaranteed to the online user by the Federal or State Constitutions.

To be clear, there are no enshrined rights guaranteeing unimpeded use of private companies' platforms or access to "uploaded content." Writing a bill that proclaims there are doesn't make the false assertion any more true. If platforms are perceived to be engaging in politically motivated moderation, the bill allows the affected user (or the state Attorney General) to engage in litigation that's doomed to fail.

For someone so concerned about walled gardens of the political variety, Rep. Thorpe doesn't seem all that interested in practicing what he's preaching to his voter base. Thorpe's Twitter account is protected, which means most constituents don't have access to it and it allows him to pick and choose who gets to see his tweets.

This is only Thorpe's latest attempt to adversely affect protections and rights while claiming to be very concerned about rights violations. Journalists and activists have already pointed out Thorpe's tendency to block critics, all while claiming his account (@azrepbobthorpe) is for personal use, rather than for his legislative work. His bio claims he's a "Christian Constitutional Legislator," but his account is an ongoing Constitutional violation.

In 2017, he introduced a bill that would violate the First Amendment under the theory it would somehow make public universities more protective of the rights he's undermining. Thorpe has a problem with any "divisive" speech -- especially the kind that comes from teachers and professors who might -- as the bill put it -- "promote division, resentment or social justice toward a race, gender, religion, political affiliation, social class or other class of people." As FIRE noted then, Thorpe's proposed government interference in classroom instruction would only harm the First Amendment, not save it.

Prohibiting instructors from teaching particular perspectives or topics is precisely the kind of content- and viewpoint-based restriction forbidden under the First Amendment. This does not change even if someone on campus might deem those perspectives or topics likely “to promote social justice, division, or resentment.” In practice, this legislation would confer unfettered discretion on campus administrators to shut down discussion on nearly any subject.

During the next legislative session, Bob Thorpe proposed another free speech-threatening bill that suggested free-market capitalism should be shielded from criticism.

This week, Arizona Rep. Bob Thorpe (R – Flagstaff) introduced a bill that would designate "American free-market capitalism" the state's official "political-economic system", and declares the legislature's intent "that taxpayer dollars not be used to promote or to provide material support for any political-economic system that opposes the principles of free-market capitalism."

Thorpe thinks he's a free speech warrior. But all he does is try to erect content-based restrictions on speech -- the kind of thing no court has been sympathetic to. As Reason noted, the bill was stupid political point-scoring bullshit that had zero chance of surviving even the most cursory Constitutional glance.

For starters, the bill contains a naked content-based restriction on the use of taxpayer dollars to promote something other than "free-market capitalism". From what activities Thorpe would want to yank state support is not entirely clear; his bill says only that it would include the promotion of "socialism, communism, and fascism."

Over at the Phoenix New Times, Antonia Noori Farzan notes that "taken to an extreme, it could potentially mean that state universities would be banned from providing any resources to campus chapters of the Democratic Socialists of America." Depending on your definition of free-market, the bill could be used to deny resources to college Democrat and Republican chapters too.

The good times roll on. Bob Thorpe is finally winding down his unstoried seven-term state house career, exiting with another stupid bill that ignores the Constitution in favor of letting people who don't understand Section 230 or the First Amendment feel like their grievances are being redressed.

Tue, 26 Nov 2019 10:44:00 PST
Australian Attorney General Wants To Make The Country's Defamation Law Even Worse
by Tim Cushing
https://beta.techdirt.com/articles/20191125/08355643452/australian-attorney-general-wants-to-make-countrys-defamation-law-even-worse.shtml

Australia's government is planning to revamp its defamation law. Good. Because it's all kinds of fucked up. The law that's in place has encouraged all sorts of litigation from people who would prefer to sue service providers and social media platforms, rather than the people who actually said defamatory things.

But it's unclear what sort of reform the government actually has in mind. Australia's Attorney General Christian Porter says the country's defamation law is "unfair." It's certainly not a good law, but Porter thinks it doesn't strike a "perfect balance" between protecting journalists from being hit with bogus lawsuits and protecting individuals from being defamed.

He's right. The law doesn't strike the right balance. But he's wrong about how to fix it. Very wrong.

Attorney-General Christian Porter says social media platforms should be treated the same as traditional publishers under defamation law, a change that would present a fundamental new challenge for global companies such as Facebook and Twitter.

It appears Porter believes the playing field can only be leveled by dragging social media platforms down to the level of local journalists the law fails to protect. This is Porter's idea of "fairness," apparently. If the law is going to continue to suck, it should suck for more people.

Despite the fact that social media platforms don't actually "publish" anything, Porter wants to treat Facebook, et al like newspapers. In Porter's mind, anything posted by users apparently should be vetted and fact-checked and edited by social media platforms before it goes live. You know, like a newspaper.

Of course, this is impossible and Porter knows it. So, "reforming" the law just means making it easier for bad faith litigation to proceed, allowing actual defamers to escape punishment while judgments and fees are extracted from American social media companies. Porter is pretty sure this is the right thing to do, even as he admits he has no idea if it even can be done.

"My own view ... is that online platforms, so far as reasonably possible, should be held to essentially the same standards as other publishers," Mr Porter told an audience at the National Press Club.

"But you have to, of course, take into account, reasonable, sensible measures for how you do that ... because of the volume of what goes on in Twitter and Facebook is much larger than the volume from a standard newspaper."

Saying that "the volume of what goes on in Twitter and Facebook is much larger than the volume from a standard newspaper" is such an understatement as to suggest that Porter has absolutely no familiarity with the issue at hand. Comparing the two is like saying the volume of Niagara Falls is larger than a leaky sink. Yes, they both involve water moving downward, but that's about the extent of the comparison. Saying that the volume from one is "much larger" than the other leaves out just how much larger. Indeed, it's so much larger that there literally is no reasonable comparison. Yet, he chose to make it anyway.

The AG's defamation law "fix" appears to be a response to an NSW Supreme Court decision handed down earlier this year -- one that held Australian press outlets legally responsible for defamatory comments made by readers on the outlets' Facebook pages. But rather than improve the law to protect press outlets, AG Porter just wants to make it worse for social media companies.

How this is supposed to fix anything is anyone's guess. Maybe the Attorney General feels the country's court system just isn't seeing enough bogus litigation. Whatever the case, this reform effort by the Australian government appears poised to make things worse for Australians and everyone who provides a platform for them.

Thu, 11 Jul 2019 11:58:42 PDT Why Is The Washington Post Publishing Blatantly False Propaganda About Section 230? Mike Masnick https://beta.techdirt.com/articles/20190711/10380542570/why-is-washington-post-publishing-blatantly-false-propaganda-about-section-230.shtml https://beta.techdirt.com/articles/20190711/10380542570/why-is-washington-post-publishing-blatantly-false-propaganda-about-section-230.shtml One of the big points we keep making about Section 230 of the Communications Decency Act is that we totally get it when grandstanding politicians or online trolls misrepresent the law. But the media should not be complicit in pumping blatantly false statements. While I may disagree with them personally, there are intellectually honest arguments for why Section 230 should be amended or changed. I'm happy to debate those arguments. What's ridiculous, however, is when the arguments are based on a completely false reading of the law. And no upstanding news organization should allow blatant misinformation like that. However, with all the misguided screaming about "liberal bias" in the media, newspapers like the Washington Post and the NY Times seem to feel like they need to publish blatant disinformation, to avoid having trolls and idiots accuse them of bias.

Even so, the Washington Post's decision to publish this op-ed by Charlie Kirk attacking Section 230 may be the worst we've seen. It is so full of factually false information, misleading spin, and just downright disinformation that no respectable publication should have allowed it to be published. And yet, there it is in the Washington Post -- one of the major news organizations that Donald Trump likes to declare "fake news." If you're unaware of Kirk, he's a vocal Trump supporter, who runs an organization called Turning Point USA that appears to specialize in playing the victim in all sorts of ridiculous conspiracies... all while (hypocritically) arguing that his political opponents ("the libs") are always acting as victims and are "training a generation of victims who are being trained to be offended by something." And yet, it seems that it's really Kirk who is always offended.

This Washington Post op-ed is just one example. Here, Kirk is playing the victim of (as of yet, still unproven) anti-conservative bias on social media.

By now, most conservatives are convinced that our voices are being shadow-banned, throttled, muted and outright censored online. In fact, amid protestations by groups including the Internet Association, which claims Facebook, Google and Twitter are bias free, it’s an open fact that Big Tech is run predominantly by those on the ideological left. Facebook’s founder Mark Zuckerberg and Twitter’s chief executive Jack Dorsey even admitted this before Congress, and footage of Google’s leadership consoling one another after President Trump’s victory in 2016 indicates the same is true for them.

Many on the right have complained loudly and often of anti-conservative bias online. Unfortunately, all too often this is where our efforts stop. Once we’re ignored or dismissed long enough, conservatives seem to just shrug our collective shoulders and accept defeat. It’s this type of passivity that has allowed progressives to dominate film and television, universities and large swaths of the mainstream news media. How did they accomplish that? By fighting tooth and nail for what they believe in every vertical.

While it is true that many people who work in the big internet companies probably lean towards the Democratic side of the aisle (though not nearly as far as some make it out to be), that's different than proving that they have put in place policies that are biased against "conservatives" (and I use that term loosely). Again, nearly every example that people trot out actually involves trolling, harassment, actual Nazis or other violations of terms of service. And while these companies sometimes make mistakes, they seem to do so pretty much across the board -- which is the very nature of moderating so much content.

Separately, many of the links in Kirk's opening above don't actually say what he pretends they say. Professor Matthew Boedy went through the above links and put together a great Twitter thread unpacking how he misrepresents nearly everything:

The thread is a lot longer and covers many more examples. Either way, Charlie needs to play the victim, and he's decided that the culprit is Section 230. Because he either doesn't understand Section 230, or is deliberately misrepresenting it.

The second obstacle to the free market is Big Tech’s exploitation of preexisting laws, namely Section 230 of the Communications Decency Act that was passed by Congress in the '90s. Social media companies have leveraged Section 230 to great effect, and astounding profits, by claiming they are platforms — not publishers — thereby avoiding under the law billions of dollars in potential copyright infringement and libel lawsuits. YouTube, for example, advertises itself as an open platform “committed to fostering a community where everyone’s voice can be heard.” Facebook and Twitter make similar claims. Let’s be clear, when these companies censor or suppress conservative content, they are behaving as publishers, and they should be held legally responsible for all the content they publish. If they want to continue hiding behind Section 230 and avoid legal and financial calamity, they must reform.

And here's where an editor totally should have stepped in, because almost all of this is wrong or gibberish. First off, even a cursory glance at the text of CDA 230 shows that it excludes intellectual property, such as copyright. Section (e)(2) literally says: "Nothing in this section shall be construed to limit or expand any law pertaining to intellectual property." So what the fuck is Kirk talking about when he says that they used this law to avoid "billions of dollars in potential copyright infringement... lawsuits." The answer is that Kirk has no idea what he's talking about, and now seems to be repeating propaganda pushed out by "liberal" Hollywood.

As for it allowing them to avoid "libel lawsuits," well, yes. But that's because Section 230 is about properly applying liability to those who make the statements. We don't blame AT&T when someone uses a phone to make a bomb threat. We don't blame Ford when someone gets into a car accident. And we don't blame Facebook when someone posts defamatory content. It's kind of straightforward.

Still, where it's really egregious is that Kirk continues to push the total myth that Section 230 allows companies to hide if they just claim they're a "platform" rather than "a publisher." That's not how the law works at all. It doesn't make any such distinction.

And here's the really crazy thing: if Kirk got his "wish" and actually got rid of CDA 230 and made internet companies liable, his own content would likely be first on the chopping block. Remember, one of Kirk's claims to fame was publishing a "Professor Watchlist" calling out allegedly "left-leaning academics" who he feels discriminate against conservatives. He can do that because that's 1st Amendment protected speech (opinion). But if 230 is amended to require "neutrality," well, such a list is anything but neutral. Furthermore, the risk of liability of hosting such a list would be high. Even though I'd argue that it's protected speech, you can bet that someone might find some of the claims on the list defamatory -- and thus there would be strong pressure for sites to pull it down to avoid liability.

As radio host Dennis Prager often says, if an airline permitted only those passengers holding the New York Times to board but then denied Wall Street Journal readers, we would all rightly call this discrimination and demand the airline change its policy.

This is dumb for a huge number of reasons. First of all, I don't think we'd all rightly call it discrimination. We'd call it a business decision. Probably a bad one. Which is why no airline would ever do such a thing. Second, where exactly is the social media platform that is banning people for subscribing to the WSJ, but not the NYT? It doesn't exist. This is such a hyperbolic, misleading example. People are being banned for harassment and trolling. Not for holding conservative viewpoints. No one's being kicked off of platforms for calling for lower taxes, less government, or other traditionally "conservative" ideas.

In the same way, conservatives cannot win the battle of ideas if we’re marginalized or removed from mainstream culture and mainstream platforms.

This, also, is laughable. Remember, "right wing" media dominates both radio and cable television. I don't see Kirk demanding that Fox News host more liberal viewpoints to balance out Hannity. And, once again, even in the supposedly "liberal" Washington Post, he's allowed to post this blatantly false nonsense.

Again, the Washington Post should absolutely be willing to post different points of view, including those of Kirk and his allies. But they shouldn't allow him to blatantly spread disinformation about what the law says and what it does. That's just... as Kirk would say, "fake news."

Mon, 17 Jun 2019 09:35:00 PDT Once More With Feeling: There Is No Legal Distinction Between A 'Platform' And A 'Publisher' Mike Masnick https://beta.techdirt.com/articles/20190613/03172142391/once-more-with-feeling-there-is-no-legal-distinction-between-platform-publisher.shtml https://beta.techdirt.com/articles/20190613/03172142391/once-more-with-feeling-there-is-no-legal-distinction-between-platform-publisher.shtml Alexis Madrigal, over at the Atlantic, has a mostly interesting piece recounting the history of how the big internet companies started calling themselves platforms. The history is actually pretty fascinating:

There was a time when there were no “platforms” as we now know them. That time was, oh, about 2007. For decades, computing (video games included) had had this term “platform.” As the 2000s began, Tim O’Reilly and John Battelle proposed “the web as a platform,” primarily focusing on the ability of different services to connect to one another.

The venture capitalist Marc Andreessen, then the CEO of the also-ran social network Ning, blasted anyone who wanted to extend the definition. “A ‘platform’ is a system that can be programmed and therefore customized by outside developers,” he wrote. “The key term in the definition of platform is ‘programmed.’ If you can program it, then it’s a platform. If you can’t, then it’s not.” My colleague Ian Bogost, who co-created an MIT book series called Platform Studies, agreed, as did most people in the technical community. Platforms were about being able to run code in someone else’s system.

This was Facebook’s original definition of its product, Facebook Platform, which allowed outside developers to build widgets and games, and extend the core service. In the years before 2016, nearly all of Mark Zuckerberg’s public references to Facebook as a platform were technical, about connecting with developers.

Amusingly, this actually reminded me of articles I had written over a decade ago, talking up why Google and Facebook needed to become a new kind of internet platform -- which I meant in the same manner as Madrigal describes above and which most people talking about "platforms" meant in the mid-aughts. It meant a system on which others could develop new applications and services. I have to admit that I don't know quite how and when the world switched to calling general internet services "platforms" instead, and I'm just as guilty of doing so as others.

I have two quick thoughts on why this may have happened before I get back to Madrigal's piece. First, many of the discussions around these big internet companies didn't really have a good descriptive term. When talking about the law, things like Section 230 of the Communications Decency Act refer to them as "interactive computer services," which is awkward. And the DMCA refers to them as "service providers," which is quite confusing, because "internet service provider" has an existing (and somewhat different) meaning, as the company that provides you internet access. Ideally, those companies should be called "internet access providers" (IAPs) rather than ISPs, but what's done is done. And, then of course, there's the equally awkward term "intermediary," which just confuses the hell out of most non-lawyers (and some lawyers). So "platform" came out in the wash as the most useful, least awkward option.

And if Madrigal's piece had just stuck with that interesting historical shift, and maybe dug into things like I did in the previous paragraph, that might be really compelling. Unfortunately, Madrigal goes a step or two further -- right up to the line (though not quite over it) of suggesting that there's some legal significance to calling oneself a platform. This is something we've seen too many reporters do of late, spreading a false impression that internet "platforms" somehow get magic protections that internet "publishers" don't get.

As we've explained, there is literally no distinction here. Usually people make this argument with regard to CDA 230's protections, but as we've discussed in great detail, that law makes no distinction between a "platform" and a "publisher." Instead, it applies to all "interactive computer services," including any publisher, so long as they host third-party content. Madrigal's piece doesn't call out CDA 230 the way others have, but, unfortunately, it absolutely can be read in a misleading way to suggest that there is some magical legal distinction here that matters. Specifically, this part:

This new rhetorical device wasn’t just for press releases, but also for ginning up business and creating a legal architecture.

Uh, what "legal architecture"? Again, CDA 230, the key law in this area, makes no special distinction for "platforms." There was no need for a "rhetorical device" to consider yourself protected (and there still isn't). Nothing in calling oneself a platform set up any legal architecture, no matter how many ignorant people on Twitter claim it is so. Unfortunately, someone who has already heard that false claim is likely to read Madrigal's piece as a confirmation of that incorrect bit of info.

So, let's be clear once again and state that there is no special legal distinction for "platforms," and it makes no difference in the world whether an internet company refers to itself as a platform or a publisher (or, for that matter, an instigator, an enabler, a middleman, a gatekeeper, a forum, or anything else). All that matters is whether they meet the legal definition of an interactive computer service (which, if they're online, the answer is generally "yes"), and (to be protected under CDA 230) whether there's a legal question about holding them liable for third-party content.

Some people may want the law changed. And they may think that "internet platforms" should require some specific rules and regulations -- including silly, unenforceable ideas like "being neutral" -- but that's got nothing to do with the law today, and any suggestion that it does is simply incorrect.

Wed, 29 Jun 2016 06:19:00 PDT Hillary Clinton's Intellectual Property Platform: Too Vague & Confusing Mike Masnick https://beta.techdirt.com/articles/20160628/17275434854/hillary-clintons-intellectual-property-platform-too-vague-confusing.shtml https://beta.techdirt.com/articles/20160628/17275434854/hillary-clintons-intellectual-property-platform-too-vague-confusing.shtml Yesterday, Karl covered Hillary Clinton's newly released platform on technology & innovation as it related to broadband policy and encryption. Today I wanted to look through what it said on another set of key issues to folks around Techdirt: copyright and patent policy. And, as with Karl's post yesterday, there appear to be some things that sound good, but are so vague and devoid of actual nuance as to be laughable. I get it: this is the political platform of someone running for President, and thus it's going to be worded in a vague and noncommittal way on these issues, because these aren't issues that lead people to decide whether or not to vote for someone as President.

On the good-sounding side, there are promises about dealing with the orphan works problem and the patent troll problem. But they're weighted down with language that is quite vague and could mean almost anything, including lots of bad policy proposals.
Effective Copyright Policy: The federal government should modernize the copyright system through reforms that facilitate access to out-of-print and orphan works, while protecting the innovation incentives in the system. It should also promote open-licensing arrangements for copyrighted material supported by federal grant funding.
Now, just the fact that a Presidential campaign mentions that there's a problem with copyright law blocking access to content is somewhat revolutionary, so kudos to whoever got that into the plan. But the weird "while protecting the innovation incentives in the system" trailing line could mean anything and is designed to be just vague enough for anyone to read anything into it. What are the "innovation incentives in the system" right now? Well, on that, people totally disagree. Some people think that fair use, user rights and DMCA safe harbors are the innovation incentives in the system. Others, of course, argue it's long copyright terms and insane statutory damages. These two groups disagree and the Clinton platform offers no further enlightenment.

The fact that the orphan works problem gets called out is exciting, but even then the solution isn't clear. The only real solution to the orphan works problem is to go back to a system of formalities, requiring registration to get a copyright. Then place stuff that isn't registered and where there's no way to contact the copyright holder in the public domain. Boom. Problem solved. But it seems unlikely that that's where Clinton is going with this.

In the more detailed fact sheet, the expansion of these ideas is basically just the same thing as the condensed version but with way more words:
Effective Copyright Policy: Copyrights encourage creativity and incentivize innovators to invest knowledge, time, and money into the generation of myriad forms of content. However, the copyright system has languished for many decades, and is in need of administrative reform to maximize its benefits in the digital age. Hillary believes the federal government should modernize the copyright system by unlocking—and facilitating access to—orphan works that languished unutilized, benefiting neither their creators nor the public. She will also promote open-licensing arrangements for copyrighted material and data supported by federal grant funding, including in education, science, and other fields. She will seek to develop technological infrastructure to support digitization, search, and repositories of such content, to facilitate its discoverability and use. And she will encourage stakeholders to work together on creative solutions that remove barriers to the seamless and efficient licensing of content in the U.S. and abroad.
Open licensing is good. Removing barriers to effective licensing is also good. But there's no plan here. People have talked about these things for ages and never gotten anywhere because entrenched interests don't want this kind of thing to happen at all.

Also, there's a weird call out to SOPA -- but not in the copyright section. Rather, she mentions it in the net neutrality section because whatever, no one cares:
She also maintains her opposition to policies that unnecessarily restrict the free flow of data online –such as the high profile fight over the Stop Online Piracy Act (SOPA).
The language choices here appear to have been workshopped by a committee of hundreds. What the hell does this mean? Does it mean that she would oppose the fight over SOPA? Or SOPA itself? Because it's pretty clear that she's implying that she would oppose things like SOPA (which, again, had nothing to do with net neutrality). But she also was a SOPA supporter -- at least until it was politically inconvenient. During the height of the SOPA battle, she sent a letter insisting (contrary to the statement in her new platform) that there was "no contradiction" between supporting the free flow of information and enforcing strict copyright laws:
"There is no contradiction between intellectual property rights protection and enforcement and ensuring freedom of expression on the internet."
So if she believes that, then SOPA wouldn't have restricted the free flow of data. Of course, once the public tide turned against SOPA -- guess what -- so did Hillary, suddenly making it out to have been an important fight for internet freedom, even though she denied that very point just months earlier:
“The United States wants the Internet to remain a space where economic, political, and social exchanges flourish. To do that, we need to protect people who exercise their rights online, and we also need to protect the Internet itself from plans that would undermine its fundamental characteristics.”
In other words, like a standard politician, we've got vague promises and flip flops -- along with ignoring previous positions when convenient.

As for patents, for the short version, we've got:
Improve the Patent System to Reward Innovators: Hillary will enact targeted reforms to the patent system to reduce excessive patent litigation and strengthen the capacity of the Patent and Trademark Office, so that we continue to reward innovators.
Again, vague language that can be taken in many different ways (again, obviously on purpose). The good: highlighting the problem of "excessive patent litigation" is definitely a good sign and is basically an acknowledgement of the problems with the patent system -- mainly patent trolling, but that should include excessive litigation by operating companies as well. But again, that's immediately weighed down by what follows, which could mean basically anything. Strengthening the capacity of the PTO... for what? To reject bad patents? That would be good. To grant more patents? That might be bad. And the whole "so that we continue to reward innovators." What does that mean? If you believe that the patent system itself rewards innovators, then that would mean encouraging more patenting. If you believe that the patent system is stifling innovators, then that should mean ending bad patents that are used to hinder innovation. Which is it? Who the hell knows. And I doubt Clinton herself has any real understanding of the issues here either.

The longer version makes it clear she's supporting some of the current anti-patent troll legislation, which is a good thing:
The Obama Administration made critical updates to our patent system through the America Invents Act, which created the Patent Trial and Appeals Board, and through other efforts to rein in frivolous suits by patent trolls. But costly and abusive litigation remains, which is why Hillary supports additional targeted rule changes. She supports laws to curb forum shopping and ensure that patent litigants have a nexus to the venue in which they are suing; require that specific allegations be made in demand letters and pleadings; and increase transparency in ownership by making patent litigants disclose the real party in interest.
Those are good things. But then we've got the expanded explanation of strengthening the PTO and again it's a giant "huh?"
Hillary believes it is essential that the PTO have the tools and resources it needs to act expeditiously on patent applications and ensure that only valid patents are issued. That is why she supports legislation to allow the PTO to retain the fees it collects from patent applicants in a separate fund—ending the practice of fee diversion by Congress, and enabling the PTO to invest funds left over from its annual operations in new technologies, personnel, and training. Hillary also believes we should set a standard of faster review of patent applications and clear out the backlog of patent applications.
Of course, this is somewhat contradictory with the stuff raised earlier. Fee retention is one of those ideas that perhaps makes sense, but skews the incentives in dangerous ways, possibly pushing the PTO to encourage more patent applications and patents in order to get more fees. Similarly, "faster review" historically has meant lots more crappy patents getting approved -- leading to more patent trolling over bogus patents.

So, basically, she's promising points to the two key sides of the patent debate, without noting how the two plans are in conflict with each other if she's looking to solve real problems.

Again, none of this is a surprise. This kind of wishy-washy political language where none of it really means anything is par for the course for just about any major politician, and Clinton has historically made this kind of noncommittal hand-wavy bullshit an art form all her own. She's not looking to solve real problems. She's looking to convince you that she's actually heard of the pet problem you're focused on and she has a vague plan to "solve it." Never mind the details or the fact that the plan conflicts with other parts of her plan.
Tue, 28 Jun 2016 09:26:05 PDT Hillary Clinton's Tech Policy Plan Includes Some Empty Broadband Promises And A Continued War On Encryption Karl Bode https://beta.techdirt.com/articles/20160628/05570734838/hillary-clintons-tech-policy-plan-includes-some-empty-broadband-promises-continued-war-encryption.shtml https://beta.techdirt.com/articles/20160628/05570734838/hillary-clintons-tech-policy-plan-includes-some-empty-broadband-promises-continued-war-encryption.shtml Hillary Clinton's tech policy plan has been released, and it includes some new, potentially hollow broadband promises, a pledge to continue defending the FCC's net neutrality rules from telecom industry attack, some feel good commentary on the sharing and innovation economies, and continued support for the candidate's absurd war on encryption.

With the FCC's recent net neutrality court victory, the broadband industry's best path forward is to elect a President who'll stock the commission with revolving-door regulators who'll simply fail to enforce the rules. But Trump's proven so divisive to some conservatives that even AT&T's top lobbyist Jim Cicconi this week came out in gushing support of Clinton:
"Mr. Cicconi, who worked in the White House for Presidents Ronald Reagan and George H.W. Bush, said he has backed every GOP presidential candidate since 1976. “But this year I think it’s vital to put our country’s well being ahead of party,” he said in a statement provided by the campaign. “Hillary Clinton is experienced, qualified, and will make a fine president. The alternative, I fear, would set our nation on a very dark path."
Given AT&T's threat to take the neutrality fight to the Supreme Court, Cicconi's support is curious, but may say more about Trump's unpredictability than it does about Clinton. Regardless, the 14-page "technology and innovation agenda" includes upsetting her new BFF by continuing to fight for net neutrality:
"Hillary believes that the government has an obligation to protect the open internet. The open internet is not only essential for consumer choice and civic empowerment – it is a cornerstone of start-up innovation and creative disruption in technology markets. Hillary strongly supports the FCC decision under the Obama Administration to adopt strong network neutrality rules that deemed internet service providers to be common carriers under Title II of the Communications Act. These rules now ban broadband discrimination, prohibit pay-for-play favoritism, and establish oversight of “interconnection” relationships between providers. Hillary would defend these rules in court and continue to enforce them."
The plan also makes some arguably vague promises on broadband, promising to deliver ubiquitous broadband to all Americans by 2020:
"Hillary will finish the job of connecting America’s households to the internet, committing that by 2020, 100 percent of households in America will have the option of affordable broadband that delivers speeds sufficient to meet families’ needs. She will deliver on this goal with continued investments in the Connect America Fund, Rural Utilities Service program, and Broadband Technology Opportunities Program (BTOP), and by directing federal agencies to consider the full range of technologies as potential recipients—i.e., fiber, fixed wireless, and satellite—while focusing on areas that lack any fixed broadband networks currently."
While some outlets were quick to call this plan ambitious, historically vague broadband coverage promises haven't meant all that much.

A favorite pastime of politicians is to make broadband promises they know will be completed even if government doesn't lift a finger, then gobble up the easy political brownie points (with ample help from an unskeptical tech press) after the fact. Obama, for example, in 2011 promised wireless broadband coverage to 98% of all Americans, ignoring the fact that industry data at the time suggested we'd already met that mark (albeit poorly) with 2G and 3G wireless. Former FCC boss Julius Genachowski similarly received ample praise for issuing a "gigabit city challenge", knowing full well gigabit service was arriving without much help from him or other politicians at the time (mostly via frustrated towns and cities forced into the broadband business on their own).

And while the FCC will help us get to 100% broadband coverage by opening up spectrum for 5G, moving from the supposed 98% broadband coverage mark to 100% really won't require much government help. 5G is arriving by 2020 or so regardless of what Clinton does, as it's a cornerstone of AT&T and Verizon's plan to hang up on unwanted DSL customers they refuse to upgrade. That doesn't somehow mean the broadband that's "100% available" to you is going to actually be good or cheap, since that would involve the government acknowledging that lack of competition means Americans pay more for broadband than most developed nations. Fixing this will take significantly more than empty promises, and for Clinton, it will certainly involve pissing off new allies like Jim Cicconi.

The lion's share of Clinton's tech agenda consists of ambiguous promises that, as with all campaign promises, may or may not have any actual basis in fact.

Clinton's plan calls for improving government adoption of technology and efficiency, improving our patent system (which the Clinton camp declares "has been an envy of the world"), and other feel good efforts such as "facilitating citizen engagement in government innovation" and using technology to "improve outcomes and drive government accountability" (doesn't that sound lovely?). But Clinton also makes it clear she intends to continue waging war on encryption -- her plan for a "Manhattan Project" to "solve" (read: weaken) encryption still very much on the table:
"Hillary rejects the false choice between privacy interests and keeping Americans safe. She was a proponent of the USA Freedom Act, and she supports Senator Mark Warner and Representative Mike McCaul’s idea for a national commission on digital security and encryption. This commission will work with the technology and public safety communities to address the needs of law enforcement, protect the privacy and security of all Americans that use technology, assess how innovation might point to new policy approaches, and advance our larger national security and global competitiveness interests."
Yes, it's abundantly clear that Clinton and friends continue to struggle with the idea that encryption is simply a tool, and like any tool it can be used for a myriad of purposes. That doesn't mean you unilaterally declare war on said tool -- or work tirelessly to make that tool less useful or more dangerous via backdoors -- a conversation we'll apparently be having over and over and over again should Clinton's presidency ascend beyond the rhetorical, larval stage.
Tue, 5 May 2015 13:37:00 PDT In The Information Age, It's More Important To Expand The Pie Than Eat The Whole Damn Pie Mike Masnick https://beta.techdirt.com/articles/20150503/18165930878/information-age-more-important-to-expand-pie-than-eat-whole-damn-pie.shtml https://beta.techdirt.com/articles/20150503/18165930878/information-age-more-important-to-expand-pie-than-eat-whole-damn-pie.shtml Mathew Ingram has written about Twitter's big mistake a few years back: basically killing off its openness for developers. He builds his argument off of an interesting post from Ben Thompson, arguing that Twitter has lost its strategic focus. Both articles are great, and I recommend reading them. In the early days, Twitter was almost completely open. Many of its most useful features and services came from others building on top of it. The very idea of the "@" symbol was the invention of a user. Same with the retweet. Now both are core to Twitter's identity. And, of course, third-party services were what made Twitter usable in the first place. The service didn't really ever take off for me until I used Tweetdeck -- which was a third-party service until Twitter bought it. Thankfully, I can still use Tweetdeck (though not on mobile), even though Twitter's actions killed off most competitors (and, because of this, Tweetdeck still lags in fixing some basic things -- like an autoscroll problem I've complained about for years). As Ingram notes, Twitter made a big strategic shift, as it started to fear its own openness and worry that it may have resulted in the dreaded "someone else profiting" off of Twitter's foundation:
Namely, a crucial turning point in Twitter’s evolution that arguably helped put it where it is today, both in a positive sense (it is a publicly-traded $25-billion company) and a negative one (its growth potential is in question and its strategy doesn’t seem to be working). And that turning point happened about five years ago, when Twitter decided to turn its back on the third-party ecosystem that helped make it successful in the first place.

This process began gradually, with the acquisition of Tweetie — which became Twitter’s official iOS client — and restrictions on what third parties could do with tweets, including selling advertising related to them. But it escalated quickly, and arguably became an all-out war with Twitter’s moves against Bill Gross, the Idealab founder and inventor of search-related advertising, who was busy acquiring Twitter clients and trying to build an ad model around the public Twitter stream. The idea that someone could monetize Twitter before Twitter itself got around to doing so was what one investor called a “holy shit moment” for the company.
We see this sort of thing in all sorts of areas -- especially around "intellectual property." People have a very emotional "holy shit moment" pretty frequently when they see "someone else" making money by leveraging something that they feel some sort of ownership attachment to, whether or not there's any legitimate basis for that attachment. So many of the intellectual property fights we see stem from that general feeling of "Hey, that's ripping me off!" even if the actions of those third parties may not have any real impact on the originating content, service or idea.

In the internet era, however, this is almost always the wrong decision. The internet thrives based on the flow of information. You want information to flow more broadly, rather than to hoard it. Historical economics is based on worlds of scarcity, and in worlds of scarcity it makes sense to hoard resources, as they are valuable by themselves. Yet, in worlds of abundance you want the opposite. You want abundant or infinite resources to flow freely because they do something special: they increase the value of everything else around them. You want openness, not closed systems. You want sharing, not hoarding. You want copying, not restrictions. Because all of those things increase the overall pie massively, even if some of that pie (or even large portions of it) gets captured by others.

As Ingram notes, at least some at Twitter recognized this at the time. An early influential employee at Twitter, its chief engineer Alex Payne, wrote about how he tried to persuade the company to go in that direction:

Some time ago, I circulated a document internally with a straightforward thesis: Twitter needs to decentralize or it will die. Maybe not tomorrow, maybe not even in a decade, but it was (and, I think, remains) my belief that all communications media will inevitably be decentralized, and that all businesses who build walled gardens will eventually see them torn down. Predating Twitter, there were the wars against the centralized IM providers that ultimately yielded Jabber, the breakup of Ma Bell, etc. etc. This isn’t to say that one can’t make quite a staggeringly lot of money with a walled garden or centralized communications utility, and the investment community’s salivation over the prospect of IPOs from LinkedIn, Facebook, and Twitter itself suggests that those companies will probably do quite well with a closed-but-for-our-API approach.

The call for a decentralized Twitter speaks to deeper motives than profit: good engineering and social justice. Done right, a decentralized one-to-many communications mechanism could boast a resilience and efficiency that the current centralized Twitter does not. Decentralization isn’t just a better architecture, it’s an architecture that resists censorship and the corrupting influences of capital and marketing. At the very least, decentralization would make tweeting as fundamental and irrevocable a part of the Internet as email. Now that would be a triumph of humanity.

But he lost that argument to those who wanted to keep the pie smaller, but to capture more of it for themselves. That may have helped the company go public, but it has put the company in a serious bind today. One in which Wall Street is profoundly disappointed that what Twitter is capturing for itself "isn't enough" and the innovations that the company needs to keep growing and innovating are much harder to come by. Sure, it does things like Vine and Periscope -- both of which it bought out in infancy -- but to do so it's had to hamstring other third-party developers like Meerkat.

Ingram also highlights another Ben Thompson post on what Twitter might have been had it gone down this more open path (he wrote this after the whole Meerkat thing):
I would argue that what makes Twitter the company valuable is not Twitter the app or 140 characters or @names or anything else having to do with the product: rather, it’s the interest graph that is nearly priceless. More specifically, it is Twitter identities and the understanding that can be gleaned from how those identities are used and how they interact that matters.

If one starts with that sort of understanding — that Twitter the company is about the graph, not the app — one would make very different decisions. For one, the clear priority would not be increasing ad inventory on the Twitter timeline (which in this understanding is but one manifestation of an interest graph) but rather ensuring as many people as possible have and use a Twitter identity. And what would be the best way to do that? Through 3rd-parties, of course! And by no means should those 3rd-parties be limited to recreating the Twitter timeline: they should build all kinds of apps that have a need to connect people with common interests: publishers would be an obvious candidate, and maybe even an app that streams live video. Heck, why not a social network that requires a minimum of 140 characters, or a killer messaging app? Try it all, anything to get more people using the Twitter identity and the interest graph.
There's a more fundamental premise at work here. In the information era, spreading more information increases the pie massively and opens up many more opportunities. The challenge is that many others can also take advantage of many of those opportunities, but as the core player in the space, a company like Twitter has a clear and natural advantage, even if it did what Payne had wanted to do many years ago and give up the underlying control altogether.

This is, unfortunately, a profoundly difficult concept for many to grasp -- especially when they're in the midst of it. Hell, even as someone who regularly talks about this very idea, I still get the initial emotional pang of being upset when I see someone else get success with an idea that I had first (whether or not they got it from me). It's only natural to have that visceral reaction. The real question is what do you do about it. Do you fret? Do you try to control? Or do you realize that in broadening these ideas and sharing them more widely, it creates greater opportunities across the board?

It's impossible to know what would have happened had Twitter taken a different path. But it seems clear that remaining a more open platform (or even moving to a fully distributed one) would have resulted in a tool that was much more useful today, with a much larger audience and much greater innovation. It's too bad we didn't get to live in that world.
Fri, 7 Sep 2012 08:20:43 PDT Far Beyond Filtering: Is The GOP Looking To Shut Down Porn Producers? Tim Cushing https://beta.techdirt.com/articles/20120904/17400920273/far-beyond-filtering-is-gop-looking-to-shut-down-porn-producers.shtml https://beta.techdirt.com/articles/20120904/17400920273/far-beyond-filtering-is-gop-looking-to-shut-down-porn-producers.shtml The GOP platform contains some unfortunate anti-porn provisions. Romney declared that, if elected president, every new computer would have an anti-porn filter installed. At the very least, this filtering would be redundant. As Mike pointed out, porn filters already exist and are easily available. If this is being done "for the children," perhaps the application of a porn filter should be left to the parents, rather than made mandatory via legislation.

That handles the user end of the experience. I would imagine that additional filtering might be suggested (or required) at the ISP level, aligning it with efforts in the UK. Whether or not an opt-in Known Perverts option will be available is still open to speculation. Most likely, once the rhetoric clears, it will simply be a matter of computer manufacturers offering filtering software right out of the box. This will fulfill the requirement without needing much more than some cursory compliance checks, and everyone involved will feel proud to have "done something" to keep porn out of kids' eyeballs. This will also be a boon for developers of filtering software, who will be jockeying for lucrative OEM contracts.

Romney hasn't really specified what he means by "computer," meaning that the spread of pre-installed filterware could envelop any device that connects with the internet, including tablets and smartphones. There is also no information on how "mandatory" these filters will be or what issues computer/device manufacturers will face should they fail to comply.

It's a vague concept that hardly anyone will argue against for fear of appearing to be siding with pornographers, or worse, child pornographers (thanks to always-handy conflation). Perhaps more unsettling than the feel-good, do-nothing "filtering" promise is another sentence lurking in the platform: "Current laws on all forms of pornography and obscenity need to be vigorously enforced." Eugene Volokh tackles the troubling implications of this phrase, putting together a set of tactics the government could implement in an effort to enforce standing obscenity laws.

First off, Volokh tries to determine the endgame: is the intent to shut down as many US pornographers as possible? If so, supply from other sources will fill the demand:
[E]ven if every single U.S. producer is shut down, wouldn't foreign sites happily take up the slack? It's not like Americans have some great irreproducible national skills in smut-making, or like it takes a $100 million Hollywood budget to make a porn movie. Foreign porn will doubtless be quite an adequate substitute for the U.S. market. Plus the foreign distributors might even be able to make and distribute copies of the existing U.S.-produced stock — I doubt that the imprisoned American copyright owners will be suing them for infringement (unless the U.S. government seizes the copyrights, becomes the world's #1 pornography owner, starts trying to enforce the copyrights against overseas distributors, and gets foreign courts to honor those copyrights, which is far from certain and likely far from cheap).
This is an interesting conjecture. Removing the producers from the equation opens up the possibility that foreign producers would simply do the math and up their profits by reselling product they didn't create. Having the US government eliminate their competition is an added bonus. It seems unlikely that the government would act on the behalf of porn companies it's legislated or prosecuted out of existence. But would it tolerate abuse of American IP, no matter how abhorrent the subject? Probably. The porn industry isn't known for its lobbying efforts.

Moving on, Volokh speculates on three possible outcomes of enforcing existing laws on pornography and obscenity.
The U.S. spends who knows how many prosecutorial and technical resources going after U.S. pornographers. A bunch of them get imprisoned. U.S. consumers keep using the same amount of porn as before.
This tactic sounds like it would work as well as current IP enforcement measures. As it stands now, ICE is better known for its RIAA/MPAA lapdog status than for producing credible results. Sites get taken down, sat on and returned to their owners with no charges brought or apologies offered. Drawing a bead on targets like porn producers makes for some rah-rah press but will have little effect on the amount of porn available. 

As ineffective as these actions would be, the greater issue is that increased enforcement will do absolutely nothing to change people's perception of porn:
Nor do I think that the crackdown will somehow subtly affect consumers’ attitudes about the morality of porn — it seems highly unlikely that potential porn consumers will decide to stop getting it because they hear that some porn producers are being prosecuted.
This falls right in line with the perception of file sharing as a "moral" issue. It's all well and good to claim the high road in the fight against infringement, but if the general public doesn't share your beliefs then the battle is not winnable. Legislation and prosecution aren't going to change anyone's mindset. It just makes the punishment seem ridiculous or unduly harsh.

There are more echoes of the ongoing anti-piracy efforts. Volokh's next scenario involves going after foreign producers:
The government gets understandably outraged by the “foreign smut loophole.” “Given all the millions that we’ve invested in going after the domestic porn industry, how can we tolerate all our work being undone by foreign filth-peddlers?,” pornography prosecutors and their political allies would ask. So they unveil the solution, in fact pretty much the only solution that will work: Nationwide filtering.

It’s true: Going after cyberporn isn’t really that tough — if you require every service provider in the nation to block access to all sites that are on a constantly updated government-run “Forbidden Off-Shore Site” list. Of course, there couldn’t be any trials applying community standards and the like before a site is added to the list; that would take far too long. The government would have to be able to just order a site instantly blocked, without any hearing with an opportunity for the other side to respond, since even a quick response would take up too much time, and would let the porn sites just move from location to location every several weeks.
This goes far beyond simply requiring pre-installed filtering software. Instituting any sort of blacklist combines the futility of whack-a-mole with the "we don't have time to follow procedures/respect rights" urgency of "doing something" to make the internet a "safer" place. As these actions prove futile, enforcement will move to cutting off the money supply, targeting credit card transactions, pressuring foreign governments to play by the US's rules, etc.
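As a rough illustration of the whack-a-mole problem, here's a minimal sketch of the hostname check an ISP-level filter would effectively perform. The domain names and the `is_blocked` helper are entirely hypothetical, invented only for illustration:

```python
# Hypothetical government-run "Forbidden Off-Shore Site" list.
# All domain names here are made up for illustration.
FORBIDDEN_SITES = {"example-smut.com", "offshore-filth.net"}

def is_blocked(hostname: str) -> bool:
    """Return True if the requested host appears on the blocklist."""
    return hostname.lower() in FORBIDDEN_SITES

print(is_blocked("example-smut.com"))   # True: the listed domain is blocked
print(is_blocked("example-smut2.com"))  # False: the same site on a new domain
                                        # sails through until the list is updated
```

The moment a listed site registers a new domain, the identical content is reachable again -- which is why such a list would need constant, unilateral updating, with no time for the hearings Volokh notes would normally be required.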

The third option, and probably the least palatable to politicians? Going after end users:
Finally, the government can go after the users: Set up “honeypot” sites (seriously, that would be the technically correct name for them) that would look like normal offshore pornography sites. Draw people in to buy the stuff. Figure out who the buyers are. To do that, you'd also have to ban any anonymizer Web sites that might be used to hide such transactions, by setting up some sort of mandatory filtering such as what I described in option (2).

Then arrest the pornography downloaders and prosecute them for receiving obscene material over the Internet, in violation of 18 U.S.C. § 1462; see, e.g., United States v. Whorley (4th Cir. 2008) (holding that such enforcement is constitutional, and quite plausibly so holding, given the United States v. Orito Supreme Court case).
Politicians may state that they think porn should be outlawed or controlled, and some are even willing to trample on some rights to put that in motion. But it's hard for most to jump from taking down the supply side to attacking the demand. If your aim is to make the internet "safer," it's fairly easy to see that removing users has no effect on "safety." But while this logic leap is hard, it is by no means impossible. The War on Drugs has locked up thousands of users by making possession a crime. "Possession with the intent to distribute" is simply a matter of going above an arbitrary quantity. Possession laws assume the only reason a person would be carrying [x] amount of drugs is because they're selling to others. Would a person with more than [x] megabytes of porn on their hard drive be considered a distributor, thus opening up the possibility of additional charges? I don't see why not, given the attitude surrounding the issues.

There's plenty of food for thought in Volokh's post, especially considering the faint echoes of SOPA/PIPA present in the discussion of enforcing morality. Both parties claim to be working towards a more open internet, but seem willing to scuttle that openness in reaction to hot-button issues or overly-friendly nudges from lobbyists. Ultimately, the question isn't about whether or not porn is "bad" for citizens, but rather, how can these laws possibly be enforced without descending quickly into "draconian measures"?
How can the government's policy possibly achieve its stated goals, without creating an unprecedentedly intrusive censorship machinery, one that's far, far beyond what any mainstream political figures are talking about right now?
The answer is: it can't. But these concerns aren't being considered, at least not during an election run. Post-election, if anyone gets around to fighting this unwinnable battle, the concerns likely won't be considered at that point, either. It's usually not until the public gets noisy enough to jeopardize politicians' careers that any sort of consideration is given to the rights of the people affected. Even more disturbing is the fact that pursuing this end affects both sides of the creative effort: the producers and the consumers. Considering the resemblance these actions have to past overreaching legislative efforts crafted to "protect" certain industries, it's rather disconcerting to see the possibility of these same actions being used to destroy a creative industry simply because certain people don't care for the product.
Thu, 1 Dec 2011 19:59:40 PST Spotify Finally Becomes A True Platform: Now Let's See Some Innovation Mike Masnick https://beta.techdirt.com/articles/20111201/03541116940/spotify-finally-becomes-true-platform-now-lets-see-some-innovation.shtml https://beta.techdirt.com/articles/20111201/03541116940/spotify-finally-becomes-true-platform-now-lets-see-some-innovation.shtml Spotify has long hinted at its desire to set itself up as a platform that others could build things on top of. And it's finally become a reality. This could actually be quite cool. Just a couple months ago, we were pointing out that just "putting radio on the internet" isn't that cool, but that we need killer apps for music. Spotify as a platform will hopefully make it easier for those killer apps to happen. The current crop of apps it launched with is pretty ordinary, but I'm excited to think about what comes next. Things I'd love to see: Turntable.fm (still the most addictive and coolest "social music" service out there) integrated directly into Spotify, as well as integration with things like TopSpin or Bandcamp. Right now there are options to do ticket sales, but what if you could build in ways to let people buy merch... or, better yet, connect with the artist directly via Spotify? And those are the obvious ones. The real killer app is probably going to take us all entirely by surprise. This is, by the way, yet another reason that short-sighted artists and labels are going to regret dropping out of Spotify. You have to be where the killer apps are or you're going to get left behind. Thu, 13 Oct 2011 16:00:00 PDT Does Google Have What It Takes To Be A Platform, Rather Than A Product, Company?
Mike Masnick https://beta.techdirt.com/articles/20111013/02371616330/does-google-have-what-it-takes-to-be-platform-rather-than-product-company.shtml https://beta.techdirt.com/articles/20111013/02371616330/does-google-have-what-it-takes-to-be-platform-rather-than-product-company.shtml Back in 2004, I wrote about how Google should focus on being a true platform company with a much more open setup, one that did much more to encourage developers to build on top of it. Over the years, I've occasionally repeated that point. And while Google has done a few things at the margins, it has always seemed to resist becoming a true platform. There are, certainly, some folks inside Google who get this, and I seem to hear from a bunch of them any time I bring this up. But the company has a history of having trouble really opening up to outside developers.

So it's really interesting to see this "internal" note from Google employee Steve Yegge, which he accidentally posted publicly via Google+. It's a very detailed and honest criticism of the company's attitude on certain things -- written not to slam Google, but to push it to change. It's getting tons of attention, and Yegge removed the post, but has allowed others to keep up a reposted version. He's also pointed out that Google PR was careful not to pressure him to take down the post, noting that employees are free to express their opinions.

Some have been reading it as an insider's "attack" on Google, but I don't see that at all. It seems like a call to action from someone who thinks the company is missing the boat on being a platform. Yegge spends a lot of time talking (very openly) about his prior experience working at Amazon, and about how Jeff Bezos got the "we need to be a platform" religion big time nearly a decade ago, and effectively forced the entire company to focus on that as job number one. While Yegge criticizes many problems with Amazon, he does recognize that such a vision has put Amazon in a good position (along with others who have clearly embraced being "the" platform: Facebook, Apple and, almost by accident, Microsoft).

The key part of the post, which is what many people are focusing on, is where Yegge criticizes Google+, and how it wasn't designed as a platform, whereas its main direct competitor, Facebook, has clearly embraced being a platform in a very meaningful way.
Google+ is a prime example of our complete failure to understand platforms from the very highest levels of executive leadership (hi Larry, Sergey, Eric, Vic, howdy howdy) down to the very lowest leaf workers (hey yo). We all don't get it. The Golden Rule of platforms is that you Eat Your Own Dogfood. The Google+ platform is a pathetic afterthought. We had no API at all at launch, and last I checked, we had one measly API call. One of the team members marched in and told me about it when they launched, and I asked: "So is it the Stalker API?" She got all glum and said "Yeah." I mean, I was joking, but no... the only API call we offer is to get someone's stream. So I guess the joke was on me.

Microsoft has known about the Dogfood rule for at least twenty years. It's been part of their culture for a whole generation now. You don't eat People Food and give your developers Dog Food. Doing that is simply robbing your long-term platform value for short-term successes. Platforms are all about long-term thinking.

Google+ is a knee-jerk reaction, a study in short-term thinking, predicated on the incorrect notion that Facebook is successful because they built a great product. But that's not why they are successful. Facebook is successful because they built an entire constellation of products by allowing other people to do the work. So Facebook is different for everyone. Some people spend all their time on Mafia Wars. Some spend all their time on Farmville. There are hundreds or maybe thousands of different high-quality time sinks available, so there's something there for everyone.

Our Google+ team took a look at the aftermarket and said: "Gosh, it looks like we need some games. Let's go contract someone to, um, write some games for us." Do you begin to see how incredibly wrong that thinking is now? The problem is that we are trying to predict what people want and deliver it for them.
This part rings incredibly true. I know that when Google+ launched, I liked it as a program, but asked people about APIs, because it needed to better integrate into my workflow -- and was told that those would be coming "sometime later." And while I still mess around with Google+, it's largely become an afterthought to me, because it just lives off in its own separate world, rather than integrating well. There are still features I like, but until developers have a chance to dive in and make it useful... it just doesn't feel like a necessity.

But there's a bigger lesson in this, beyond Google's continued platform-itis. And it goes back to the issue of cargo cult copying -- a topic I've discussed a number of times. People seem to think it's easy for companies (especially big companies) to "copy" products of their competitors. In fact, with Google, many people think it's so easy that there are antitrust investigations going on. But Google+ and the points that Yegge raise remind us, yet again, that while copying the basic "features" of a product may be possible, really recreating what makes it tick and what makes it successful is difficult.

It's easy to copy the superficial. It's difficult to copy the soul.

With Google+, the company built a really nice copy (with some clear improvements) of Facebook, the product -- which is the superficial, public-facing part. But it completely missed the boat on Facebook, the platform -- which is the real soul of what makes Facebook such a powerhouse. Google certainly can get there. And, in the back of my mind, I'd always assumed that was exactly the path they were on. But remembering that post from 2004, and the lack of any sustained, involved effort within and across Google to be a platform, combined with this post from Yegge, again makes me wonder if Google just doesn't recognize the importance of being a platform.

I've argued in the past that one big Achilles heel for Google is its awful reputation when it comes to customer service, but its lack of deeply ingrained platform-focused thinking may represent a much bigger threat.
Mon, 2 Nov 2009 22:22:00 PST Live Nation Working To Turn Website Into More Of A Platform Mike Masnick https://beta.techdirt.com/articles/20091029/1817596724.shtml https://beta.techdirt.com/articles/20091029/1817596724.shtml Live Nation is revamping its website to make it more of a platform. Both artists and fans will be able to upload concert footage and make use of various community features (wikis, reviews, Twitter streams, fan Q&As and more). It increasingly seems like Live Nation is trying to enable a platform where fans and artists can connect, and on which fans can buy (mainly concert tickets, but other things as well). It's a smart move, but I wonder whether or not Live Nation ends up competing with a band's own web presence. What could be cool is if Live Nation also makes it so an artist can integrate many of these features into their own site as well. In the meantime, though, we're once again seeing why now is a great time to be a musician. There are so many different services that help enable artists to both connect with fans and set up business models. Thu, 2 Jul 2009 15:59:00 PDT Why Is It So Difficult To Understand The Difference Between A Platform And A User? Mike Masnick https://beta.techdirt.com/articles/20090702/0218595433.shtml https://beta.techdirt.com/articles/20090702/0218595433.shtml Google India is being blamed for content written by bloggers on Blogger. First, Blogger is run by Google, not Google India, so the lawsuit is doubly misdirected -- but, more importantly, Google itself cannot be responsible for what someone writes using its tool. That's like suggesting that Bic is responsible for what you write with its pens. The case involves a guy who was upset about what some bloggers wrote about him -- so of course, he had to sue Google. What's amazing is that the judge seems to have initially bought this as reasonable. The court barred Google from hosting any blog that "defamed" this guy.
Google has responded by trying to explain the basics of the internet to the judge, and how it's impossible for Google to figure out if someone is defaming someone else using its software. Thu, 2 Jul 2009 01:04:11 PDT Forget Suing Google, Now It's Craigslist That's A Target For Misplaced Lawsuit Mike Masnick https://beta.techdirt.com/articles/20090701/1113135423.shtml https://beta.techdirt.com/articles/20090701/1113135423.shtml We've seen plenty of lawsuits where companies sue Google when a competitor puts up an ad that references their own trademarks. This is misguided in any number of ways: first, as long as the ad itself is not confusing such that the reader (or a moron in a hurry) would think that the ad is from the original company rather than the competitor, there's not likely to be a trademark violation. More importantly, even if there is a trademark violation, it should not be Google's liability, since Google is simply the service provider. The liability (if there is any) would be on whoever created the ad. Mostly, the courts have gotten this right -- though, sometimes they've gotten confused. Either way, those lawsuits keep getting filed.

And now, it appears, they're spreading. Dave Barnes alerts us to the news that a similar lawsuit has been filed against Craigslist. The lawsuit was originally filed in a Texas state court, but has been transferred to a federal court -- but not before the state court banned Craigslist from posting any more ads with those trademarked words. Considering that Craigslist does not pre-screen posts to its site, it's not at all clear how that's even possible. And, considering that trademarks only cover use in commerce in a specific context, it would be way too onerous to insist that Craigslist could not allow the phrases "Call First," "First Call Properties," or "Call Us First," in any context whatsoever.

Hopefully, the federal court is quick to dismiss Craigslist from the suit. Unfortunately, since trademark claims don't have a Section 230 or DMCA safe harbor, it may be a little more involved than some other cases. But common sense, once again, dictates that Craigslist should not be the liable party here and should not be responsible for policing the text of posts. Making the claim even more ridiculous: since Craigslist doesn't charge for the ads in question, it's difficult to see how Craigslist could be found to have been using these words "in commerce." The lawsuit also alleges libel against Craigslist -- which should get thrown out quite quickly under Section 230. It's too bad that the trademark claim might be a bit more involved. ]]>
no-surprise-really https://beta.techdirt.com/comment_rss.php?sid=20090701/1113135423
Fri, 27 Mar 2009 16:44:00 PDT Two Companies That Should Know Better Shut Down Helpful 3rd Party Apps Mike Masnick https://beta.techdirt.com/articles/20090327/1536034280.shtml https://beta.techdirt.com/articles/20090327/1536034280.shtml shut down helpful third party apps, we're seeing a number of stories about other companies doing something similar. First up is Last.fm, which has apparently started blocking a bunch of third party apps that had been using undocumented calls to stream content from Last.fm. Last.fm (now owned by CBS) was in a bit of a quandary, because its licenses with the major record labels (there they go again, blocking innovation) forbid streaming except in specific circumstances -- so these third party apps "broke" the agreement. But... that's not quite true, because the agreements are between Last.fm and the labels, not the third parties. Last.fm has no specific requirement to block others from creating apps. So, yes, Last.fm has every right to do this, and I'm sure the labels were demanding it do this, but it still doesn't make it a very smart move. Those third party apps were making Last.fm more valuable. Blocking them hurts the overall value and pushes people to go in search of other services that are more consumer friendly.

This move also comes right after Last.fm's recent decision to charge for streaming outside of the US, Germany or the UK. This also has folks up in arms -- and is driving away users in droves to other solutions. Last.fm has plenty of competitors out there, and working hard to make its own service less usable and less reasonable isn't going to help keep users around.

Meanwhile, a bunch of folks have sent in the story of how DVD rental kiosk operator Redbox has pressured a third party to take down its Redbox iPhone app. The app was apparently pretty cool, making use of the phone's GPS to tell you where the nearest kiosk is, and letting you reserve the movie you want. There is some speculation that Redbox is upset that the app also pulls a list of promotional codes, allowing some people to rent movies for free -- but that's a misguided concern. If that's the real issue, then Redbox should just change how its promotional codes work, because (of course) the codes are still available for anyone to search and use online. Shutting down the iPhone app doesn't fix that at all.

Still, it seems that both companies should know better. Having third parties build apps that make your services more useful is a sign of success, and should be encouraged, not threatened and shut down. We live in an age where too many people focus on using intellectual property as a club to block any use -- even when those uses are helpful in making your core product even more valuable. ]]>
bad-news https://beta.techdirt.com/comment_rss.php?sid=20090327/1536034280
Wed, 11 Mar 2009 05:15:00 PDT The Guardian Follows The NY Times In Making News A Platform Mike Masnick https://beta.techdirt.com/articles/20090311/0106204067.shtml https://beta.techdirt.com/articles/20090311/0106204067.shtml the Guardian, in the UK, has opened up an API and is sharing data in such a way that others can build programs on top of the news. This is fantastic -- and follows on a similar move last month by the NY Times. It appears that both the NY Times and the Guardian really are pushing the boundaries of recognizing that being an online newspaper these days needs to be about a lot more than delivering the news.

Perhaps even more interesting (though, getting much less attention) is the companion bit of news from some editors at the Guardian -- who are pointing out that they hope and pray each day that the NY Times gives in to temptation and starts trying to charge for news... because it will create a huge opening for the Guardian to build a much larger online audience. This is what plenty of people have been pointing out for years: if clueless newspaper execs decide to start charging for news, it just opens the door wide for smarter news organizations to stay free and accumulate a much larger audience. ]]>
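To make "building programs on top of the news" concrete, here is a minimal Python sketch of querying a Guardian-style content API and pulling headlines out of the result. The endpoint URL, parameter names, and response fields below are illustrative assumptions loosely modeled on the Guardian's Open Platform, not a verified description of its actual interface; the sample response is fabricated for the example.

```python
import json
from urllib.parse import urlencode

# Hypothetical endpoint shape, for illustration only.
BASE = "https://content.guardianapis.com/search"

def build_query(base, terms, api_key, page_size=5):
    """Assemble a search URL for a Guardian-style content API."""
    params = {"q": " ".join(terms), "page-size": page_size, "api-key": api_key}
    return base + "?" + urlencode(params)

# A trimmed response of the kind such an API might return (made up for this sketch).
sample = json.loads("""
{"response": {"status": "ok",
  "results": [
    {"webTitle": "Newspapers open up their archives", "webUrl": "https://example.com/a"},
    {"webTitle": "APIs and the future of news", "webUrl": "https://example.com/b"}]}}
""")

def headlines(payload):
    """Extract just the headlines from a search response."""
    return [item["webTitle"] for item in payload["response"]["results"]]

print(build_query(BASE, ["newspaper", "api"], api_key="YOUR-KEY", page_size=2))
print(headlines(sample))
```

The point of such an API is exactly what the article describes: once the query format and response structure are published, a third party can combine the paper's content with maps, archives, or its own data without the paper building every tool itself.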
good-job https://beta.techdirt.com/comment_rss.php?sid=20090311/0106204067
Thu, 5 Feb 2009 21:02:00 PST NY Times Turning News Into A Platform Mike Masnick https://beta.techdirt.com/articles/20090205/0345593659.shtml https://beta.techdirt.com/articles/20090205/0345593659.shtml platforms, rather than monolithic "news delivering" services. Over the past year or so, a group of digitally savvy folks at the NY Times have shown one way that can work. Their latest move? To turn the NY Times news articles into a true platform. They've released an API for news, allowing others to actually build useful tools on top of the NY Times' news articles. Contrast that to, say, GateHouse Media, which recently sued the NY Times for trying to build useful tools on top of GateHouse's content.

Of course, just because there are some folks on the digital side who "get it" at the NY Times, it doesn't mean management has quite figured things out yet. At the same time as releasing this API, the paper's Executive Editor, Bill Keller, is talking about trying to lock up their content and charge people for it, again. Yes, the newspaper needs new and innovative business models, but by now it should know that trying to charge for such content simply isn't a sustainable model. There's too much competition out there (which the NY Times discovered already when it tried and failed to charge for content a few years back). There are things that the paper can charge for -- but basic online content isn't one of them. ]]>
smart https://beta.techdirt.com/comment_rss.php?sid=20090205/0345593659
Tue, 4 Mar 2008 12:36:19 PST If Facebook's Platform Is A Strategic Mistake, It's In Facing The Wrong Direction Mike Masnick https://beta.techdirt.com/articles/20080303/095750415.shtml https://beta.techdirt.com/articles/20080303/095750415.shtml Facebook's platform strategy is a strategic mistake, which got me thinking. I disagree with the author of that piece, David Gal, who claims that the platform strategy is a mistake because it "squanders" rather than helps the core asset of Facebook, which is the community of people. That's difficult to believe, as the platform itself is what's created numerous applications within Facebook that have made the network itself more valuable to those members, because they actually give members something to do with all of their friends, rather than just connecting to them. So it's difficult to see how Gal reaches his conclusion. His suggestion that there are just too many applications being developed doesn't really matter, as it's the top applications that are the ones that people find useful, and which they use to add value to the overall network itself.

However, the article did get me thinking about whether or not Facebook has made a strategic mistake with its platform strategy. When the Facebook platform strategy was first announced, it made a lot of sense. We've been waiting and waiting and waiting for someone to build out a true "web platform" (and remain amazed that Google has repeatedly ignored the opportunity). However, while the Facebook platform strategy may have made sense initially, it's way too inwardly focused. That is, it's been entirely focused on having people build applications within Facebook to get access to its users. What would have been a lot more interesting and a lot more powerful is the ability to build applications for use outside of Facebook that would leverage the power of the people inside Facebook. While I'm sure the short-term view is that Facebook needs to keep people locked in, the long-term benefit needs to be making something that's really useful -- and so far, it's not clear the Facebook Platform has really reached that stage.

As such, perhaps it's not too surprising that many of the more successful Facebook apps to date have really just been focused on games and music, rather than anything all that productive. Turning the community inside out, so that it can take part in activities outside of just the Facebook arena, could be a lot more interesting. Right now, Facebook's Platform seems designed to keep people in Facebook so that advertisers get value. But the real opportunity is in using the people in the community to do something of value and to provide value back to those users as well. Hopefully, that will be the next stage of growth we see out of the Facebook platform; otherwise, expect to see people start to drift elsewhere. ]]>
not-open-enough https://beta.techdirt.com/comment_rss.php?sid=20080303/095750415