Techdirt's think tank, the Copia Institute, is working with the Trust & Safety Professional Association and its sister organization, the Trust & Safety Foundation, to produce an ongoing series of case studies about content moderation decisions. These case studies are presented in a neutral fashion, not aiming to criticize or applaud any particular decision, but to highlight the many different challenges that content moderators face and the tradeoffs involved. Find more case studies here on Techdirt and on the TSF website.

Content Moderation Case Study: Usenet Has To Figure Out How To Deal With Spam (April 1994)

from the the-original-content-moderation-case-study dept

Summary: In the early 1990s, facing increased pressure from the commercial sector who sensed there might be some value in the nascent “Internet,” the National Science Foundation began easing informal restrictions on commercial activity over the Internet. This gave rise to the earliest internet companies — but also to spam. Before the World Wide Web had really taken off, a great deal of internet communication took place on Usenet. Created in 1980, Usenet was something like a proto-Reddit, with a variety of “newsgroups” dedicated to different subjects that users could post to.

Usenet was a decentralized service based on the Network News Transfer Protocol (NNTP). Users needed a Usenet reader, from which they would connect to any number of Usenet servers and pull down the latest content in the newsgroups they followed. In early 1994, a husband-and-wife lawyer team, Laurence Canter and Martha Siegel, decided to advertise their US immigration legal services (specifically, help entering the infamous “Green Card Lottery” for a US green card) on Usenet.
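To make the mechanics concrete, here is a minimal sketch of what such a reader does under the hood, written with Python's standard-library nntplib module (since deprecated and removed from the newest Python releases); the server name and newsgroup are placeholders rather than details from this story.

```python
# A minimal sketch of how a Usenet reader pulls articles over NNTP, using
# Python's standard-library nntplib. The server and newsgroup are placeholders.
import nntplib

with nntplib.NNTP("news.example.com") as server:
    # Select a newsgroup; the server reports how many articles it carries.
    resp, count, first, last, name = server.group("news.announce.newusers")
    # Pull the overview (headers) for the ten most recent articles.
    resp, overviews = server.over((max(first, last - 9), last))
    for article_number, fields in overviews:
        print(article_number, fields.get("subject", ""))
```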

They hired a programmer to write a Perl script that posted their advertisement to 5,500 separate newsgroups. While cross-posting was possible (a single post designated for multiple newsgroups), this particular message was posted individually to each newsgroup, which made it even more annoying for users — since most Usenet reader applications would have recognized the same message as “read” across newsgroups if it had merely been cross-posted. Posting it this way guaranteed that many people saw the message over and over and over again.
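For illustration, here is a rough sketch of the difference between the two posting strategies, in Python rather than the original Perl (which is not reproduced here); the group names, address, and body text are invented, and a real posting run would push these articles to a server with NNTP's POST command.

```python
# A rough illustration of cross-posting versus posting a separate copy per group.
from email.message import EmailMessage

groups = ["misc.legal", "alt.visa.us", "soc.culture.india"]  # placeholder list

# Cross-posting: ONE article whose Newsgroups header names every group.
# Servers keep a single copy, and a newsreader that has shown it once can
# mark it as read everywhere.
crosspost = EmailMessage()
crosspost["From"] = "advertiser@example.com"
crosspost["Newsgroups"] = ",".join(groups)
crosspost["Subject"] = "Green Card Lottery"
crosspost.set_content("Advertisement text...")

# What the script effectively did instead: a SEPARATE article per group, each
# its own message, so readers encounter the ad again in every single group.
separate_posts = []
for group in groups:
    msg = EmailMessage()
    msg["From"] = "advertiser@example.com"
    msg["Newsgroups"] = group
    msg["Subject"] = "Green Card Lottery"
    msg.set_content("Advertisement text...")
    separate_posts.append(msg)
```

Because each copy in the second approach is a distinct article, a newsreader has no way to treat them as one already-read message, which is exactly why readers saw the ad again in every group.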

It is generally considered one of the earliest examples of commercial “spam” on the internet — and certainly the most “successful” at the time. It also angered a ton of people. According to Time Magazine, Canter and Siegel faced immediate backlash:

In the eyes of many Internet regulars, it was a provocation so bald-faced and deliberate that it could not be ignored. And all over the world, Internet users responded spontaneously by answering the Spammers with angry electronic-mail messages called “flames.” Within minutes, the flames — filled with unprintable epithets — began pouring into Canter and Siegel’s Internet mailbox, first by the dozen, then by the hundreds, then by the thousands. A user in Australia sent in 1,000 phony requests for information every day. A 16-year-old threatened to visit the couple’s “crappy law firm” and “burn it to the ground.” The volume of traffic grew so heavy that the computer delivering the E-mail crashed repeatedly under the load. After three days, Internet Direct of Phoenix, the company that provided the lawyers with access to the Net, pulled the plug on their account.

It wasn’t just Usenet users. Immigration lawyers were also upset in part because Canter and Siegel were asking for money to do what most people could easily do for free:

Unfortunately, it also provided an opportunity for charlatans to charge exorbitant fees to file lottery entries for hopeful immigrants.

In truth, all it took to enter the drawing was a postcard with your name and address mailed to the designated location.

Canter and Siegel, a husband-and-wife law firm, decided to join the lottery frenzy by pitching their own overpriced services to immigrant communities.

The two were unrepentant, later claiming they made over $100,000 from the advertisement. They quickly set up a new company called “Cybersell” to do this for others — and signed a contract to write a book for HarperCollins originally called “How To Make A Fortune On The Information Superhighway.”

Decisions to be made by Usenet server providers:

  • Would they need to start being more aggressive in monitoring and moderating their newsgroups?
  • Would it even be possible to prevent spam?
  • Should they even carry newsgroups that allowed for open contributions?

Decisions to be made by ISPs:

  • Should they allow Canter and Siegel to use their internet access to spam newsgroups?
  • How should they handle the backlash from users angry about the spam campaigns?

Questions and policy implications to consider:

  • Where is the boundary between permissible commercial speech or advertising and spam, and how do you draw it?
  • Is it possible to have distributed systems (as opposed to centralized ones) that don’t end up filled with spam?
  • What are the legal implications of spam?

Resolution: Canter and Siegel remained a scourge on the internet for some time. Various service providers were quick to kick them off as soon as they discovered the two were using their services. Indeed, many seemed willing to talk publicly about their decisions, such as Netcom, which shut down the couple’s account soon after the original spam run and after Canter and Siegel had announced plans to continue spamming:

NETCOM On-Line Communications has taken the step of cancelling the service of Laurence Canter of Canter and Siegel, the lawyer commonly referred to as the “Green Card Lawyer”. Mr. Canter had been a customer of NETCOM in the past. He had been cautioned for what we consider abuse of NETCOM’s system resources and his systematic and willful actions that do not comply with the codes of behavior of USENET.

Mr. Canter has been widely quoted in the print and on-line media about his intention to continue his practice of advertising the services of his law firm using USENET newsgroups. He has also widely posted his intention to sell his services to advertise for others using the newsgroups. We do not choose to be the provider that will carry his messages.

That link also includes notices from other service providers, such as Pipeline and Performance Systems, saying they too were cutting off the pair’s internet access.

Others focused on trying to help Usenet server operators get rid of the spam. Programmer Arnt Gulbrandsen quickly put together a tool to help fight this kind of spam by “cancelling” the messages when spotted. This actually helped establish the early norm that it was okay to block and remove spam.
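The cancel mechanism itself is just another Usenet article: a control message naming the Message-ID of the post to be deleted. A minimal sketch of such a message, with placeholder addresses and IDs (and not Gulbrandsen's actual tool), might look like this:

```python
# A minimal sketch of a Usenet "cancel" control article, which asks servers to
# delete an earlier post identified by its Message-ID. Addresses and IDs are
# placeholders for illustration only.
from email.message import EmailMessage

def build_cancel(spam_message_id: str, newsgroups: str) -> EmailMessage:
    cancel = EmailMessage()
    cancel["From"] = "despammer@example.com"          # hypothetical sender
    cancel["Newsgroups"] = newsgroups
    cancel["Subject"] = f"cmsg cancel {spam_message_id}"
    cancel["Control"] = f"cancel {spam_message_id}"   # the header servers act on
    cancel.set_content("Spam cancelled automatically.")
    return cancel

notice = build_cancel("<spam-12345@news.example.net>", "misc.legal")
```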

As for Canter and Siegel, they divorced a couple of years later, though both kept promoting themselves as internet marketing experts. Canter was disbarred in Tennessee for his internet advertising practices, though he had already moved on from practicing law. Cybersell, the company they had set up to do internet advertising, was apparently dissolved in 1998.


Comments on “Content Moderation Case Study: Usenet Has To Figure Out How To Deal With Spam (April 1994)”

15 Comments
This comment has been deemed insightful by the community.
Ehud Gavron (profile) says:

History ...

The only point of this post that touches on content moderation is the decisions on the part of Usenet administrators (‘admins’) on figuring out how to combat what we now know as spam. These occurred in the real world, unlike the timeline in this article.

The end result is bots that removed SPAM based on a "spam score" which varied from bot to bot. These would analyze Usenet postings and when the [mostly] same content was posted to different newsgroups, too many newsgroups, etc. (depending on THAT particular bot’s settings) the postings were "removed" by a fraudulent Usenet posting pretending to be from the author retracting the post. [That was the only protocol option at the time]
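To make “spam score” concrete, one widely cited measure from that era is the Breidbart Index: for each substantially identical copy of an article, take the square root of the number of newsgroups it reached, then sum over all copies. The sketch below is an illustration, not any particular bot's actual rule, and the threshold of 20 is only the commonly cited convention.

```python
# A rough sketch of a Breidbart Index style spam score.
import math

def breidbart_index(groups_per_copy: list[int]) -> float:
    """groups_per_copy[i] = newsgroups the i-th identical copy was posted to."""
    return sum(math.sqrt(n) for n in groups_per_copy)

# 5,500 single-group copies of the same ad:
print(breidbart_index([1] * 5500))        # 5500.0
print(breidbart_index([1] * 5500) >= 20)  # True -> cancel
```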

The commercial Internet was formed in 1993 when SprintLink convinced the Commercial Internet Exchange (CIX) to allow non-Internet companies to connect. Other interconnects, such as MAE-West and MAE-East, quickly followed. The rest is history.

The rest of the bad dates are below.

Shana Tova.

Ehud

UUCP was standardized in 1988.
Usenet was created in 1989.
Neither of these is "the early 1990s".

The "Internet" was created in 1973.
The NSFnet was created in 1985.
Neither of these is "the early 1990s."

By 1986 the NSF Supercomputer centers were online. The NSFnet was linked to ARPAnet, and the Internet (or "nascent Internet", whatever that is) was there.
Neither of these were in "the early 1990s."

Sir Tim Berners-Lee created the HTTP protocol and the world-wide web in 1989.
This was not in "the early 1990s."

While Canter & Siegel did their "green card lawyers spamming the globe thing later" the first Internet spam was in 1978.
This was not in 1993.
https://www.edn.com/1st-spam-email-is-sent-may-3-1978/

The first Usenet spam was not C&S in April 1994. It was someone else in January 1994.
https://en.wikipedia.org/wiki/Newsgroup_spam

The first "commercial" Usenet spam,[2][4] and the one which is often (mistakenly) claimed to be the first Usenet spam of any sort, was an advertisement for legal services entitled "Green Card Lottery – Final One?".[5] It was posted on 12 April 1994, by Arizona lawyers Laurence Canter and Martha Siegel, and hawked legal representation for United States immigrants seeking green cards.

Cybersell, the company they had setup to do internet advertising, was apparently dissolved in 1998.
Try 1997. Also don’t confuse Cybersell Inc (AZ) with Cybersell Inc (FLA).

Anonymous Coward says:

Re: Pedant ...

You’ve pedanted yourself into a corner. For all intents and purposes, the internet became a "thing" for the general public in the early 90s. Yes, it existed prior to that but it was not generally available. Even once it was it took time for the tools to be developed and made available for people to actually connect to the network.

Many of your "facts" are distorted. To pick just one example, UUCP and Usenet were strictly dial-up distribution networks, primarily via BBSes, when first introduced. It wasn’t until the introduction of NNTP that Usenet became available on the internet. Though RFC 977 was introduced in 1986 it took years for it to become widely implemented to the point that people moved away from BBS Usenet to internet Usenet (technically called UUCPNet). Right, the early 90s.

Nothing in the article says any of this was invented in the early 90s. Only that it took until then for the scum to crawl out of the woodwork to take advantage of it. It’s odd that someone as pedantic as yourself has such poor reading comprehension.

Ehud Gavron (profile) says:

Re: Re: Ad hominems are the last refuge of the needy

Many of your "facts" are distorted.
Like which facts?

To pick one example, UUCP and Usenet were strictly dial-up distribution networks…
Yes, and that’s not a distorted fact. The "Internet" as available to the masses in 1993 was also dial-up, and we all got free coffee coaster 3.5" diskettes, then CDs with AOL, Netcom, etc. That is neither distorted nor does it change the nature of the fact.

It wasn’t until the introduction of NNTP that Usenet became available on the internet
Narp. Usenet newsgroups were available long before NNTP. NNTP just made it convenient to use a remote server instead of UUCP to a local store.

Nothing in the article says any of this was invented in the early 90s… reading comprehension… yada yada whine whine.

Sorry we read the same English words but understood them differently. Kind of like how you likely read the Wikipedia NNTP article and didn’t get past the first part.

Gotta go — my modem is tying up the line.

E

Thad (profile) says:

Re: History ...

In the early 1990s, facing increased pressure from the commercial sector who sensed there might be some value in the nascent “Internet,” the National Science Foundation began easing informal restrictions on commercial activity over the Internet. This gave rise to the earliest internet companies — but also to spam.

The phrase "in the early 1990s" in the article does not refer to the standardization of UUCP, the creation of Usenet, the creation of the Internet, the NSFnet, or the WWW. It refers to the rise of commercial activity on the Internet and an increase in spam.

It is generally considered one of the earliest examples of commercial “spam” on the internet — and certainly the most “successful” at the time.

The article does not describe the C&S spam as either "the first spam" or "the first Usenet spam". It merely describes it as "one of the earliest" and "the most successful at the time".

You could have saved a lot of time "correcting" the article if you’d read it more carefully. As usual, you insist on being a pedant but aren’t actually very good at it.

(By the way, "HTTP protocol" is redundant; it expands to "hypertext transfer protocol protocol".)

Ehud Gavron (profile) says:

Re: Re: History ...

You quote the original article as if it’s gospel. It’s not.

It is generally considered one of the earliest examples of commercial “spam” on the internet — and certainly the most “successful” at the time.
WHO generally considered [past?] this?

HTTP protocol is redundant…
So is SCUBA gear. So is SIM card. So is PIN number. Welcome to English.

E

Scary Devil Monastery (profile) says:

Re: Re:

"Now you can’t get disbarred unless your murder half the bar association, or shit in a judge’s coffee, apparently."

To be fair I think it’s only a matter of time before someone like Liebowitz leaps onto the judges podium and squats over their cup, pants around his ankles, in order to test that hypothesis.

US judges may end up having to pay for their "laissez-faire" approach to courtroom conduct with uninvited floaters.

Rekrul says:

The last time I actually tried to browse any of the binary newsgroups, they were drowning in spam. One popular group used to get about 500,000 posts a day. The last time I tried to download headers for that group, there were about a million posts an hour.

Yes, I know, most "true" Usenet users look down on the binary groups, but the last time I looked at any of the discussion groups, they were filled with spam as well.
