What Safe Harbor is and isn’t and what Section 230 does and does not protect.
Featuring Tom Merritt.
MP3
Please SUBSCRIBE HERE.
A special thanks to all our supporters–without you, none of this would be possible.
Thanks to Kevin MacLeod of Incompetech.com for the theme music.
Thanks to Garrett Weinzierl for the logo!
Thanks to our mods, Kylde, Jack_Shid, KAPT_Kipper, and scottierowland on the subreddit
Send us email to feedback@dailytechnewsshow.com
Episode Script
Safe Harbor
Free pass
Government subsidy
CDA 230
Are you confused?
Don’t be.
Let’s help you Know a Little More about Safe Harbor.
Let’s start with the legal concept of a Safe Harbor. It’s a safety net to try to prevent misapplication of a rule.
A common example is a rule against reckless driving. You might declare a safe harbor for anyone driving less than 10 miles an hour. If you’re going that slow, it’s not reckless, and you can stop evaluating it.
For the internet, the concept of safe harbor revolves around who is liable for what’s posted online. If I run a blog and someone posts a comment on my blog that is libelous, you might think that the commenter is responsible for what they posted, but the law may see my blog as the “publisher” of that comment and hold me responsible. Safe Harbor is the idea that, given certain conditions, you can host comments from other people without being held liable for those postings. In the US that principle is enshrined in Section 230 of the Communications Decency Act of 1996.
So if you hear Safe Harbor and CDA 230 used interchangeably, that’s why.
An early example is Smith v. California, decided by the Supreme Court in 1959. In that case a bookstore owner in Los Angeles was prosecuted for having an obscene book in his store. The Supreme Court ruled that a bookstore is a distributor who can’t be expected to review every bit of content before it is sold. So the bookstore owner should only be liable if they knew or should have known that what they were distributing was illegal. The court said that without this protection, a safe harbor if you will, bookstores would limit their offerings to books they had inspected, which would limit access to books that were not illegal.
The Safe Harbor discussion as it applies to the internet starts in the early 1990s with legal cases against CompuServe in 1991 and Prodigy in 1995.
By that time the case law was pretty clear. If you were a publisher, you were responsible for the content you published, no matter who wrote it. If you were a distributor, you were not responsible for things written in the material you distributed. The publisher was.
CompuServe had a policy of not trying to moderate content. So in Cubby, Inc. v. CompuServe the court deemed CompuServe a distributor and therefore immune from liability.
However, Prodigy employed moderators to validate content. So in Stratton Oakmont, Inc. v. Prodigy Services Co., Prodigy was deemed a publisher, since it had editorial control over what was published, and was therefore liable for comments from its users.
If left as is, this precedent would encourage internet companies NOT to moderate content.
Congress at the time was preparing the Communications Decency Act, part of the Telecommunications Act of 1996. This act would make knowingly sending indecent or obscene material to minors a criminal offense.
If enacted, this law would require internet companies to block indecent or obscene content, which would make them publishers under the Stratton Oakmont precedent and thus responsible for ALL content on their platforms.
Companies would have to vet every comment and every post.
That led two members of the House of Representatives, Republican Chris Cox and Democrat Ron Wyden, to write a section of the CDA that allowed internet companies to moderate content without becoming publishers.
It ended up being twenty-six words long:
“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”
That became Section 230 of Title 47, often called Section 230 of the CDA.
Oddly, the rest of the CDA no longer exists. The anti-indecency portion that spurred Section 230’s creation was challenged by the ACLU, and in 1997 the Supreme Court ruled that the anti-indecency sections were unconstitutional but left Section 230 standing.
So the provision that created the need for Section 230 was removed, leaving Section 230 to stand on its own.
Of course, Section 230 was challenged as well, in 1997’s Zeran v. AOL. An AOL user sued AOL for failing to remove, in a timely manner, libelous ads posted by other AOL users that connected his home phone number to the Oklahoma City bombing. In its decision, the United States Court of Appeals for the Fourth Circuit wrote “It would be impossible for service providers to screen each of their millions of postings for possible problems. Faced with potential liability for each message republished by their services, interactive computer service providers might choose to severely restrict the number and type of messages posted. Congress considered the weight of the speech interests implicated and chose to immunize service providers to avoid any such restrictive effect.”
It also wrote that Section 230 “creates a federal immunity to any cause of action that would make service providers liable for information originating with a third-party user of the service.”
This ended up being a stronger protection for distributors than the 1959 case provided. Instead of having to take content down once they knew about it, internet services were given a broader shield.
And that became the principal justification for CDA 230.
Now the section itself doesn’t give internet companies a free pass.
For a court to determine that you qualify for the immunity of Section 230 you must meet three criteria.
First, the company must be a provider or user of an interactive computer service. CDA 230 does not apply to people writing letters. That one is pretty straightforward.
Second, you have to be accused of being the publisher or speaker of the harmful information. That one’s pretty straightforward too. You can’t blame Facebook for something written on Twitter.
The third one is the key. You must not be the information content provider of the harmful information. Twitter is responsible for its own posts.
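To make the three-part test concrete, here’s a minimal sketch in Python, purely illustrative, that models the criteria as a simple boolean check. The function and parameter names are hypothetical, and actual eligibility is decided by courts weighing facts, not by code.

```python
# Illustrative sketch of the three-part Section 230 eligibility test
# described above. Names are hypothetical; courts, not code, decide this.
def qualifies_for_section_230_immunity(
    is_interactive_computer_service: bool,  # 1: provider or user of an interactive computer service
    accused_as_publisher_or_speaker: bool,  # 2: the claim treats you as the publisher or speaker
    provided_the_harmful_content: bool,     # 3: you were the information content provider
) -> bool:
    # Immunity requires criteria 1 and 2 to hold, and criterion 3 NOT to hold.
    return (
        is_interactive_computer_service
        and accused_as_publisher_or_speaker
        and not provided_the_harmful_content
    )

# A platform hosting a libelous comment written by a user: immune.
print(qualifies_for_section_230_immunity(True, True, False))  # True

# Twitter being sued over a post Twitter itself wrote: not immune.
print(qualifies_for_section_230_immunity(True, True, True))   # False
```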
There are also other exceptions. Companies are not immune from federal criminal liability and intellectual property claims. The intellectual property claims part was codified in the Digital Millennium Copyright Act of 1998. That’s a whole separate topic.
In addition, in April 2018, Congress passed a law, known as FOSTA-SESTA, exempting service providers from Section 230 immunity if they knowingly facilitate or support sex trafficking.
In recent years other exemptions to Section 230 have been proposed. Some want Section 230 to require that social networks deal with propaganda, fake news, terrorism or hate speech. Some want Section 230 to apply only to politically neutral platforms.
This last group argues that social networks can moderate under 230 to remove content from certain points of view. Since this gives a platform a political perspective, the argument goes, it should be considered a publisher.
The bill that has gained the most momentum recently is the Eliminating Abusive and Rampant Neglect of Interactive Technologies, or EARN IT, Act, sponsored by Republican Senators Lindsey Graham and Josh Hawley and Democratic Senators Dianne Feinstein and Richard Blumenthal.
The bill would create a 15-member government commission made up of administration officials and industry experts to establish best practices for the detection and reporting of child exploitation materials. Internet services that did not follow the practices recommended by the commission would risk losing CDA 230 immunity. The bill is opposed by the Internet Association, among others, for fear that the commission might recommend backdoors to encryption. The senators sponsoring the bill deny any intention to add backdoors.
To sum up: Section 230 was meant to encourage internet companies to allow open communication on their platforms by freeing them from liability for what other people post. This led to forums, chat rooms, social networks and more.
This is because, without immunity, the task of moderating at the internet’s scale would be too large to ever protect a company from that liability. So to encourage both open platforms and at least some moderation, Section 230 was needed.
But the scale problem didn’t go away. And that has led to illegal content flourishing, both content not protected by 230, like intellectual property infringement, and content that is protected but that companies want to reduce because of public pressure, like child pornography.
What happens next is anybody’s guess.
But I hope this helps you understand what Safe Harbor is and isn’t and what Section 230 does and does not protect.
In other words I hope now you feel like you Know a Little More about Safe Harbor.