
>"No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider"

That's how it's written. It provides protection for hosting controversial content posted by users, based on the fact that the host didn't write it, endorse it, or promote it.

If the host has an active hand in doing those things, they are no longer merely hosting. The same goes for suppressing or censoring.

The "another content provider" means the host is not involved in determining the content. But treating posts differently based on that content is not meaningfully different than an editor choosing what to publish.



There is no clause matching your assertion in this rather short law passed 25 years ago, and no case law supporting your interpretation. I'd ask you to provide some, but we both know there is none. Neither editorial selection nor removal has anything to do with 230 immunity. The law is working as written and intended.

It isn't merely a grey area. The law was deliberately intended to support editorial selection, and even deletion of content offensive to the platform owner's sensibilities, while explicitly preserving the platform's protection from liability for its users' content. This is supported by a plain reading of the text and by the words of the men who wrote it.

First, the law:

https://www.law.cornell.edu/uscode/text/47/230

> No provider or user of an interactive computer service shall be held liable on account of—

> (A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or

> (B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1).

Now, the words of Ron Wyden:

>Republican Congressman Chris Cox and I wrote Section 230 in 1996 to give up-and-coming tech companies a sword and a shield, and to foster free speech and innovation online. Essentially, 230 says that users, not the website that hosts their content, are the ones responsible for what they post, whether on Facebook or in the comments section of a news article. That's what I call the shield.

> But it also gave companies a sword so that they can take down offensive content, lies and slime — the stuff that may be protected by the First Amendment but that most people do not want to experience online. And so they are free to take down white supremacist content or flag tweets that glorify violence (as Twitter did with President Trump's recent tweet) without fear of being sued for bias or even of having their site shut down.

Your position isn't merely incorrect, it is directly counterfactual. It's an example of replacing known reality with a counterfactual in order to gain a rhetorical advantage.

It is more advantageous to start from the lie that platform owners have somehow been violating or abusing the law outrageously and that something must be done about it, rather than from the actual reality: the law is working as intended, and you simply wish to change it. The lie exploits the general feeling that existing laws and privileges ought to be enforced, while new laws need to be examined before being enacted.

The tendency of people of a particular political stripe to simply make up alternative facts in response to inconvenient reality is deeply challenging to productive dialogue, because it's impossible to start from a reasonable shared basis for discussion; one must instead rewind to figure out which of one's fellow's assumptions rest on fabricated reality. I expect you took it as a given that the line you were fed was based in reality. You are mistaken. I strongly encourage you to read both the law and the words of the man who co-wrote it.


>obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable

The intent is clear from this list. It even implicitly allows violence as long as it's not excessive. It does not include anything about truth, or difference of opinion, or statements that aren't "backed by the science".

It also requires a "good faith" interpretation, which would be irrelevant if hosts were allowed to act on any criteria they wish.


"Otherwise objectionable" is so broad a fig leaf it could shade a continent. Just because the list calls out excessive violence does not mean a platform need tolerate any violence at all.

Good faith is only relevant to immunity from suits by people whose content was moderated, not to liability for other parties' communications, which is inviolate in all cases.

A finding of bad faith would then pertain exclusively to the particular act or acts of moderation, and would not affect the platform's immunity for either other acts of moderation or for other parties' speech. There is no act by which a company's 230 protection may be dissolved in general.

Furthermore, absent 230 protection for an act or acts of moderation, those moderated would have little just cause to bring anything but frivolous lawsuits, because in truth you have little right to be heard on someone else's website. That is to say, a finding of bad faith would only open the door to lawsuits; it wouldn't be a cause of action in and of itself.

Say I banned you from my site because I don't like blond people, but lied and said it was for breaking site rules.

I might be acting in bad faith, but since you have no particular right to access or post content on my site, a judge would find my bad faith may cost me protection under 230 but nothing else.

This is probably why there wasn't a notable finding of bad faith that helped the blocked party between 1996 and 2019.

In 2019 we finally got Enigma Software Group v. Malwarebytes, wherein Enigma sued Malwarebytes for labeling Enigma's software a potentially unwanted program and advising users not to install it. The Ninth Circuit found that blocking done in bad faith to discourage competition is not protected.

If your issue is Facebook and Twitter silencing some viewpoints this is hardly encouraging as there are few parallels.



