
The solution to this seems pretty straightforward: force social media companies to choose to identify as a "platform" or a "publisher", rather than a mix of the two that gets to claim the most convenient aspects of both.

If they choose "platform", then they can take no responsibility for content posted, but also allow all content, and only remove content when it is required of them by the legal system (when the content is illegal and has been reported as such)

If they choose "publisher" then they are free to censor, "deplatform", delete, or restrict posting of anything they wish, but if someone posts something illegal, they take their share of legal responsibility for publishing it.



So if I run a large chess forum, and want to delete posts that aren't about chess, I would have to choose "publisher"?

And if some user X posted something about a dispute they were having with chess tournament organizer Y, and Y felt this was untrue and decided to sue X for libel, I'd also have responsibility for that alleged libel because I'm the publisher?

Before anyone says that this is a ridiculous scenario, that's pretty similar to what happened with Prodigy's bulletin boards before the CDA. Prodigy had content guidelines for their boards, moderators that moderated to enforce those guidelines, and software that screened for offensive language.

Someone posted a message that someone else felt was libelous, and that someone else sued both the poster and Prodigy. The court ruled that because Prodigy enforced content guidelines and had filters for bad language, they were a publisher and responsible for the content of all the messages.

It was that case that was one of the main inspirations for the CDA.

Before that, the way the case law was shaping up there were only going to be two options for an online forum.

1. If the forum does not want to be liable for what its users say, it cannot moderate or restrict their content, except as required by law.

2. If the forum places any restrictions on content, then it is essentially in the same position as a newspaper or a book publisher, and to keep its legal risks under control it is probably going to have to do a similar level of fact checking.


Very much yes. If you have time to go through all posts to make sure they're about chess, you have time to go through all posts to make sure they're not libelous.

Maybe the model for internet forums should be nonprofit private clubs, rather than profit-making enterprises. If all members bought equity and a voting share in the administration of the forum, there would already be legally established means of determining liability for acts committed within the club. Is collective ownership of a commons such a crazy idea? You could still make your money as a subcontractor of the club providing technical services for hire, but instead of being a (benevolent) dictator for the users, you would be subject to the users. Shifting ownership would shift liability.

The essential ethical problem of the current situation, even from the days of people complaining that YouTube's primary business was illegally sharing copyrighted content, is that liability was being shed to the users while all profits were going to the owners. Maybe that should just not be possible in future.


> If you have time to go through all posts to make sure they're about chess, you have time to go through all posts to make sure they're not libelous.

I can tell if a post is about chess with just a quick skim to a very high degree of accuracy.

Telling if a post is libelous will often require an in-depth investigation of its claims, often requiring emailing or calling people involved in whatever the post was about and tracking down and interviewing witnesses.


Not only that, but someone can sue you even if something isn't libelous. They just have to believe it is.


> Very much yes. If you have time to go through all posts to make sure they're about chess, you have time to go through all posts to make sure they're not libelous.

The relevant criterion is not whether something is libelous. It's whether someone might sue because they think it's libel. At minimum you probably need a lawyer to evaluate claims, and even then you're not sure you're safe. When a group has assets, it becomes a more likely target.

> Is collective ownership of a commons such a crazy idea?

Legally, yes. There is no way to shift liability from the club to an individual member who made a post under the model you're describing. Liability would be for the site owner, i.e. the whole club.


> There is no way to shift liability from the club to an individual member who made a post under the model you're describing. Liability would be for the site owner, i.e. the whole club.

If they only let rich people into the club it might work. The club could require members to agree to contracts that require the member to indemnify the club for any costs and damages associated with any lawsuits against the club over that member's posts.

Technically, that doesn't really shift liability to the member, but if the member is going to pay the club's damages and the club's legal costs it's almost as good. That only works if the member can actually pay, hence the "only let rich people in" part.


True. That leads to adverse selection though: the clubs self-select as juicy targets laden with assets. Further, only the rich can speak.

I doubt those making these proposals upthread want these consequences. For whatever reason people really don’t think their proposals through. (Not referring to you)


You seem to be confidently supporting a case that this is an impossible structure because only the rich would be allowed to speak. You don't seem to be making any distinction between the situation the club would be in from the situation that individuals are currently in.

The club being liable is no different from an individual being liable, which is the current situation, other than that the club has the ability to dissolve. Again, if libel law is broken, fix it. Somehow your alternative to the unthinkable outcome of only the rich being able to speak is to exclusively immunize the rich from the consequences of speech that they have direct and complete control over.


People don’t sue random Internet commentors for libel.

But imagine a forum for exclusively rich people. They have all signed indemnities that will cover lawsuits of indefinite liability. Millions or billions. And if you find anything suable anywhere on the forum, you are guaranteed by the structure of the club that someone will make good and pay if you win.

That’s the structure that you were arguing for. It’s a strange structure.

But I am not arguing that only the rich can speak. Regular users don't face any real risk of libel suits over Internet comments.

The risk would arise for the private club because it has a target on its back: the money is there.

As a reminder, the reason regular people couldn't speak under your proposal is that in normal cases the forum would itself be liable for their comments. So it would be uneconomical to run a forum due to the legal risk.


> Liability would be for the site owner, i.e. the whole club

I don't see a problem with that. The club should have vetted their members better, and since it didn't, it should be dissolved and its assets handed to the wronged party.

If libel law is broken, fix libel law. If a site distributes misinformation (that is not protected by law) that harms someone, someone should be responsible.

Indemnification of massive tech companies and not individuals is just a giveaway.


> And if some user X posted something about a dispute they were having with chess tournament organizer Y, and Y felt this was untrue and decided to sue X for libel, I'd also have responsibility for that alleged libel because I'm the publisher?

What's the precedent for TV and books? Are the TV networks or book publishers included in libel lawsuits? Not answering your question either way, just gathering information for now.


Books aren't user-generated content – everything in a published book can be easily vetted.

Neither is TV, for the most part. You have guests on talk shows, and you can't control what they say, but you are also specifically choosing those people, so placing the blame on the publisher (the TV network) in that case makes a fair amount of sense.

With an online forum where anyone can sign up and start posting, you don't have that.


This depends on jurisdiction.


So if I run a large chess forum, and want to delete posts that aren't about chess, I would have to choose "publisher"?

No. Any such law could easily have provisions that exclude moderation for off-topic content (which would be defined carefully so as to include spam).

It would mean you can't delete "trolling", "bullying", on-topic flamewars and other categories of things you might want. To do that you'd have to indeed become a publisher. And then you'd need to respond to complaints quickly and take down libellous content, or become responsible for it.


Great! I'll just run a forum on "everything chess that isn't trolling, bullying, or flamewars".


A law like this could only apply to websites operating at a certain scale, exceeding some revenue threshold, owned by publicly traded companies, or some other proxy to get only the sites we’re talking about here. This is just off the top of my head too, I’m sure much smarter people can craft a framework much better than OP’s which is already a very good one.


Ok, so the situation right now: you make a web forum, and delete posts that are about your friends doing bad things but leave posts about that organizer, because you don't know them and don't bother to look into reports that it is all, in fact, libel, because that's work and you aren't liable.

Meanwhile, you take down posts of people talking about how the tournament did not offer gender neutral bathrooms or how there aren't enough female chess masters, because that's "political" and off topic, but then turn a blind eye towards people harassing women on the forum and require them to keep begging you with the flag buttons to clean up your mess.

The result of this is just that the powerful majority gets something akin to a working filter and all the marginalized people don't even get lip service to their issues: they end up being unable to even bring up their complaints, because those complaints are divisive and often themselves against some terms of service banning meta-conversation (a common enough issue that this website--Hacker News--has such an indefensible clause).

This is all just not OK: you don't get to have immunity and have control. The idea that you should be able to restrict some of what happens but somehow not be even sort of responsible for everything else that happens is the kind of position you can only hold if you are part of the powerful majority, whatever it might be (cishet white males, corporate stooges, western prudes... take your pick) :/.

So, I guess, "congratulations tzs": you are demonstrating loud and clear that you think the legal position of running a chess forum that actively distorts the world view of everyone using it is more important to defend than the rights of groups like breast feeding mothers (a group that is routinely thrown off of websites and deplatfored even as alt-right groups are empowered).

(Meanwhile, the entire premise that people should get to sort of look like a platform without really being a platform is robbing the world of actual platforms, and it is ridiculous: the existence of things that almost work like platforms for the majority of powerful people means that getting people to tolerate the pain of using fully distributed systems is by and large off the table :/. Either these large websites should offer real platforms or they should be forced to not pretend to do so, so we can actually launch and use distributed systems instead.)


> you think the legal position of running a chess forum that actively distorts the world view of everyone using it is more important to defend than the rights of groups like breast feeding mothers

Wait, are you claiming chess forums are known to often discriminate against breastfeeding mothers? Or did you quietly jump from the specific to the general case? If so, are you saying Internet communities tend to discriminate against them? Hell, the closest I've seen is people IRL complaining about breastfeeding on public transit/other small, closed, public spaces. Justified or not, how is this infringing on any rights and how is any of this related to the Internet? How would one even know someone is a breastfeeding mother on a pseudonymous mostly text-based platform?

> The idea that you should be able to restrict some of what happens but somehow not be even sort of responsible for everything else...

But is that the case? If you don't restrict porn etc., you get hit by children's safety laws. If you don't remove ISIS videos you get hit by anti-terrorism laws. Same for hate speech, although I believe you're liable only if you refuse to take it down after being ordered to. (I'm basing this on a mix of US and EU laws since I forget which are which.) These things apply to "platforms" as well. But besides that, you can't have a perfectly neutral platform anyways. If you want the platform to survive, it needs to be usable. A chess forum that's full of politics is not usable and neither is a cooking forum full of nazis.


The danger here, it seems to me, is that once you make it clear that the people saying that there aren't enough female chess masters have a legal right to be on this forum and cannot have their posts thrown out as "political," the people saying that men are naturally more suited to the game of chess and women should get back in the kitchen will rub their hands in glee and realize that they now have a legal right to be on every other chess forum, because their posts, too, are merely "political" and not libelous. They know how to toe that line - doing so is a core competency of internet trolls (see also "hide your power level"). That is, the tradeoff of saying that the powerful majority no longer gets to regulate forums and must give the marginalized a voice is saying that the marginalized no longer get to regulate forums either and must give the powerful majority a voice - and the powerful majority is, after all, powerful, and will take advantage of this too.

So the practical result is that only people with the resources to moderate their forums to the point that they're willing to accept liability will run moderated forums. That requires either significant resources to run the moderation program (and a chilling effect on what gets approved) or significant resources to deal with liability and American-style lawsuits. The breastfeeding mother who's setting up a blog for her three closest friends to comment on will technically be fine, but she can no longer grow her community beyond her three closest friends. The powerful (whether that's "big companies" or "socially powerful races" or whatever) are happy to keep restricting what happens on their platform and take on liability because that liability will affect them less.

I don't have a solution for this, and I find your argument mostly compelling, but I don't know that it gets us where we want to be either. I'm not really comfortable with "The price we pay for keeping MRAs from derailing forums is allowing people to say that feminists are derailing forums," but "The price we pay for making sure feminists can speak is making sure MRAs can speak" feels equally uncomfortable.

(BTW - I upvoted both of you because I think you both made good points.)


Do you have a clue what MRAs truly are?


No replies, just negs. I assume at the moment that you have a wrong conception of MRAs and are attributing the MRA label to certain other types of groups.


The solution is to anonymize and decentralize the internet, and we've been saying it for decades.


The technical underpinnings of the internet are decentralized well enough. The reality is that these aren't technical problems, and any technical workaround will ultimately become circular. We have to come up with the necessary legal and social innovations that will allow people to cope.


That's a slightly different solution from just decentralizing the internet, which 'saurik advocated above. (I'm in favor of decentralization on principle, to be clear, I'm just not sure it solves this problem.)

The problem with anonymizing the internet is that it prevents forming real-world communities. If you're forced to remain hidden so that you escape scrutiny, the breastfeeding mother has no way of finding others in her city beyond her three existing friends, because none of them want to be known as the dissident who supports breastfeeding. There's still value to the online communities, yes, but this seems like a significant abdication of the power of the internet to help society.

The problem with decentralizing the internet without anonymizing it is that any legal restrictions remain. If there's a law that says that Twitter can't kick breastfeeding mothers off their website without taking legal responsibility, that same law will say that my Mastodon instance must federate with every alt-right-but-toeing-the-line Mastodon instance if I don't want to take legal responsibility for everything that shows up on my Mastodon instance.


>...that same law will say that my Mastodon instance must federate with every alt-right-but-toeing-the-line Mastodon instance if I don't want to take legal responsibility for everything that shows up on my Mastodon instance.

This seems highly unlikely to me. What if you are running a different version of the codebase or something from those instances? v2.0.1special. Even leaving aside constraints like that, volume of inbound traffic, etc. are direct costs to federation with other instances that I doubt laws would be willing to force you to pay.

I wonder if there are examples of this having been done in the past. Microsoft's antitrust case is the closest that comes to mind, but they were operating from an absolutely dominant standpoint at the time. If Mastodon/ActivityPub ever reaches that level, I'd consider it...a win?


The law doesn't have to force you to pay it - it just has to say, either federate with everybody or nobody. All it needs to do is say, if you're accepting posts from other people, you need to do so in a non-discriminatory way.

(Providers like Twitter would love this law because it would kill decentralized systems. In fact this would allow Jack to "decentralize" Twitter https://twitter.com/jack/status/1204766078468911106 while making sure no decentralized systems run by the powerless can compete. Everyone else can technically run a decentralized system, but Twitter is the only one worth using. It's the perfect regulatory capture.)

Sure, you can probably get by for a bit by just happening to fail to peer with Nazis, but you would immediately lose the ability to make a public shared block list with reasoning, a la https://github.com/dzuk-mutant/blockchain/blob/master/list/l... . You'd also have trouble successfully peering with non-Nazi instances - if you publish the protocol changes in v2.0.1special so other reasonable people can federate with you, the Nazis will just apply those patches too.


It is already both (for the most part). The problem is that people will always prefer centralization. From shopping malls, capital cities, credit cards... to Internet platforms, the unavoidable fact is that centralization makes things easier, cheaper and more convenient.

Even federated-by-design technologies like email have slowly turned into mostly centralized services. Just having to pick an email provider is too much work for most people. Gmail is good and free, so everyone uses that. As long as servers cost money and big corps can offer free things to get user share, decentralised services are doomed to never become mainstream.


That didn't make the star-chans better; quite the opposite in fact.


It did from a certain point of view, which is kind of an unspoken point. Many of the people who argue that allowing anything but absolute and unfettered freedom of speech is a slippery slope towards fascism are not doing so from a neutral or academic point of view. Rather, they want to push the Overton window of societal acceptance for ideas currently considered intolerable by mainstream society by guaranteeing exposure to certain forms of political speech and propaganda.


Isn't the powerful majority you describe one any individual can enter into by becoming the administrator of their own site?

The scenario you describe sounds like it makes for a pretty terrible chess site, and I bet people would like an alternative.


The only solution is to kill all small independent boards and only allow everything to be handled by Facebook et al who are sufficiently armed with lawyers.

Death to independent free speech publishers.


Yes, I would like the recourse of suing you for libel in this situation.

And if you would like to protect against that risk you should consider including a "flag" button or something on each post where I, a member of the public, can get ahold of you personally, without needing to create an account.


That seems like a good idea, but it simplifies the issue way too much.

> but also allow all content,

Almost no platform survives without a bit of moderation. If you don't moderate, you'll get every kind of content, including spam, trolling, etc.

Add to that the fact that you'll get people who will just push to boycott such platforms, and thus you'll no longer have many ways to make this kind of platform exist.

> but if someone posts something illegal, they take their share of legal responsibility for publishing it.

That's also kind of impossible. The law evolved to account for the impossibility of looking at every piece of content, which is why the DMCA exists. Look at YouTube, which tries to filter its content much further than the law currently requires: they have HUGE teams of moderators, multiple tens of thousands, with some of the best kinds of neural networks, working on this, and yet it fails so often.

The world isn't binary, we need a bit of both.

It could be an interesting experiment, though, to legally allow the kind of platform you suggest: some way to protect website owners from any legal retaliation. It would most probably look like 4chan, but still interesting.


>> The world isn't binary, we need a bit of both.

I believe the point the GP is making is that social media sites play both sides when it's in their best interest and neither when it isn't. So they moderate in the name of "public safety" when it suits them, but can't catch everything so they hide behind "impartial utility" to avoid responsibility.

They aren't walking a line, they are switching sides when it's in their best interest.


> They aren't walking a line, they are switching sides when it's in their best interest.

Yeah, my comment kind of mentions how that's just not in the interest of website owners and thus won't happen.

Since when do businesses work toward anything other than their best interest? That's kind of the whole point.


Oh I'll definitely agree I'm oversimplifying it, and there's room for improvement. But I think the core issue remains: if they are allowed to control discourse by choosing who can speak and who cannot, then they must be held responsible for who they allow to speak, and they need to be transparent about it.

And to your last point: as a decently frequent user of 4chan, that's pretty much what I like about it, though there absolutely IS moderation on most boards. People's minds go to /b/ and /pol/ when they think of 4chan, but there are several niche hobby communities on other boards which thrive in a (relatively) low-moderation, low-interference setting.


> if they are allowed to control discourse by choosing who can speak and who cannot, then they must be held responsible for who they allow to speak, and they need to be transparent about it.

Wouldn't this idea in essence eliminate the possibility of moderation of any kind? In order to do any moderation, every post by every user would have to be manually reviewed. This obviously doesn't scale, so in effect the law would eliminate the ability for website owners to moderate content on their own site. Also, what happens to sites like reddit and github where moderation is a feature offered by the product? How would people be able to find a moderated community if that's what they wanted? The idea seems totally unworkable.


Hey root_axis, I see you pop up pretty frequently on threads like this where people really don't understand the consequences of forcing companies to host speech and removing their legal protections if they decide to moderate. Personally I've seen the same dialog hash out so many times it's become exhausting to even reply to them.

I just want to thank you for continuing to fight for what's right.


haha, probably a sign I am spending too much time on HN, but it does surprise me to see this suggestion pop up so often, especially on this site, which happens to be a quintessential example of moderation as part of the product. Imagine what dang's workload would be like if he had to hand review every one of these comments!


This obviously doesn't scale

That’s not true though. What you mean is that the economics of moderation are not convenient to achieving your desired outcome. YouTube for example could easily afford to review uploaded content before publishing it. They just wouldn’t make as much profit as they would like to.


More than 300 hours of video are uploaded to YouTube every minute, the economics of comprehensive manual review are not "inconvenient" they are wholly implausible.


You’ve got it backwards. That much gets uploaded because it’s unmoderated. The vast majority of it is junk that everyone knows would never pass any sort of quality filter, so no one would bother to upload it.


That makes no sense. YouTube has no policy against "junk" videos, there is no reason why individual users would change their upload habits except for a tiny minority that knowingly upload violating videos.


No, there is that instant dopamine hit of uploading a video and seeing the likes in real time. Whereas if it were "upload a video and six months later it might be approved," people would be more diligent about it. This is very obvious human nature.


Relatively low-moderation? Half of the boards ban for political discussion in its entirety.


> If you don't moderate, you'll get every kind of content, including spam, trolling, etc.

There is a reasonable solution to this. Give people the tools to self moderate.

Things like reddit, for example, work pretty great, in that if you don't like a community, then you can go to a different one, with moderation rules that you prefer.


> Things like reddit, for example, work pretty great, in that if you don't like a community, then you can go to a different one, with moderation rules that you prefer.

The internet already works this way, if you don't like the moderation policies of a website you have the freedom to use a different website.


> The internet already works this way,

I am referring to platforms.

No, most platforms do not work this way.


>Things like reddit, for example, work pretty great, in that if you don't like a community, then you can go to a different one, with moderation rules that you prefer

What you have described here is exactly how all websites work. If you don't like the site moderation you are free to use a different site, just like on reddit.com except you change a few more characters in the URL bar.


> What you have described here is exactly how all websites work

Not within the platform, no.

It is easy to create another subreddit. And if you create another subreddit, you have full access to all the same Reddit infrastructure as every other redditor.

I am talking about access to the platform.

> Except you change a few more characters

No, you would not have access to all the Reddit infrastructure, and access to all the cross site stuff, using the same Reddit account.

It is pretty easy to move across Reddit, and within Reddit, and get all the advantages of it. You don't get all those advantages, if it is another website.

Other people can't use, for example, the same mobile Reddit app, to access your website, and would have to download a new app.

They can't log in using the same account. They can't keep track of their posting history, all through the same user link.

There are numerous examples like that. There are lots and lots of benefits to using the actual Reddit website, compared to using a different website.

So no, you cannot just create a new website and get all of the very significant benefits that you would get from having it all on Reddit, using the same infrastructure, and account, and mobile app, and posting history, and follower list, etc., for example.


> Not within the platform, no.

What does this sentence mean? No what? On what platform?

> It is easy to create another subreddit

It is easy to create another website.

> if you create another subreddit, you have full access to all the same Reddit infrastructure as every other redditor

So what? There is no distinction from a moderation perspective. If you don't like how the mods run a subreddit you can use a different subreddit, same as any other forum on the internet.

> So no, you cannot just create a new website, and get all of the very significant benefits that you would get, from having it all on Reddit, using the same infrastructure, and account, and mobile app, and posting history, and follower list, ect,ect ect, for example.

Yes, you can just create a new website or use a different one. You don't "own" the users of reddit.com; there is no reason why one should be entitled to the infrastructure or the users.


> It is easy to create another website.

You can't get all the same benefits of having it all on reddit. Things like being on the same mobile app, and having the same user account.

> Yes you can just create a new website or use a different. You don't "own" the users of reddit.com, there is no reason why one should be entitled to the infrastructure or the users.

This is called a barrier to entry. Regardless of who "owns" all of these benefits, it is still something that a person does not get if they simply create a different website.

The missing benefits would make "creating another website" significantly less useful, and are the ones that I mentioned before. You would not be on the same mobile app, would not have the same follower list, would not have the same user account, etc.

These are huge benefits that one would not get if they simply created a different website. Who "owns" it does not change the fact that these benefits are large, and you would not get them if you merely created a new website.

> There is no distinction from a moderation perspective

Yes there is. The difference is that if you create a new website, you don't get all those significant benefits that I talked about. That is the distinction that I am talking about.


But how does that work as a regulation?

It's hard enough to get community moderators to sign up to do unpleasant work for no money, how do you expect to get anyone to do it after you impose liability for getting it wrong?

Actually, maybe there's something to this: Impose no liability on anybody for moderating as long as they're not moderating more than ten million users. If you are, the only way to avoid liability is to be a common carrier. Then you can actually have community moderation, but you can't have Zuckerberg deciding what a billion people don't get to see.


> how do you expect to get anyone to do it after you impose liability for getting it wrong?

I am suggesting that there would be no platform-wide moderation; there would instead be things like public block lists that users could voluntarily subscribe to.

For example, on Twitter, anyone could publish a "spam account list" or whatever, and people could choose whatever their preferred block list is.

Or they could choose not to follow any block lists, if they so desire.

Some blocklists might only block spammers, another might block Donald Trump, and another might block anyone who posts any swear words at all, and individual people would choose how they would like to view their content.
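A minimal sketch of how that could work, assuming a blocklist is just a published list of account IDs (the list format and all names here are made up for illustration, not any real platform's API):

  # Opt-in, user-chosen blocklists instead of platform-wide moderation.
  from dataclasses import dataclass
  from urllib.request import urlopen

  @dataclass
  class Post:
      author_id: str
      text: str

  def fetch_blocklist(url):
      # A published blocklist is assumed to be plain text,
      # one blocked account ID per line.
      with urlopen(url) as resp:
          lines = resp.read().decode().splitlines()
      return {line.strip() for line in lines if line.strip()}

  def visible_posts(posts, blocklist_urls):
      # Filtering happens client-side, per user: the platform hosts
      # everything; each user decides which lists (if any) to apply.
      blocked = set()
      for url in blocklist_urls:
          blocked |= fetch_blocklist(url)
      return [p for p in posts if p.author_id not in blocked]

The platform stays neutral; all the editorial judgment lives in which lists each user chooses to subscribe to.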


Until Reddit decides your community is not OK and will quarantine you like https://www.reddit.com/r/TheRedPill/ or outright ban you like https://www.reddit.com/r/watchpeopledie

Self moderation only works up to a certain extent.


Give people the tools to self moderate.

In the old days on Usenet we had “killfiles”. It worked pretty well.
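For those who never used Usenet: a killfile was a local, per-user set of pattern rules your newsreader applied before showing you anything. A rough sketch of the idea (the rule format here is invented, not any particular newsreader's syntax):

  import re

  # Local, per-user filtering on message headers; rules and header
  # names are illustrative only.
  KILL_RULES = [
      ("from", re.compile(r"spammer@example\.com")),
      ("subject", re.compile(r"make money fast", re.IGNORECASE)),
  ]

  def killed(headers):
      # headers: dict mapping lowercase header names to values,
      # e.g. {"from": "...", "subject": "..."}
      return any(rule.search(headers.get(field, ""))
                 for field, rule in KILL_RULES)

  def readable(messages):
      # The server delivers everything; dropping messages is entirely
      # the reader's choice, on the reader's own machine.
      return [m for m in messages if not killed(m)]

Nothing was hidden from anyone who didn't choose to hide it.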


Killfiles did not work well, and are part of why Usenet died. Long-time members will have good killfiles and a relatively good experience, yes, but new members start with getting every message. Building up a killfile involved a long process of deciding who was worth reading, which is a large investment when you've just started getting into a channel.


Having killfiles dictated by a central authority is completely missing the point. In the old days everyone made up their own minds.


That's the way the system had handled it for ages. Common carriers vs publishers. Tech companies decided that didn't apply to them, and they could have the best of both worlds.


Social media is a bit different than, say, the phone system, in its ability to widely broadcast things.

It's different from traditional publishers in that there are orders of magnitude more 'publishers', rather than, say, a newspaper and a few TV stations per city.

It's a tricky problem.


> Almost no platform survives without a bit of moderation. If you don't moderate, you'll get every kind of content, including spam, trolling, etc.

If you create your own Twitter account and post a bunch of spam there, no one will follow you; so it shouldn't matter. You can also limit someone's access to "discovery" mechanisms without deleting their content or preventing them from posting entirely so even their opted-in followers can't see their content.

The core problem is when social media companies create mechanisms for users to bother each other without knowing each other, and there "spam" is the tip of the iceberg: people routinely abuse and harass other people--which should be construed very very broadly: if you had a child who died in a school shooting and someone likes to keep reminding you of it as they take glee in your pain, that is obviously abuse and harassment, and yet it isn't "illegal" and clearly isn't "spam", so is almost always considered "totally fair game"--these websites don't remove that content or punish people for it, and yet they remove photos of people breast feeding, as if that is some crime against humanity.

What forcing websites to be platforms would do is fix social media by causing the people who make these websites to reconsider features that should probably not have existed in the first place.


> Almost no platform survives without a bit of moderation. If you don't moderate, you'll get every kind of content, including spam, trolling, etc.

There is moderation and there is censorship. Nobody is against filtering spam. Trolling is fine in my opinion. Just let the user have the option of blocking who they want to block and following who they want to follow.

> Add to that the fact that you'll get people who will just push to boycott such platforms, and thus you'll no longer have many ways to make this kind of platform exist.

If that was true, twitter, reddit, facebook, google wouldn't exist in the first place.

> That's also kind of impossible.

No it is not. Publishers don't find it impossible. Platforms don't find it impossible. If it was impossible, telecoms and publishers wouldn't exist.

> It could be an interesting experiment though to allow legally the kind of platform you suggest.

We already had this kind of platform.

> It would most probably look like 4chan, but still interesting.

No, it would look like 2009-2013 twitter, reddit, facebook, etc.


> There is moderation and there is censorship. Nobody is against filtering spam. Trolling is fine in my opinion. Just let the user have the option of blocking who they want to block and following who they want to follow.

In my experience, most people are not interested in sifting through mountains of garbage just to pick out a few morsels of a decent conversation. If you let trolls and bad-faith actors persist on your site, soon those people will be the only folks who are left.


> In my experience, most people are not interested in sifting through mountains of garbage just to pick out a few morsels of a decent conversation.

If that was true, HN would be infinitely more popular than reddit.

> If you let trolls and bad-faith actors persist on your site, soon those people will be the only folks who are left.

No. If you let users block trolls and bad-faith actors, they go away.

Once again, if you were right, twitter, reddit, facebook, etc wouldn't have grown to what they are today.


> If that was true, HN would be infinitely more popular than reddit.

Not sure that was the best example. /r/programming is kind of notorious for being HN on a few-hour tape delay with a substantially diminished quality of conversation and fewer comments in general. But it's kind of a moot point because...

> twitter, reddit, facebook

All of these social networks are moderated to one degree or another. In fact, this entire post was spawned because of a Twitter moderation decision, and it is nowhere near the first time this has happened.

More importantly, none of these social networks gained popularity because of lack of moderation. Twitter became popular because you could potentially win the lottery and talk to a famous person. Reddit became popular because Digg refugees needed somewhere to go and it had pornography on top of that. Facebook became popular because you could keep up with your buddies from college and everybody had real names and faces attached to them.


> No. If you let users block trolls and bad-faith actors, they go away.

This is so profoundly untrue that Twitter had to stop giving "egg" avatars to users who hadn't set one, because the number of sock-puppet accounts made eggs block-on-sight.


> If that was true, HN would be infinitely more popular than reddit.

Reddit communities live and die by the strength of their moderation. Sure, Reddit as a whole is mountains of garbage. But the beauty (if that's the word) of the subreddit system is that to folks who want to talk about communism, hating women and minorities is garbage, and to folks who want to hate women and minorities, communism is garbage, and they both get the experience they want.

Reddit's popularity is due to the fact that a) people have multiple interests and so they want to hop communities with low activation energy (same high-level reason that GitHub got popular over individual git hosting sites: you already have an account) and b) there is some correlation between being a "bad-faith actor" across communities, regardless of their specific moderation worldview (e.g., neither /r/GamersRiseUp nor /r/FULLCOMMUNISM is interested in V1agr4), and so "you have some karma at all, regardless of source" is a useful filter.

> Once again, if you were right, twitter, reddit, facebook, etc wouldn't have grown to what they are today.

All of these systems put work into blocking abusive participants site-wide (including real humans who are very carefully and intentionally spewing vitriol) and are increasingly automatically blocking them.


> Reddit communities live and die by the strength of their moderation.

Hence why I wasn't against moderation. I'm against censorship. I'm all for limiting a "communism" subreddit to the topic of communism (moderation). However, I'm against the communism subreddit censoring people saying nasty things about Stalin or what have you (censorship).

Like how politics, atheism and other popular subreddits used to be open platforms for people to express how they truly feel. Until the shift happened and they turned into censored hellholes.

> All of these systems put work into blocking abusive participants site-wide (including real humans who are very carefully and intentionally spewing vitriol) and are increasingly automatically blocking them.

No. All of these systems put work into censoring people they disagree with. If truly "spewing vitriol" was the reason, then politics, worldnews, twoxchromosome, atheism and every major sub would be banned.

As long as the "vitriol" was pertinent to the topic, it should be allowed. After all, that's the point of the voting system right? If you don't like it, vote it down.

The 2009-2013 social media was great because everyone got to spew their vitriol so it evened things out. Now the vitriol is so concentrated that you have shitholes like politics and the_donald. Funnily enough, one is quarantined and the other isn't.

Moderation is okay. Censorship isn't.


I don't agree with your core conceit of delineating between moderation and censorship, but this threw me:

> As long as the "vitriol" was pertinent to the topic, it should be allowed. After all, that's the point of the voting system right? If you don't like it, vote it down.

Voting systems as implemented by many popular sites are moderation/censorship via mob rule, and I'm surprised that you advocate for it.

I actually prefer having actual moderators to having a post voted down because five random people disagreed with my opinion and wanted to hide it in an attempt to control the narrative of the comment thread.

That's a problem even this site doesn't manage to avoid. Heck, look at your posts; in this comment thread, people are downvoting you in an attempt to hide your opinion, and I don't even agree with you.


Wait, what? 2009-2013 Twitter, Reddit, and Facebook were moderating content. We never had the kind of platform you're talking about. The closest was 4Chan, and even 4Chan heavily moderated individual boards. Even platforms like Gab still have moderation today.

Forums, usergroups, mailing lists, blog comments, etc... have always been moderated for spam, trolling, abuse, and just bad actors in general.

> Nobody is against filtering spam.

Repeal Section 230 and I give you 1 year, tops, before advertisers start making the case that filtering spam is censorship. After all, who decides what is and isn't spam? Advertisers wouldn't waste their time posting spam if people weren't clicking on it, so clearly the content is relevant to some people. Who are you to say that those advertisers shouldn't be able to reach their audience?


> Wait, what? 2009-2013 Twitter, Reddit, and Facebook were moderating content.

But not censoring. You could pretty much say and do anything on those platforms except for illegal content.

> Forums, usergroups, mailing lists, blog comments, etc... have always been moderated for spam, trolling, abuse, and just bad actors in general.

Which is different from censoring.

> Even platforms like Gab still have moderation today.

Gab always had moderation.

Either people haven't used 2009-2013 twitter, reddit, facebook, etc or people are pushing some heavy revisionist history here.

Reddit especially branded itself the "free speech platform" in that time period.

I'm okay with moderation, I'm against censorship. For example, I'm all for a sports subreddit/community limiting the content to sports. And I'm for the users saying anything they want about the sports topic, even if it offends people.

See the difference?

It's funny how every response to me was by people who intentionally confused moderation with censorship.

And locking the wikileaks account isn't moderation, it's censorship.


What do you think the difference is between moderation and censorship?

Because Reddit/Twitter/Facebook in 2009-2013 didn't just remove illegal content. They removed tons of legal content too. They removed spam. Facebook removed pornography. Reddit in particular allowed individual subreddits to moderate/censor basically on any criteria whatsoever. If you went into a random forum in 2009 about dogs and started spouting nonsense about how we should all eat dogs, you would get kicked off of that forum. They wouldn't patiently hear out your controversial point-of-view.

Go back and read some of the usenet threads from this time period, there are people getting banned just as a joke; the paradigm of 'benevolent dictators' running forums was already pretty widely accepted.

What definition of censorship do you have that doesn't include removing explicit content, self-promotion, and off-topic posts?


> There is moderation and there is censorship. Nobody is against filtering spam.

This position has all the integrity, defensibility, and internal logic of "I can't define pornography, but I know it when I see it". Moderation and censorship are the same concept. It's just that "moderation" is metaphysically good, and "censorship" is metaphysically bad.


> No it is not. Publishers don't find it impossible. Platforms don't find it impossible. If it was impossible, telcoms and publishers wouldn't exist.

The problem is neither of them holds the middle ground.

Telecoms allow everything. Some of it is bad. Not having the bad stuff filtered is not great, but that's okay specifically because it can be filtered by somebody else. You don't need Comcast to do spam filtering on your email because Gmail can do it.

Publishers allow almost nothing. Most of what they publish is first-party. The editor of The New York Times can have their article published in The New York Times, but you generally can't.

Neither of those entity types primarily host user-generated content. Which of them do you propose is the appropriate model for moderating YouTube or Reddit?


> Which of them do you propose is the appropriate model for moderating YouTube or Reddit?

Neither? People are proposing we develop a new model that reflects the new situation we face. Sounds reasonable to me…


But then what's the new model? There are three options.

You can have a platform which is completely unmoderated, but then it's overrun with spam.

You can demand a level of accuracy which is impossible at the scale of a many-to-many communications platform (which de facto prohibits them).

Or you let people try to moderate them, have them filter out 95% of the garbage, and don't punish them for the 5% they inevitably miss.


> Or you let people try to moderate them, have them filter out 95% of the garbage, and don't punish them for the 5% they inevitably miss.

Yes? That sounds great? That's how it currently works today?


That's the one you take away if you require a choice between having any moderation at all and having a safe harbor, because without the safe harbor the 5% they get wrong subjects them to liability.


> There is moderation and there is censorship

All moderation is censorship, though not all censorship is moderation.


Pretty sure you have that backwards


No, moderation is censorship of user-submitted content by forum operators (whether owners or some other kind of host) or their agents. Censorship not directed at user-submitted content or carried out by other entities (e.g., the state) is not moderation.


It would definitely look like 4chan. 4chan is a straightforward example of unrestricted speech. And it's not a bad place, just different, but most people wouldn't enjoy being there.

Even places like this maintain a certain form of discourse by threats of bans.


Twitter, Reddit, and Facebook all have moderation.


Can we assume you’ve never moderated any forum, or the comments on any web site?

Any site owner automatically removes thousands of spam comments. But these aren’t illegal. Your system would either:

1. Force sites to publish all legal spam, or

2. Face liability if anyone sneaks illegal content in. In practice all sites would need to censor all comments until they passed human approval. Big sites would probably need approval from lawyers.


Platforms would need to allow themselves to be DDoSed... Not gonna happen. It won't stop alt-right talking heads with no clue about the world from promoting their "platform/publisher" agenda.


This is roughly some people's interpretation of the situation in the U.S. prior to 1996, following the Cubby and Stratton Oakmont cases.

https://en.wikipedia.org/wiki/Section_230_of_the_Communicati...

The legislators who created §230 thought these incentives were bad, and wanted service operators to be able to choose to place themselves somewhere other than these two endpoints.


Did providers really have the ability to claim to be "platforms" before? If so, then I guess I am super anti-Section 230 (which I have always been somewhat uncomfortable with, despite the EFF insisting that I should like it... but their arguments frankly always felt like "without this we don't have platforms", and I agree we should have platforms and would not want a world without platforms; in some sense I feel like the correct solution is to force everyone to be a publisher and then you get people building distributed systems to have platforms, so no one can be said to own or control them).


> Did providers really have the ability to claim to be "platforms" before?

Possibly!

https://en.wikipedia.org/wiki/Cubby,_Inc._v._CompuServe_Inc.

> despite the EFF insisting that I should like it...

I should mention that I work there and I like §230 very much (although I'm not doing much work on it at the moment). One idea that I think is helpful here is that there are so many intermediaries that are involved in allowing you to communicate with someone.

https://www.eff.org/free-speech-weak-link

People are already applying all kinds of pressure on each of those intermediaries; with weakened §230 protections, more of them would also be threatening litigation.

> in some sense I feel like the correct solution is force everyone to be a publisher

I also like this at some level, and I remember when I had my own web site hosted on my own desktop computer. (Some of my online communications are still hosted by my friend.) But I don't tend to think tinkering with intermediaries' incentives about content is the thing that will get us there, because there are so many other practical advantages that people have perceived in the more centralized services.


> If they choose "platform", then they can take no responsibility for content posted, but also allow all content, and only remove content when it is required of them by the legal system (when the content is illegal and has been reported as such)

If you require the content to be illegal for a platform to remove it, and there is a consequence for being wrong, that's a problem, since the determination is outside of platform competency, which encourages them not to act even against likely-illegal content.

Section 230 was adopted specifically to address this problem: attempts to constrain socially undesirable content (even if aimed at illegal content) risked making online providers liable for all content under the model that applies to traditional media, which was viewed as not functioning at web scale, at least without inhibiting innovation greatly (of course, it's pure coincidence that Section 230 support is eroding now that there are huge incumbents that would benefit from making it drastically more expensive to scale up new challengers).


What would Hacker News be? If someone posts spam comments on HN are the mods allowed to delete them? If someone writes a false comment is HN liable for that false comment?


All that would happen is that the platform and the moderation would be separated, and the moderation made optional. There would be a 4chan-like Hacker News platform, and a Hacker News modlist that hides problematic comments.

Hacker news already operates like this in a small way with shadowbanned accounts. Users can choose whether to see shadowbanned comments or hide them from view, what level of censorship they prefer.

Platforms only being platforms wouldn't turn everything into spam and screeching. It just means you'd be able to see that if you wanted to.
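A sketch of that separation, with invented names (HN's real mechanism is just the showdead flag; this generalizes it to third-party modlists anyone could publish):

  def render(comments, modlist_ids, show_hidden=False):
      # comments: list of (comment_id, text) pairs; the platform hosts
      # all of them. modlist_ids: comment IDs flagged by whichever
      # modlist this user subscribed to; show_hidden is the "showdead"
      # equivalent, an entirely per-user choice.
      for cid, text in comments:
          if cid in modlist_ids and not show_hidden:
              continue  # hidden only for users of this modlist
          yield text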


I only use HN clients where the default is the sensible thing: see everything.

Curious what the website uses as a default for non-logged-in/new accounts - anyone fill in the blank for me?


Showdead is off by default. You can test it in private/incognito mode.


Welp, there goes the respect I had.

Censorship by default, not just for crybabies who are afraid of reading things they don't like.

Depressing. But, that's what I've come to expect on this channel.


That’s a good question. Presumably you could draw some line to distinguish between small communities and enormous ones (tens of millions of monthly active users, for example), or distinguish between social media and forums.


I think the big platforms would love to be treated as platforms only. But the problem is that the platform owner is the best positioned to police the content. If they have to rely on the legal system to take things down, there is going to be a lot more nefarious activity that goes untouched. And I don't think the public or our politicians have the stomach for that.


Because they don't give tools to law enforcement agencies to enforce their local laws. As a platform, it would be within their power and their right to do so. I think that is preferable to them doing the enforcing themselves.


> give tools to law enforcement agencies to enforce their local laws

Is that really a world you want to live in?


We already live in that world. It's just that we have another master on top of that.


Not quite. Facebook can kick you off their platform, but they can't bring you up on charges and throw you in prison.


Well the whole discussion becomes moot from that point of view.


> If they have to rely on the legal system to take things down, there is going to be a lot more nefarious activity that goes untouched.

The DMCA does a reasonably good job of handling that exact problem for copyright infringing content, while still providing due process.

Edit: I have no idea why this is being downvoted; what I've said is completely true. If you're mad about YouTube's terrible policies, they're not related to the DMCA. They have their own completely separate (and highly oppressive) system for handling infringement claims (in addition to their DMCA obligations).


This DMCA?

https://twitter.com/JRhodesPianist/status/103692924465446092... https://www.eff.org/takedowns

The DMCA is fundamentally flawed and is a terrible model to base any future system on.


The DMCA is fine. It's everything service providers do in addition to the DMCA that causes problems. The DMCA process is very simple:

1. You receive a takedown notice

2. If it’s not valid, you submit a declaration that it’s not

3. Your content is restored and the claimant now has to take you to court if they still think it’s infringing

4. If you lied on your counterclaim, you're now also guilty of perjury

The only thing it’s really missing is an imminent threat of perjury for frivolous claims (there also some obvious areas to make the process more efficient). But as far as a simple way of managing offending content, while still providing due process, the general format is quite good. It’s other unrelated policies that tend to get people the most riled up.

https://www.publicknowledge.org/blog/universal-music-group-a...


> 4. If you lied on your counterclaim, you're now also guilty of perjury

In theory, yes. In reality, no. "I was not aware to the best of my knowledge..." because the DMCA claimant never vetted things, they just employed a bot to spam claims all over the internet.

It wasn't a "knowing" lie. So it's effectively without consequence or fall back.

Hint: how many people have been charged with, let alone convicted of perjury in the history of the DMCA?

Answer: none. Even in the small, small, small minority of counterclaims that have ended up in court, with the most egregious lack of standing, the worst that has happened (and even this only a single-digit number of times) is an award of costs.

Those are pretty good odds if you're a content creator (or copyright troll).


Copying is a fictional crime.


> force social media companies to choose to identify as a "platform" or a "publisher"

Who is going to "force" them? Governments? That ask them to censor and censure people in the first place?


Governments aren’t homogeneous. In the US there’s plenty of energy (especially among conservatives) to restrict social media giants’ right to censor, at least with respect to political censorship.


You wrote that comment on a moderated, zero revenue forum! You really want HN to be liable for every post here? Where would you post that opinion if they shut it down instead?


Seems like one could craft policy that treats social media giants differently than relatively small forums, no?


Sounds like this decision may be made for them; see the previous discussion of one method that may be used to ban E2E: https://news.ycombinator.com/item?id=22202110


What is to stop a company from choosing to be a platform and then using dark patterns to censor certain views? E.g. Google placing a competitor's sites on page 2, or Reddit hiding a certain community from its popular page.
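
To illustrate why that would be so hard to prove from the outside, here is a toy sketch (the domains and weights are entirely hypothetical; no real ranking system is implied). A quiet per-domain penalty folds into the relevance score and looks exactly like the page simply being judged less relevant:

    # Toy example only: domains and weights are invented.
    DEMOTED = {"competitor.example"}

    def score(domain, relevance):
        # A hidden 0.5x penalty is indistinguishable, externally,
        # from any other relevance-tuning parameter.
        return relevance * (0.5 if domain in DEMOTED else 1.0)

    results = [("competitor.example", 0.9), ("neutral.example", 0.6)]
    ranked = sorted(results, key=lambda r: score(*r), reverse=True)
    # -> the less relevant neutral page now outranks the competitor.

Detecting this from outside requires knowing the "true" relevance, which is exactly the information the platform controls.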


In this case, if it were regulated and there was evidence they were doing that, a lawsuit could be brought against them.


Laws.


First question: how would your proposed system differ from the (somewhat notorious) DMCA claim system we have right now? Do you think the DMCA claim system is a good model for this? Or are there additional safeguards you would put in place to stop platforms from just removing anything they get a request for? Would you make filers liable for frivolous takedown requests? If so, does that mean takedown requests can't happen anonymously? We already have a system in place where anybody can request that anything be taken down for copyright reasons, and unsurprisingly, it is widely abused for censorship.

Second question: when you see platforms like 8chan taken down today, do you think that's wrong? Should companies like Cloudflare be forced by the government to leave those sites up, even when they're advocating for completely despicable things or crossing a line into outright threats? If Cloudflare isn't going to be forced by the government to leave those sites up, why would anyone willingly host the platforms that can't do any moderation at all, even of open hate speech, rampant spam/scam advertising, pornography, or borderline illegal stalking/threats? To push that question a step farther -- when you look at Gmail's auto-rejection and sorting of spam, do you think that's wrong?

Third question: do kids get to have user-generated content hosted anywhere? They can't realistically go on 8chan, so is the idea that they won't join forums until they're old enough to feel comfortable in environments filled with Nazis? Bear in mind that this includes communities like Miiverse. Nintendo isn't human-moderating every Miiverse post before it goes public. There's no world where they willingly open themselves up to the kind of liability you're talking about; those platforms would just be shut down.

Final question, and I do genuinely mean this as a question, not as a dismissal or as a request to go away: why are you on Hackernews right now, given that Hackernews fits very squarely in the category you're saying shouldn't exist? Hackernews is heavily moderated (way more heavily than Twitter), but it also allows people to post crazy stuff. The moderation mostly happens after the fact, which would open the owners up to liability. Would Hackernews be a better community if submissions/posts weren't moderated? Would Hackernews be a better community if every post you made went into a human-moderated queue, and you had to wait (at least) 2-3 days before it became publicly visible? Bear in mind, Hackernews is largely moderated by something like 3 people, so even in the best case scenario they're definitely going to need to hire more and get a bit more aggressive about advertising and fees.


This would indeed be a good thing. I wonder if there's a push to do that already somewhere?


I believe this proposal would more or less be the DEFAULT were it not for Section 230 of the Communications Decency Act of 1996.

Section 230 has had both good results (bloggers aren't going to get in trouble because someone posted something illegal in their comments) and bad results (twitter/youtube/paypal were all allowed to nuke Alex Jones from orbit simultaneously, removing his ability to distribute content, communicate with followers, and accept payments).

I don't like or agree with Alex Jones (he's a wacko, but people have made some funny videos out of his freakouts), but people have to remember: if it can happen to one person, the same could happen to anyone. People praised this "deplatforming" because they didn't like the target. But this is essentially praising these sites for crippling someone's career without any oversight.


> twitter/youtube/paypal were all allowed to nuke Alex Jones from orbit simultaneously, removing his ability to distribute content, communicate with followers, and accept payments

That is not a bad result.


It's a convenient result in the short term for those of us on the left who think that Alex Jones' views are utterly repugnant, but ultimately these sorts of deplatforming actions are counterproductive. Alex Jones' fans will no longer be exposed to rational arguments from non-fans on the same platform. They'll instead become increasingly trapped in their own filter bubbles, and now have even more of a martyr complex. The cure for bad speech is good speech in an open forum.


> Alex Jones' fans will no longer be exposed to rational arguments from non-fans on the same platform. They'll instead become increasingly trapped in their own filter bubbles, and now have even more of a martyr complex.

This is a common argument, and yet every study of conspiracy theories shows this isn't the case. Conspiracy theories thrive on reach, and in the absence of reach they tend to die out.


> if it can happen to one person, the same could happen to anyone.

A guy stands on a hill holding high an umbrella in the middle of a thunderstorm and is then struck by lightning. "If it could happen to one person it could happen to anyone." Technically true, but you have a lot of control over the odds. Being a racist harassing villain is the common thread shared amongst almost everyone who has been deplatformed so far. Stay out of that territory, and stay off hills during storms. You'll be fine.


Limiting deplatforming is taking away the 1st amendment rights of Twitter, YouTube, etc.


Would you say the same exact thing about the phone companies?

Phone companies are platforms in the same exact way. Do you believe their 1st amendment rights are being taken away because they can't "deplatform" phone calls from people they don't like, or from their competitors?


If the choice to deplatform someone is a 1st amendment speech act (which I am OK with), then the choice not to do so needs to be a prosecutable offense (at which point you are just arguing that everyone should be a "publisher"). (Then, if websites can't manage to exist in this legally consistent regime, maybe we finally get research into distributed systems in order to have true "platforms".)


Did you miss a "not" there or are you saying deplatforming should be a requirement?


That solution is only valid for the US, and only under the current legislation, which was introduced in 1996 and can be changed at any time.

For example, other countries may use laws (the EU's Article 13, for instance) to bend social media [1], and Facebook has agreed to host agents of Marlène Schiappa (our Minister of Equality) in-house. Law is the embodiment of the current balance of political power, not a fixed frame of reference to stake culprits upon. And for the moment, social media are the power in that balance, so it's hard to use the law against them.

[1] https://juliareda.eu/2019/12/french_uploadfilter_law/


So you're saying that social media sites have more power than the elected government of a country, and you're arguing that's a GOOD thing? Or am I misinterpreting that?



