> Respectify is not an engine for a monoculture of thought; in fact, it intends to assist the opposite, encouraging healthy interaction along the way.
We don’t want to monitor or enforce saying specific things. We want people to be able to speak, but understand how others will hear them.
All those times people talk past each other. Or are rude but don’t realise it. Or are rude but don’t care (and should, because it’s a human on the other end). Or, worst of all, the people who intentionally say something awful and… just maybe can learn a bit about what they’re saying.
I get your fear. I think I’ve seen AI used for bad quite a bit. I hope, given the tech isn’t going away, we can use it to make things a bit better. That’s the goal.
Intent is immaterial if the output doesn’t match. The very nature of the product, in attempting to coach commenters to argue the “correct” way, goes against your stated goals. This will encourage the kind of algo-speak self-censorship now common on TikTok and elsewhere, just more effectively, because it at least tries to explain the rules.
I get that objection, and we certainly don’t want that to become the norm. The idea, of course, is to prevent only the kinds of comments that aren’t helpful and that a community wants prevented.
Different bloggers and different communities are going to define that differently. That is why we are making a good-faith effort at allowing sites/people/groups to tweak this as desired.
Are you able to tune the AI so that it guides towards liberalism and rejects postmodernism? The point of postmodernism is to problematise language, and it results in unwelcoming nannying like the example above. I suspect the AI only knows to do that because of the pervasiveness of postmodernism and its offshoots in academia and society more broadly.