
Users flagged it and there were several reports that it seems likely to be LLM-generated.

Please email us (hn@ycombinator.com) to communicate with the mods. We don't get alerted to mentions of usernames and we don't get even close to seeing every comment, especially after a thread has gone from the front page.



It feels like this article was flagged by users who sell AI stuff, because they don't like its content - so they tried to censor it with some excuse.

This article shows flaws with AI driven development.


I didn't flag it, but I am certain it is fully or almost-fully LLM composed. I haven't engaged with it carefully enough to know if I agree with it, because just from a skim it seems so fully LLM-synthesized I am not going to bother.


Do you think it is LLM-generated or not? And how are you making that assessment?


Why are you changing the subject?

I think you should care more about bad actors potentially brigading HN, instead of asking me random questions about whether I think this article was written by AI or not.

For me the article is front page worthy (in fact it had quite a lot of upvotes) as it brought an interesting point of view and an interesting discussion.

I don't care if it was written by a human, or maybe rewritten by some tool. Substance over form!

To me it looks like the AI proponents were unhappy with this article, so they mass-flagged it. Why don't they like it? Because this article takes a heavy anti-AI stance.


The point is that the topic doesn’t matter, and whether you or I like the content doesn’t matter. The HN community and moderation team have come to a consensus that only original human-authored writing has a place here.

There are only 30 places on the front page, and thousands of submissions each day trying to take one of them. It’s reasonable for the audience to expect that a post that has made it to the front page has had sufficient effort invested in it to be deserving of that place.

Something else I’ve noticed (just today, in part due to this subthread): people are far less inclined to feel negatively towards an LLM-generated article or comment if they agree with it. We need to consciously resist being influenced in this way.



