Hacker News

The AI backlash is interesting to watch. It’s both right and wrong.

It’s right that slop is real and a lot of present AI is crap, and it’s right about the risks to employment and the economy. But on the first point, it’s wrong to assume this won’t get better. AI is advancing rapidly.

When it comes to economics, AI is just bringing to a head fundamental problems with wealth distribution and fairness that have been building for a long time. AI might be the straw that breaks the camel’s back, but the camel isn’t looking so hot. Before AI hit big we already had two generations priced out of housing.

The story on journalism and public discourse is, I think, similar. AI might be the last nail in that coffin but the coffin was already closed and the man was already dead. Social media algorithms and man-made disinformation killed public discourse more than a decade ago. Any solution to those problems should also help the slop problem.

My point is that all these problems were nearing crisis level without AI and would still demand solutions without AI. AI might actually help by making them no longer possible to ignore.

Meanwhile all this criticism totally ignores what AI will make possible and is already making possible. Typical human negativity bias.




I think text/code genAI still has space to grow, but it's the same space fixed-image generation models had two years ago. Usage grew, costs diminished, accuracy slightly increased, but the weirdness will stay, and gross mistakes are still there at a similar rate.

This is not true in the slightest. Accuracy and weirdness are clearly getting better and better for AI images, video, and articles alike.

Remember Will Smith eating spaghetti? Ignoring that trend line is utter blindness.

That being said, you can make an argument that AI will flatline in the future. But your evidence for that does not lie in past progress, as the past shows a trend line that is clearly upward.


I said accuracy was slightly better. The improvement isn't that large, though; not enough for me to regenerate my images.

Weirdness is still there (when you want photorealism), and I think the rate of gross mistakes (at least for Stable Diffusion) has been around the same, roughly one in ten, for at least a year (I don't remember exactly what I generated in 2024, but I think it was around the same then too). And it still can't get historical generation right: I haven't once seen accurate clothing and architecture in historical scenes. It's great for fantasy, which is my main usage, so I didn't write about that, but historical accuracy is definitely something that hasn't improved at all.

If you want to argue it's slightly less weird, I can listen (I think it's mostly desensitization, but there are arguments to be made). If you think the rate of gross mistakes went from 1/10 to 1/12, maybe you're right and I'm too negative. But historical mistakes in scenes just never change; they are still extremely wrong.



Fixed images, as in static images. Of course I'm not talking about Sora or videos. My original point was that Stable Diffusion and other static-image generation models peaked in 2024, after about two years, and improvements have been minimal since then. Hence "fixed images".

> Meanwhile all this criticism totally ignores what AI will make possible and is already making possible.

I'm not ignoring it, I simply haven't seen compelling evidence of the hype.


the backlash presented on this site is specifically against Microsoft's AI crap. way to over-generalize

There are ways to advance rapidly without causing problems for large numbers of people. LLMs are being forced everywhere and the result is a mess. Being serious about a promising new technology usually means going slow and emphasizing what tools and practices work in particular contexts. Instead we are all being told to use the latest or be left behind. It doesn't make sense to sling slop everywhere and then complain about the state of journalism and public discourse; going carefully was an option, and it was rejected.

> it’s wrong that this won’t get better. AI is advancing rapidly.

Even assuming this is true for the high-end productivity models (like Claude Code), how does this change anything about the main argument?

The AI integrations which are getting forced on me in every single app aren't using those newer models: I try them every few months just-in-case and they are always complete garbage. The people generating SEO blogspam sure aren't going to pay for the fancy models either. And it's not like the herders of the AI bots vomiting comments all over social media care about quality. Unless you are an active ChatGPT / Claude / whatever subscriber, most of your day-to-day interaction with AI will always be with the cheapest slop you can imagine - and that's what the backlash is primarily against.

Besides, I don't actually care whether you are stuffing dog poo or delicious tomatoes in my mailbox, I want it to stop! I never asked for it, and it's ruining all the stuff I actually want delivered to my home. Those integrations are at best as annoying as the "Chat with a sales consultant!" and "Let our site send you notifications!" popups: even if your AI were to achieve AGI, I would still want them gone.


Is it really a complaint about the quality of AI? The dangerous part is that slop will become harder to detect.

The anxiety surrounding AI-generated "slop" mirrors the frantic warnings of late 15th-century clerics who viewed the printing press as an engine of spiritual decay. Johannes Trithemius, a prominent Benedictine abbot, famously argued that monk-scribes should not abandon their pens, fearing that printed books were ephemeral, error-ridden toys that would undermine the sanctity of scripture and the discipline of the mind. He believed that the sheer volume of cheap, mechanical texts would drown out genuine wisdom and lead to a permanent decline in the quality of human thought.

History shows he fundamentally misunderstood the human capacity for adaptation. Rather than succumbing to a sea of printed garbage, society developed sophisticated new filters. We invented the modern bibliography, the peer-review process, the concept of a "trusted publisher," and the critical literacy skills required to navigate a world where information was no longer a rare luxury. Humans have an innate drive to seek out signal over noise. Just as the chaos of the early printing era eventually gave way to the Enlightenment, our current struggle with synthetic content will likely trigger a new evolution in how we verify truth and value human insight.


Manuscripts could contain handwritten errors, and of course there could be misprints due to wrongly set type, but the content wasn't generated out of nowhere. Unless we're talking about asemic or automatic writing due to some... "spiritual" influence.

The key here is human thought, as you said. Whether these books were written by scribes or printed by the press, they still contained human-produced substance. It's not a fair comparison.


That exact stance (plus scribes' financial interests) prevented the printing press from being widely used in the Ottoman Empire for more than 200 years.

I think his legacy is really about steganography and cryptography. He relied on handwritten volumes and perhaps couldn't adapt his cryptographic techniques to print.

"Generating slop is totally fine because we'll eventually develop anti-slop filters" isn't exactly the most convincing argument, you know.

Besides, your link between the "chaos of the early printing press" and the start of the Enlightenment is very forced. The Greek philosophers did plenty of critical thinking after all, and they had no need for a printing press. I see absolutely zero reason why the current AI bubble will inevitably result in an Enlightenment-like period, nor why AI would be a hard requirement for one.


The frontiers of mathematics are already incorporating AI, and people like Terence Tao are documenting its progress. At the very least, arguably the best mathematician in the world does this only because he has reached the opposite conclusion to yours.

So when you say zero reason, I have to tell you that your absolutist stance is blindness. There are many reasons why it can happen, and many reasons why it can’t.


Incredibly valid opinion. Many people disagree but this is an extremely possible future for AI.

There is also a darker future where AI improves to the point where it’s no longer slop: it produces quality code, text, and books that are better than human work, in a fraction of a second, from one misspelled prompt. Given the past trajectory of AI, this is the more likely outcome.

The other outcome is AI flatlines. This is as good as it gets. In which case the future you predict may come to pass.


Yeah... we want QUALITY slop! And we want it NOW!

You want to understand the backlash?

Then stop looking at benchmarks and start looking at mirrors.

For a certain kind of engineer, programming was never just a skill. It was the quiet proof that they were capable in a world that rarely offers certainty. You learn the syntax. You master the abstractions. You tame chaos into structure. Machines obey. Systems bend. The invisible becomes tractable. Over time, that competence hardens into identity.

You are not just someone who writes code. You are someone who understands.

Now imagine watching that understanding become commonplace.

An autocomplete becomes a collaborator. A collaborator becomes a generator. The thing that once required years of apprenticeship begins to appear in seconds on a screen. Imperfect, yes. Crude in places. But undeniably moving.

If your sense of self is braided tightly with that craft, you don’t experience this as a tool upgrade. You experience it as erosion.

And the mind does what minds have always done when erosion threatens something sacred. It fortifies. It searches for certainty. It assembles a narrative strong enough to stand against the tremor.

“It’s hype.” “It produces slop.” “It makes more work than it saves.” “It can’t really think.”

In specific contexts, those statements are accurate. Anyone who uses these systems seriously knows their limits. But watch how quickly narrow truths are inflated into sweeping conclusions. Watch how nuance evaporates. Watch how the exceptions are framed as the rule.

When identity feels endangered, skepticism becomes absolutism.

This is not unique to engineers. It is not even unique to this century. Every community that binds meaning to belief has faced the same reckoning. When a worldview sustains status, belonging, and purpose, evidence alone does not dislodge it. The mind will reinterpret what it sees before it surrenders what it is.

Community is preserved first. Truth negotiates for second place.

On Hacker News, coding is not merely economic activity. It is status, tribe, hierarchy. It is the signal that separates builders from observers. And when the boundary blurs, when machines cross it, something deeper than workflow efficiency is unsettled.

People sense the shift long before they articulate it. They feel the ground change texture beneath their feet. The language of critique becomes sharper. The confidence more brittle. The dismissals more absolute.

Because what is being questioned is not only whether the tools work. It is whether the years invested in mastery will still confer distinction.

There are legitimate risks. There is genuine mediocrity flooding the market. There are economic consequences that deserve sober attention. But beneath the surface of many reactions lies a quieter fear: if this continues, what makes me exceptional?

That question is rarely spoken aloud. It doesn’t need to be. It hums beneath the arguments.

History is unkind to identities built on exclusivity. The printing press unsettled scribes. The camera unsettled portrait painters. The spreadsheet unsettled accountants. Each time, the first instinct was to defend the old boundary. Each time, the boundary moved anyway.

What we are witnessing is not the death of programming. It is the democratization of parts of it. And democratization always feels like diminishment to those who built their lives around scarcity.

It is easier to call the tide fake than to admit it is rising.

That does not mean the critics are fools. It means they are human. They are protecting something that once protected them. They are guarding the altar that gave them worth.

But tides do not negotiate with altars.

They arrive.



