Hacker News | julianlam's comments

Before you start to feel smug about this and think you're above it.

I've read three blog posts in as many days where their authors quietly reflect that Claude is so good that it has effectively hijacked their own decision making processes when they weigh the value of starting a project.

Do they embark on it themselves, or hand it over to Claude, even if the process is mind-numbing and they learn nothing?


It's baked in as deep as it can go.

Use a different protocol.


> Just a few years ago, AI essentially could not program at all. In the future, a given AI instance may “program better” than any single human in history. But for now, real programmers will always win.

For how long? Do I get to feel smug about this for 10 days, 10 weeks, or 10 years? That radically changes the planned trajectory of my life.


These posts are just programmers trying to understand their new place in the hierarchy. I'm in the same place and get it, but truisms like "will always win" are basically just wild guesses at what the future will look like. A better attitude is to attempt to catch the wave.

TFA's author is literally saying it may happen. He's using AI so he already caught the wave. He's augmenting himself with AI tools. He's not saying "AI will never surpass humans at writing programs". He writes:

"At this particular moment, human developers are especially valuable, because of the transitional period we’re living through."

You and GP are both attacking a strawman; it's not clear why.

We're seeing endless AI slop, enshittification, and lower uptime for services day after day.

To anyone using these tools seriously on a daily basis, it's totally obvious there are, TODAY, shortcomings.

TFA doesn't talk about tomorrow. It talks about today.


To be fair, the author phrased his point poorly in a way that invites confusion:

> "But for now, real programmers will always win."

"for now ... always", not a good phrasing.


Why would you expect it to? It probably predates these concepts. /s

I've noticed that too. I thought I was just posting particularly bad takes.

This site makes my browser choke.

Reader mode was the only thing that made it readable.


I was looking for this comment - it makes my browser choke as well! I thought it was just my tablet, but interesting that others see that too.

Doctorow recently spoke (remotely) at FediMTL in Montreal. He laid out the stakes as usual, but added that in Canada, we are legally not allowed to reverse engineer competitor APIs, and we gave up this right as part of some free trade negotiation.

"Free trade" is one of those euphemisms that seldom is free and only sometimes concerns trade.

It's mostly about leverage to create a stronger position for negotiations.


tasteless wafers?


In the sense that there's no upside to violating the norm, yeah, that too.


> Considering they trained their model on open-source software, the least they could do is give it to open-source maintainers for free with no time limit.

Why? The resulting code generated by Claude is unfit for training, so any work product produced after the start of the subsidized program should be ignored.

Therefore it makes sense to charge them for the service after 6 months, no? Heh.


What do you mean it's unfit for training? It's a form of reinforcement learning; the end result has been selected based on whether it actually solved the need.

You need to be careful of the amount of reinforcement learning vs continued pretraining you do, but they already do plenty of other forms of reinforcement learning, I'm sure they have it dialed in.
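To sketch what "selected based on whether it actually solved the need" could look like in practice, here's a toy rejection-sampling filter: keep only generated candidates that pass a verifiable check. The task and checker below are invented for illustration, not anything Anthropic has described.

```python
# Rejection-sampling sketch: only generated solutions that pass a check
# are kept as training examples.

def passes_tests(candidate: str) -> bool:
    # Stand-in verifier: in practice this would run the project's
    # actual test suite against the candidate code.
    return candidate.strip().endswith("return a + b")

candidates = [
    "def add(a, b):\n    return a - b",  # wrong: filtered out
    "def add(a, b):\n    return a + b",  # correct: kept for training
]

training_set = [c for c in candidates if passes_tests(c)]
```

The point is just that the selection step, not the raw generation, is what makes the output usable for further training.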


> One example of this was a malformed authentication function. The AI that vibe-coded the Supabase backend, which uses remote procedure calls, implemented it with flawed access control logic, essentially blocking authenticated users and allowing access to unauthenticated users.

Actually sounds like a typical mistake a human developer would make. Forget a `!` or get confused for a second about whether you want true or false returned, and the logic flips.

The difference is that a human is more likely to actually test the output of the change.
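For illustration, here's the kind of single-token flip being described. The function names are hypothetical, not from the article:

```python
from typing import Optional

def is_authorized(user_token: Optional[str]) -> bool:
    """Intended check: only callers presenting a token may proceed."""
    return user_token is not None

def handle_request_buggy(user_token: Optional[str]) -> str:
    # Bug: one misplaced `not` inverts the access-control decision,
    # blocking authenticated users and admitting unauthenticated ones.
    if not is_authorized(user_token):
        return "granted"
    return "denied"

def handle_request_fixed(user_token: Optional[str]) -> str:
    if is_authorized(user_token):
        return "granted"
    return "denied"
```

A single test of either path would catch the buggy version immediately.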

