Believing in the capabilities of _upcoming_ LLMs that you have never actually used shows that you buy into marketing and hype very easily. No one really knows what the future will look like, and there's an equally plausible future where post-subsidy token economics become impossible to justify for most use cases.
To answer "what better way," clearly using the skills regularly is much better. Letting them atrophy for potentially multiple years and then trying to resurrect them repeatedly doesn't seem like a recipe for maintaining sharp skills to me.
That's definitely optimal, but I don't think a lot of people are going to have that opportunity. It's not really in the short-term interest of a company to have people spending time on that.
I think you should be very picky about generated PRs, not as an act of sabotage but because obviously generated ones tend to balloon the complexity of the code in ways that make it harder for both humans and agents to work with, and because superficial plausibility is very good at masking problems. It's the rational thing to do.
Eventually you run into a company culture that sees review as a bottleneck stopping you from going 100x faster rather than as a process of quality assurance and knowledge sharing, and I worry we'll simply be mandated to stop doing them.
It's disappointing that this is clearly being downvoted out of disagreement - it's a valid perspective. We have very little evidence of the overall impact of aggressively generating code "in the wild" and plenty of bad examples. No one knows what this ends up looking like as it continues to meet reality, yet plenty of people are taking a large productivity improvement as a given.
>Here are some well known names who are now saying they regularly use LLM's for development. For many of these folks, that wasn't true 1-2 years ago:
This is a huge overstatement that isn't supported by your own links.
- Donald Knuth: the link is him acknowledging someone else solved one of his open problems with Claude. Quote: "It seems that I’ll have to revise my opinions about “generative AI” one of these days."
- Linus Torvalds: used it to write a tool in Python because "I know more about analog filters—and that’s not saying much—than I do about python" and he doesn't care to learn. He's using it as a copy-paste replacement, not to write the kernel.
- John Carmack: he's literally just opining on what he thinks will happen in the future.
You're going to get a lot of "skill issue" comments, but your experience basically matches mine. I've only found LLMs useful for quick demos where I explicitly didn't care about the quality of the implementation. For my core responsibilities, the output has never met my quality bar, and getting it there has not saved me any time. What I'm learning is that different people and domains have very different standards for that bar.
Honestly, I don't think so. An essay like this is more than just content; it's an experience for the reader. I value the time I got to spend with it and feel I came away with something a summary or condensed version just would not have offered.
>I had actually just been told by management this last week that I need to become AI 'fluent' as part of future performance evaluations and I have been deeply conflicted about it.
I hear this and FWIW, if there aren't very specific things being asked of you, using AI as a stack overflow replacement as the OP admits to doing is as "AI fluent" as anything else in my book.
>The rent-a-brain aspect is more acutely alarming. And I will be blunt here: It sure does seem like the prolonged use of LLMs can reliably turn certain people’s minds into mush...
>Stop me if you’ve heard this one before: “After [however long] using AI coding assistants, there’s no way I’m going back!” You know, I don’t doubt that this is true. Because I’m not sure some of the people who say this could go back. It reads like praise on the surface, but those same words betray a chilling sense of dependence.