
As you pointed out, the examples in the blog post are not an LLM failure. The real failure is asking for too little.

Engineers think "the LLM can handle the simple code change, but if I ask for too much it'll fall over." Wrong. Modern LLMs can easily handle a 50-line function plus 50 lines of detailed comments explaining assumptions, performance implications, and what changes would invalidate this approach.

But most engineers are either asking for solutions without enough context or failing to ask the LLM to document its assumptions.

Then they're shocked when they have to reverse-engineer the fact that the code assumes 100 users when they have 100k, or figure out why it's doing individual API calls when they needed batch processing.

Most engineers have never seen good comments, so they don't know they can ask LLMs to write them.

The default LLM comment is just English pseudo-code: "this function takes a user ID and sends them a notification." Completely useless. But that's because most engineers have never experienced comments that explain trade-offs, performance implications, or future system evolution.
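To make the contrast concrete, here is a sketch of the two comment styles on a toy notification function. Everything here (the `notify_users` function, the ~100-user limit, the injected `send_email` callable) is illustrative, not taken from the original post:

```python
# The "English pseudo-code" style restates the signature and adds nothing:
#
#   def notify_users(user_ids, send_email):
#       """Takes user IDs and sends each one a notification."""
#
# Versus a docstring that records assumptions and breaking points:

def notify_users(user_ids, send_email):
    """Send a notification to each user, one API call per user.

    Assumptions (hypothetical numbers for illustration):
      - Caller passes at most ~100 user_ids. Each send is a separate
        synchronous call, fine for a background job, too slow for a
        web request at larger scale.
      - At ~100k users this approach breaks: switch to a batch
        endpoint and chunk the IDs instead of looping.
      - send_email is assumed idempotent; there is no retry or dedupe
        here, so a crash mid-loop may re-send on restart.
    """
    sent = []
    for uid in user_ids:
        send_email(uid)  # one network round-trip per user
        sent.append(uid)
    return sent
```

The second style is what you have to explicitly ask for; the first is what you get by default.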

Writing clear technical explanations is genuinely difficult. Almost no one does it well. So when you ask an LLM for "comments," you get the same terrible pattern you've seen everywhere else.

But you can literally ask for explanations of assumptions, performance characteristics, and scenarios where this approach would break. The LLM handles it perfectly. You just have to know that's even possible. Makes the code review so much easier.

Most engineers don't, because they've never seen it done.

[1] https://peoplesgrocers.com/en/writing/asking-llms-the-right-...



This seems to be the default for Gemini 2.5 Pro now



