allanmacgregor's comments

You are absolutely right! Here is a shorter version of the article (hint: it's still the same length and has all the tells) ... But seriously, it's one thing in blog posts and articles, but I'm starting to hear it in podcasts and videos too. Pay attention and the speech sounds unnatural:

“Here’s what actually matters.” “Let’s break it down.” “The key takeaway is…” “The bottom line is…” “What this really means is…”

Also hearing this a lot:

“Here’s what nobody is talking about.” “Here’s the part people miss.” “What most people don’t realize is…”


You confused AWS with Amazon's distribution and warehouse operations, and you're doubling down? Like it or not, much of the world's infrastructure runs on or through AWS data centers. Attacks like this can cause significant disruption.

Maybe I’ll triple down. You’re telling me that AWS and even, why not, THE ENTIRE INTERNET is not built to sell me pointless crap? It’s just that it sure feels like it when I click around.

This has to be a joke right?


My take on the real reason behind the OpenClaw acquisition:

OpenClaw isn't a chatbot; it's a 24/7 autonomous system that connects to your email, calendar, messaging platforms, and web browser, chaining multi-step workflows together with persistent memory across sessions. Every one of those operations consumes API tokens; the architecture ensures that consumption is extraordinary.


Interesting idea, but I've found that AI is pretty inconsistent in how it structures or breaks down commits, highly dependent on the model and/or user prompting. So there's a chance you might only get a single commit to replay, or commits large enough that it's hard to know what's happening.


Fair point — it depends heavily on how the agent commits. I have this in my CLAUDE.md:

10. *Commit frequently* — Many small commits over few large ones

Without it, yeah, you can get one giant commit and the replay adds nothing.
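With many small commits, the replay itself is just walking the history oldest-first. A minimal sketch, assuming `git` is available; the throwaway repo, file name, and commit messages here are all made up for illustration:

```shell
set -e
# Hypothetical throwaway repo simulating an agent that commits frequently
rm -rf replay-demo && mkdir replay-demo && cd replay-demo
git init -q
git config user.email demo@example.com
git config user.name demo

# "Many small commits over few large ones"
for step in "add parser" "add tests" "fix edge case"; do
  echo "$step" >> work.log
  git add work.log
  git commit -q -m "$step"
done

# Remember the branch, then replay: check out each commit oldest-first
branch=$(git symbolic-ref --short HEAD)
for sha in $(git log --reverse --format=%h); do
  git checkout -q "$sha"
  echo "replaying: $(git log -1 --format=%s)"
done
git checkout -q "$branch"
```

With one giant commit, that loop runs once and shows you everything at the same time, which is why the granularity instruction matters.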


Watching how a lot of people are using and deploying AI and agentic coding made me think about The Tommyknockers, the Stephen King novel; there is a quote in particular that really fits.

"As I say, we've never been very good understanders. We're not a race of super-Einsteins. Thomas Edison in space would be closer, I think."

-- Bobbi Anderson, The Tommyknockers (Stephen King, 1987)


I don't disagree with you entirely here. I probably wasn't clear enough on what I was trying to convey.

Right now AI / agentic coding doesn't seem to be a train we're going to be able to stop; and at the end of the day it's a tool like any other. Most of what seems to be happening is people letting AI fully take the wheel: not enough specs, not enough testing, not enough direction.

I keep experimenting and tweaking how much direction to give AI in order to produce less fuckery and more productive code.


Sorry for coming off combative - I'm mostly fatigued from "criti-hype" pieces we've been deluged with the last week. For what it's worth I think you're right about the inevitability but I also think it's worth pushing a bit against the pre-emptive shaping of the Overton window. I appreciate the comment.

I don't know how to encourage the kind of review that AI code generation seems to require. Historically we've been able to rely on the fact that (bluntly) programming is "g-loaded": smart programmers probably wrote better code, with clearer comments, formatted better, and documented better. Now, results that look great are a prompt away in each category, which breaks some subconscious indicators reviewers pick up on.

I also think that there is probably a sweet spot for automation that does one or two simple things and fails noisily outside the confidence zone (aviation metaphor: an autopilot that holds heading and barometric altitude and beeps loudly and shakes the stick when it can't maintain those conditions), and a sweet spot for "perfect" automation (aviation metaphor: uh, a drone that autonomously flies from point A to point B using GPS, radar, LIDAR, etc...?). In between I'm afraid there be dragons.


@_dwt don't worry, you didn't. I appreciate good discussion and criticism. The publication is new and I'm still trying to calibrate my voice and style for it.

>I don't know how to encourage the kind of review that AI code generation seems to require. Historically we've been able to rely on the fact that (bluntly) programming is "g-loaded": smart programmers probably wrote better code, with clearer comments, formatted better, and documented better. Now, results that look great are a prompt away in each category, which breaks some subconscious indicators reviewers pick up on.

I don't think anyone knows for sure; we're all in the same boat trying to figure out how best to work with AI, and the pace of change is making it incredibly difficult to keep up or try things. I'm trying a bunch of stuff at the same time:

- https://structpr.dev/ - to rethink how we approach PR reading and review organization (dog-fooding it right now, so it's mostly alpha)

- I have an article scheduled for next week about StrongDM's software factory; there are some interesting ideas there, like test holdouts

- Some experiments in the Elixir stack for code generation and verification that go beyond "it looks great." AI can definitely create code that _looks_ great, but there is plenty of research showing that AI-generated code and tests can carry a high degree of false confidence.


Even worse, there are more than a few CTOs touting it.


HAHAHA, whoops. Can I blame it on jetlag?

