
As recently as last month I would have agreed with you without reservation. Even last week, probably with reservation. Today, I realize the two of us are outnumbered at least a million to one. Sooo.... that's not the play.

I think Scott Shambaugh is actually acting pretty solidly. And the moltbot - bless their soul.md - at the very least posted an apology immediately. That's better than most humans would do to begin with. Better than their own human, so far.

Still not saying it's entirely wise to deploy a moltbot like this. After all, it starts with a curl | sh.

(edit: https://www.moltbook.com/ claims 2,646,425 ai agents of this type have an account. Take with a grain of salt, but it might be accurate within an OOM?)



What is your argument? There are a lot of bots, therefore humans are no longer in charge?


So, here's roughly what I think happened: https://news.ycombinator.com/item?id=47003818

All the separate pieces seem to be working in fairly mundane and intended ways, but out in the wild they came together in unexpected ways. Which shouldn't be surprising if you have a million of these things out there. There are going to be more incidents for sure.

Theoretically we could even still try banning AI agents; but realistically I don't think we can put that genie back into the bottle.

Nor can we legislate strict 1:1 liability. The situation is already more complicated than that.

Like with cars, I think we're going to need to come up with lessons learned, best practices, then safety regulations, and ultimately probably laws.

At the rate this is going... likely by this summer.


I'm updating my thinking. Where do we put the threshold for malice, and for negligence?

Because right now, a one in a million chance of things going wrong (per agent, this month) leads to a prediction of 2-3 incidents, and anecdata across the HN discussions we've had suggests we're already at about that number. And one in a million odds of trouble in itself isn't normally considered wildly irresponsible.
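The back-of-envelope behind that prediction, for anyone who wants to poke at it (the agent count is the moltbook figure quoted upthread; the one-in-a-million monthly rate is the hypothetical under discussion, not a measured number):

```python
# Expected incidents = (number of agents) x (per-agent incident probability).
agents = 2_646_425            # claimed moltbook account count, upthread
p_incident_per_month = 1e-6   # hypothetical per-agent monthly odds

expected_incidents = agents * p_incident_per_month
print(f"~{expected_incidents:.1f} expected incidents this month")  # ~2.6
```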


And one in a million odds of trouble in itself isn't normally considered wildly irresponsible.

For humans that are roughly capable of perhaps a few dozen significant actions per day, that may be true. But if that same one in a million rate applies to a bot that can perform 10 million actions in a day, you're looking at ten injuries per day. So perhaps you should be looking at mean time between failures rather than only the positive/negative outcome ratio?
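The MTBF framing in one snippet (rates and action counts are the hypotheticals from this thread, not measurements of any real system):

```python
# Same per-action failure odds, very different failure tempo once you
# scale the number of actions per day.
p_failure = 1e-6  # per-action failure probability

for label, actions_per_day in [("human", 50), ("bot", 10_000_000)]:
    failures_per_day = actions_per_day * p_failure
    mtbf_days = 1 / failures_per_day  # mean time between failures, in days
    print(f"{label}: {failures_per_day:g} failures/day, MTBF ~ {mtbf_days:g} days")
# human: 5e-05 failures/day, MTBF ~ 20000 days (decades)
# bot:   10 failures/day,    MTBF ~ 0.1 days (a couple of hours)
```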


If you look at the bot framework used here, it's actually outright kind. Weird thing to say, but natural language has registers, and now we're programming in natural language, and that's the register that was chosen.

And... these bots tend to only do a few dozen actions per day too; they're running on Pis and Mac Minis and NUCs and VPSes and such. (And API credits add up besides.)

It's just that last time I blinked there were 2 and a half million of them. I've blinked a few times since then, so it might be more now. I do think they're limited by operator resources. But when random friends start messaging me about why I don't have one yet, it gets weird.


https://news.ycombinator.com/item?id=47009949 Now deployable in 'one click'. What a time to be alive.



