Hacker News

Even a benevolent AI acting for the benefit of a collective will have to choose which individuals suffer when suffering by some members of the collective becomes unavoidable.


Maybe. But a sufficiently smart benevolent AI will avoid getting into such a hopeless situation in the first place.

Just like parents in rich countries don't constantly have to decide which of their kids should go hungry: they make sure ahead of time to buy enough food to feed every family member.


When would "suffering by some members of the collective" actually become unavoidable?




