
Liability should always lie with the self-driving car. In the real world, weird shit happens on roads; people adapt and are expected to adapt. What good is a car that I can't trust to be at least as reliable as an overly defensive version of myself?


If human drivers are getting killed in the same spot in the same way, is it still a vehicle safety problem?


How many Teslas on lane keep assist safely drove past that spot vs how many humans?

If every million human drivers have one accident and every 1,000 Teslas have a similar accident, then yes, it is a vehicle safety problem.

An auto-driving system needs to have a higher margin of safety than a human driver, not the same or worse.
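
To make that concrete, here's a minimal sketch using the hypothetical counts above (a real comparison would also need to normalize by exposure, e.g. miles driven, not just vehicle counts):

    # Hypothetical counts from the example above: 1 accident per
    # 1,000,000 human drivers vs. 1 similar accident per 1,000 Teslas.
    human_accidents, human_drivers = 1, 1_000_000
    tesla_accidents, tesla_vehicles = 1, 1_000

    human_rate = human_accidents / human_drivers    # 1e-06 per driver
    tesla_rate = tesla_accidents / tesla_vehicles   # 1e-03 per vehicle

    # The ratio of rates, not raw accident counts, is what signals a problem.
    ratio = tesla_rate / human_rate
    print(f"autopilot accident rate is {ratio:.0f}x the human rate")  # 1000x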


A much higher percentage of autopilot cars had fatal collisions at that location than non-autopilot cars. That’s cause for concern. And for all we know, the previous driver was distracted. Being not-worse than a distracted human isn’t very confidence-inspiring.


Yes, because responsibility is not a zero-sum game. If the system cannot cope well with sub-optimal roads, it should not be in use wherever one might encounter sub-optimal roads.


Responsibility being a zero-sum game is literally what a lot of these conversations and the emerging laws are about. If users are making a mistake often enough to be noticeable, then the system and its interface are encouraging that behavior. If the autopilot is being used in ill-advised situations, the question becomes why the system wasn't more comprehensible to the user; had it been, they wouldn't have made a bad judgement, because they would have understood the limitations of the system. This is just another instance of blaming users for bad design.


GM's Super Cruise does precisely this. It is only available on roads where GM allows it.
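
That kind of geofencing might look something like the sketch below (the segment IDs and helper are hypothetical; in practice GM reportedly restricts Super Cruise to pre-mapped divided highways):

    # Hypothetical whitelist sketch: the assist feature only engages on
    # road segments the manufacturer has pre-approved (i.e. pre-mapped).
    APPROVED_SEGMENTS = {"I-80_mi_120", "I-5_mi_42"}  # placeholder IDs

    def assist_available(segment_id: str) -> bool:
        """Allow driver assist only on pre-approved road segments."""
        return segment_id in APPROVED_SEGMENTS

    print(assist_available("I-80_mi_120"))   # True: assist engages
    print(assist_available("rural_rt_9"))    # False: stays in manual mode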



