Hacker News

Very true. In cloud computing terms, it means that alerts, notifications, and limits are worth absolutely nothing if it's on the customer to set them up correctly for every scenario imaginable, which is nearly impossible. The tomato supplier's human alerting system is a catch-all, and an automated equivalent would be just as easy to implement.


Yeah - if you look at Troy's graphs, they're already calculating an average bandwidth, and the alert he configured has a threshold around 1/50th of his current level.

Trying to set a hard number limit ahead of time is difficult (you have to estimate how much you'll use, you don't want to set a number too low and get cut off, and cloud cost structures can be really hard to get your head around), but that basic level of anomaly detection should be there by default.
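That kind of default anomaly detection doesn't need a hard limit at all: compare each new usage sample against a rolling average and flag large deviations. A minimal sketch, where the window size, multiplier, and warm-up period are all illustrative assumptions rather than any provider's actual settings:

```python
from collections import deque

def make_anomaly_detector(window=24, factor=5.0, warmup=6):
    """Flag a sample that exceeds `factor` times the rolling average
    of the last `window` samples. All parameters are illustrative."""
    history = deque(maxlen=window)

    def check(gb_this_hour):
        # Only alert once we have enough history to trust the average.
        anomalous = (
            len(history) >= warmup
            and gb_this_hour > factor * (sum(history) / len(history))
        )
        history.append(gb_this_hour)
        return anomalous

    return check

check = make_anomaly_detector()
readings = [2, 3, 2, 3, 2, 3, 2, 180]  # steady traffic, then a huge spike
flags = [check(x) for x in readings]
# only the final spike trips the detector
```

The point is that the customer never has to guess a dollar figure up front; the baseline is learned from their own traffic.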


> estimating how much you'll use, don't want to set a number too low and get cut off plus cloud cost structures can be really hard to get your head around

An easy way of avoiding this: don't use shitty hosts that make you pay per GB served and shut you down once you hit your cost limit. Instead, be limited by the bandwidth you actually have, so clients just access your server more slowly rather than being denied access entirely.


Who does that, though? I'm including things like 95th-percentile billing in "pay per GB served", but you're painting with a pretty broad brush if you class a host as shitty just because they won't hand you a switch port and not care whether you're sending two packets a fortnight or maxing it out.
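For anyone unfamiliar with 95th-percentile billing: roughly, the host records your transfer rate in short samples over the month, discards the top 5% of samples, and bills you at the highest remaining rate, so short bursts don't count against you. A rough sketch of that idea (details like sample interval and rounding vary by host):

```python
def billable_mbps(samples_mbps):
    """95th-percentile billing, roughly: sort the per-interval rate
    samples, discard the top 5%, and bill at the highest remaining
    rate. Illustrative only; real hosts differ in the details."""
    ordered = sorted(samples_mbps)
    # Index of the 95th-percentile sample (top 5% are discarded).
    idx = int(len(ordered) * 0.95) - 1
    return ordered[max(idx, 0)]

# 100 five-minute samples: mostly idle, with five bursts to 900 Mbps.
samples = [10] * 95 + [900] * 5
# The bursts fall inside the discarded top 5%, so the billable
# rate stays at the baseline 10 Mbps.
```

This is why 95th percentile sits somewhere between flat-rate and strict per-GB pricing: sustained load costs you, brief spikes mostly don't.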



