Hacker News

They'd rather refund small guys for mistakes than give big guys an easy limit to set.


I guess big guys don't want their service to suddenly stop, so they probably wouldn't use this... but it's just a guess.


Absolutely that. Storage costs money, so in order to absolutely cap your spending they would have to delete all your stored data, too. Deleting S3 buckets and EBS volumes on a spending blip is absolutely the last thing any company with any budget at all wants to happen, ever. It would be preferable for that not to even be possible in any situation. This is the sort of thing that only extremely small casual users want, and it isn't worth it to AWS to cater to those users. For everyone else, more complexity than a "kill everything at $X" switch is needed, and that's exactly what we do have. We don't get to absolutely cap our spending to the penny but we also don't risk having our data vanish because of a billing issue.


> Storage costs money

The dumb solution for that is to exclude persistent storage from the limit.

The nice solution for that is supporting both "runrate" and "consumption" limits.

Using a runrate limit, spinning up an instance, creating a file, etc. allocates budgets for running it continuously, which is released when shutting it down/deleting it. Hitting the limit prevents new resources from being allocated, but keeps existing ones alive. This should be used for persistent storage and instances used to handle base load.

Using a consumption limit, the resource is shut down when the limit is hit. If the shut-off is delayed, the cloud service eats the overage, since they control the delay. This should be used for bandwidth, paid API calls, and auto-scaling instances.

The user should be able to create multiple limits of each kind and assign different services to them. Alerts when nearing a limit let the user raise it, if that's their intention.
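The two limit kinds above behave differently at the boundary: a runrate limit refuses *new* allocations but never kills what's running, while a consumption limit cuts the service off once cumulative spend crosses the cap. A minimal sketch (all class and parameter names are hypothetical, not any real AWS API):

```python
class RunrateLimit:
    """Caps ongoing $/hour of live resources. Hitting the limit
    blocks new allocations but never touches existing ones."""
    def __init__(self, max_per_hour: float):
        self.max_per_hour = max_per_hour
        self.allocated = 0.0  # $/hour currently reserved

    def allocate(self, rate_per_hour: float) -> bool:
        if self.allocated + rate_per_hour > self.max_per_hour:
            return False  # refuse the new resource, keep existing ones alive
        self.allocated += rate_per_hour
        return True

    def release(self, rate_per_hour: float) -> None:
        # Budget is freed when the resource is shut down / deleted.
        self.allocated -= rate_per_hour


class ConsumptionLimit:
    """Caps cumulative spend. Crossing the cap means shut-off;
    any metering delay is the provider's cost, not the customer's."""
    def __init__(self, max_total: float):
        self.max_total = max_total
        self.spent = 0.0

    def record(self, cost: float) -> bool:
        self.spent += cost
        return self.spent <= self.max_total  # False => shut the service down


# Persistent storage and base-load instances go on a runrate limit...
storage = RunrateLimit(max_per_hour=2.00)
assert storage.allocate(1.50)      # e.g. an EBS volume fits the budget
assert not storage.allocate(1.00)  # new volume refused, old one untouched

# ...metered usage goes on a consumption limit.
egress = ConsumptionLimit(max_total=50.00)
assert egress.record(40.00)        # still under budget
assert not egress.record(20.00)    # over budget: shut off, don't delete
```

The key design point is that data-bearing resources only ever see the "refuse new allocations" path, so a billing blip can never delete storage.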

For consumption, it might also make sense to have rate limiters, which throttle after a burst budget is exceeded, similar to how compute works on T instances on AWS. But those probably only make sense for individual services, not globally (e.g. throttle an instance to 100 Mbit/s after it exhausted its 5 TB/day bandwidth allocation, or throttle an API to x calls/s).
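The burst-then-throttle behavior described above is essentially a token bucket: full speed while credits last, then capped at the refill rate, similar to T-instance CPU credits. A rough sketch (names and numbers are illustrative, not an AWS interface):

```python
class BurstThrottle:
    """Token bucket: spend freely from a burst budget, which refills
    at a fixed rate; when it's empty, requests are throttled."""
    def __init__(self, burst_budget: float, refill_per_sec: float):
        self.capacity = burst_budget
        self.tokens = burst_budget
        self.refill_per_sec = refill_per_sec
        self.last = 0.0  # timestamp of the previous update

    def allow(self, cost: float, now: float) -> bool:
        # Refill in proportion to elapsed time, capped at capacity.
        elapsed = now - self.last
        self.tokens = min(self.capacity,
                          self.tokens + elapsed * self.refill_per_sec)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False  # throttled: caller waits for credits to refill


# 5 units of burst, refilling at 1 unit/s.
t = BurstThrottle(burst_budget=5, refill_per_sec=1)
assert all(t.allow(1, now=0) for _ in range(5))  # burst drains the bucket
assert not t.allow(1, now=0)                     # immediately throttled
assert t.allow(1, now=2)                         # 2 s later, credits are back
```

As the comment notes, this fits per-service knobs (an instance's bandwidth, an API's calls/s) better than a global account-wide switch.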


I assume the sensible implementation would be to cut off access and give you some period to settle your bill before the data is deleted.


For background batch jobs, analytics, etc. they might want caps. Say something like a video transcoding workload. And lots of things could benefit not from a cap but from gradual degradation in bandwidth/instance allocation plus a warning so you can raise the limits; it doesn't have to shut everything down immediately with a hard cap.



