> While estimation seems to be a shibboleth of certain types of project management, the fact that implementation time estimates are rarely accurate - and often significantly wrong - suggests that they are not as vital as we’re led to believe.
I disagree with your conclusion. Just because we're terrible at estimating doesn't mean it's not important.
"100% uptime is rarely achieved, which means it's unnecessary and unimportant".
> Code (implementation) is an integral part of the software design and planning process, and you can’t know how long something will take until that’s complete.
McConnell discusses this in "Software Estimation". The only way to accurately predict anything is to base the prediction on previous experience, which I alluded to. Unfortunately (or fortunately, depending on how you look at it), even projects that look similar on the surface turn out to be dissimilar more often than not.
Regardless, it's useful to be able to answer the "how long" question at at least a coarse level, to see if something is worth doing at all.
I understand that we disagree, but Eric Evans also talks about how implementation is an inescapable part of design, as does Jack Reeves. So it follows, for me at least, that estimates can never be accurate.
Recently posted on HN was a link to a post at Lunar Logic where estimation is basically broken into 3 categories: “NFC” (no clue), “TFB” (too big) or “1”. They explain why here [1].
This has been my experience over more than 30 years of building products. The duration of any given feature/story is effectively unknowable, but we can make a decent guess at its achievability, and we can get a decent idea of the rate at which stories are completed.
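That "rate at which stories are completed" view lends itself to throughput-based forecasting rather than per-story estimates: sample historical weekly story counts and see how long the remaining backlog might take. A minimal Monte Carlo sketch, assuming you track weekly throughput (the function name and the sample history are hypothetical):

```python
import random

def forecast_weeks(remaining_stories, weekly_throughputs, trials=10000, seed=42):
    """Monte Carlo forecast: repeatedly sample historical weekly story
    counts until the backlog is exhausted, then report the 50th and
    85th percentile number of weeks across all trials."""
    rng = random.Random(seed)
    outcomes = []
    for _ in range(trials):
        done, weeks = 0, 0
        while done < remaining_stories:
            done += rng.choice(weekly_throughputs)  # sample one historical week
            weeks += 1
        outcomes.append(weeks)
    outcomes.sort()
    return outcomes[trials // 2], outcomes[int(trials * 0.85)]

# Hypothetical data: stories completed in each of the last 8 weeks
history = [3, 5, 2, 4, 6, 3, 4, 5]
p50, p85 = forecast_weeks(30, history)
```

The point of the percentile spread is that it answers the coarse "how long" question as a range with a confidence level, without anyone estimating individual stories.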
The main problem I have with more detailed estimation is that the estimation process itself consumes significant engineering resources that, IMO, would be better spent on actual engineering.
To your specific points:
> "100% uptime is rarely achieved, which means it's unnecessary and unimportant".
This is specious. As an industry our estimates tend to be wildly wrong, and despite decades of dealing with “the software crisis” they are not improving. My position is that this means that we’re spending a lot of time estimating and then more time dealing with them being wrong. It’s just a waste.
> Regardless, it's useful to be able to answer the "how long" question at at least a coarse level, to see if something is worth doing at all.
This seems to be the wrong way round. Surely the approach should be to ask what value this “something” has to the business, and then determine whether it’s technically feasible to deliver within that value.
But that would mean more work for the executives, and quite frankly, in my direct experience, that is not the kind of responsibility they like to take on.