a-walker's comments | Hacker News

> Stop estimating. I have analysed every team I have been on that used estimation. Those teams have been 99% incorrect. In my experience, it does not work. If you need dates, I would recommend a more modern approach like forecasting.

What's the difference between estimating and forecasting? It seems very binary as written, but at some point don't you have to look forward and make assumptions about the complexity/effort required to solve a problem, and the general time that takes, in order to produce a forecast?


Forecasting looks like this: https://medium.com/expedia-group-tech/monte-carlo-forecastin...

It’s running through simulations based on previous team behavior to give a range.
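In case it helps make the idea concrete, here's a minimal sketch of that kind of Monte Carlo forecast (the throughput numbers are made up, and this is a simplification of what the linked article describes): resample the team's historical weekly throughput until the backlog is empty, many times over, and report a percentile range.

```python
import random

# Hypothetical historical data: tickets the team closed in each past week.
weekly_throughput = [3, 5, 2, 6, 4, 3, 7, 1, 4, 5]

def forecast_weeks(remaining_tickets, history, simulations=10_000):
    """Monte Carlo forecast: repeatedly resample past weekly throughput
    until the remaining tickets run out; return weeks taken per run."""
    results = []
    for _ in range(simulations):
        left, weeks = remaining_tickets, 0
        while left > 0:
            left -= random.choice(history)  # draw a past week at random
            weeks += 1
        results.append(weeks)
    return sorted(results)

runs = forecast_weeks(30, weekly_throughput)
# Report an 80% interval: 10th to 90th percentile of simulated outcomes.
low = runs[len(runs) // 10]
high = runs[9 * len(runs) // 10]
print(f"80% likely to finish in {low} to {high} weeks")
```

The output is a range with a probability attached ("80% likely to finish in X to Y weeks"), not a single date, which is the main difference from a point estimate.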


Don’t you still have to correctly estimate the amount of work (time) to complete each task in your backlog?

You can apply all the “science” you want, but at the bottom of it all you have engineers holding up a card with a “4” on it when the scrum dude says “add date picker to the Foo widget.”


When I last did this, we worked at the level of epic size instead and looked at the average number of tickets per epic. You still have to break the work down, but given a large enough sample size you can also account for the tickets that get added along the way.


You still need story point estimates on the previous items, or some reason to believe that previous stories were of similar size (which is also a form of estimate).


This sounds like it's written by someone who's only been in teams that do point estimations. Those are, almost by definition, 100 % incorrect because even small variations mean you're not done exactly when you said you'd be done.

If you estimate in properly calibrated 90 % intervals, you'll be correct 90 % of the time, and this is something you can verify continually.


My company right now does OKRs with priority levels for each task. If you make it a P0 then you're saying it will get done this quarter barring really unexpected circumstances (i.e. shit happens, we understand). P1 may not get done. P2 is even less likely to get done. This creates a natural distribution by holding the date constant and varying the work done by that date. Personally I find this more reliable and easier to communicate than holding the work constant and varying the date of each item.


To be honest, I also like that approach better, because in my experience the budget tends to be relatively fixed, but the work amount is often fairly flexible.


I’m interested but not totally sure what you mean here. Do you have a source or name for this intervals/calibration stuff?

I’ve always had issues when people are using hours instead of points. The one time we properly used points on a team it went really well.


A good reference is Douglas Hubbard's How to Measure Anything. The idea is that most people can calibrate their sense of probability to be fairly accurate, by intentionally adjusting for some common biases. (In particular, if you prevent anchoring and availability bias, and actively engage system 2 and your loss aversion, you get a long way. There are specific techniques to do that in the book.)

This is really easy to practise and test too, since all you need are a bunch of questions where the answer is known, but you are uncertain of it. Then you can guess a range in which you are 90 % certain the true answer lies, and after 10 such questions, you should be correct roughly 9 times.
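A toy version of that drill, for the curious (the trivia questions and the guesser's intervals are mine, not from Hubbard's book): score how often the true answer lands inside your stated 90% interval.

```python
# Each entry: (the guesser's 90% interval, the true answer).
quiz = [
    ((1900, 1950), 1912),           # year the Titanic sank
    ((8000, 9000), 8849),           # height of Mount Everest in metres
    ((40, 45), 50),                 # number of US states (interval too narrow!)
    ((300_000, 400_000), 384_400),  # average Earth-Moon distance in km
]

hits = sum(low <= answer <= high for (low, high), answer in quiz)
coverage = hits / len(quiz)
print(f"{hits}/{len(quiz)} = {coverage:.0%} of answers fell inside the interval")
# A well-calibrated guesser converges on roughly 90% coverage; much less
# means the intervals are too narrow, much more means they are too wide.
```

With a real drill you'd use many more questions, but the scoring is just this: count coverage and compare it to the 90% you claimed.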

Points are a way to do this, but I prefer engineers who are calibrated. It's very powerful to ask someone for a calendar day when something will be done (or really just about any other question about anything), and be able to be reasonably confident in the span they respond with.


I’m a developer who’s been doing this for a loooong time and I can’t imagine being able to accurately guess with 90% confidence the calendar day I’ll have a particular task done. And the tasks I work on are different enough that a feedback loop about how much I misguessed would be useless for error-correcting future estimates. Maybe that works for factory-like tasks like how long it takes to add a REST endpoint for a CRUD action?


No, it works for anything. The key thing people misunderstand is that if the task is really uncertain, you're allowed to give a really wide range. In fact, you're supposed to -- that's the only way you can be correct 90 % of the time.

As a concrete example, I recently estimated "somewhere between 1 week and 4 months" when asked about a semi-well-specified feature, because I felt there's a 5 % chance it's done in less than a week, but even under pessimistic assumptions, there's about a 5 % risk it takes longer than 4 months.

When asked about a larger, unspecified project a few years ago, I didn't hesitate to respond "between three months and 15 years". Sometimes that's the best you can do given the uncertainties involved. It sounds useless, but it's really useful to have a quantitative measure of the uncertainty.


That hologram looks real.


Love this. It's been a consistent pain for us just to make a simple map you can toss out to someone for a report and then forget about. Will be interesting to see how it plays out - ArcGIS recently launched a sort of consumer-oriented mapping service - wonder how far they're wanting to go.


People use Google Earth desktop for this too.


I'd imagine the audiences are quite different in their needs. We focus on data center real estate, and I think we see multiple angles.

We have sales people and analysts that just need to make a basic map, call out some data points, and make it look good. Feels like Felt is a great tool for that.

We have larger needs where we need to do more complex analysis and visualize the relationships of larger data sets geographically - that's what we're looking to ArcGIS for.

It's been a bit of a search to find an affordable tool for the first use case, and I'm glad to see someone in the space doing it.


I think that's spot on.

Curious, is there anything else about your needs pushing you towards ArcGIS rather than, say, QGIS?


I've always wondered this myself. I ended up taking a course on financial valuations. My novice takeaway was that there are two approaches:

1. An intrinsic, detailed "bottom-up" approach: project future cash flows and discount their value back to the present day. There might be two stages: the first years with explicit growth assumptions, and after that some kind of long-term growth rate.

2. A market-based, "top-down" approach: find comparable transactions and adjust for differences in investment, leverage, and so on, to try to get an apples-to-apples comparison.

In either case, you also factor in gains you'd get from a strategic acquisition, like eliminating redundant departments. Compare this to an acquisition by a PE firm, which doesn't do anything other than buy and sell equity in companies.
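For what it's worth, the two-stage approach in (1) is simple enough to sketch in a few lines (all figures hypothetical; a real valuation layers many more adjustments on top):

```python
def dcf_value(fcf, growth, years, terminal_growth, discount):
    """Two-stage DCF: stage 1 projects free cash flow for `years` at
    `growth` and discounts each year back; stage 2 adds a
    Gordon-growth terminal value for everything after that."""
    value = 0.0
    cash = fcf
    for t in range(1, years + 1):
        cash *= 1 + growth
        value += cash / (1 + discount) ** t
    terminal = cash * (1 + terminal_growth) / (discount - terminal_growth)
    value += terminal / (1 + discount) ** years
    return value

# E.g. $100 of free cash flow, 10% growth for 5 years,
# then 2% forever, discounted at 8%:
print(round(dcf_value(100, 0.10, 5, 0.02, 0.08), 1))
```

Notice how much of the answer rides on the growth and discount assumptions - which is exactly the "lots of guesses" part.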

What I realized was that it isn't a science. Sure, it deals mainly with numbers, and from the outside you'd think it's this really rigorous, matter-of-fact assessment. But there are lots of areas that are just guesses, albeit with a lot of money riding on them.


Did you do this in a product or agency environment? What was the typical talent bar for a green new hire and what was your financial arrangement with them?

