
I agree that in practice most of the time it won't affect your choice of algorithms. But it may be helpful in understanding how your run time will actually scale as your data size increases.

Also, I could conceive of it being useful in odd cases where there is a choice between two algorithms:

Algorithm 1: O(N) memory access operations, O(N log^2 N) CPU operations

Algorithm 2: O(N log N) memory access operations, O(N log N) CPU operations

If we assume memory accesses take O(1) time, the CPU work dominates and Algorithm 2 is faster (O(N log N) vs. O(N log^2 N)). But if each access costs O(sqrt(N)), the memory terms dominate and Algorithm 1 is faster: O(N * sqrt(N)) total vs. O(N log N * sqrt(N)) for Algorithm 2.
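The comparison can be checked numerically with toy cost functions (a sketch; `alg1_cost`, `alg2_cost`, and the access-cost models are illustrative stand-ins that drop all constant factors):

```python
import math

def alg1_cost(n, mem_cost):
    # Algorithm 1: O(N) memory accesses plus O(N log^2 N) CPU operations.
    return n * mem_cost(n) + n * math.log2(n) ** 2

def alg2_cost(n, mem_cost):
    # Algorithm 2: O(N log N) memory accesses plus O(N log N) CPU operations.
    return n * math.log2(n) * mem_cost(n) + n * math.log2(n)

flat = lambda n: 1.0                  # O(1)-per-access model
sqrt_model = lambda n: math.sqrt(n)   # O(sqrt(N))-per-access model

n = 1 << 20
# Flat model: Algorithm 1's N log^2 N CPU term dominates, so Algorithm 2 wins.
# sqrt model: the memory terms dominate, and Algorithm 1's fewer accesses win.
print(alg1_cost(n, flat), alg2_cost(n, flat))
print(alg1_cost(n, sqrt_model), alg2_cost(n, sqrt_model))
```

At N = 2^20 the crossover is clear in both directions, since log2(N) = 20 while sqrt(N) = 1024.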



Here's the problem: the sqrt(N) assumption is based on completely random access, where most accesses bust all the caches. What matters more for the cost of memory in real applications is the actual size of your dataset relative to the cache sizes, and whether your access is sequential across a region of memory (an array) or random (a scrambled linked list).

In practical terms, the plateaus of the stair step graphs matter more than the general trend of the line.


Unless you're concerned about the worst-case scenario, where the last part is definitely never a plateau. You can reduce it to a Turing machine fetching your data sequentially.



