
> Here's why we don't: YOU CAN'T.

But that’s one of the main points of the article: you don’t have to. Asymptotically, you won’t do better than O(sqrt(N)), because physics does not allow it. You can put in all the caches you want, buy the most expensive RAM there is, and buy the best interconnects for the fanciest NUMA cluster you can find: you still won’t beat O(sqrt(N)) if you want to scale to arbitrary N.

Of course, if you bound your problem size, you may well get O(1) access times, and for some applications it may be sensible to treat the coefficient on the square-root scaling as small enough to neglect. But if you truly want to talk about algorithmic complexity (and not just the scaling of one particular implementation over one particular range of valid N), you should take that additional square root into account.
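The effect is easy to observe with a pointer-chasing micro-benchmark: random accesses get slower as the working set outgrows each cache level. A rough sketch (absolute numbers are machine-dependent, and in Python the interpreter overhead masks much of the effect, so treat this as illustrative only):

```python
import random
import time

def ns_per_access(n, steps=200_000):
    """Chase random pointers through a permutation of size n.

    Returns the average time per access in nanoseconds. The absolute
    numbers depend heavily on the machine and the interpreter; the
    point is only the trend: once n outgrows a cache level, the time
    per access goes up.
    """
    perm = list(range(n))
    random.shuffle(perm)
    i = 0
    t0 = time.perf_counter()
    for _ in range(steps):
        i = perm[i]  # each step is a dependent random access
    dt = time.perf_counter() - t0
    return dt / steps * 1e9

for n in (1_000, 100_000, 10_000_000):
    print(f"N = {n:>10,}: {ns_per_access(n):6.1f} ns/access")
```

The chained `i = perm[i]` dependency prevents the CPU from overlapping the accesses, so the measurement reflects latency rather than bandwidth. Run it with a compiled language and larger N if you want to see the curve cleanly.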



The one exception is possibly quantum computation in some cases.


I don’t know of a mechanism in quantum mechanics that would allow packing information more densely (let alone infinitely densely, as would be required for O(1) random-access times). What did you have in mind?



