Hacker News

A complexity measure is a measure of complexity over a model. Mostly we talk about input-size models, but we can (and do) talk about other models. It isn't a question of complexity scaling vs. time scaling, but of complexity scaling over an input-size model vs. over a machine-architecture model.

There are tons of well-researched machine-architecture models for complexity analysis. In the case of the article, any of the many external memory (EM) or hierarchical memory models would be appropriate. These can capture the latencies and bandwidths of hierarchical memories and storage, and in some cases even eviction, contention, partitioning, etc.

This isn't some esoteric point. In high-performance computing, distributed algorithms, or even systems & architecture, it's pretty much expected that if you talk about complexity, you do so under some sort of EM model.
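As a concrete illustration (not from the comment itself): in the classic Aggarwal–Vitter external memory model, cost is counted in block transfers between a fast memory of size M and external storage with block size B. A scan of N items costs Θ(N/B) I/Os and sorting costs Θ((N/B) log_{M/B}(N/B)), distinctions the plain RAM model cannot express. A minimal sketch of these cost formulas:

```python
import math

def scan_ios(n, b):
    """I/Os to scan n items with block size b: ceil(n / b)."""
    return math.ceil(n / b)

def sort_ios(n, m, b):
    """Aggarwal-Vitter sorting bound, up to constant factors:
    (n/b) * ceil(log_{m/b}(n/b)) block transfers, with memory
    size m and block size b."""
    nb = math.ceil(n / b)
    return nb * max(1, math.ceil(math.log(nb, m / b)))

# Doubling the block size halves the scan cost -- an effect invisible
# to an input-size-only analysis, where both scans "cost" Theta(n).
print(scan_ios(1_000_000, 4096))   # 245 block transfers
print(scan_ios(1_000_000, 8192))   # 123 block transfers
print(sort_ios(1024, 64, 4))       # 512 block transfers
```

The same parameters (M, B) can model any two adjacent levels of the hierarchy, e.g. cache vs. DRAM or DRAM vs. disk, which is what makes the model broadly applicable.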



Well stated. That's a much better way to say it.



