You'll always eventually expand past the limits of whatever storage you're using. It's clearly impossible to build totally unlimited storage with constant access time, and you'll always eventually need larger storage (to disk, to other machines, to other data centres, etc), at the cost of speed.
Take a look at [part two](http://www.ilikebigbits.com/blog/2014/4/28/the-myth-of-ram-p...), where he runs through this in more concrete depth to back up his claims. The theoretical physical limit for accessing N bits of data is O(√N), no matter how you do it, just from the speed of light.
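You can see the latency growth empirically with a pointer-chasing micro-benchmark. This is just a sketch of the idea (the function name, sizes, and step count are mine, not from the article): each access depends on the previous one, which defeats prefetching, so the measured time per access climbs as the working set outgrows each cache level.

```python
# Sketch: measure average random-access latency as the working set N grows.
# All parameters here are illustrative assumptions, not from the article.
import random
import time

def chase(n, steps=100_000):
    # Follow a random permutation of n indices; each load depends on the
    # previous result, so accesses can't be overlapped or prefetched.
    perm = list(range(n))
    random.shuffle(perm)
    idx = 0
    start = time.perf_counter()
    for _ in range(steps):
        idx = perm[idx]
    return (time.perf_counter() - start) / steps  # seconds per access

if __name__ == "__main__":
    for exp in (10, 14, 18, 22):  # ~1K to ~4M entries
        print(f"N = 2^{exp:2d}: {chase(2 ** exp) * 1e9:8.1f} ns/access")
```

On typical hardware the ns/access figure rises noticeably once the permutation no longer fits in L1/L2/L3, which is the staircase the blog's graph shows.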
It's counter-intuitive, but this really is true in the general case, not only in the regions shown on that graph.
I don't think it's all that counter-intuitive. You just need to consider that information transfer is limited by the speed of light; it naturally follows that access times must grow with the size of the data unless you can pack the information infinitely densely.
It's a great series for reminding us of that, though, and for illustrating it well and actually putting numbers on it.
If this were the only factor, access time would actually grow with the cube root of N, because you could arrange memory in a sphere, whose capacity grows as r^3.
The ultimate theoretical limit is the Bekenstein bound, which implies that the information content of a region is bounded by its area, not its volume. That is where the r^2 (and hence the √N access time) comes from.
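The two scalings are easy to compare numerically. A toy sketch (my own arithmetic, not from the thread): at fixed bits-per-volume, N bits need a radius growing as N^(1/3), while under an area-proportional bound the radius grows as N^(1/2), and round-trip latency at a fixed signal speed scales with that radius.

```python
# Toy comparison of the two lower bounds on the radius needed to hold N bits
# (and hence on access latency at a fixed signal speed). Illustrative only.
def volume_radius(n):
    # Fixed bits per unit volume: N ~ r^3, so r ~ N^(1/3).
    return n ** (1 / 3)

def area_radius(n):
    # Bekenstein-style bound, bits proportional to area: N ~ r^2, so r ~ N^(1/2).
    return n ** (1 / 2)

for exp in (20, 40, 60):
    n = 2 ** exp
    print(f"N = 2^{exp}: r ~ {volume_radius(n):.3e} (volume packing) "
          f"vs {area_radius(n):.3e} (area bound)")
```

The gap widens without limit, which is why the sphere argument dominates at practical sizes while the area bound only wins asymptotically.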
I get that there are additional constraints. My point was simply that, intuitively, even without thinking through or knowing about the other ways communication is constrained, you'll arrive at the necessity of an increase in latency from the speed-of-light limit alone.
Using the Schwarzschild radius of a black hole as the limiting case of a sphere of radius r and then using that to determine something about arbitrary spheres of radius r? Unorthodox.
While this is true, the analysis in part one is really bad and that sort of incorrect hand-wavy line fitting should be discouraged.
Part two is much better and is based on sound principles. The only complaint there is that it's a bit misleading: if you arrange your memory in a sphere, you get access times growing as the cube root of N for a very long time before you start running into information-theoretic limits based on the area of the memory region.
Except that present-day memory is arranged on a 2D surface, not in a 3D volume. And there are issues like cooling, so we don't know whether memories with r^3 scaling will happen in the future.