
From recent coverage of Open Compute on /.:

RAM Sled: Facebook wants to replace the leaf nodes and run them on a RAM sled with between 128 GB and 512 GB of memory, at $500 to $700 per sled. Only a basic CPU would be needed; each sled would serve 450,000 to 1 million key queries per second. http://slashdot.org/topic/datacenter/how-facebook-will-power...

There's not really any way to know what Facebook is up to here. 512 GB is normally done via 4 sockets, 4 channels per socket, 2 DIMMs per channel, and 16 GB DIMMs; it will be interesting to see whether FB intends to disrupt any of the factors in that equation for getting mass memory onto a single system.
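The conventional configuration above multiplies out exactly to 512 GB; a quick sketch of the arithmetic:

```python
# Sketch of the conventional path to 512 GB described above.
sockets = 4
channels_per_socket = 4
dimms_per_channel = 2
gb_per_dimm = 16

total_gb = sockets * channels_per_socket * dimms_per_channel * gb_per_dimm
print(total_gb)  # 4 * 4 * 2 * 16 = 512
```

Disrupting any single factor (deeper channels, bigger DIMMs, more sockets) changes the whole product, which is presumably where FB would have room to play.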

It has been rather shocking to me that FB-DIMMs and other approaches that let a lot of RAM be chained together (really deep channels) haven't seen any wide-scale success. RAM is cheap enough; would that we could plug in a lot of it.



If you just care about capacity and cost you can build a system with a $20 ARM SoC for every two DIMMs. The ultimate is BlueGene-style packaging with an SoC on each DIMM, but Facebook's probably not ready for that.


Another direction you could go would be to put a bunch of the RAM on a peripheral bus (e.g., PCIe).


Similarly, I've been waiting to see an extended, modern version of the i-RAM box [0], but it doesn't seem to be coming. Its 4 GB limit is useless now, but imagine the same thing with hundreds of gigabytes.

[0]: https://en.wikipedia.org/wiki/I-RAM



