Hacker News

As for rvm in production:

Superior package management is the reason that small teams of ops people can manage huge deployments.

In the bad old days of early Linux or traditional UNIX one had to hand-build and deploy those dozens of libraries that go scrolling by when you "apt-get install libxml2". Worse, the only way to really figure out the dependency tree was a recursive operation that involved picking a library, downloading it, attempting to build it, waiting for it to fail, figuring out what other library was missing, downloading that library, attempting to build it, waiting for it to fail, figuring out what library was missing...

Inevitably, the grad student that wrote one of the libraries somewhere in this dependency soup would have moved on, and the web page at http://morlock.iscs.random.edu/~gradguy would disappear. The next step would be to fire up an ftp client and start digging around sunsite.unc.edu and some surviving yggdrasil mirror in Australia with a banner that said "PLEASE DO NOT USE THIS SERVER IF YOU ARE OUTSIDE OF AU/NZ, WE ARE PAYING $10 PER MEGABYTE ACROSS INTERNATIONAL LINKS."

Once the software was compiled you then had to subscribe to every mailing list for each piece of software and keep an eye out for security notices. If the project didn't have a mailing list you had to watch the relevant USENET groups where something might get mentioned. If the project had neither you just had to visit the FTP site every so often and see if there was a new version. If the project had a changelog you could read that and hopefully make a decision as to whether it was necessary to upgrade. If it didn't have a changelog you had to diff the old source and the new source and try to figure out what the implications were.

Package management put an end to this insanity. The Linux Filesystem Standard was an important part of this, as well.

rvm turns its back on 20 years of progress and takes us back to ad-hoc what-the-fuckery.

A proper Linux distribution is an exercise in distributed responsibility. Instead of saddling every sysadmin in the world with the individual responsibility of making all of these decisions, expertise is allowed to accrete with the individual packagers. The operator evaluates the quality of the distribution and trusts the packagers to do the right thing.



So you're saying that using distro packages saves the sysadmin a lot of work because a lot of maintenance will be offloaded to the distro maintainers. Fair enough.

So what do you do if the specific Ruby version you need isn't packaged?

What's that I hear? Compile from source? How exactly is that any better than RVM? Every single piece of criticism you shouted against RVM applies just as much, if not MORE, to tarball compilation.
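To make the parallel concrete, here is roughly what "compile from source" entails (the version, URL, and install prefix are illustrative, not a recommendation):

```shell
# Illustrative sketch of the tarball route -- every step, plus all
# future security updates, becomes the operator's responsibility.
wget https://cache.ruby-lang.org/pub/ruby/2.0/ruby-2.0.0-p648.tar.gz
tar xzf ruby-2.0.0-p648.tar.gz
cd ruby-2.0.0-p648
./configure --prefix=/opt/ruby-2.0.0
make
sudo make install

# versus letting rvm script the same download/configure/make dance:
rvm install 2.0.0
```

Either way you are outside the distro's package database; rvm just automates the part you would otherwise type by hand.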


You fix your code so that it runs against distro-packaged Ruby.


What if the distro-packaged Ruby contains a bug, and the distro does not provide an update?

What if the distro only provides 1.9 but you need 2.0 features?
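As a concrete (hypothetical) example: keyword arguments arrived in Ruby 2.0, and code using them fails at parse time under a distro-packaged 1.9, so "fix your code to run on the distro Ruby" means giving up the feature entirely:

```ruby
# Hypothetical example: keyword arguments are Ruby 2.0 syntax.
# Under Ruby 1.9 this method definition is a SyntaxError -- there is
# no runtime workaround, only a newer interpreter.
def connect(host: "localhost", port: 443)
  "#{host}:#{port}"
end

puts connect(host: "example.com")            # => example.com:443
puts connect(host: "example.com", port: 80)  # => example.com:80
```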

It sure is easy to wave your hands and accuse people of being insane.



