
Just make a consistent snapshot of your data (I'm using UFS snapshots), point Tarsnap at it, and you're good to go.

You're using the --snaptime option, right? It's necessary when backing up a filesystem snapshot, to work around a race condition: if a file is modified, the snapshot is created, and then the file is modified again, all within a single time quantum, Tarsnap can be tricked into thinking the file hasn't been modified since the last backup. That triggers a "this must be the same blocks as last time" optimization in place of the usual "read the file and split it into blocks" behaviour.
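The race described above can be sketched in a few lines of Python. This is a toy model of an mtime-based "unchanged file" cache, not Tarsnap's actual internals; the names and the cache structure are illustrative assumptions:

```python
# Toy model of an mtime-based dedup cache and the --snaptime fix.
# cache maps path -> (mtime, content hash) from the previous backup run.
cache = {}

def backup(path, mtime, content, snaptime=None):
    """Return 'reused' if the cache optimization skips re-reading the file."""
    entry = cache.get(path)
    # Without a snapshot time: trust the cache whenever the mtime matches.
    # With a snapshot time: never trust an entry whose mtime >= snaptime,
    # because the file may have been modified again within the same
    # time quantum as the snapshot.
    if entry and entry[0] == mtime and (snaptime is None or mtime < snaptime):
        return "reused"
    cache[path] = (mtime, hash(content))
    return "read"

# t=100: file written, snapshot taken, file modified again -- all at t=100.
backup("f", 100, "version-1")
# Backing up the next snapshot without --snaptime reuses stale blocks:
assert backup("f", 100, "version-2") == "reused"   # wrong -- content changed
# With --snaptime, entries at the snapshot time are re-read:
assert backup("f", 100, "version-2", snaptime=100) == "read"
```

With snaptime supplied, any file whose mtime falls at or after the snapshot time is always read in full, so the optimization can only ever skip files that provably predate the snapshot.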

Finally, compression and deduplication are amazing:

Well, if we're going to be posting statistics here...

                                         Total size  Compressed size
  All archives                               269 TB           121 TB
    (unique data)                            177 GB            72 GB
That's 269 TB of data backed up from my laptop, deduplicated and compressed down to 72 GB. This is what I get for taking a backup of my entire home directory every hour...
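As a quick sanity check on those numbers (assuming the decimal TB/GB units shown in the stats table), the overall reduction works out to roughly 3700x:

```python
# Overall reduction from the stats above: 269 TB of archives stored as
# 72 GB of unique, compressed data. Units assumed decimal (10^12 / 10^9).
total_tb = 269   # all archives, total size
stored_gb = 72   # unique data, compressed size
ratio = (total_tb * 1e12) / (stored_gb * 1e9)
print(f"{ratio:.0f}x")  # roughly 3736x
```

Hourly backups of a mostly unchanged home directory are close to the best case for block-level deduplication, which is where most of that factor comes from.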


> You're using the --snaptime option, right?

Yep, emailed you about this in fact, and appreciated your detailed response.


Ah, that was you -- I remembered sending an email about snaptime recently but couldn't remember who it was to (and HN user names don't always correlate anyway...)



