Old 10th June 2008
phoenix

How snapshots are implemented depends on which filesystem you are using.

For UFS, if you take a snapshot of a filesystem, it creates a hidden .snapshot directory in the root of the filesystem. Any time a file in that filesystem is edited, a copy of the original is put in the .snapshot directory. I haven't played with UFS snapshots, so I'm not sure how to access files in snapshots.
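I haven't used them myself either, but from the FreeBSD man pages, creating and then mounting a UFS snapshot looks roughly like this (paths are just examples, and these commands need root):

```shell
# Create a snapshot file of the /var filesystem
# (conventionally kept under /var/.snap)
mksnap_ffs /var /var/.snap/snap1

# Attach the snapshot file as a read-only memory disk, then mount it
mdconfig -a -t vnode -o readonly -f /var/.snap/snap1 -u 4
mount -o ro /dev/md4 /mnt
```

Once mounted, you can copy individual files out of /mnt like any other filesystem.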

For ZFS, when you take a snapshot of a filesystem, you give it a name (usually based on the date, to make the list of snapshots easier to sort and view). Since ZFS is copy-on-write (it writes a file's new blocks to a fresh location on disk, updates the directory pointer to the new blocks, then later marks the previously used blocks as unallocated), when a file is updated the snapshot keeps the pointer to the original blocks while the new version is written elsewhere on disk. You can either mount the snapshot by name to access the files, or navigate through the .zfs/ directory hierarchy. It's quite powerful stuff.
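For example (pool and dataset names here are made up):

```shell
# Take a named snapshot of the tank/home dataset
zfs snapshot tank/home@20080610

# List all snapshots on the system
zfs list -t snapshot

# Browse the snapshot read-only through the hidden .zfs directory
ls /tank/home/.zfs/snapshot/20080610/
```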

How we have things set up is like so:
/storage is a massive zpool covering ten 500 GB drives, configured as raidz2
/storage/remote-servers is a ZFS filesystem with gzip-9 compression enabled
/storage/remote-servers/<servername> is a ZFS filesystem, one for each server
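Creating a layout like that would look something like the following (the device names and the server name are hypothetical):

```shell
# The pool: ten drives in a raidz2 vdev (hypothetical device names)
zpool create storage raidz2 da0 da1 da2 da3 da4 da5 da6 da7 da8 da9

# The parent filesystem, with gzip-9 compression
zfs create -o compression=gzip-9 storage/remote-servers

# One child filesystem per server; compression is inherited from the parent
zfs create storage/remote-servers/www1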

Each night, the backup server runs a script that connects to each server via ssh, runs rsync against that server and its corresponding <servername> directory, then takes a snapshot of the /storage/remote-servers filesystem once the run is complete. The snapshots are named using the date in YYYYMMDD format.
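A minimal sketch of such a nightly run (hostnames and paths are made up; note that since each <servername> is its own filesystem, the snapshot needs -r to cover the children):

```shell
#!/bin/sh
# Nightly backup sketch -- hypothetical hostnames and paths.
DATE=$(date +%Y%m%d)

for host in www1 db1 mail1; do
    # Pull each server's files over ssh into its own filesystem
    rsync -az --delete -e ssh root@${host}:/ /storage/remote-servers/${host}/
done

# Snapshot the whole tree once the run completes; -r recurses
# into the per-server child filesystems
zfs snapshot -r storage/remote-servers@${DATE}
```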

This way, we have daily archives of all the files on each of the servers. If we need to restore individual files, we just mount the snapshot for the day in question, go into the <servername> directory, find the file, and copy it to wherever it needs to be.
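Pulling a file back out through the .zfs hierarchy might look like this (the date, server, and file are made up):

```shell
# Copy one file out of the 20080609 snapshot
cp /storage/remote-servers/.zfs/snapshot/20080609/www1/etc/fstab /tmp/fstab.restored
```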

To restore a server, we boot it with a Knoppix or Kanotix CD, partition the drives, mount the drives/partitions, then run rsync off the backup server to put everything back where it was. It takes about 1 hour to restore a server (3x 400 GB drives in RAID5, ~250 GB of data).
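The restore direction is just rsync run the other way, roughly like so (the backup server hostname and mountpoint are hypothetical):

```shell
# From the rescue-booted server, after partitioning and mounting the
# new disks under /mnt
rsync -az -e ssh backupserver:/storage/remote-servers/www1/ /mnt/
```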

Each server uses about 1 GB of data in each daily snapshot (as only changed files are included).

Our previous method was to use DAR to do a full backup on Sunday and incremental backups Mon-Sat, rotating into a lastweek/ directory on Saturday to keep two weeks of backups. Restoring individual files took up to 3 hours, and restoring a full server was a full-day affair. Plus, each server took up about 80 GB on Sunday and up to 10 GB for each day of the week, times two for the previous week's archives.
__________________
Freddie

Help for FreeBSD: Handbook, FAQ, man pages, mailing lists.