Thread: ZFS
#4
Old 18th July 2008
phoenix
Risen from the ashes
Join Date: May 2008
Posts: 696

We're using it for our off-site backup servers.

The test server has / on a 2x500GB gmirror(8), plus an 8x500GB raidz2 pool for /usr, /var, /usr/local, /usr/ports, /usr/src, /usr/obj, and (most importantly) /storage.
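For reference, a pool like that is only a few commands to set up. This is a hypothetical sketch, not the actual commands we ran: the device names (da0-da7), the pool name, and the mountpoint layout are all assumptions. The DRYRUN guard just prints the commands; remove it to run them for real.

```shell
#!/bin/sh
# Hypothetical sketch of building an 8-disk raidz2 pool like the one
# described above. Device names and the pool name are assumptions.
DRYRUN=echo   # remove to actually create the pool (destructive!)

# One raidz2 vdev across all eight 500 GB disks; survives two disk failures.
$DRYRUN zpool create storage raidz2 da0 da1 da2 da3 da4 da5 da6 da7

# Separate ZFS filesystems per mountpoint, all drawing space from one pool.
$DRYRUN zfs create -o mountpoint=/usr storage/usr
$DRYRUN zfs create -o mountpoint=/var storage/var
$DRYRUN zfs create -o mountpoint=/usr/local storage/local
```

By default the pool itself mounts at /storage, which happens to match the layout above.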

The server connects via ssh to remote systems, runs rsync through the ssh connection to a directory in /storage named after the server, then creates a snapshot of /storage named after the date. That way, we have a live copy of every single file on the server, along with daily snapshots of the same.
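That nightly loop fits in a few lines of sh. This is my guess at its shape, not the actual script: the server list, ssh user, rsync flags, and pool name are assumptions. DRYRUN=echo makes it print the commands instead of running them.

```shell
#!/bin/sh
# Hypothetical sketch of the backup loop described above.
DRYRUN=echo   # remove to actually run rsync and take snapshots

SERVERS="school1 school2 school3"   # assumed server list
today=$(date +%Y-%m-%d)             # snapshot named after the date

for host in $SERVERS; do
    # Pull a full copy of each server over ssh into its own
    # per-server directory under /storage.
    $DRYRUN rsync -aH --delete -e ssh "root@${host}:/" "/storage/${host}/"
done

# One snapshot of /storage freezes today's state of every server's backup.
$DRYRUN zfs snapshot "storage@${today}"
```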

Restoring a server is a simple matter of booting off a LiveCD, partitioning/formatting the drives, mounting the partitions, and running rsync against the backup server. Takes under an hour to restore a server with 400-ish GB of data.
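From a LiveCD shell, those restore steps might look roughly like this. Device names, the slice layout, and the backup server's hostname are assumptions, and the DRYRUN guard prints the commands rather than running them.

```shell
#!/bin/sh
# Hypothetical restore sequence from a LiveCD (FreeBSD 7.x-era tools).
DRYRUN=echo   # remove to actually repartition and restore (destructive!)

$DRYRUN fdisk -BI da0          # fresh slice table plus boot code
$DRYRUN bsdlabel -wB da0s1     # write a bootable disklabel
$DRYRUN newfs /dev/da0s1a      # format the root partition
$DRYRUN mount /dev/da0s1a /mnt

# Pull the server's files back from the backup box over ssh.
$DRYRUN rsync -aH -e ssh "root@backupserver:/storage/school1/" /mnt/
```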

The live server that I'm building in the next week will use 2x2GB CompactFlash drives for /, plus 12x400 GB and 12x500 GB drives, all in one large 10 TB raidz2 pool.

When we first installed the test server, I didn't do any tuning, and it took less than a week to lock up. Then I did some kmem and ARC tuning and disabled the ZIL and prefetch via loader.conf; it took 2 weeks to lock up again. After some more loader.conf tuning and setting the ZFS recordsize to 64K, it's been running without issues for about a month now.
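For anyone curious, on FreeBSD 7.x that kind of tuning lives in /boot/loader.conf and looks something like the fragment below. The values here are illustrative guesses, not the exact ones I used; size kmem and the ARC to the machine's RAM.

```
# /boot/loader.conf -- illustrative ZFS tuning for FreeBSD 7.x.
# Values are examples only; tune kmem/ARC to the machine's RAM.
vm.kmem_size="1024M"
vm.kmem_size_max="1024M"
vfs.zfs.arc_max="512M"
vfs.zfs.zil_disable="1"        # disable the ZIL (risky for sync writes)
vfs.zfs.prefetch_disable="1"   # disable file-level prefetch
```

The recordsize change is a filesystem property rather than a loader tunable: `zfs set recordsize=64K storage` (pool name assumed).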

Doing full server backups for 6 secondary schools and 1 elementary school, the daily snapshot averages 1 GB. We figure, even with all 100 servers being backed up, we should be able to keep at least 30 (possibly as many as 90) days of backups instead of the two weeks we currently keep.

The really nice thing about zfs is that the snapshots are live. You don't have to mount them to access the files in them. So you can browse around the snapshot dirs until you find the file you need, without mucking around with mount/umount.
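Concretely, every snapshot shows up as a read-only directory tree under the filesystem's hidden .zfs directory (and with `zfs set snapdir=visible` it even appears in plain ls). The dataset, snapshot, and file names below are examples, and DRYRUN just prints the commands.

```shell
#!/bin/sh
# Hypothetical example of pulling one file out of a live snapshot.
# Dataset, snapshot, and file names are examples.
DRYRUN=echo   # remove to run against a real pool

# Every snapshot is browsable in place -- no mount/umount needed.
$DRYRUN ls /storage/.zfs/snapshot/

# Grab one file straight out of yesterday's snapshot:
$DRYRUN cp /storage/.zfs/snapshot/2008-07-17/school1/etc/rc.conf /tmp/
```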

Help for FreeBSD: Handbook, FAQ, man pages, mailing lists.