ZFS
Just a report more than a question. Over the last few weeks I have put ZFS into production on three servers, plus mine at home.

All three are simple mirror setups just for the data, running on HP ProLiant ML115 G5s used purely for Samba shares, with between 20 and 30 users during working hours. No great workload, I know, but I have yet to see a single error or hear any complaints on those three servers.

Mine at home is an old i386 with 1 GB of RAM, which I first ran without tweaks. When I rsynced 160 GB of data to it from a Windows PC it would stop responding and reboot. After applying the tweaks from http://wiki.freebsd.org/ZFSTuningGuide (setting vm.kmem_size="512M" and vm.kmem_size_max="512M" in /boot/loader.conf) I can hammer it as hard as I like with no more problems.

The main downfall of ZFS is that it just works and takes minutes to set up, so there's no chance to play with it much.
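For reference, the tuning amounts to two lines in /boot/loader.conf (values straight from the post, sized for a 1 GB i386 box), and a data-only mirror like the ones described takes only a couple of commands. The disk names and pool/dataset names below are assumptions, not the poster's actual setup:

```shell
# /boot/loader.conf tunables from the ZFSTuningGuide (values from the post;
# sized for a 1 GB i386 machine -- amd64 boxes with more RAM need less tuning):
#   vm.kmem_size="512M"
#   vm.kmem_size_max="512M"

# A simple two-disk mirror for data only, with hypothetical disk names
# (check yours with e.g. `camcontrol devlist`):
zpool create tank mirror ada1 ada2
zfs create tank/samba    # dataset to export as the Samba share
```
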
Quote:
Now that does sound worth doing. Funny you should mention "zfs send" and "zfs recv"; a friend on IRC just suggested I go play with those. How is the off-site stuff going? We have a few off-site servers that I would like to back up to the main office. At the moment I use rsync, but it would be nice to have other options.
We use rsync to back up our remote servers (82 so far) to the backup box running FreeBSD+ZFS, taking a snapshot of the backup directory each night. We do full rsyncs of each entire remote system (everything except /proc, /sys, /tmp, and /dev) into a separate directory per server. That way, we can do full system restores as well as single-file recoveries.
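A minimal sketch of one server's nightly pass as described, assuming hypothetical hostnames and dataset paths (this is not the poster's actual script):

```shell
#!/bin/sh
# One server's backup: full-system rsync into its own directory,
# skipping pseudo-filesystems, then a dated snapshot of the backup dataset.
SERVER="remote1.example.com"         # hypothetical hostname
DEST="/storage/backup/${SERVER}"     # hypothetical per-server directory

rsync -aH --numeric-ids --delete \
    --exclude=/proc --exclude=/sys --exclude=/tmp --exclude=/dev \
    "root@${SERVER}:/" "${DEST}/"

# Snapshot once per night, named by date, so each day's state is recoverable
zfs snapshot "storage/backup@$(date +%Y%m%d)"
```
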
After the initial sync, which can take a few days across an ADSL link, the follow-up syncs take between 30 seconds and 2 hours, depending on the amount of data that has changed and the speed of the link. Our backup script starts one backup process per remote site, waiting 7 minutes before starting the next; each process then runs rsync for each server at its site in sequence. So we can have up to 15-ish rsync processes running at a time. (In theory, we could have 55 running at once, as that's the number of remote sites we have, but they tend to finish fast enough that we never see more than around 15.)

As for the off-site replica of our backup box, I'm still running tests with the ZFS send/recv stuff. The atomic nature of the snapshot feature can be a bit touchy. For example,

# zfs send storage/backup@20080805 | ssh remoteserver "zfs recv storage/backup@20080805"

would fail on me. It turns out that was combining 4 snapshots and trying to send 1.2 TB of data across the network. The SSH connection would fail after 36-ish hours, and then the box would spend about 12 hours deleting all the data transferred so far.

Breaking the process up into "zfs send pool/snapshot > file", "scp file remoteserver:", and "zfs recv pool/snapshot < file" steps makes it a lot smoother. Less automated, but more reliable. Now I'm in the process of bringing the replica box up to date with the 100+ snapshots we've taken so far.
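The file-based workaround can be sketched like this (the snapshot name is the post's example; the spool path and the incremental catch-up step at the end are assumptions):

```shell
# Dump the snapshot to a file, copy it over, and receive it on the far side.
# A flaky link only costs you a re-copy of the file, not a restarted stream.
zfs send storage/backup@20080805 > /var/tmp/backup.zfs
scp /var/tmp/backup.zfs remoteserver:/var/tmp/
ssh remoteserver 'zfs recv storage/backup@20080805 < /var/tmp/backup.zfs'

# Catching a replica up through many snapshots is usually done with
# incremental sends, which carry only the delta between two snapshots:
zfs send -i storage/backup@20080805 storage/backup@20080806 > /var/tmp/incr.zfs
```
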