Posted 30th June 2008 by phoenix

We're using ZFS on a test backup server at the moment, doing rsync backups of multiple remote servers, and snapshotting the filesystem every night.
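
As a rough illustration, the nightly pull for one server boils down to an rsync over ssh into that server's backup directory (the hostname, excludes, and option set here are examples, not our exact script):

  #!/bin/sh
  # Pull one remote server's filesystems into its backup directory.
  # "schoolserver1" is a made-up name; the real script loops over a server list.
  rsync -aH --numeric-ids --delete \
      --exclude=/proc --exclude=/sys \
      root@schoolserver1:/ /storage/remote-servers/schoolserver1/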

Hardware:
  • Tyan h2000M motherboard (ServerWorks 1000 + nVidia chipsets)
  • 2x dual-core Opteron 2200-series @ 2 GHz
  • 8 GB DDR2-667 ECC RAM (4 GB per CPU socket)
  • 3Ware 9650SE-16ML SATA RAID controller (PCIe) with 256 MB cache
  • 12x 500 GB SATA HD (16 MB cache each)
  • 5U rackmount case from Chenbro
  • 1300 W power supply with 4 redundant modules (can run on just one)

OS:
  • 64-bit FreeBSD 7-STABLE from late May/early June

Drive setups:
  • 2x 500 GB HD in a gmirror RAID1 for the OS
  • 8x 500 GB HD in a raidz2 pool mounted as /pool
  • the pool/storage filesystem, mounted as /storage, is the base for all the other ZFS filesystems
  • /storage/remote-servers is the base directory for all backups
  • /storage/remote-servers/<servername> is the backup directory for each individual server that is backed up
  • every night, a snapshot is taken of /storage/remote-servers (a rough sketch of these commands follows this list)
  • /storage is a standard ZFS filesystem
  • /storage/remote-servers is a ZFS filesystem with gzip compression enabled
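
Roughly speaking, the layout and the nightly snapshot look like this (device names, the example server name, and the snapshot naming are illustrative, not lifted from our scripts):

  # Create the raidz2 pool on eight of the drives (device names are examples).
  zpool create pool raidz2 da2 da3 da4 da5 da6 da7 da8 da9

  # Base filesystem for everything else, mounted at /storage.
  zfs create pool/storage
  zfs set mountpoint=/storage pool/storage

  # Backup area with gzip compression; each server gets its own child filesystem.
  zfs create pool/storage/remote-servers
  zfs set compression=gzip pool/storage/remote-servers
  zfs create pool/storage/remote-servers/schoolserver1

  # Nightly snapshot, run from cron after the rsync jobs finish.
  zfs snapshot pool/storage/remote-servers@$(date +%Y-%m-%d)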

For a month's worth of data from 5 secondary school servers and 1 elementary school server, 412 GB of disk is used. Each daily snapshot takes up just over 1 GB of space. (The servers we are backing up are NFS/terminal servers for the schools. All the computers in these schools are diskless workstations that boot off the network, run all their programs off the server, and store all their data on the server.)
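
Those numbers come straight out of the zfs tools; something along these lines shows them:

  # Show each snapshot and how much space it is holding on to.
  zfs list -t snapshot
  # Overall usage and compression ratio for the backup filesystem.
  zfs get used,compressratio pool/storage/remote-servers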

So far, we have been very impressed with both FreeBSD 7 and ZFS, and we're really looking forward to FreeBSD 8 and the newer ZFS it will bring. The only issue we've had so far with the current v6 of ZFS is that you can't add devices to an existing raidz/raidz2 vdev. Once you create the pool, its layout is set in stone: you can replace drives with larger ones, but you can't add new devices. On the bright side, the server we're going to run this on in production already has 12x 400 GB and 12x 500 GB HDs in the case (and we're looking at getting a pair of 2 GB CompactFlash cards for the OS, so the full 10 TB will be available to ZFS).
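
Replacing the drives with larger ones does work, it is just slow; the procedure is roughly the following (device names are made up, and you have to wait for each resilver to finish before touching the next disk):

  # Swap one raidz2 member for a larger drive and let it resilver.
  zpool replace pool da2 da14
  zpool status pool          # watch the resilver progress

  # Repeat for the remaining members.  On this version of ZFS the extra
  # capacity only shows up once every member has been replaced; an
  # export/import may be needed before the pool notices the larger disks.
  zpool export pool
  zpool import pool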

We've managed to lock up the test server 3 times since we started using ZFS. The first time was due to not doing any kernel/ZFS tuning at all (I wanted to see how long it would run untuned). The second time was a few weeks later, after recompiling world and the kernel and tuning the kernel via /boot/loader.conf. The third time was a week ago; after that, I did some more kernel and ZFS tuning. It hasn't locked up since (knock on wood).

If it locks up once or twice a month, it won't be a big deal. It's just the backup server, and since the backup scripts run from it, we'll notice right away and it won't be down for long.

The current tuning settings we're using are:
/boot/loader.conf:
  • kern.hz="100" (slow the kernel timer down a bit)
  • kern.maxvnodes="500000" (allow the system more open files; multiple parallel rsyncs can touch a lot of files in a short time)
  • vm.kmem_size="1610612736" (give the kernel 1.5 GB of kmem; the current max is 1.9 GB, but things get unstable above 1.5)
  • vm.kmem_size_max="1610612736" (see above)
  • vfs.zfs.arc_max="2147483648" (give ZFS a max of 2 GB of RAM for filesystem cache)
  • vfs.zfs.prefetch_disable="1" (disable ZFS prefetch, as it's known to cause issues)
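
For copy-and-paste purposes, that works out to a /boot/loader.conf along these lines (the comments are just mine):

  # /boot/loader.conf
  kern.hz="100"                     # fewer timer interrupts
  kern.maxvnodes="500000"           # more vnodes for the parallel rsyncs
  vm.kmem_size="1610612736"         # 1.5 GB of kernel memory
  vm.kmem_size_max="1610612736"
  vfs.zfs.arc_max="2147483648"      # cap the ARC at 2 GB
  vfs.zfs.prefetch_disable="1"      # prefetch is known to cause problems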

ZFS settings:
  • set recordsize for /pool to 64K (all the rest of the filesystems inherit that setting)
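
Setting that is a one-liner, and the second command just confirms that the children are inheriting it:

  zfs set recordsize=64K pool
  zfs get -r recordsize pool    # everything under pool should show 64K, source "inherited"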

Freddie