ZFS
I just read the introduction to ZFS (also: How to install FreeBSD 7.0 under ZFS?). It looks really interesting! But is it ready for production use yet? The large memory requirement, as well as the potential kernel panics if the memory parameters haven't been tuned just so, is of some concern. I don't think I would attempt to use ZFS in production unless the system has 8GB of memory (on an EM64T system). Has anyone here had experience with ZFS in production yet?
I wouldn't put it into production; there are several threads on this topic. Perhaps you should search this forum for 'zfs'.
__________________
"No, that's wrong, Cartman. But don't worry, there are no stupid answers, just stupid people." -- Mr. Garrison
Nothing found with "zfs" as a keyword, but "zfs" as a tag brings up this thread. I'll read. Thanks.
We're using it for our off-site backup servers.

The test server has a 2x500GB gmirror(8) / and an 8x500GB raidz2 pool for /usr, /var, /usr/local, /usr/ports, /usr/src, /usr/obj, and (the most important one) /storage. The server connects via ssh to remote systems, runs rsync through the ssh connection to a directory in /storage named after the server, then creates a snapshot of /storage named after the date. That way, we have a live copy of every single file on each server, along with daily snapshots of the same. Restoring a server is a simple matter of booting off a LiveCD, partitioning/formatting the drives, mounting the partitions, and running rsync against the backup server. It takes under an hour to restore a server with 400-ish GB of data.

The live server that I'm building in the next week will use 2x2GB CompactFlash drives for /, with 12x400GB and 12x500GB drives, all in one large, 10TB raidz2 pool.

When we first installed the test server, I didn't do any tuning. It took less than a week to lock up. Then I did some kmem and ARC tuning and disabled the ZIL and prefetch via loader.conf. It took two weeks to lock up again. I did some more tuning via loader.conf and set the ZFS recordsize to 64K. It's been running without issues for about a month now.

Doing full server backups for 6 secondary schools and 1 elementary school, the daily snapshot averages 1GB. We figure, even with all 100 servers being backed up, we should be able to keep at least 30 (possibly as many as 90) days of backups instead of the two weeks we currently keep.

The really nice thing about ZFS is that the snapshots are live. You don't have to mount them to access the files in them. You can browse around the snapshot dirs until you find the file you need, without mucking around with mount/umount.
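The loader.conf tuning mentioned above would have looked something like the fragment below on FreeBSD 7. The values are illustrative, not the poster's actual numbers; the right kmem and ARC limits depend on how much RAM the machine has.

```
# /boot/loader.conf -- illustrative FreeBSD 7-era ZFS tuning.
# Size the kmem and ARC limits to your RAM; these are example values.
vm.kmem_size="1536M"
vm.kmem_size_max="1536M"
vfs.zfs.arc_max="512M"
vfs.zfs.zil_disable="1"        # disable the ZIL (risks data loss on crash)
vfs.zfs.prefetch_disable="1"   # disable file-level prefetch
```

The 64K record size is set per dataset at runtime rather than in loader.conf, e.g. `zfs set recordsize=64K storage`. Note that disabling the ZIL trades crash consistency for stability; it was a common workaround on early FreeBSD ZFS, not something to carry forward blindly.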
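The backup cycle described above can be sketched roughly like this. This is a hypothetical reconstruction, not the poster's actual script: the pool name, server names, and rsync options are all assumptions, and DRYRUN defaults to echo so the sketch only prints the commands it would run.

```shell
#!/bin/sh
# Hypothetical sketch of the rsync-plus-snapshot backup cycle.
# Pool name, server names, and rsync flags are illustrative only.
# DRYRUN=echo (the default here) prints commands instead of running them.
DRYRUN="${DRYRUN:-echo}"
POOL="storage"
SNAPNAME="$(date +%Y-%m-%d)"        # snapshot named after the date
SERVERS="school1 school2"           # one directory per server under /storage

for SERVER in $SERVERS; do
    # Pull a live copy of every file on the remote server over ssh.
    $DRYRUN rsync -a --delete -e ssh "root@${SERVER}:/" "/${POOL}/${SERVER}/"
done

# A single daily snapshot captures all the per-server directories at once.
$DRYRUN zfs snapshot "${POOL}@${SNAPNAME}"

# Past days stay browsable without mounting anything, e.g. under
# /storage/.zfs/snapshot/<date>/<server>/...
```

Restoring is the same pipeline in reverse: boot a LiveCD, partition and format, then rsync the server's directory back from /storage.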
I think when OS X gets ZFS read/write support, they'll use these snapshots for Time Machine. It's a really nice feature of ZFS.
Sorry to revive this thread, but: how is the new server doing at the moment? We are also considering deploying a FreeBSD ZFS file server in a production environment, and I'd like to hear about every success story (or horrible failure) regarding FreeBSD ZFS.