DaemonForums > FreeBSD > FreeBSD Installation and Upgrading
  #1
Old 18th July 2008
Ville
Real Name: Ville Walveranta
Jack of Many Trades
 
Join Date: Jul 2008
Location: Texas, United States
Posts: 4
Post ZFS

I just read introduction to ZFS (also: How to install FreeBSD 7.0 under ZFS?). It looks really interesting! But is it ready for production use yet? The large memory requirement as well as the potential kernel panics if the memory parameters haven't been tuned just so are of some concern. I think I would not attempt to use ZFS in production unless the system has 8GB of memory (on an EM64T system). Anyone here had experiences with ZFS in production use yet?
  #2
Old 18th July 2008
corey_james
Uber Geek
 
Join Date: Apr 2008
Location: Brisbane, Australia
Posts: 238
Default

I wouldn't put it into production. There are several threads on this topic; perhaps you should search this forum for 'zfs'.
__________________
"No, that's wrong, Cartman. But don't worry, there are no stupid answers, just stupid people." -- Mr. Garrison

Forum Netiquette
  #3
Old 18th July 2008
Ville
Real Name: Ville Walveranta
Jack of Many Trades
 
Join Date: Jul 2008
Location: Texas, United States
Posts: 4
Default

Nothing found with "zfs" as a keyword, but "zfs" as a tag brings up this thread. I'll read. Thanks.
  #4
Old 18th July 2008
phoenix
Risen from the ashes
 
Join Date: May 2008
Posts: 696
Default

We're using it for our off-site backup servers.

The test server has a 2x500GB gmirror(8) / and an 8x500GB raidz2 pool for /usr, /var, /usr/local, /usr/ports, /usr/src, /usr/obj, and (the most important one) /storage.

The server connects via ssh to remote systems, runs rsync through the ssh connection to a directory in /storage named after the server, then creates a snapshot of /storage named after the date. That way, we have a live copy of every single file on the server, along with daily snapshots of the same.
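Schematically, that nightly cycle looks something like this (hostnames, pool name, and the server list are illustrative, not our actual script):

```shell
#!/bin/sh
# Nightly backup cycle as described above (schematic).
# Hostnames, pool name, and the server list are illustrative.
POOL=storage
DATE=$(date +%Y-%m-%d)

for server in server1.example.org server2.example.org; do
    # Mirror the remote filesystem into /storage/<server> over ssh
    rsync -aH --delete -e ssh "root@${server}:/" "/storage/${server}/"
done

# One snapshot per night, named after the date
zfs snapshot "${POOL}@${DATE}"
```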

Restoring a server is a simple matter of booting off a LiveCD, partitioning/formatting the drives, mounting the partitions, and running rsync against the backup server. Takes under an hour to restore a server with 400-ish GB of data.
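The restore just runs in the other direction; roughly (device, host, and path names illustrative, FreeBSD 7-era tools):

```shell
#!/bin/sh
# Schematic restore after booting a LiveCD.
# Device, host, and path names are illustrative.
fdisk -BI ad0                        # write MBR boot code and a single slice
bsdlabel -wB ad0s1                   # write a standard label plus bootstrap
newfs /dev/ad0s1a                    # create the filesystem
mount /dev/ad0s1a /mnt               # mount the target root
# Pull the live copy back from the backup server
rsync -aH -e ssh root@backup.example.org:/storage/server1.example.org/ /mnt/
```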

The live server that I'm building in the next week will use 2x2GB CompactFlash drives for / with 12x400 GB and 12x500 GB drives, all in one large, 10 TB raidz2 pool.

When we first installed the test server, I didn't do any tuning. Took less than a week to lock it up. Then I did some kmem and ARC tuning and disabled the ZIL and prefetch via loader.conf. Took 2 weeks to lock it up again. Did some more tuning via loader.conf and set the ZFS recordsize to 64K. Been running without issues for about a month now.
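The knobs involved look roughly like this in /boot/loader.conf, plus a recordsize change on the pool (values and pool name illustrative, FreeBSD 7-era tunable names):

```shell
# /boot/loader.conf -- ZFS tuning (FreeBSD 7-era tunables; values illustrative)
vm.kmem_size="1024M"
vm.kmem_size_max="1024M"
vfs.zfs.arc_max="512M"
vfs.zfs.zil_disable="1"        # turn off the ZIL (a safety trade-off)
vfs.zfs.prefetch_disable="1"   # turn off file-level prefetch

# Then, at runtime, set the recordsize on the pool:
# zfs set recordsize=64K storage
```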

Doing full server backups for 6 secondary schools and 1 elementary school, the daily snapshot averages 1 GB. We figure, even with all 100 servers being backed up, we should be able to keep at least 30 (possibly as many as 90) days of backups instead of the two weeks we currently keep.

The really nice thing about zfs is that the snapshots are live. You don't have to mount them to access the files in them. So you can browse around the snapshot dirs until you find the file you need, without mucking around with mount/umount.
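For example, fishing one file out of a snapshot is just a cp (dataset, snapshot, and file names illustrative):

```shell
# Snapshots show up under the hidden .zfs/snapshot directory
ls /storage/.zfs/snapshot/
# Browse a given night's snapshot like any other directory tree
ls /storage/.zfs/snapshot/2008-07-01/server1.example.org/etc/
# Recover a single file with plain cp; no mount/umount involved
cp /storage/.zfs/snapshot/2008-07-01/server1.example.org/etc/rc.conf /tmp/
```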
__________________
Freddie

Help for FreeBSD: Handbook, FAQ, man pages, mailing lists.
  #5
Old 18th July 2008
jbhappy
Real Name: Jeff
Port Guard
 
Join Date: Jun 2008
Location: MI, US
Posts: 30
Default

Quote:
Originally Posted by Ville
Nothing found with "zfs" as a keyword, but "zfs" as a tag brings up this thread. I'll read. Thanks.
see also http://www.daemonforums.org/showthread.php?t=966
  #6
Old 18th July 2008
corey_james
Uber Geek
 
Join Date: Apr 2008
Location: Brisbane, Australia
Posts: 238
Default

Quote:
Originally Posted by phoenix

The really nice thing about zfs is that the snapshots are live. You don't have to mount them to access the files in them. So you can browse around the snapshot dirs until you find the file you need, without mucking around with mount/umount.
I think when OS X gets ZFS read/write support, they'll use these snapshots for Time Machine. I think it's a really nice feature of ZFS.
  #7
Old 21st October 2008
hopla
New User
 
Join Date: May 2008
Posts: 8
Default

Quote:
Originally Posted by phoenix
The live server that I'm building in the next week will use 2x2GB CompactFlash drives for / with 12x400 GB and 12x500 GB drives, all in one large, 10 TB raidz2 pool.
Sorry to revive this thread, but how is this new server going at the moment? We are also considering deploying a FreeBSD ZFS file server in a production environment, and I'd like to hear about every success story (or horrible failure) regarding FreeBSD ZFS.
  #8
Old 21st October 2008
phoenix
Risen from the ashes
 
Join Date: May 2008
Posts: 696
Default

Running quite nicely. It's doing nightly backups of 79 servers at the moment, creating a snapshot each night before the rsync run starts. For those 79 servers, we are averaging only 2 GB of changed data per night, with the occasional spike to 10 GB. We have 64 snapshots so far, using just under 4 TB of data in total. The normal (no new servers added) rsync run takes under 2 hours. When we add a new server, the run can take up to 8 hours, depending on how much data needs to be transferred.

You really need to tune the kernel memory map and the ZFS ARC size. Both of these are set in /boot/loader.conf. The max kmem size in FreeBSD 6 and 7 is 1536 MB, although few people can get it to work above 1500. For our purposes (rsync, massive reads every night) we found that you want to push the kmem as high as you can (in small increments), then set the ARC to about 1/2 of the kmem size. That gives ZFS lots of RAM to cache things in, but still leaves the kernel lots of room to play. And the more RAM you can put in the system, the better.
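In /boot/loader.conf terms, that half-of-kmem rule comes out to something like this (values illustrative):

```shell
# /boot/loader.conf -- push kmem up in small increments, then cap
# the ARC at about half of it (values illustrative)
vm.kmem_size="1536M"
vm.kmem_size_max="1536M"
vfs.zfs.arc_max="768M"    # ~1/2 of the kmem size
```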

We experienced lockups almost daily during the tuning period. That calmed down to about once every 2 weeks while we played with how often to add new servers to the backup list (heavy network and file I/O across 20+ rsync processes will lock it up). Now we're at about 1 every 6 weeks. With a little more fine-tuning, we should be able to get to 1 every 6 months or so.

I did have to hack the /etc/rc.d/zfs script to force it to unconditionally attempt to import the pool. During all the lockups and initial tuning, we would lose the ability to import the pool automatically during boot. Adding zpool import -f <poolname> to the start function was needed. We could probably remove it now, but it doesn't harm anything to be there.
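Schematically, the hacked start routine in /etc/rc.d/zfs ends up looking like this (pool name illustrative):

```shell
# Excerpt of a patched /etc/rc.d/zfs start function (schematic).
# The forced import brings the pool back even after a hard lockup.
zfs_start_main()
{
        zpool import -f storage   # unconditional forced import
        zfs mount -a              # mount all ZFS filesystems
        zfs share -a              # re-export any NFS shares
}
```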

And, if you use zvols for swap, you'll have to add a swapon command to /etc/rc.local as it doesn't seem to always enable it via the rc.d scripts.
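That workaround is a one-liner in /etc/rc.local (zvol path illustrative):

```shell
# /etc/rc.local -- make sure swap on a zvol really gets enabled at boot
# (device path illustrative)
swapon /dev/zvol/storage/swap
```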

Beyond that, we are *extremely* pleased with it. Being able to browse through the .zfs/snapshot/<snapshotname>/ folder hierarchy to look for files is just awesome! No mounting needed, as zfs does it for you automatically. And doing system restores from any snapshot is just as easy.

Our next step is to build a duplicate server, house it off-site, and use the snapshot stream feature to have redundant backups.
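The snapshot-stream replication to the off-site box would run roughly like this (host, pool, and snapshot names illustrative):

```shell
#!/bin/sh
# Replicate snapshots off-site with zfs send/receive (schematic).
# Host, pool, and snapshot names are illustrative.

# Initial full copy of one snapshot:
zfs send storage@2008-10-01 | \
    ssh backup2.example.org zfs receive -F storage

# Nightly incrementals between consecutive snapshots thereafter:
zfs send -i 2008-10-01 storage@2008-10-02 | \
    ssh backup2.example.org zfs receive storage
```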

After that we want to build some kind of web management interface around it so that people can browse through the snapshots for their servers to download files as needed.
Tags
zfs



Powered by vBulletin® Version 3.8.4
Copyright ©2000 - 2024, Jelsoft Enterprises Ltd.
Content copyright © 2007-2010, the authors
Daemon image copyright ©1988, Marshall Kirk McKusick