While most of us are willing to answer questions, it would be helpful if you did some initial research yourself.
I'm not an expert on the subject, but here are some articles I found via Google:

http://en.wikipedia.org/wiki/ZFS
http://www.opensolaris.org/os/community/zfs/whatis/
http://www.opensolaris.org/os/community/zfs/

There's also the FreeBSD-specific wiki: http://wiki.freebsd.org/ZFS

Note: "better" is subjective; personally, ZFS will never replace my UFS/FFS partitions.
I beg to differ.
Reliability is quite important at home, where one has umpteen years of digital pics, digital video, digital music, documents, yadda yadda, all saved on fallible hard drives. Being able to put 3 (or more) drives into a home system, set up a raidz config across them all, and set up nightly snapshots via cron is *hugely* awesome, useful, and usable. It's the non-Apple version of Time Machine.

All that's really missing is an easy-to-use GUI management tool for creating storage pools, filesystems, and accessing files in snapshots. Once someone comes up with that, and people start using ZFS, we'll all wonder how we ever managed storage without "storage pools", "inline, infinite snapshots", and all the other fun features. And those features will start cropping up in other storage management systems and filesystems. (DflyBSD's Hammer fs has similar concepts, as do Tux3 and Btrfs in Linux-land.)

We're using ZFS (and rsync) on our backup server at work, doing remote backups for 35 servers, and the nightly snapshots feature has already saved our bacon twice (once for an accounting file, once for someone's e-mail account).

I've been running it at home for a couple of months now as well. I haven't had to use the daily snapshots for anything yet, but it's nice knowing they are there, along with the raidz redundancy, for when I will need it.
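For anyone curious, here is roughly what that raidz-plus-nightly-snapshots setup looks like. This is only a sketch: the pool name "tank", the ad4/ad6/ad8 device names, and the snapshot naming are made up, not the exact commands or scripts I run.

# create a raidz pool across three whole disks (placeholder device names)
zpool create tank raidz ad4 ad6 ad8
zfs create tank/home

# crontab entry for root: nightly snapshot at 03:00, named by date
# (the % signs must be escaped inside a crontab)
0 3 * * * /sbin/zfs snapshot tank/home@`date +\%Y-\%m-\%d`

Restoring a file is then just a matter of copying it back out of the .zfs/snapshot directory for that filesystem.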
I agree with phoenix here: the ability to have a file system that can scale the way ZFS can is very useful even for home use, and as for the snapshots, I truly love them. Before ZFS I used to have an extra disk just for storing backups on; with ZFS I can use that drive in a zpool, and I'm not as worried about disk failures as I was before.

At the moment I'm using ZFS for our Samba server here at work, with one zpool that has two mirrors. I would recommend ZFS for a file server any day of the week if you have the memory and a 64-bit CPU to build one. I'm planning on building a similar system for my home server as well. I'm not sure how I feel about using ZFS on a desktop, mostly because all my files are always stored on a server somewhere; I keep putting smaller disks in my desktops, and my larger disks always end up in a server.

As for a GUI, that's a very cool idea for something to be added to PC-BSD or DesktopBSD, and maybe even to FreeNAS. I'm sure that once there is a nice GUI for ZFS on desktops, ZFS will be the way to go.
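For reference, a pool with two mirrors like the one above can be built along these lines. Disk names, the pool name "tank", and the Samba share filesystem are placeholders, not our actual setup:

# pool made of two mirrored pairs; ZFS stripes across the mirrors,
# and either disk in a pair can fail without losing data
zpool create tank mirror da0 da1 mirror da2 da3

# filesystem handed to Samba as the share
zfs create tank/samba

Growing the pool later is just a matter of adding another mirror vdev with zpool add.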
Phoenix, what kind of hardware specs are you using for your home ZFS server?
* Pentium 4 3.0 GHz
* 2 GB RAM
* 3x 160 GB SATA HDs
* FreeBSD 7-STABLE from early Aug 2008 (before DTrace hit the tree and caused a bunch of ZFS issues)
* Intel chipset with Marvell Yukon (msk) gigabit NIC
* raidz1 zpool across all three drives
* zfs filesystems for /usr, /usr/ports, /usr/ports/distfiles, /usr/src, /usr/obj, /tmp, /var, /home, and swap (a rough sketch of the commands is below)
* 2 GB USB thumbdrive configured as /
* KDE 3 installed and configured for the wife
* KDE 4 installed and configured for me
* Acting as Samba server for the laptops (Windows and Linux)

But the really fun one is the work machine. Search the forums for the specs on that baby!
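That filesystem layout is basically a series of zfs create calls. A rough sketch, with the pool name "tank" and the 2 GB swap size being guesses rather than the real values:

# child filesystems inherit their mountpoint from the parent,
# so only the top of each tree needs it set explicitly
zfs create -o mountpoint=/usr tank/usr
zfs create tank/usr/ports
zfs create tank/usr/ports/distfiles
zfs create tank/usr/src
zfs create tank/usr/obj
zfs create -o mountpoint=/tmp tank/tmp
zfs create -o mountpoint=/var tank/var
zfs create -o mountpoint=/home tank/home

# swap lives on a zvol instead of a dedicated slice
zfs create -V 2G tank/swap
swapon /dev/zvol/tank/swap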
Looking at the history of computing, it seems reasonable to assume that the most useful features of ZFS will eventually find their way into other filesystems.
__________________
And the WORD was made flesh, and dwelt among us. (John 1:14)
Previously, I had each drive sliced into three:

* s1 was 10 GB for the OS (3-way gmirror)
* s2 was 1 GB for swap
* s3 was the rest, part of the zpool

After migrating to the USB drive, it was a simple zpool replace operation to regain the extra 22 GB and get back a lot of performance.
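The replace step itself is a one-liner per drive; something like this, with made-up device names (the old s3 slice swapped out for the whole disk after repartitioning it):

# tell ZFS the whole disk now backs that member of the raidz
zpool replace tank ad4s3 ad4

# watch the resilver run; the pool stays online the whole time
zpool status tank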