Old 5th November 2009
phoenix
Risen from the ashes
 
Join Date: May 2008
Posts: 696

Quote:
Originally Posted by windependence View Post
Thanks once again Freddie. So just to be clear, if the only thing I have is a snapshot, I'm probably out of luck if I have to rebuild the entire system, but if I just need some files from a prior date, I'm OK.
Again, yes and no. It depends on how the snapshot was made and what is stored in it.

For example, our backup server does a full-system rsync of a remote server, then takes a snapshot. The next night, it does another full-system rsync and takes another snapshot. Any of those snapshots can be used to restore the system (boot off a LiveCD, partition/format the hard drives, rsync the system back from the backup server's directory, install the boot loader, reboot).
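The nightly cycle described above might be sketched like this (the hostname, paths, and pool/dataset names are hypothetical examples, not our actual setup):

```shell
# Pull a full copy of the remote system into the backup dataset.
# -a preserves permissions/ownership/times, -H hard links,
# --delete removes files that no longer exist on the client.
rsync -aH --delete --numeric-ids \
    root@client.example.com:/ /backup/client/

# Freeze that state as a named snapshot of the dataset holding the copy.
zfs snapshot tank/backup/client@$(date +%Y-%m-%d)
```

Each snapshot then represents a complete, independently restorable image of the client as of that night.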

Thus, if your snapshots are of full, complete systems then they can be used to restore full, complete systems.
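A restore from such a full-system snapshot (boot a LiveCD, partition, rsync back, install a boot loader) might look roughly like this on FreeBSD; the device name, partition layout, and backup path are all hypothetical:

```shell
# Booted from a LiveCD: create a GPT layout on the new disk (example only).
gpart create -s gpt ada0
gpart add -t freebsd-boot -s 512k ada0
gpart add -t freebsd-ufs -l rootfs ada0
newfs -U /dev/gpt/rootfs
mount /dev/gpt/rootfs /mnt

# Pull the chosen snapshot's contents back from the backup server.
rsync -aH --numeric-ids \
    backup.example.com:/backup/client/ /mnt/

# Install the boot code into the freebsd-boot partition, then reboot.
gpart bootcode -b /boot/pmbr -p /boot/gptboot -i 1 ada0
reboot
```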

Quote:
I haven't messed with zfs, but it may be worth it for use in my customer accounts. I will be building a NAS box for a client next week, and I am considering zfs as the file system. Is there anything I should or should not do in this case? We are going to have two RAID5 arrays, with 4 1.5TB drives in each. The second array will be used for backup of the first array. The production array will serve up NFS shares for our VM disk space.
To get all the benefits of ZFS, you will need to disable the hardware RAID and let ZFS manage the disks directly. If the RAID controller has an option for "Single Disk" arrays, use that: you keep all the functionality of the RAID controller (cache, command re-ordering, offloading, etc.), but individual disks show up in the OS. If not, you'll have to use JBOD mode, which turns the RAID controller into a dumb SATA controller.

For your purposes, you have a couple of options:
  • create two separate storage pools, each with a single raidz1 vdev made up of the four disks
  • create a single storage pool, comprised of two raidz1 vdevs, each made up of four disks, and then create two separate ZFS filesystems in it (one as a backup of the other)
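The two layouts above could be created like this (pool names and the da0-da7 device names are examples standing in for your eight drives):

```shell
# Option 1: two separate pools, each a single raidz1 vdev of four disks.
zpool create tank   raidz1 da0 da1 da2 da3
zpool create backup raidz1 da4 da5 da6 da7

# Option 2: one pool striped across two raidz1 vdevs,
# with two filesystems inside it.
zpool create tank raidz1 da0 da1 da2 da3 raidz1 da4 da5 da6 da7
zfs create tank/live
zfs create tank/backup
```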

The former provides more separation between the two; the latter provides better I/O throughput and more storage space. Note that "zpool add" stripes a new vdev into an existing pool (RAID0 across vdevs), while "zpool attach" creates mirrors only of individual disks or existing mirrors — you cannot mirror one raidz1 vdev against another — so choose the layout based on how much redundancy you want.
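Growing an existing single-vdev pool by striping in the second set of disks would look like this (pool and device names are examples):

```shell
# Adds a second top-level raidz1 vdev to the pool;
# ZFS then stripes new writes across both vdevs.
zpool add tank raidz1 da4 da5 da6 da7
```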

Personally, I'd look into using two separate servers: one to host the live data, one to act as a backup. You can then keep them in sync either with "zfs send", shipping snapshots between them, or with rsync.
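A snapshot-based sync between the two servers might look like this (the hostname, pool, and snapshot names are hypothetical; the incremental send with -i assumes the earlier snapshot already exists on both sides):

```shell
# On the live server: take today's snapshot.
zfs snapshot tank/data@2009-11-05

# Initial full copy to the backup server.
zfs send tank/data@2009-11-04 | ssh backup zfs recv tank/data

# Thereafter, send only the changes between consecutive snapshots.
zfs send -i tank/data@2009-11-04 tank/data@2009-11-05 | \
    ssh backup zfs recv tank/data
```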

On FreeBSD, native support for NFS sharing is available via the "sharenfs" property on individual filesystems (the NFS server itself is still enabled the normal way via /etc/rc.conf). You have to install Samba to share via SMB/CIFS, and install an iSCSI target from the ports tree if you want to export zvols via iSCSI.
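For example, exporting a filesystem over NFS on FreeBSD might look like this (the dataset name and export options are examples):

```shell
# The sharenfs property takes /etc/exports-style options.
zfs set sharenfs="-maproot=root -network 192.168.1.0/24" tank/vmstore

# The NFS server daemons still need to be enabled in /etc/rc.conf:
#   nfs_server_enable="YES"
#   rpcbind_enable="YES"
#   mountd_enable="YES"
```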

On Solaris, NFS, SMB/CIFS, and iSCSI are all handled natively via the sharenfs, sharesmb, and shareiscsi properties.
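On Solaris each protocol is a one-line property set (dataset names are examples):

```shell
zfs set sharenfs=on   tank/export   # NFS export
zfs set sharesmb=on   tank/export   # native SMB/CIFS share
zfs set shareiscsi=on tank/vol0     # export a zvol as an iSCSI target
```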

For NAS and SAN purposes, ZFS is very nice. For storage on a single server (direct-attached) it's useful, depending on what you are doing. On a desktop, it's a bit overkill, but still has some perks.
__________________
Freddie

Help for FreeBSD: Handbook, FAQ, man pages, mailing lists.