DaemonForums  

Old 12th July 2010
DraconianTimes
Security Geek
 
Join Date: May 2008
Location: United Kingdom
Posts: 37
Thanked 2 Times in 2 Posts
FreeBSD 8, ZFS storage across four disks

I'm trying to build a new FreeBSD server for home storage. I've read the zpool and zfs man pages, but am getting a bit lost with the ZFS setup. My config is:

- 1 x 250GB disk - O/S install
- 2 x 750GB disks, 2 x 1TB disks - data

I'd like to configure the data disks so that I have two lots of 1TB+750GB (1.75TB) mirrored against each other, presenting 1.75TB of usable, mirrored storage. Do I create two zpools of 1TB+750GB, then another zpool over the top to mirror them? I found this command:
Code:
# zpool create tank mirror c1d0 c2d0 mirror c3d0 c4d0
- perhaps this is the 'right' way?

Can anyone offer advice?
Old 13th July 2010
DraconianTimes

OK, so I've been playing around with VirtualBox for hours to try and work this out. By way of a project log, I'll add additional info/experiences here, based on what I *think* works; if anyone sees something off, please chime in. I created five disk images (at roughly 10% of the real sizes) to replicate my setup above, and attached the four 'data' disks to a separate virtual SATA controller in the FreeBSD VM. I then ran:

Code:
freebsd-test# egrep 'ad[0-9]|cd[0-9]' /var/run/dmesg.boot
acpi_acad0: <AC Adapter> on acpi0
ad0: 10240MB <VBOX HARDDISK 1.0> at ata0-master UDMA33
acd0: DVDROM <VBOX CD-ROM/1.0> at ata1-master UDMA33
ad4: 1024MB <VBOX HARDDISK 1.0> at ata2-master SATA300
ad6: 1024MB <VBOX HARDDISK 1.0> at ata3-master SATA300
ad8: 750MB <VBOX HARDDISK 1.0> at ata4-master SATA300
ad10: 750MB <VBOX HARDDISK 1.0> at ata5-master SATA300
Trying to mount root from ufs:/dev/ad0s1a
This allowed me to identify my virtual disks. I then created a mirror based on the command-line example from my first post. The resulting ~1.72GB volume suggests it is right...

Code:
freebsd-test# zpool create tank mirror ad4 ad6 mirror ad8 ad10
freebsd-test# zpool list
NAME   SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
tank  1.72G    75K  1.72G     0%  ONLINE  -
I then used the following to check the config, which AFAICT has set up the mirror properly using all four disks (i.e. two pairs of mirrors, RAID10):

Code:
freebsd-test# zpool status tank
  pool: tank
 state: ONLINE
 scrub: none requested
config:

	NAME        STATE     READ WRITE CKSUM
	tank        ONLINE       0     0     0
	  mirror    ONLINE       0     0     0
	    ad4     ONLINE       0     0     0
	    ad6     ONLINE       0     0     0
	  mirror    ONLINE       0     0     0
	    ad8     ONLINE       0     0     0
	    ad10    ONLINE       0     0     0

errors: No known data errors
Still tinkering, will update again later.
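One refinement worth noting (my addition, not something from the posts above): giving the disks GEOM labels before building the pool means the pool still imports cleanly if the adX numbering ever changes, e.g. after moving disks between controllers. A sketch using the device names above; the label names disk01..disk04 are arbitrary:

```shell
# Label each data disk once (the label is stored on the disk itself)
glabel label disk01 /dev/ad4
glabel label disk02 /dev/ad6
glabel label disk03 /dev/ad8
glabel label disk04 /dev/ad10

# Build the pool on the labels instead of the raw devices
zpool create tank mirror label/disk01 label/disk02 \
                  mirror label/disk03 label/disk04
```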
Old 27th February 2011
DraconianTimes

Config to hardware, 6+ months OK

About six months ago I applied the ZFS config I described above to the actual hardware, running FreeBSD 8.1. Current usage is:
Code:
Filesystem        Size    Used   Avail Capacity  Mounted on
/dev/ad1s1a       3.9G    266M    3.3G     7%    /
devfs             1.0K    1.0K      0B   100%    /dev
/dev/ad1s1d       9.7G     12K    8.9G     0%    /tmp
/dev/ad1s1e        58G    410M     53G     1%    /usr
/dev/ad1s1f       150G    116G     22G    84%    /var
tank              689G     22K    689G     0%    /tank
tank/Backups      689G     18K    689G     0%    /tank/Backups
tank/MyStuff      690G    1.1G    689G     0%    /tank/MyStuff
tank/Warehouse    1.6T    909G    689G    57%    /tank/Warehouse
My experience with the setup has been very good, with NFS access provided to a variety of machines over my network. The only issue I've found is that the system can be quite slow at times; I suspect this is ZFS overhead, but I am still investigating. That said, I've had no crashes or errors. I have been keeping monthly rsync'ed backups to several large external hard disks, just in case.
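As an aside (not from the original post): on FreeBSD, ZFS can manage the NFS exports itself through the sharenfs property, with mountd(8) reading them from /etc/zfs/exports. The dataset and network below are only examples:

```shell
# Export one dataset read-write to the local network
zfs set sharenfs="-maproot=root -network 192.168.0.0/24" tank/Warehouse

# Confirm the export is visible
showmount -e localhost
```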

Next steps are to set up some automated monitoring of the zpool's health and mail the results out to me nightly.
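By way of a sketch of that check (the pool name tank and the recipient root are placeholders), a small script around `zpool status -x` could do it:

```shell
#!/bin/sh
# Nightly zpool health report -- run from cron or periodic(8).
# Pool name and mail recipient are assumptions; adjust to taste.
POOL=tank
RCPT=root

# healthy() returns 0 when the given `zpool status -x` output
# reports the pool healthy, 1 otherwise
healthy() {
    case "$1" in
        *"is healthy"*) return 0 ;;
        *)              return 1 ;;
    esac
}

status=$(zpool status -x "$POOL" 2>&1)
if healthy "$status"; then
    subject="ZFS OK: $POOL"
else
    subject="ZFS ALERT: $POOL"
fi
printf '%s\n' "$status" | mail -s "$subject on $(hostname)" "$RCPT"
```

Dropped into /etc/periodic/daily/ or run from /etc/crontab with something like `0 3 * * * root /root/zfs-check.sh`.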
Old 28th February 2011
phoenix
Risen from the ashes
 
Join Date: May 2008
Posts: 699
Thanked 90 Times in 81 Posts

Quote:
Originally Posted by DraconianTimes View Post
I'd like to configure the data disks so that I have 2 lots of 1TB+750GB (1.75TB) mirrored, presenting 1.75TB of usable, mirrored storage. Do I create two zpools of 1TB+750GB, then another zpool over the top to mirror them?
You have to use the same size disks in each mirror vdev, so you would create one mirror vdev using the 1 TB disks and another mirror vdev using the 750 GB disks:
Code:
# zpool create mypoolname mirror disk01 disk02 mirror disk03 disk04
That will create a storage pool named mypoolname. In that pool will be two separate mirrors (disk01 and disk02; disk03 and disk04). The two mirrors will automatically be striped together, giving you the equivalent of a RAID10 setup.

Once the pool is set up, you then create your various filesystems using the zfs command.
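For example (the dataset names simply echo the layout DraconianTimes posted later; the property settings are optional suggestions):

```shell
zfs create mypoolname/Backups
zfs create mypoolname/Warehouse

# Optional per-dataset properties
zfs set compression=lzjb mypoolname/Backups   # lzjb is the lightweight choice on ZFSv15
zfs set quota=700G mypoolname/Warehouse

zfs list -r mypoolname
```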
__________________
Freddie

Help for FreeBSD: Handbook, FAQ, man pages, mailing lists.
Old 28th February 2011
phoenix

Quote:
Originally Posted by DraconianTimes View Post
My experience with the setup has been very good, with NFS access provided to a variety of machines over my network. The only thing I have found is that the system can be quite slow at times...
NFS uses a lot of synchronous writes. It's just the nature of the protocol. In order to speed up sync writes, you need to add a separate log device (SLOG or "log" vdev) to the pool. That way, sync writes are written to the log; the log says "written" and ZFS carries on; later, the data in the log is written out to the pool.

The only time data is read from the log device is during the boot process, to check for data that has not yet been written out to the pool.

Note: ZFS versions prior to v19 could not import a pool with a dead/missing log device, nor could they remove a log device, so you MUST use a mirrored log device in FreeBSD 7.x/8.x (which only have ZFSv15).

If you add a single log device to a ZFSv15 pool, and that log device dies, your pool will be unimportable. All the data is there, but you can no longer access it.
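Concretely, that means adding the log as a mirror of two devices; something like the following, where ad12 and ad14 stand in for a pair of fast (ideally SSD) devices:

```shell
zpool add tank log mirror ad12 ad14
zpool status tank    # a mirrored "logs" vdev should now be listed
```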


Content copyright © 2007-2010, the authors