HOWTO: FreeBSD ZFS Madness
by vermaden — 26th April 2012

0. This is SPARTA!

Some time ago I found a good, reliable way of using and installing FreeBSD and described it in my Modern FreeBSD Install [1] [2] HOWTO. Now, more than a year later, I come back with my experiences about that setup and a proposal of a newer and probably better way of doing it.

1. Introduction

Same as a year ago, I assume that You want to create a fresh installation of FreeBSD using one or more hard disks, both with (for laptops) and without GELI based full disk encryption.

This guide was written when FreeBSD 9.0 and 8.3 were available. It definitely works for 9.0, but I did not try all of this on the older 8.3; if You find some issues on 8.3, let me know and I will try to address them in this guide.

Earlier, I was not that confident about booting from a ZFS pool, but there is a very neat feature that made me decide that ZFS boot is now mandatory. If You just smiled, You know that I am thinking about the Boot Environments feature from Illumos/Solaris systems.

In case You are not familiar with the Boot Environments feature, check the Managing Boot Environments with Solaris 11 Express white paper [3]. Illumos/Solaris has the beadm(1M) utility [4], and while Philipp Wuensche wrote the manageBE script as a replacement [5], it uses the older style from the times when OpenSolaris (and SUN) were still having a great time.
I spent the last couple of days writing an up-to-date, FreeBSD compatible beadm replacement, and with some tweaks from today I just made it available at SourceForge [6] if You wish to test it. Currently it is about 200 lines long, so it should be pretty simple to take a look at. I tried to make it as compatible as possible with the 'upstream' version, along with some small improvements; it currently supports the basic functions like list, create, destroy and activate.

Code:
# beadm
usage:
  beadm subcommand cmd_options

  subcommands:

  beadm activate beName
  beadm create [-e nonActiveBe | beName@snapshot] beName
  beadm create beName@snapshot
  beadm destroy beName
  beadm destroy beName@snapshot
  beadm list
There are several subtle differences between my implementation and Philipp's one: he defines and then relies upon a ZFS property called freebsd:boot-environment=1 for each boot environment, while I do not set any additional ZFS properties. There is already the org.freebsd:swap property used for SWAP on FreeBSD, so we may use org.freebsd:be in the future, but that is just a thought, right now it is not used. My version also supports activating boot environments received with the zfs recv command from other systems (it just updates the appropriate /boot/zfs/zpool.cache file).

My implementation is also style compatible with the current Illumos/Solaris beadm(1M), as in the example below.
Code:
# beadm create -e default upgrade-test
Created successfully

# beadm list
BE           Active Mountpoint Space Policy Created
default      N      /          1.06M static 2012-02-03 15:08
upgrade-test R      -           560M static 2012-04-24 22:22
new          -      -             8K static 2012-04-24 23:40

# zfs list -r sys/ROOT
NAME                    USED  AVAIL  REFER  MOUNTPOINT
sys/ROOT                562M  8.15G   144K  none
sys/ROOT/default       1.48M  8.15G   558M  legacy
sys/ROOT/new              8K  8.15G   558M  none
sys/ROOT/upgrade-test   560M  8.15G   558M  none

# beadm activate default
Activated successfully

# beadm list
BE           Active Mountpoint Space Policy Created
default      NR     /          1.06M static 2012-02-03 15:08
upgrade-test -      -           560M static 2012-04-24 22:22
new          -      -             8K static 2012-04-24 23:40
The boot environments are located in the same place as in Illumos/Solaris, under pool/ROOT/environmentName.

2. Now You're Thinking with Portals

The main purpose of the Boot Environments concept is to make all risky tasks harmless, to provide an easy way back from possible trouble. Think about upgrading the system to a newer version, updating 30+ installed packages to their latest versions, testing software or various solutions before taking the final decision, and much more. All these tasks are now harmless thanks to the Boot Environments, but this is just the tip of the iceberg.

You can now move a desired boot environment to another machine, physical or virtual, and check how it will behave there, check hardware support on different hardware for example, or make a painless hardware upgrade. You may also clone Your desired boot environment and ... start it as a Jail for some more experiments, or move Your old physical server install into a FreeBSD Jail because it is not that heavily used anymore but still has to be available.

Another good example is a server that You have just created on Your laptop inside a VirtualBox virtual machine. After You finish the creation process and the tests, You may move this boot environment to the real server and put it into production, or even move it into a VMware ESX/vSphere virtual machine and use it there.

As You can see, the possibilities with Boot Environments are unlimited.

3. The Install Process

I created 3 possible schemes which should cover most demands; choose one and continue to the next step.

3.1. Server with Two Disks

I assume that this server has 2 disks and we will create a ZFS mirror across them, so if either of them dies the system will still work as usual. I also assume that these disks are ada0 and ada1. If You have SCSI/SAS drives there, they may be named da0 and da1 instead. The procedures below will wipe all data on these disks, You have been warned.

Code:
 1. Boot from the FreeBSD USB/DVD.
 2. Select the 'Live CD' option.
 3. login: root
 4. # sh
 5. # DISKS="ada0 ada1"
 6. # for I in ${DISKS}; do
    > NUMBER=$( echo ${I} | tr -c -d '0-9' )
    > gpart destroy -F ${I}
    > gpart create -s GPT ${I}
    > gpart add -t freebsd-boot -l bootcode${NUMBER} -s 128k ${I}
    > gpart add -t freebsd-zfs -l sys${NUMBER} ${I}
    > gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ${I}
    > done
 7. # zpool create -f -o cachefile=/tmp/zpool.cache sys mirror /dev/gpt/sys*
 8. # zfs set mountpoint=none sys
 9. # zfs set checksum=fletcher4 sys
10. # zfs set atime=off sys
11. # zfs create sys/ROOT
12. # zfs create -o mountpoint=/mnt sys/ROOT/default
13. # zpool set bootfs=sys/ROOT/default sys
14. # cd /usr/freebsd-dist/
15. # for I in base.txz kernel.txz; do
    > tar --unlink -xvpJf ${I} -C /mnt
    > done
16. # cp /tmp/zpool.cache /mnt/boot/zfs/
17. # cat << EOF >> /mnt/boot/loader.conf
    > zfs_load=YES
    > vfs.root.mountfrom="zfs:sys/ROOT/default"
    > EOF
18. # cat << EOF >> /mnt/etc/rc.conf
    > zfs_enable=YES
    > EOF
19. # :> /mnt/etc/fstab
20. # zfs umount -a
21. # zfs set mountpoint=legacy sys/ROOT/default
22. # reboot
After these instructions and a reboot, we have these GPT partitions available; this example is on a 512 MB disk.

Code:
# gpart show
=>     34  1048509  ada0  GPT  (512M)
       34      256     1  freebsd-boot  (128k)
      290  1048253     2  freebsd-zfs  (511M)

=>     34  1048509  ada1  GPT  (512M)
       34      256     1  freebsd-boot  (128k)
      290  1048253     2  freebsd-zfs  (511M)

# gpart list | grep label
   label: bootcode0
   label: sys0
   label: bootcode1
   label: sys1

# zpool status
  pool: sys
 state: ONLINE
 scan: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        sys           ONLINE       0     0     0
          mirror-0    ONLINE       0     0     0
            gpt/sys0  ONLINE       0     0     0
            gpt/sys1  ONLINE       0     0     0

errors: No known data errors
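You can also double-check that the pool knows which dataset to boot from; this simply re-checks the bootfs property set in step 13, and the output will look roughly like this.

Code:
# zpool get bootfs sys
NAME  PROPERTY  VALUE             SOURCE
sys   bootfs    sys/ROOT/default  local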
3.2. Server with One Disk

If Your server configuration has only one disk, let's assume it is ada0, then You need different steps 5. and 7.; use these instead of the ones above.

Code:
5. # DISKS="ada0"
7. # zpool create -f -o cachefile=/tmp/zpool.cache sys /dev/gpt/sys*
All other steps are the same.
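
A nice side effect of this layout is that You do not need to reinstall if You add a second disk later, You can simply turn the single-disk pool into a mirror. A minimal sketch, assuming the new disk shows up as ada1 and You partition it the same way as in step 6. of the two-disk scheme:

Code:
# gpart destroy -F ada1
# gpart create -s GPT ada1
# gpart add -t freebsd-boot -l bootcode1 -s 128k ada1
# gpart add -t freebsd-zfs -l sys1 ada1
# gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada1
# zpool attach sys gpt/sys0 gpt/sys1
The pool will then resilver onto the new disk; You can watch the progress with zpool status.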

3.3. Road Warrior Laptop

The procedure is quite different for the laptop because we will use the full disk encryption mechanism provided by GELI and then set up the ZFS pool on top of it. It is not currently possible to boot from a ZFS pool that sits on top of an encrypted GELI provider, so we will use a setup similar to the Server with ... one, but with an additional local pool for the /home and /root partitions. It will be password based and You will be asked to type in that password at every boot. The install process is generally the same, with new instructions added for the GELI encrypted local pool.

Code:
 1. Boot from the FreeBSD USB/DVD.
 2. Select the 'Live CD' option.
 3. login: root
 4. # sh
 5. # DISKS="ada0"
 6. # for I in ${DISKS}; do
    > NUMBER=$( echo ${I} | tr -c -d '0-9' )
    > gpart destroy -F ${I}
    > gpart create -s GPT ${I}
    > gpart add -t freebsd-boot -l bootcode${NUMBER} -s 128k ${I}
    > gpart add -t freebsd-zfs -l sys${NUMBER} -s 10G ${I}
    > gpart add -t freebsd-zfs -l local${NUMBER} ${I}
    > gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ${I}
    > done
 7. # zpool create -f -o cachefile=/tmp/zpool.cache sys /dev/gpt/sys0
 8. # zfs set mountpoint=none sys
 9. # zfs set checksum=fletcher4 sys
10. # zfs set atime=off sys
11. # zfs create sys/ROOT
12. # zfs create -o mountpoint=/mnt sys/ROOT/default
13. # zpool set bootfs=sys/ROOT/default sys
14. # geli init -b -s 4096 -e AES-CBC -l 128 /dev/gpt/local0
15. # geli attach /dev/gpt/local0
16. # zpool create -f -o cachefile=/tmp/zpool.cache local /dev/gpt/local0.eli
17. # zfs set mountpoint=none local
18. # zfs set checksum=fletcher4 local
19. # zfs set atime=off local
20. # zfs create local/home
21. # zfs create -o mountpoint=/mnt/root local/root
22. # cd /usr/freebsd-dist/
23. # for I in base.txz kernel.txz; do
    > tar --unlink -xvpJf ${I} -C /mnt
    > done
24. # cp /tmp/zpool.cache /mnt/boot/zfs/
25. # cat << EOF >> /mnt/boot/loader.conf
    > zfs_load=YES
    > geom_eli_load=YES
    > vfs.root.mountfrom="zfs:sys/ROOT/default"
    > EOF
26. # cat << EOF >> /mnt/etc/rc.conf
    > zfs_enable=YES
    > EOF
27. # :> /mnt/etc/fstab
28. # zfs umount -a
29. # zfs set mountpoint=legacy sys/ROOT/default
30. # zfs set mountpoint=/home local/home
31. # zfs set mountpoint=/root local/root
32. # reboot
After these instructions and a reboot, we have these GPT partitions available; this example is on a 4 GB disk (which is why the sys partition here is only 1 GB instead of the 10 GB from step 6.).

Code:
# gpart show
=>     34  8388541  ada0  GPT  (4.0G)
       34      256     1  freebsd-boot  (128k)
      290  2097152     2  freebsd-zfs  (1.0G)
  2097442  6291133     3  freebsd-zfs  (3G)

# gpart list | grep label
   label: bootcode0
   label: sys0
   label: local0

# zpool status
  pool: local
 state: ONLINE
 scan: none requested
config:

        NAME              STATE     READ WRITE CKSUM
        local             ONLINE       0     0     0
          gpt/local0.eli  ONLINE       0     0     0

errors: No known data errors

  pool: sys
 state: ONLINE
 scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        sys         ONLINE       0     0     0
          gpt/sys0  ONLINE       0     0     0

errors: No known data errors
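One extra step worth considering for the laptop setup, not part of the original procedure: back up the GELI metadata of the local0 provider, so the encrypted pool can still be attached if that metadata sector ever gets damaged. The backup file name below is just an example; keep a copy somewhere off the laptop as well, it can later be put back with geli restore.

Code:
# geli backup /dev/gpt/local0 /boot/local0.eli.backup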
4. Basic Setup after Install

1. Login as root with empty password.
login: root
password: [ENTER]


2. Create initial snapshot after install.
# zfs snapshot -r sys/ROOT/default@install

3. Set new root password.
# passwd

4. Set machine's hostname.
# echo hostname=hostname.domain.com >> /etc/rc.conf

5. Set proper timezone.
# tzsetup

6. Add some swap space.
If You used the Server with ... type, then use this to add swap.

Code:
# zfs create -V 1G -o org.freebsd:swap=on \
                   -o checksum=off \
                   -o sync=disabled \
                   -o primarycache=none \
                   -o secondarycache=none sys/swap
# swapon /dev/zvol/sys/swap
If You used the Road Warrior Laptop one, then use this one below; this way the swap space will also be encrypted.

Code:
# zfs create -V 1G -o org.freebsd:swap=on \
                   -o checksum=off \
                   -o sync=disabled \
                   -o primarycache=none \
                   -o secondarycache=none local/swap
# swapon /dev/zvol/local/swap
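Whichever variant You used, You can verify that the new swap device is in use; the output will look roughly like this (on the laptop setup the device will be /dev/zvol/local/swap).

Code:
# swapinfo
Device          1K-blocks     Used    Avail Capacity
/dev/zvol/sys/swap 1048576        0  1048576     0%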
7. Create snapshot called configured or production.
After You have configured Your fresh FreeBSD system and added the needed packages and services, create a snapshot called configured or production, so if You mess something up, You can always go back in time to bring the working configuration back.

# zfs snapshot -r sys/ROOT/default@configured
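
Should You later need that escape hatch, one way back (a sketch using the beadm syntax from section 1.; the restored name is just an example) is to create a new boot environment from that snapshot and boot into it.

Code:
# beadm create -e default@configured restored
Created successfully
# beadm activate restored
Activated successfully
# shutdown -r now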

5. Enable Boot Environments

Here are some simple instructions on how to download and enable the beadm command line utility for easy Boot Environments administration.

Code:
# fetch https://downloads.sourceforge.net/project/beadm/beadm -o /usr/sbin/beadm
# chmod +x /usr/sbin/beadm
# rehash
# beadm list
BE      Active Mountpoint Space Policy Created
default NR     /           592M static 2012-04-25 02:03
6. WYSIWTF

Now that we have a working ZFS-only FreeBSD system, here are some examples of what You can do with this type of installation and, of course, with the Boot Environments feature.

6.1. Create New Boot Environment Before Upgrade

1. Create new environment from the current one.
# beadm create upgrade
Created successfully


2. Activate it.
# beadm activate upgrade
Activated successfully


3. Reboot into it.
# shutdown -r now

4. Mess with it.

You are now free to do anything You like for the upgrade process, but even if You break everything, You still have the working default environment to fall back on.
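
If the upgrade does go wrong, getting back is just a matter of activating the old environment again and rebooting into it:
# beadm activate default
Activated successfully

# shutdown -r now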

6.2. Perform Upgrade within a Jail

This concept is about creating a new boot environment from the desired one, let's call it jailed, then starting that new environment inside a FreeBSD Jail and performing the upgrade there. After You have finished all tasks related to this upgrade and You are satisfied with the achieved results, shut down that Jail, activate the just upgraded boot environment called jailed and reboot into the upgraded system without any risk.

1. Create new boot environment called jailed.
# beadm create -e default jailed
Created successfully


2. Create /usr/jails directory.
# mkdir /usr/jails

3. Set mount point of new boot environment to /usr/jails/jailed dir.
# zfs set mountpoint=/usr/jails/jailed sys/ROOT/jailed

3.1. Set the new Jail dataset to not mount automatically (we will mount it by hand in the next step).
# zfs set canmount=noauto sys/ROOT/jailed

3.2. Mount new Jail dataset.
# zfs mount sys/ROOT/jailed

4. Enable FreeBSD Jails mechanism and the jailed Jail in /etc/rc.conf file.
# cat << EOF >> /etc/rc.conf
> jail_enable=YES
> jail_list="jailed"
> jail_jailed_rootdir="/usr/jails/jailed"
> jail_jailed_hostname="jailed"
> jail_jailed_ip="10.20.30.40"
> jail_jailed_devfs_enable="YES"
> EOF


5. Start the Jails mechanism.
# /etc/rc.d/jail start
Configuring jails:.
Starting jails: jailed.


6. Check if the jailed Jail started.
Code:
# jls
   JID  IP Address      Hostname                      Path
     1  10.20.30.40     jailed                        /usr/jails/jailed
7. Login into the jailed Jail.
# jexec 1 tcsh

8. PERFORM ACTUAL UPGRADE.
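
Just as an illustration of step 8. (one possibility only, not part of the original procedure): a binary update of that environment's userland could also be driven from the host, pointing freebsd-update at the Jail's root directory.
# freebsd-update -b /usr/jails/jailed fetch install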

9. Stop the jailed Jail.
# /etc/rc.d/jail stop
Stopping jails: jailed.


10. Disable Jails mechanism in /etc/rc.conf file.
# sed -i '' -E s/"^jail_enable.*$"/"jail_enable=NO"/g /etc/rc.conf

11. Activate just upgraded jailed boot environment.
# beadm activate jailed
Activated successfully


12. Reboot into the upgraded system.
# shutdown -r now

6.3. Import Boot Environment from Other Machine

Let's assume that You need to upgrade or do some major modifications to one of Your servers. You will then create a new boot environment from the default one, move it to another 'free' machine, perform these tasks there, and after everything is done, move the modified boot environment back onto the production machine without any risk. You may as well transport that environment onto Your laptop/workstation and upgrade it in a Jail as in section 6.2. of this guide.

1. Create new environment on the production server.
# beadm create upgrade
Created successfully.


2. Snapshot the upgrade environment and send it to the test server (zfs send works on snapshots, so we create one first; its name is just an example).
# zfs snapshot sys/ROOT/upgrade@sendme
# zfs send sys/ROOT/upgrade@sendme | ssh TEST zfs recv -u sys/ROOT/upgrade

3. Activate the upgrade environment on the test server.
# beadm activate upgrade
Activated successfully.


4. Reboot into the upgrade environment on the test server.
# shutdown -r now

5. PERFORM ACTUAL UPGRADE AFTER REBOOT.

6. Send the upgraded upgrade environment back onto the production server. The stale copy of sys/ROOT/upgrade on the production server has to be destroyed first, because zfs recv will not overwrite an existing dataset; again the snapshot name is just an example.
# ssh PRODUCTION zfs destroy -r sys/ROOT/upgrade
# zfs snapshot sys/ROOT/upgrade@done
# zfs send sys/ROOT/upgrade@done | ssh PRODUCTION zfs recv -u sys/ROOT/upgrade

7. Activate upgraded upgrade environment on the production server.
# beadm activate upgrade
Activated successfully.


8. Reboot into the upgrade environment on the production server.
# shutdown -r now


7. References

[1] http://forums.freebsd.org/showthread.php?t=10334
[2] http://forums.freebsd.org/showthread.php?t=12082
[3] http://docs.oracle.com/cd/E19963-01/pdf/820-6565.pdf
[4] http://docs.oracle.com/cd/E19963-01/.../beadm-1m.html
[5] http://anonsvn.h3q.com/projects/free.../wiki/manageBE
[6] https://sourceforge.net/projects/beadm/


The last part of the HOWTO remains the same as a year ago ...

You can now add your users, services and packages as usual on any FreeBSD system, have fun ;)