My laptop's daily use is with OpenBSD/amd64-current, but I also use it when needed to build OpenBSD/i386-stable, as I have several i386 servers. Until now, that has meant multibooting, and I could not run my amd64-current applications while i386-stable was running.
No longer. Now I can build -stable, and -stable ports, in an i386 virtual machine. This frees up a second drive for other uses, and eliminates the operational delays I faced when running the two OSes in sequence.
Provisioning the host OpenBSD system
At this writing, vmm(4) operates only with Intel VT-capable processors.
- Intel VT may need to be enabled in the BIOS/EFI. The dmesg(8) output will confirm vmm(4) functionality if VT is available and enabled:
Code:
vmm0 at mainbus0: VMX/EPT
- The vmd(8) virtual machine management daemon must be started. This can be done manually or via the rc.d(8) daemon control system and the rcctl(8) command.
- Virtual machines may be configured and started automatically by a vm.conf(5) configuration file, or dynamically via the vmctl(8) command.
Virtual machines do not use a bootloader. The hypervisor loads the virtual machine kernel and passes control to the virtual machine. This means the operator does not see a boot> prompt. Should you wish to enter single-user mode, you'll need to do so after the OS is running, by sending a SIGTERM to init(8).
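As a sketch of that procedure, assuming you are root on the guest's console: signaling process 1 asks init(8) to bring the running system down to single-user mode.

```shell
# On the virtual machine's console, as root: ask init(8) to
# drop the running system to single-user mode.
kill -TERM 1
```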
A virtual serial port is used for console communication. Virtual machines do not have video cards.
Provisioning the virtual machine network
The most complex part of the configuration is networking. It can be simple, but there are many choices, and there are several restrictions around the use of DHCP, depending on whether one is using bridge(4) or a wireless network interface.
- I elected to give my host a permanent pseudo-NIC which could be used with an entire network of virtual machines, even though today I am using only one. (This is not necessary; I did so for personal convenience.)
- I decided the host would act as a router to the virtual machine subnet, and use NAT, so that both the virtual machines and the host share the same single IP address over any physically attached network.
- I chose to deploy a bridge(4), so that any tap(4) device could be attached to it by vmd(8). The bridge(4) would also be dynamically created, and always connected to the host network stack via a vether(4) NIC created at boot time.
- I deployed static addressing, rather than DHCP, as a matter of personal convenience.
The sysctl.conf(5) file enables IPv4 packet forwarding.
Code:
net.inet.ip.forwarding=1
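The sysctl.conf(5) entry takes effect at the next boot; to enable forwarding immediately on the running host, the same knob can be set with sysctl(8). A sketch, run as root:

```shell
# Enable IPv4 packet forwarding on the running system,
# without waiting for a reboot.
sysctl net.inet.ip.forwarding=1
```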
The pf.conf(5) file enables NAT.
Code:
match out on egress from !(egress) nat-to (egress)
The hostname.if(5) file
/etc/hostname.vether0 defines a permanent connection to the virtual machine subnet, whether there are any virtual machines running or not.
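As an illustration, /etc/hostname.vether0 might contain nothing more than the static address that the virtual machines will later use as their gateway (10.9.0.1 with a /24 netmask, in my case, as described below):

```shell
# /etc/hostname.vether0 -- bring up the virtual Ethernet device
# at boot with the virtual machine subnet's gateway address.
inet 10.9.0.1 255.255.255.0
```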
Provisioning a virtual machine
The operator must pre-define any disk drives to be used by a virtual machine. These are always raw image files, which may be managed, if needed, as devices on the host with vnconfig(8).
- I created a 10GB disk drive with vmctl(8).
$ vmctl create disk.drive -s 10g
- I started the vmd(8) daemon manually.
# rcctl -f start vmd
- I obtained an i386 bsd.rd RAMDISK installation kernel, and used it to start the virtual machine with vmctl(8) manually. I chose 1GB of RAM (though installation would have worked with less) and a single virtual NIC.
# vmctl start install -c -k bsd.rd -d disk.drive -i 1 -m 1g
At this point, I was presented with the RAMDISK dmesg and the "(I)nstall, (U)pgrade, or (S)hell?" prompt. I chose (I)nstall, and went on to provision the network on the host before answering any questions.
I created a bridge and added both the vether0 NIC I'd provisioned in advance and the tap0 NIC created by vmd when it started the virtual machine.
# ifconfig bridge0 add vether0 add tap0 up
Returning to the install script, I defined the virtual machine's vio0 network interface as 10.9.0.2, with a /24 (255.255.255.0) netmask, and set the gateway as 10.9.0.1, the address assigned to vether0 on the host. I also pointed the virtual machine to my DNS nameserver.
After the installation was complete, I halted the virtual machine with
# halt
on its console, and then stopped the virtual machine with
# vmctl stop install
on the host.
To run the installed system, I started a new virtual machine, this time with a local copy of the i386 GENERIC kernel file bsd.
# vmctl start i386 -c -k bsd -d disk.drive -i 1 -m 1g
I also needed to re-add the tap0 device to the bridge, as the first tap0 device had been destroyed when I stopped the install virtual machine, and a new tap0 was created with the new virtual machine.
# ifconfig bridge0 add tap0
I created a vm.conf(5) file, so that this virtual machine will start automatically when I elect to start vmd(8). This configuration assigns a permanent MAC address to the virtual machine's vio0 device. I did this to avoid any MAC address collisions with other virtual machines I may add in the future. It is my first permanently available (though only started when needed) virtual machine, so I selected a very simple "first" MAC address for it.
Code:
files=/home/vm/i386-stable/

vm i386 {
        memory 1g
        kernel $files bsd
        disk $files disk.drive
        interface tap { lladdr 00:00:00:00:00:01 switch localnet }
}

switch localnet {
        add vether0
}
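Before relying on the file, its syntax can be checked: like most OpenBSD daemons, vmd(8) has a configtest mode. A quick sketch, run as root:

```shell
# Parse /etc/vm.conf and report any errors,
# without actually starting vmd(8).
vmd -n
```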
With that configuration file, whenever vmd(8) is started, the virtual machine starts, and its tap(4) device is bridged with the host's vether0 device. I can ssh(1) in within a minute or two.
--------
This is not exactly a "how to" -- mostly because I have not covered the many possible network configurations I've tried since vmm(4) was announced. Some network connections are simpler than this, including just assigning an IP address to the tap(4) device to communicate on a subnet with a single virtual machine. Some can be far more complex, and some are currently works in progress that may become usable as we approach 6.1.