If you’re looking to use live migration with your Solaris 11 kernel zones, you’ll need to get your zone onto shared storage. In Solaris zone terminology, shared storage is storage that can be accessed by multiple hosts using the same device names.
In this post, we’ll be using Fibre Channel storage and the built-in multipathing (MPxIO) drivers. Since the storage device name is derived from the target LUN’s World Wide Name (WWN), the device name will be the same regardless of which host accesses it.
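Before starting, it’s worth confirming that the LUN really does appear under the same WWN-based device name on every host. One quick sanity check is mpathadm (the WWN shown here matches the example LUN used later in this post; path counts will depend on your fabric):

```shell
# Run on each zone host; the logical unit name should match everywhere.
root@zonehost1:~# mpathadm list lu
        /dev/rdsk/c0t600EE665544332222FFDDAAB01000000d0s2
                Total Path Count: 2
                Operational Path Count: 2
```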
We’ll be using the simple and popular mirror, attach, and detach method to move our kernel zone’s rpool from the parent zone’s ZFS storage to a shared-storage LUN. First, let’s look at the existing zone’s configuration:
zonecfg -z myzone export
...
add device
set match=/dev/rdsk/c0t600ABDFEFDSADE87651287F901000000d0
set id=2
end
add device
set storage=dev:/dev/zvol/dsk/my_zone_pool/myzone/disk0
set bootpri=0
set id=0
end
Notice that our kernel zone already has one volume on shared storage (id=2), but its boot volume (id=0, bootpri=0) is on the parent zone’s ZFS storage. We’re going to move this boot volume to a dedicated LUN that can be accessed by multiple Solaris hosts. First, we need to identify the size of the LUN needed:
root@zonehost1:~# zfs get volsize my_zone_pool/myzone/disk0
NAME                       PROPERTY  VALUE  SOURCE
my_zone_pool/myzone/disk0  volsize   16G    local
We’ll need at least a 16GB volume to mirror our rpool. However, this is also a good time to expand how much storage is available to the pool, so we will allocate a larger volume. After we create the LUN and map it to our zone hosts, we need to make it available to the kernel zone. Take note of the newly assigned id number. You’ll also need to set a bootpri (here, bootpri=1) so that this volume can be mirrored into the existing rpool; we’ll come back later and change it to the primary boot device (bootpri=0):
root@zonehost1:~# zonecfg -z myzone
zonecfg:myzone> add device
zonecfg:myzone:device> set storage=dev:/dev/dsk/c0t600EE665544332222FFDDAAB01000000d0
zonecfg:myzone:device> set bootpri=1
zonecfg:myzone:device> info
device 2:
        match not specified
        storage: dev:/dev/dsk/c0t600EE665544332222FFDDAAB01000000d0
        id: 1
        bootpri: 1
zonecfg:myzone:device> end
zonecfg:myzone> commit
zonecfg:myzone> exit
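If zonecfg can’t resolve the storage URI, or you simply want to double-check before committing, verify that the zone host actually sees the new LUN. Piping from echo makes format exit without prompting (the disk index shown is illustrative):

```shell
# Confirm the LUN is visible on the zone host; grep for its WWN.
root@zonehost1:~# echo | format | grep 600EE665544332222FFDDAAB
       3. c0t600EE665544332222FFDDAAB01000000d0
```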
Next, apply your zone config changes to the running zone:
root@zonehost1:~# zoneadm -z myzone apply
zone 'myzone': Checking: Adding device storage=dev:/dev/dsk/c0t600EE665544332222FFDDAAB01000000d0
zone 'myzone': Applying the changes
Next, log in to the kernel zone and attach the new disk to the existing rpool disk:
root@myzone:~# zpool attach rpool c1d0 c1d1
Make sure to wait until resilver is done before rebooting.
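While the mirror resilvers, zpool status shows progress. Output will look roughly like this (sizes, rates, and timestamps are illustrative):

```shell
root@myzone:~# zpool status rpool
  pool: rpool
 state: ONLINE
  scan: resilver in progress since Mon Jun  1 10:02:28 2015
        4.12G scanned out of 12.5G at 105M/s, 0h1m to go
        4.12G resilvered, 33.0% done
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c1d0    ONLINE       0     0     0
            c1d1    ONLINE       0     0     0  (resilvering)

errors: No known data errors
```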
Use zpool status rpool to check on the resilver progress. Once the resilver is done, you can detach the original rpool volume:
zpool detach rpool c1d0
Now, return to the parent zone and remove the original device. While there, update the new volume to bootpri=0:
zonecfg:myzone> remove device id=0
zonecfg:myzone> select device id=1
zonecfg:myzone:device> set bootpri=0
zonecfg:myzone:device> end
zonecfg:myzone> commit
zonecfg:myzone> exit
root@zonehost1:~# zoneadm -z myzone apply
zone 'myzone': Checking: Removing device storage=dev:/dev/zvol/dsk/my_zone_pool/myzone/disk0
zone 'myzone': Checking: Modifying device storage=dev:/dev/dsk/c0t600EE665544332222FFDDAAB01000000d0
zone 'myzone': Applying the changes
Once you are satisfied that you won’t need to mirror back to the original volume for any reason, you can expand the rpool to use the newly available space (if you mirrored to a larger volume):
root@myzone:~# zpool list
NAME    SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
rpool  15.6G  12.5G  3.08G  80%  1.00x  ONLINE  -
root@myzone:~# zpool set autoexpand=on rpool
root@myzone:~# zpool list
NAME    SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
rpool  55.5G  12.5G  43.0G  22%  1.00x  ONLINE  -
root@myzone:~# zpool set autoexpand=off rpool
Generally, I would keep autoexpand set to off and only set it to on when you are performing a planned expansion.
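If you’d rather not toggle a pool-wide property at all, ZFS can also expand a single device in place with zpool online -e (device name taken from the mirror example above):

```shell
# Expand just the new device to its full LUN size, leaving autoexpand off.
root@myzone:~# zpool online -e rpool c1d1
```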
If your other zone hosts are configured to access the volumes, you can test your zone’s live migration with a dry run (the -n flag checks the migration without actually performing it):
zoneadm -z myzone migrate -n ssh://root@zonehost2
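Once the dry run passes, the same command without -n performs the actual live migration (target hostname from the example above):

```shell
zoneadm -z myzone migrate ssh://root@zonehost2
```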