How to rebuild a Solaris 10 server using ZFS Root Pool Snapshots

Solaris 10 provides the ability to recover from a disk drive failure using ZFS root pool snapshots. A snapshot can also be used to quickly rebuild a new server in the event of a disaster. A snapshot is different from a flash archive, which can also be used to rebuild a server: if a flash archive contains an image of a ZFS filesystem, it can only be restored via a JumpStart server, while a flash archive of a UFS filesystem can be restored after booting from the Solaris 10 installation CD.

I will explain how to create a ZFS root pool snapshot and the steps to restore it to new hardware. This is a tried and tested procedure that I created; I could not find complete documentation for it on the internet, and it took me a few days to figure out.

First, create an NFS share on the remote server that will store the snapshots. Let’s call this server remote-server.

For example, create a filesystem.

# zfs create rpool/snaps

Share rpool/snaps.

# zfs set sharenfs=rw rpool/snaps
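
By default sharenfs=rw opens the share to any host. If you want to limit access to the production server only, sharenfs also accepts an NFS access list; a minimal sketch, assuming the hostname prod-server resolves on remote-server:

# zfs set sharenfs=rw=prod-server rpool/snaps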

Check that the share was created properly.

# share
-               /rpool/snaps   rw   ""
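
You can also confirm the share is visible over the network from another host with dfshares, which lists the filesystems an NFS server exports:

# dfshares remote-server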

Next, create a recursive snapshot of the root pool on the server you wish to back up; log on to it as root. For this exercise, let’s call this server prod-server.

List the current ZFS filesystems.

# zfs list
NAME                            USED  AVAIL  REFER  MOUNTPOINT
rpool                          7.43G  41.1G    97K  /rpool
rpool/ROOT                     4.43G  41.1G    21K  legacy
rpool/ROOT/s10s_u8wos_08a      4.43G  41.1G  4.36G  /
rpool/ROOT/s10s_u8wos_08a/var  69.1M  41.1G  69.1M  /var
rpool/dump                     1.00G  41.1G  1.00G  -
rpool/export                     44K  41.1G    23K  /export
rpool/export/home                21K  41.1G    21K  /export/home
rpool/swap                        2G  43.1G    16K  -

Create recursive snapshots of the root pool.

# zfs snapshot -r rpool@backup
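
If you intend to keep more than one backup on the NFS share, a date-stamped snapshot name avoids overwriting earlier ones. A variant using Bourne shell backtick substitution (the rest of this procedure assumes the plain name backup):

# zfs snapshot -r rpool@backup-`date +%Y%m%d`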

List the ZFS filesystems; you should now see the snapshots.

# zfs list
NAME                                   USED  AVAIL  REFER  MOUNTPOINT
rpool                                 7.43G  41.1G    97K  /rpool
rpool@backup                              0      -    97K  -
rpool/ROOT                            4.43G  41.1G    21K  legacy
rpool/ROOT@backup                         0      -    21K  -
rpool/ROOT/s10s_u8wos_08a             4.43G  41.1G  4.36G  /
rpool/ROOT/s10s_u8wos_08a@backup       244K      -  4.36G  -
rpool/ROOT/s10s_u8wos_08a/var         69.2M  41.1G  69.1M  /var
rpool/ROOT/s10s_u8wos_08a/var@backup  40.5K      -  69.1M  -
rpool/dump                            1.00G  41.1G  1.00G  -
rpool/dump@backup                         0      -  1.00G  -
rpool/export                            44K  41.1G    23K  /export
rpool/export@backup                       0      -    23K  -
rpool/export/home                       21K  41.1G    21K  /export/home
rpool/export/home@backup                  0      -    21K  -
rpool/swap                            2.00G  43.1G    16K  -
rpool/swap@backup                         0      -    16K  -

Send the root pool snapshot to remote-server.

# zfs send -Rv rpool@backup > /net/remote-server/rpool/snaps/rpool.backup
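
Before moving on, it is worth confirming that the stream file was written completely on remote-server; its size should be roughly the USED value reported for rpool by zfs list:

# ls -l /net/remote-server/rpool/snaps/rpool.backup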

On the server you wish to restore the snapshot to, boot from the Solaris 10 installation CD in single-user mode.

ok boot cdrom -s

Configure the NIC. The interface name (bge0 in this example) will vary with your hardware.

# ifconfig bge0 10.55.99.22 netmask 255.255.0.0 up
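
If remote-server is on a different subnet than the address you just configured, you will also need a route. The gateway address below is only an assumption for illustration:

# route add default 10.55.99.1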

Mount the NFS share from remote-server using the remote server’s IP address.

# mount -F nfs 10.55.99.33:/rpool/snaps /mnt
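
Confirm that the mount succeeded and the snapshot stream is visible:

# df -k /mnt
# ls -l /mnt/rpool.backup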

If the disk drive is new, ensure that it is labeled SMI and that a slice 0 exists. This slice needs to be big enough to hold the restored data. Use the format -e command to create the slices and label the disk.

# format -e

Select the disk number, then run the partition and print subcommands. An SMI-labeled disk should look similar to the example below.

Current partition table (original):
Total disk cylinders available: 65533 + 2 (reserved cylinders)

Part      Tag    Flag     Cylinders        Size            Blocks
  0       root    wm       0 -  9262      49.43GB    (9263/0/0)   103652970
  1 unassigned    wm       0               0         (0/0/0)              0
  2     backup    wm       0 - 65532     349.67GB    (65533/0/0)  733314270
  3 unassigned    wm       0               0         (0/0/0)              0
  4 unassigned    wm       0               0         (0/0/0)              0
  5 unassigned    wm       0               0         (0/0/0)              0
  6 unassigned    wm       0               0         (0/0/0)              0
  7 unassigned    wm    9263 - 65532     300.25GB    (56270/0/0)  629661300
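
Alternatively, if you have another disk that already has the desired layout (for example the surviving half of a mirror), you can copy its label instead of partitioning by hand. A sketch, assuming c0t1d0 is the source disk and both disks have the same geometry:

# prtvtoc /dev/rdsk/c0t1d0s2 | fmthard -s - /dev/rdsk/c0t0d0s2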

Create the ZFS root pool. Here c0t0d0 is the disk and the s0 at the end is slice 0.

# zpool create -f -o failmode=continue -R /a -m legacy -o cachefile=/etc/zfs/zpool.cache rpool c0t0d0s0

Check to see if the pool was created.

# zpool list
NAME    SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
rpool  49.2G  5.45G  43.8G   11%  ONLINE  -

Restore the root pool snapshot. This step will take some time depending on the size of the snapshot.

# cat /mnt/rpool.backup | zfs receive -Fdu rpool

Check to see if the restore was successful.

# zfs list
NAME                                   USED  AVAIL  REFER  MOUNTPOINT
rpool                                 5.43G   128G    97K  /a/rpool
rpool@backup                              0      -    97K  -
rpool/ROOT                            4.43G   128G    21K  legacy
rpool/ROOT@backup                         0      -    21K  -
rpool/ROOT/s10s_u8wos_08a             4.43G   128G  4.36G  /a
rpool/ROOT/s10s_u8wos_08a@backup          0      -  4.36G  -
rpool/ROOT/s10s_u8wos_08a/var         69.1M   128G  69.1M  /a/var
rpool/ROOT/s10s_u8wos_08a/var@backup      0      -  69.1M  -
rpool/dump                            1.00G   128G  1.00G  -
rpool/dump@backup                         0      -  1.00G  -
rpool/export                            44K   128G    23K  /a/export
rpool/export@backup                       0      -    23K  -
rpool/export/home                       21K   128G    21K  /a/export/home
rpool/export/home@backup                  0      -    21K  -
rpool/swap                              16K   128G    16K  -
rpool/swap@backup                         0      -    16K  -

Set the bootfs property on the root pool BE.

# zpool set bootfs=rpool/ROOT/s10s_u8wos_08a rpool
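
Verify that the property was set:

# zpool get bootfs rpool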

Determine the platform name; you will need it for the next step.

# uname -i
SUNW,SPARC-Enterprise

Install the boot blocks on the new disk, inserting the result of uname -i into the path. On a SPARC server.

# installboot -F zfs /usr/platform/SUNW,SPARC-Enterprise/lib/fs/zfs/bootblk /dev/rdsk/c0t0d0s0

On an x86 server. Note: I have only tested this procedure on a SPARC server, but it should work on an x86 server as well.

# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t0d0s0

You are done; reboot the system.

# init 6
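
Once the server is back up and you are happy with the rebuild, the backup snapshots can optionally be removed from the restored pool to reclaim space; keep them if you want a local rollback point:

# zfs destroy -r rpool@backup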

If the server does not boot up, ensure that the disk you chose is the default boot device. If you need to find out how to do that, see this article: http://www.gamescheat.ca/2010/03/changing-the-default-boot-device-on-a-sparc-server/.
