First create a 10 cylinder partition (with the free cylinders we left over in
step 1):

# format
AVAILABLE DISK SELECTIONS:
       0. c0t0d0
          /pci@1f,4000/scsi@3/sd@0,0
       1. c0t1d0
          /pci@1f,4000/scsi@3/sd@1,0
       .
       .
       .
Specify disk (enter its number): 0
selecting c0t0d0
[disk formatted]
format> p

In the end the partition table should look something like this:

partition> p
Current partition table (unnamed):
Total disk cylinders available: 4924 + 2 (reserved cylinders)

Part      Tag    Flag     Cylinders         Size            Blocks
  0       root    wm       0 - 4342         7.44GB    (4343/0/0) 15595713
  1       swap    wu    4343 - 4913      1001.20MB    (571/0/0)   2050461
  2     backup    wm       0 - 4923         8.43GB    (4924/0/0) 17682084
  3 unassigned    wm       0                0         (0/0/0)           0
  4 unassigned    wm       0                0         (0/0/0)           0
  5 unassigned    wm       0                0         (0/0/0)           0
  6 unassigned    wm       0                0         (0/0/0)           0
  7       root    wm    4914 - 4923        17.53MB    (10/0/0)      35910

Don't forget to label the disk!

partition> label
Ready to label disk, continue? y
partition> q

Is Solstice DiskSuite installed?

# metastat
metastat: wsmon01: there are no existing databases

This output indicates it IS installed. However, if you see:

# metastat
ksh: metastat: not found

it is NOT installed. To install SDS, follow these steps.

First place the Solaris 8 Software 2 of 2 CD in the drive:

# mount -F hsfs /dev/dsk/c1t6d0s0 /cdrom
# pkgadd -d /cdrom/Solaris_8/EA/products/DiskSuite_4.2.1/sparc/Packages

The following packages are available:
  1  SUNWlvma   Solaris Volume Management API's
                (sparc) 1.0,REV=2001.11.02.03.17
  2  SUNWlvmg   Solaris Volume Management Application
                (sparc) 1.0,REV=2001.11.14.03.19
  3  SUNWlvmr   Solaris Volume Management (root)
                (sparc) 1.0,REV=2001.11.14.03.19
  4  SUNWmdg    Solstice DiskSuite Tool
                (sparc) 4.2.1,REV=1999.11.04.18.29
  5  SUNWmdja   Solstice DiskSuite Japanese localization
                (sparc) 4.2.1,REV=1999.12.09.15.37
  6  SUNWmdnr   Solstice DiskSuite Log Daemon Configuration Files
                (sparc) 4.2.1,REV=1999.11.04.18.29
  7  SUNWmdnu   Solstice DiskSuite Log Daemon
                (sparc) 4.2.1,REV=1999.11.04.18.29
  8  SUNWmdr    Solstice DiskSuite Drivers
                (sparc) 4.2.1,REV=1999.12.03.10.00
  9  SUNWmdu    Solstice DiskSuite Commands
                (sparc) 4.2.1,REV=1999.11.04.18.29
 10  SUNWmdx    Solstice DiskSuite Drivers(64-bit)
                (sparc) 4.2.1,REV=1999.11.04.18.29

Select package(s) you wish to process (or 'all' to process
all packages). (default: all) [?,??,q]:

Install in this order, one by one: SUNWmdr, SUNWmdx, SUNWlvmr, SUNWmdu,
SUNWlvma, SUNWlvmg, SUNWmdg, SUNWmdnr and SUNWmdnu.

Verify the installation:

# pkginfo | grep md
system      SUNWmdg     Solstice DiskSuite Tool
system      SUNWmdnr    Solstice DiskSuite Log Daemon Configuration Files
system      SUNWmdnu    Solstice DiskSuite Log Daemon
system      SUNWmdr     Solstice DiskSuite Drivers
system      SUNWmdu     Solstice DiskSuite Commands
system      SUNWmdx     Solstice DiskSuite Drivers(64-bit)
# pkginfo | grep lv
system      SUNWlvma    Solaris Volume Management API's
system      SUNWlvmg    Solaris Volume Management Application
system      SUNWlvmr    Solaris Volume Management (root)

Reboot the system for the software install to take effect.
MAKE SURE YOU USE "reboot -- -r"!!!

# reboot -- -r

13A) Prepare the mirror disk

Copy the boot disk's partition table to the mirror disk:

# prtvtoc /dev/rdsk/c0t0d0s2 | fmthard -s - /dev/rdsk/c0t1d0s2

(repeat for any additional mirror disk, e.g. one at c0t2d0:
# prtvtoc /dev/rdsk/c0t0d0s2 | fmthard -s - /dev/rdsk/c0t2d0s2)

Create the metadb state database replicas (two on each disk's slice 7):

# metadb -a -f -c2 /dev/rdsk/c0t0d0s7 /dev/rdsk/c0t1d0s7
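Before building the mirrors it is worth confirming that the replicas actually
landed on both disks. A quick sanity check (the flags column varies from
system to system):

# metadb -i

You should see two replicas on /dev/dsk/c0t0d0s7 and two on
/dev/dsk/c0t1d0s7, each flagged "a" (active), followed by a legend explaining
the flag letters.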
Create the boot mirror.

NOTE: Along the way, if you see a message like this:

metainit: wseln01: c0t1d0s0: is swapped on

it means c0t1d0s0 is in use as a swap device, probably left over from the
install. We must first remove the swap. Confirm it first:

# swap -l | grep c0t1d0s0
/dev/dsk/c0t1d0s0   32,8   16   69047072   6904707

To remove it, make sure this line is in /etc/vfstab:

# grep c0t1d0s0 /etc/vfstab
/dev/dsk/c0t1d0s0   -   -   swap   -   no   -

Remove that line from /etc/vfstab and reboot:

# shutdown -y -g0 -i6

Continue with mirroring:

# metainit -f d10 1 1 c0t0d0s0
d10: Concat/Stripe is setup
# metainit d20 1 1 c0t1d0s0
d20: Concat/Stripe is setup
# metainit d30 -m d10
d30: Mirror is setup

Create the swap mirror:

# metainit -f d11 1 1 c0t0d0s1
d11: Concat/Stripe is setup
# metainit d21 1 1 c0t1d0s1
d21: Concat/Stripe is setup
# metainit d31 -m d11
d31: Mirror is setup

Edit the vfstab:

# cp /etc/vfstab /etc/vfstab.orig.kinscoe
# metaroot d30

Modify the swap line to look like this:

/dev/md/dsk/d31   -   -   swap   -   no   -

Restart the server so that root and swap now operate on the mirror set:

# lockfs -fa
# init 6

You should see messages similar to the ones below on startup:

WARNING: forceload of misc/md_trans failed
WARNING: forceload of misc/md_raid failed
WARNING: forceload of misc/md_hotspares failed
WARNING: forceload of misc/md_sp failed

You can safely ignore these. They appear because we have not defined a
"hotspare" disk for the RAID; since we will not be doing RAID 5 I did not
see the need for one.

AFTER REBOOT:

Attach the second submirrors to the mirrors:

# metattach d30 d20
d30: submirror d20 is attached
# metattach d31 d21
d31: submirror d21 is attached

Enable the mirror disk to be bootable:

# installboot /usr/platform/`uname -i`/lib/fs/ufs/bootblk /dev/rdsk/c0t1d0s0
# ls -l /dev/rdsk/c0t1d0s0
lrwxrwxrwx  1 root  root  45 Sep 11 18:42 /dev/rdsk/c0t1d0s0 ->
    ../../devices/pci@1f,4000/scsi@3/sd@1,0:a,raw

Verify the mirrors are synchronizing:

# metastat | grep "progress"
    Resync in progress: 21 % done

Shut down and apply nvram changes to support a secondary boot disk:

# shutdown -y -g0 -i0

We will create an nvram alias for the mirror disk using the device path
reported by the ls -l command above. What is misleading about the 64-bit
(PCI) architecture is that "sd" is no longer used, but rather "disk", so
check this address against the devalias command at the nvram prompt:

ok devalias
vx-rootdisk2   /pci@6,4000/scsi@4/disk@0,0:a
mirror         /pci@1f,4000/scsi@3/1,0:a
disk           /pci@1f,4000/scsi@3/disk@0,0
disk0          /pci@1f,4000/scsi@3/disk@0,0
disk1          /pci@1f,4000/scsi@3/disk@1,0
disk2          /pci@1f,4000/scsi@3/disk@2,0
disk3          /pci@1f,4000/scsi@3/disk@3,0
scsi           /pci@1f,4000/scsi@3
diskx0         /pci@1f,4000/scsi@2/disk@0,0
diskx1         /pci@1f,4000/scsi@2/disk@1,0
diskx2         /pci@1f,4000/scsi@2/disk@2,0
diskx3         /pci@1f,4000/scsi@2/disk@3,0
cdrom          /pci@1f,4000/scsi@2/disk@6,0:f
tape           /pci@1f,4000/scsi@2/tape@4,0
scsix          /pci@1f,4000/scsi@2

Since "disk1" matches up with our disk address we can use that.

ok nvalias mirror /pci@1f,4000/scsi@3/sd@1,0:a

(the sd@ target comes from the ls -l output above; on a system whose mirror
disk sits at a different target, adjust accordingly, for example:
ok nvalias mirror /pci@1f,4000/scsi@3/sd@8,0:a)

or, for 64-bit (PCI cards):

ok nvalias mirror /pci@1f,4000/scsi@3/disk@1,0

Test booting from the mirror:

ok boot mirror

If you see this error:

Can't open boot device

check your alias and make sure it is correct. You may need to use the
commands probe-scsi-all, probe-pci or probe-ide.

When you boot up it is normal to see a message similar to:

WARNING: md: d41: /dev/dsk/c2t0d0s0 needs maintenance

since you are now booting off the mirror and the mirror set is therefore
temporarily broken (out of sync).
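If metastat still reports a component in the "Needs maintenance" state once
both disks are back online, the component can be re-enabled so DiskSuite
resyncs it. A sketch, assuming the flagged component is c0t0d0s0 inside this
walkthrough's d30 mirror (substitute whatever metastat actually reports):

# metastat d30                    (find the component needing maintenance)
# metareplace -e d30 c0t0d0s0     (re-enable it; a resync starts)
# metastat | grep "progress"      (watch the resync complete)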
Now set it up so that the system will automatically find the first good disk.

From the unix prompt:

# eeprom boot-device="disk0:a disk1:a"

or from the ok prompt:

ok setenv boot-device disk0:a disk1:a
boot-device =         disk0:a disk1:a

Documentation:

Keep backups of your configuration in case of corruption. Regular use of
metastat, metastat -p, and prtvtoc can help.
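A minimal capture script along those lines, assuming /var/adm/doc as the
destination and the two disks used in this walkthrough (adjust both for your
host):

#!/bin/sh
# Snapshot the DiskSuite configuration so a mirror can be rebuilt by hand.
DIR=/var/adm/doc
DATE=`date +%Y%m%d`
metastat    > $DIR/metastat.$DATE       # full mirror/submirror state
metastat -p > $DIR/metastat-p.$DATE     # terse, md.tab-style format
metadb      > $DIR/metadb.$DATE         # state database replica locations
for d in c0t0d0 c0t1d0                  # boot disk and mirror disk
do
    prtvtoc /dev/rdsk/${d}s2 > $DIR/vtoc.$d.$DATE    # partition table
done

Run it from cron or by hand after any DiskSuite change.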