This chapter describes some hints and tips for using DiskSuite.
Use the following to proceed directly to the section that provides the information you need.
Creating a trans metadevice (UFS logging) is an easy way to increase availability of UFS. Here's a tip that makes efficient use of slices when using trans metadevices:
For more information on adding a state database replica or creating a trans metadevice, refer to Chapter 2, "Creating DiskSuite Objects."
Prestoserve
Prestoserve improves NFS server performance by caching synchronous disk writes in nonvolatile memory (NVRAM).
DiskSuite is fully compatible with Prestoserve, with the following restrictions.
The simple reason is that using Prestoserve together with mirrors introduces a single point of failure into the I/O subsystem, which is exactly what mirrors are designed to avoid. The use of Prestoserve lowers the MTBF of a mirror to approximately the same as a single disk.
Prestoserve cannot be used on trans metadevices. Using Prestoserve on a logging UFS can cause system hangs or panics. Prestoserve operates by redirecting I/O from a device to NVRAM. This redirection interferes with the communication protocol between a logging UFS and a metadevice.
The following steps describe how to load and enable Prestoserve for use with DiskSuite. Basically, you edit the /etc/system file so that the Prestoserve driver is not loaded automatically, and then load Prestoserve from the DiskSuite startup script after the DiskSuite driver.
-----------------
exclude: drv/pr
-----------------
-----------------------------------------------------------------
'start')
    rm -f /tmp/.mdlock
    if [ -x "$METAINIT" -a -c "$METADEV" ]; then
        #echo "$METAINIT -r"
        $METAINIT -r
        error=$?
        #echo "$error"
        case "$error" in
        0|1)    ;;
        66)     echo "Insufficient metadevice database replicas located."
                echo ""
                echo "Use metadb to delete databases which are broken."
                echo "Ignore any \"Read-only file system\" error messages."
                echo "Reboot the system when finished to reload the metadevice database."
                echo "After reboot, repair any broken database replicas which were deleted."
                /sbin/sulogin < /dev/console
                echo "Resuming system initialization. Metadevice database will remain stale."
                ;;
        *)      echo "Unknown $METAINIT -r failure $error."
                ;;
        esac
        modload /kernel/drv/pr
        presto -p /dev/null
    fi
    ;;
-----------------------------------------------------------------
Replace the following line:
-----------
presto -u
-----------
with the line:
---------------------------
presto -u /filesystem...
---------------------------
In this command, filesystem... is a list of every file system to be accelerated with Prestoserve. Do not include any of the following:
A poorly designed DiskSuite configuration can degrade performance. This section offers tips for getting good performance from DiskSuite.
For SPARCstorage Arrays, you should use drives from different chassis in a mirror, if possible. This type of configuration ensures that the mirror data survives a SPARCstorage Array chassis failure. If you cannot spread drives across different chassis, use drives in different trays. This enables you to take submirrors offline so that a tray can be spun down or removed for maintenance while the mirror stays online.
For example, consider a two-way mirror with each submirror composed of a concatenation of three SPARCstorage Array disks. One submirror would consist of three disks in tray 1, and the other submirror would consist of drives in tray 2. Using the command line interface to initialize this configuration would look like this:
-------------------------------------------------
# metainit d1 3 1 c0t0d0s2 1 c0t0d1s2 1 c0t0d2s2
d1: Concat/Stripe is setup
# metainit d2 3 1 c0t2d0s2 1 c0t2d1s2 1 c0t2d2s2
d2: Concat/Stripe is setup
# metainit d0 -m d1
d0: Mirror is setup
# metattach d0 d2
d0: Component d2 is attached
-------------------------------------------------
Strings t0 and t1 are contained in tray 1, t2 and t3 in tray 2, and t4 and t5 are in tray 3. Hence, in the above commands, to create submirrors in different trays, we use string t0 for one submirror, and t2 for the second submirror.
Make sure these files are backed up on a regular basis.
Read the guidelines for stripes and concatenations above.
In some cases, the geometric read option improves performance by minimizing head motion and access time. This option is most effective when there is only one slice per disk, when only one process at a time is using the slice/file system, and when I/O patterns are highly sequential or when all accesses are read.
To change mirror options, refer to "How to Change a Mirror's Options (DiskSuite Tool)."
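You can also set the read option from the command line with the metaparam(1M) command. The following is a minimal sketch; the mirror name d10 is only an example.

---------------------------------
# metaparam -r geometric d10
---------------------------------

To return to the default policy, use metaparam -r roundrobin.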
Concatenated slices also differ in the sense that they do not have parity striped on any of the regions. Thus, the entire contents of the slice are available for data.
Any performance enhancements for large or sequential writes are lost when slices are concatenated.
If you need to repartition a disk drive, for example, after a disk replacement, you can create a script using the fmthard(1M) command to quickly recreate the VTOC (Volume Table of Contents) information on the disk.
------------------------------------------
# prtvtoc /dev/rdsk/c2t0d0s0 > /tmp/vtoc
------------------------------------------
In this example, the information for disk c2t0d0 is redirected to a file on disk.
--------------------------------------------
for i in 1 2 3 5
do
    fmthard -s /tmp/vtoc /dev/rdsk/c2t${i}d0s2
done
--------------------------------------------
You can set up quotas to limit the amount of disk space and number of inodes (roughly equivalent to the number of files) available to users. (This is a feature of Solaris, not of DiskSuite.) These quotas are activated automatically each time a file system is mounted.
To create a trans metadevice, refer to Chapter 2, "Creating DiskSuite Objects." For more information on quotas, refer to System Administration Guide, Volume II.
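As a rough sketch, enabling quotas on a UFS mounted on a metadevice might look like the following; the mount point /export/home and the user name fred are examples only. Adding rq to the mount options field of the file system's /etc/vfstab entry is what causes the quotas to be activated at mount time.

------------------------------------------------------------
# touch /export/home/quotas
# chmod 600 /export/home/quotas
# edquota fred
# quotacheck /export/home
# quotaon /export/home
------------------------------------------------------------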
This section describes some advanced uses (and limitations) of DiskSuite Tool.
Here are three tips to help manage screen real estate on the Metadevice Editor's canvas:
Setting filters within DiskSuite Tool on the Slice View and Disk View windows can help you quickly locate suitable slices for the task at hand.
If you have a system with many disks (and slices), searching for available slices of a certain size can be a chore. Using the Slice Filter window can save you time in this activity.
This task describes how to create a filter in the Slice View window for available slices larger than 200 MBytes, then drag and drop these slices to the Disk View window to see where they are located.
The Slice View window appears.
The Slice Filters window appears.
If necessary, change values in the Slice Filters window and click Apply to change the filtering scheme.
The Disk View window appears.
DiskSuite Tool uses the selected drop site's color for all slices dragged to the Disk View window. You can now make your slice selection (for example to create a submirror) following the considerations outlined in "General Guidelines."
This task shows how to use DiskSuite Tool to find a suitably sized replacement slice for an errored slice in a submirror.
Note - This approach is not limited to mirrors. You can use this task to find replacement slices for any type of metadevice.
The Disk View window appears.
The Disk View window colors the slices with a different color corresponding to the submirrors in the Mirror object. This helps you see where the slices are located, for example, across controllers.
The Slice View window appears.
The Slice Filters window appears.
One way to do this is to set up a filter that finds slices greater than a size that is slightly smaller than the errored slice. This will display a larger range of slices than if you set up a filter that searches for slices equal to the errored slice size.
If necessary, change values in the Slice Filters window and click Apply to change the filtering scheme.
DiskSuite Tool uses the selected drop site's color for all slices dragged to the Disk View window.
You can now make your slice selection for a DiskSuite object following the guidelines outlined in "General Guidelines." Pick a replacement that is large enough and that follows the mirror guidelines (on a different controller, or at least on a different disk).
Click inside the top of the Mirror object then click Commit. A mirror resync begins.
By default, DiskSuite Tool uses colors and fonts that are compatible with the OpenWindows environment.
DiskSuite Tool uses a variety of colors:
The X Window System RGB (Red, Green, Blue) color specification mechanism enables you to specify a nearly infinite variety of colors. Of course, many of these colors will appear similar, varying only slightly in shade or intensity.
To aid in selecting and specifying colors, the X Window System provides a standard default set of colors that you can specify by name instead of RGB values. This "database" of color names can be examined using the standard X utility showrgb. It shows the RGB values and a corresponding descriptive alias. For example:
----------------------------------
# showrgb
199  21 133     medium violet red
176 196 222     light steel blue
102 139 139     paleturquoise4
159 121 238     mediumpurple2
141 182 205     lightskyblue3
  0 238 118     springgreen2
255 160 122     light salmon
154 205  50     yellowgreen
178  58 238     darkorchid2
 69 139 116     aquamarine4
...
107 107 107     gray42
 71  71  71     gray28
 61  61  61     gray24
255 255 255     white
  0 205 205     cyan3
  0   0   0     black
----------------------------------
You can also examine the default color name database by looking at the /usr/openwin/lib/X11/rgb.txt file.
Unfortunately, there are no standard applications for browsing colors. If you don't have access to a public domain color browser, experiment by trial and error.
DiskSuite Tool's default colors are:
Table 8-1 DiskSuite Tool's Default Colors
------------------------------------
Color Type              Color
------------------------------------
Standard Foreground     black
Standard Background     gray
Canvas Background       gray66
Mapping Colors:
  mappingColor1         blue
  mappingColor2         green
  mappingColor3         magenta
  mappingColor4         cyan
  mappingColor5         purple
  mappingColor6         mediumseagreen
  mappingColor7         firebrick
  mappingColor8         tan
  mappingColor9         white
Status Colors:
  Critical              red
  Urgent                orange
  Attention             yellow
------------------------------------
DiskSuite Tool uses four different fonts:
The available fonts depend on which X Window System server you use to display the application. The standard X utility, xlsfonts, displays the available fonts on a server. For example:
-------------------------------------------------------
# xlsfonts
--courier-bold-o-normal--0-0-0-0-m-0-iso8859-1
--courier-bold-r-normal--0-0-0-0-m-0-iso8859-1
--courier-medium-o-normal--0-0-0-0-m-0-iso8859-1
--courier-medium-r-normal--0-0-0-0-m-0-iso8859-1
--symbol-medium-r-normal--0-0-0-0-p-0--symbol
-symbol-medium-r-normal--0-0-0-0-p-0-sun-fontspecific
-adobe-courier-bold-i-normal--0-0-0-0-m-0-iso8859-1
...
utopia-bolditalic
utopia-italic
utopia-regular
variable
vshd
vtbold
vtsingle
zapfchancery-mediumitalic
zapfdingbats
-------------------------------------------------------
Another helpful utility for displaying available fonts is xfontsel. Refer to the man pages for these utilities for more information.
DiskSuite Tool's default fonts all come from the Lucida font family:
Table 8-2 DiskSuite Tool's Default Fonts
--------------------------------------------
Font Type           Font
--------------------------------------------
Standard Font       lucidasans12
Mono-spaced Font    lucidasans-typewriter12
Bold Font           lucidasans-bold12
Small Font          lucidasans8
--------------------------------------------
DiskSuite Tool uses the X Window System's resource database mechanism to determine which fonts to use. The default resource specifications are:
Table 8-3 DiskSuite Tool's Default Font Resource Specifications
------------------------------------------------------------
Resource                            Font
------------------------------------------------------------
Metatool*fontList:                  lucidasans12
Metatool*smallFontList:             lucidasans8
Metatool*boldFontList:              lucidasans-bold12
Metatool*fixedFontList:             lucidasans-typewriter12
Metatool*XmList.fontList:           lucidasans-typewriter12
Metatool*Help*helpsubjs.fontlist:   lucidasans-typewriter12
Metatool*Help*helptext.fontlist:    lucidasans-typewriter12
------------------------------------------------------------
You can change DiskSuite Tool's default colors and fonts by using one of the following four methods.
---------------------------------
# metatool -xrm 'resource'
---------------------------------
-------------------------------------
# xrdb -merge path_to_.Xdefaults
-------------------------------------
This example changes the standard font to lucidasans16 for a single invocation of DiskSuite Tool.
-------------------------------------------------
# metatool -xrm 'Metatool*fontList: lucidasans16'
-------------------------------------------------
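To make such a change persistent across invocations, you can instead put the resource specifications in an X resource file (for example, $HOME/.Xdefaults) and load them with the xrdb -merge command shown above. A minimal sketch of such entries, using the resource names from Table 8-3, might be:

-------------------------------------------------
Metatool*fontList:        lucidasans16
Metatool*boldFontList:    lucidasans-bold12
-------------------------------------------------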
Using a naming convention for your metadevices can help with your DiskSuite administration, and enable you at a glance to easily identify the metadevice type. Here are a few suggestions:
Note - The metarename command enables you to reorganize your metadevice names. Refer to the metarename(1M) man page for more information.
In addition to renaming metadevices, DiskSuite's metarename command also provides the ability to switch "layered" metadevices. When used with the -x option, metarename switches (exchanges) the names of an existing layered metadevice and one of its subdevices. This includes a mirror and one of its submirrors, or a trans metadevice and its master device.
Note - You must use the command line to exchange metadevices. This functionality is currently unavailable in DiskSuite Tool, although you can rename a metadevice with either the command line or DiskSuite Tool.
The metadevice name switch can take place in both directions.
If you have an existing stripe, you can use the metarename -x command to create a compound metadevice. This includes creating a mirror from a concat/stripe, or a trans device with a metadevice as the master device.
This example begins with a concatenation, d1, with a mounted file system, and ends up with the file system mounted on a two-way mirror named d1.
------------------------------------------------
# metastat d1
d1: Concat/Stripe
    Size: 5600 blocks
    Stripe 0:
        Device      Start Block  Dbase
        c0t0d0s1         0       No
# metainit d2 1 1 c1t3d0s1
d2: Concat/Stripe is setup
# metainit -f d20 -m d1
d20: Mirror is setup
# umount /fs2
# metarename -x d20 d1
d20 and d1 have exchanged identities
# metastat d1
d1: Mirror
    Submirror 0: d20
      State: Okay
...
d20: Submirror of d1
    State: Okay
...
# metattach d1 d2
d1: submirror d2 is attached
# metastat d1
d1: Mirror
    Submirror 0: d20
      State: Okay
    Submirror 1: d2
      State: Okay
...
# mount /fs2
------------------------------------------------
The metastat command confirms that the concatenation d1 is in the "Okay" state. You use the metainit command to create a second concatenation (d2), and then to force (-f) the creation of mirror d20 from d1. You must unmount the file system before using metarename -x to switch d20 for d1; d1 becomes the top-level device (the mirror), which metastat confirms. You attach d2 as the second submirror, verify the state of the mirror with metastat, then remount the file system. Note that because the mount device for /fs2 did not change, you do not have to edit the /etc/vfstab file.
This example begins with a mirror, d1, with a mounted file system, and ends up with the file system mounted on a trans device named d1.
--------------------------------------
# metastat d1
d1: Mirror
    Submirror 0: d20
      State: Okay
    Submirror 1: d2
      State: Okay
...
# umount /fs2
# metainit d21 -t d1
d21: Trans is setup
# metarename -f -x d21 d1
d21 and d1 have exchanged identities
# metastat d1
d1: Trans
    State: Detached
    Size: 5600 blocks
    Master Device: d21
...
# metattach d1 d0
d1: logging device d0 is attached
# mount /fs2
--------------------------------------
The metastat command confirms that the mirror d1 is in the "Okay" state. You must unmount the file system before using the metainit command to create the trans device d21, with d1 as the master. The metarename -f -x command forces the switch of d21 and d1; d1 is now the top-level trans metadevice, as confirmed by the metastat command. A logging device d0 is attached with the metattach command. You then remount /fs2. Note that because the mount device for /fs2 has not changed (it is still d1), you do not have to edit the /etc/vfstab file.
If you have an existing mirror or trans metadevice, you can use the metarename -x command to remove the mirror or trans metadevice and keep data on an underlying metadevice. For a trans metadevice, as long as the master device is a metadevice (stripe/concat, mirror, or RAID5 metadevice), you keep data on that metadevice.
When you use metarename -x as part of this process, the mount point of the file system remains the same.
This example begins with a mirror, d1, containing a mounted file system, and ends up with the file system mounted on a stripe named d1.
--------------------------------------
# metastat d1
d1: Mirror
    Submirror 0: d20
      State: Okay
    Submirror 1: d2
      State: Okay
    Pass: 1
...
# umount /fs2
# metarename -x d1 d20
d1 and d20 have exchanged identities
# metastat d20
d20: Mirror
    Submirror 0: d1
      State: Okay
    Submirror 1: d2
      State: Okay
...
# metadetach d20 d1
d20: submirror d1 is detached
# metaclear -r d20
d20: Mirror is cleared
d2: Concat/Stripe is cleared
# mount /fs2
--------------------------------------
The metastat command confirms that mirror d1 is in the "Okay" state. The file system is unmounted before exchanging the mirror d1 and its submirror d20. This makes d20 the mirror, as confirmed by metastat. Next, d1 is detached from d20, then the mirror d20 and the other submirror, d2, are deleted. Finally, /fs2 is remounted. Note that because the mount device for /fs2 did not change, the /etc/vfstab file does not require editing.
This example begins with a trans metadevice, d1, containing a mounted file system, and ends up with the file system mounted on the trans metadevice's underlying master device, which will be d1.
----------------------------
# metastat d1
d1: Trans
    State: Okay
    Size: 5600 blocks
    Master Device: d21
    Logging Device: d0
d21: Mirror
    Submirror 0: d20
      State: Okay
    Submirror 1: d2
      State: Okay
...
d0: Logging device for d1
    State: Okay
    Size: 5350 blocks
# umount /fs2
# metadetach d1
d1: logging device d0 is detached
# metarename -f -x d1 d21
d1 and d21 have exchanged identities
# metastat d21
d21: Trans
    State: Detached
    Size: 5600 blocks
    Master Device: d1
d1: Mirror
    Submirror 0: d20
      State: Okay
    Submirror 1: d2
      State: Okay
# metaclear d21
# fsck /dev/md/dsk/d1
** /dev/md/dsk/d1
** Last Mounted on /fs2
** Phase 1 - Check Blocks and Sizes
** Phase 2 - Check Pathnames
** Phase 3 - Check Connectivity
** Phase 4 - Check Reference Counts
** Phase 5 - Check Cyl groups
FILE SYSTEM STATE IN SUPERBLOCK IS WRONG; FIX? y
3 files, 10 used, 2493 free (13 frags, 310 blocks, 0.5% fragmentation)
# mount /fs2
----------------------------------------------------------
The metastat command confirms that the trans metadevice, d1, is in the "Okay" state. The file system is unmounted before detaching the trans metadevice's logging device. The trans metadevice and its mirrored master device are exchanged using the -f (force) flag. Running metastat again confirms that the exchange occurred. The trans metadevice and the logging device (if desired) are cleared, in this case, d21 and d0, respectively. Next, the fsck command is run on the mirror, d1, and the prompt is answered with a y. After the fsck command is done, the file system is remounted. Note that because the mount device for /fs2 did not change, the /etc/vfstab file does not require editing.
This section describes a technique for regaining access to a metadevice that is defined on a failing controller, causing sporadic system panics. If there is another available controller on the system, the metadevice can in effect be "moved" to the new controller by moving the disks to the controller and redefining the metadevice. This technique does away with the need to back up and restore data to the metadevice.
This example consists of a disk that has two slices that are each part of two separate striped metadevices, d100 and d101, containing file systems /user6 and /maplib1, respectively. The affected controller was c5; the disks will be moved to a free controller (c4). This example also uses the md.tab file.
For example, unmount any file systems associated with the striped metadevice.
--------------------
# umount /user6
# umount /maplib1
--------------------
--------------------------------
# metaclear d100
d100: Concat/Stripe is cleared
# metaclear d101
d101: Concat/Stripe is cleared
--------------------------------
----------------------------------------------------------
Lines from the md.tab file before the change:

# Stripe /user6
/dev/md/dsk/d100 1 2 /dev/dsk/c5t0d0s3 /dev/dsk/c2t2d0s3
# Stripe /maplib1
/dev/md/dsk/d101 1 2 /dev/dsk/c5t0d0s0 /dev/dsk/c2t2d0s0

Lines after the change:

# Stripe /user6
/dev/md/dsk/d100 1 2 /dev/dsk/c4t0d0s3 /dev/dsk/c2t2d0s3
# Stripe /maplib1
/dev/md/dsk/d101 1 2 /dev/dsk/c4t0d0s0 /dev/dsk/c2t2d0s0
----------------------------------------------------------
------------------------------
# metainit d100
d100: Concat/Stripe is setup
# metainit d101
d101: Concat/Stripe is setup
------------------------------
Caution -
Don't run the newfs command on the metadevice or its associated file system. Doing so results in massive data loss and the need to restore from tape.

This section describes some tips regarding mirrors and their operation.
The following two tasks show how to change the interlace value of submirrors without destroying a mirror, and how to use a mirror for an online backup.
Use this task to change the interlace value of a mirror's underlying submirrors which are composed of striped metadevices. Using this method does away with the need to recreate the mirror and submirrors and restore data.
Note - To use the command line to perform this task, refer to the metadetach(1M), metainit(1M), and metattach(1M) man pages.
The high-level steps in this task are:
The object appears on the canvas.
If this is a two-way mirror, the mirror's status changes to "Urgent."
A mirror resync begins.
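A rough command line sketch of the same task follows; the mirror d0, the submirrors d1 and d2, the slices, and the 64 Kbyte interlace are examples only. Each submirror is detached, cleared, recreated with the new interlace, and reattached; wait for the resync to finish before repeating the steps for the other submirror.

-------------------------------------------------
# metadetach d0 d1
# metaclear d1
# metainit d1 1 3 c1t0d0s2 c1t1d0s2 c1t2d0s2 -i 64k
# metattach d0 d1
(wait for the resync to complete, then repeat for d2)
-------------------------------------------------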
Although DiskSuite is not meant to be a "backup product," it does provide a means for backing up mirrored data without unmounting the mirror or taking the entire mirror offline, and without halting the system or denying users access to data. This happens as follows: one of the submirrors is taken offline - temporarily losing the mirroring - and backed up; that submirror is then placed online and resynced as soon as the backup is complete.
You can use this procedure on any file system except root (/). Be aware that this type of backup creates a "snapshot" of an active file system. Depending on how the file system is being used when it is write-locked, some files and file content on the backup may not correspond to the actual files on disk.
Note - If you use these procedures regularly, put them into a script for ease of use.
The high-level steps in this procedure are:
A mirror that is in the "Maintenance" state should be repaired first.
--------------------------------------
# /usr/sbin/lockfs -w mount_point
--------------------------------------
Only a UFS needs to be write-locked. If the metadevice is set up as a raw device for database management software or some other specific application, running lockfs(1M) is not necessary. (You may, however, want to run the appropriate vendor-supplied utility to flush any buffers and lock access.)
--------------------
# metaoffline mirror submirror --------------------
In this command,
--------------------------
mirror       Is the metadevice name of the mirror.
submirror    Is the metadevice name of the submirror (metadevice) being taken offline.
--------------------------
Reads will continue to be made from the other submirror. The mirror will be out of sync as soon as the first write is made. This inconsistency is corrected when the offlined submirror is brought back online in Step 6.
There is no need to run fsck(1M) on the offlined file system.
--------------------------------------
# /usr/sbin/lockfs -u mount_point
--------------------------------------
You may need to perform necessary unlocking procedures based on vendor-dependent utilities used in Step 2 above.
Note - To ensure a proper backup, use the raw metadevice, for example, /dev/md/rdsk/d4. Using "rdsk" allows greater than 2 Gbyte access.
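For example, backing up the offlined submirror with ufsdump might look like the following; the submirror d4 and the tape device /dev/rmt/0 are examples only.

----------------------------------------
# ufsdump 0uf /dev/rmt/0 /dev/md/rdsk/d4
----------------------------------------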
-------------------
# metaonline mirror submirror -------------------
DiskSuite automatically begins resyncing the submirror with the mirror.
This example uses a mirror named d1, consisting of submirrors d2 and d3. d3 is taken offline and backed up while d2 stays online. The file system on the mirror is /home1.
----------------------------------------
# /usr/sbin/lockfs -w /home1
# metaoffline d1 d3
d1: submirror d3 is offlined
# /usr/sbin/lockfs -u /home1
(Perform backup using /dev/md/rdsk/d3)
# metaonline d1 d3
d1: submirror d3 is onlined
----------------------------------------
If a system with mirrors for root (/), /usr, and swap - the so-called "boot" file systems - is booted into single-user mode (boot -s), these mirrors and possibly all mirrors on the system will appear in the "Needing Maintenance" state when viewed with the metastat command. Furthermore, if writes occur to these slices, metastat shows an increase in dirty regions on the mirrors.
Though this appears potentially dangerous, there is no need for concern. The metasync -r command, which normally occurs during boot to resync mirrors, is interrupted when the system is booted into single-user mode. Once the system is rebooted, metasync -r will run and resync all mirrors.
If this is a concern, run metasync -r manually.
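For example, to start the resync of all mirrors by hand:

--------------------
# metasync -r
--------------------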
A hot spare pool can contain 0 to n hot spares. A hot spare pool can be associated with multiple submirrors and RAID5 metadevices. You can define one hot spare pool with a variety of different size slices, and associate it with all the submirrors or RAID5 metadevices. DiskSuite knows how to use the correctly sized hot spare when necessary.
Place hot spares in the same hot spare pool on different controllers, to avoid a single point of failure. In this respect, follow the same guidelines as you would for creating submirrors.
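As a rough command line sketch, the following creates a hot spare pool from slices on two different controllers and associates it with two submirrors; the pool name hsp001, the slices, and the submirror names d11 and d12 are examples only.

----------------------------------------------
# metainit hsp001 c2t2d0s2 c3t1d0s2
# metaparam -h hsp001 d11
# metaparam -h hsp001 d12
----------------------------------------------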
This section provides tips for configuring disksets.
Note - Currently, DiskSuite only supports disksets on SPARCstorage Array disks.
Configuring the hardware for use in a diskset configuration can be problematic. The disk drives must be symmetric; that is, the shared drives must have the same device name and number (controller/target/drive) on each host. This task explains how to configure this setup.
Note - On a set of new machines, where the hardware was pre-configured, the desired symmetry occurs by default. You do not need to perform this task.
You must configure device names before creating any metadevices in the diskset. Any other drives that are on non built-in controllers will also be affected.
This is best achieved by having the controllers for a given SPARCstorage Array in the same slots on identical processor models. If this is not possible, you must make sure that the order of the slots will probe out identically on both processors. Because the probing of the SBus is conducted in an orderly fashion, this can be achieved, but not easily. It is also recommended that slots be used in order from the lowest to the highest numbered slot, leaving all unused slots at the high end.
Note - The configuration system numbers controllers of the same type in sequence. In this case, "disk drive" is the type, so all controllers for disk drives will affect the order that devices are found. To this end, all the devices that are to be shared should probably be placed before any other disk controllers in the system to make sure that they will be found and accounted for in the correct order.
Once this has been done, you can do one of two things: a complete install on both host machines, or continue with this task. The latter is considerably faster.
------------------------------------------------------------------
# rm /etc/path_to_inst*
# reboot -- '-rav'
reboot: rebooted by root
syncing file systems... [1] done
rebooting...
Resetting ...
Rebooting with command: -rav
Boot device: /iommu/sbus/espdma@f,400000/esp@f,800000/sd@3,0
File and args: -rav
Enter filename [kernel/unix]:
Size: 253976+126566+39566 Bytes
Enter default directory for modules [/kernel /usr/kernel]:
SunOS Release 5.4 Generic [UNIX(R) System V Release 4.0]
Copyright (c) 1983-1995, Sun Microsystems, Inc.
Name of system file [etc/system]:
The /etc/path_to_inst on your system does not exist or is empty.
Do you want to rebuild this file [n]? y
Using default device instance data
root filesystem type [ufs]:
Enter physical name of root device
[/iommu@f,e0000000/sbus@f,e0001000/espdma@f,400000/esp@f,800000/sd@3,0:a]:
...
The system is ready.
console login: root
Password: <root-password>
# /usr/bin/rm -r /dev/*dsk/*
# /usr/sbin/disks
# ^D
------------------------------------------------------------------
Given that the hardware is set up correctly, this ensures that the software reflects that setup. Because the /etc/path_to_inst file is used to keep device instance numbers from changing (which generally happens when controllers are moved around), it is removed so that the controllers are renumbered in the correct order. The '-rav' option with reboot makes sure that the kernel interacts with the user during boot and performs a reconfiguration reboot. The removal of /dev/*dsk/* ensures that the symbolic links are created correctly when the /usr/sbin/disks program is run.
Note - Because the SPARCstorage Array controller contains a unique World Wide Name, which identifies it to Solaris, special procedures apply for SPARCstorage Array controller replacement. Contact your service provider for assistance.
If you want to change the size of your state database replicas in a diskset, the basic steps are adding two disks to the diskset, deleting one of the new disk's state database replicas, then deleting the other disk from the diskset. You then add the deleted disk back to the diskset, along with any other disks you want added to the diskset. The state database replicas will automatically resize themselves to the new size.
-----------------------------------------
# metadb -s rimtic -d c1t0d0s7
# metadb -s rimtic -a -l 2068 c1t0d0s7
# metaset -s rimtic -d c1t1d0
# metaset -s rimtic -a c1t1d0
# metadb -s rimtic
-----------------------------------------
This example assumes you have already added two disks to the diskset, rimtic, and that there is no data on the rest of the disk to which the replica will be added. The new size of the state database replica is 2068 blocks, as specified by the -l 2068 option. The metadb command confirms the new size of the state database replicas.