
How to mirror a ZFS root disk on a SPARC system.

First we check that everything looks good on the drive we are currently booted from. I would also like to note that there is no UFS on this system at all; we are booting entirely off the ZFS pool rpool.

# zpool status
pool: rpool
state: ONLINE
scrub: none requested
config:

NAME        STATE     READ WRITE CKSUM
rpool       ONLINE       0     0     0
  c0t0d0s0  ONLINE       0     0     0

errors: No known data errors

Now verify that the two disks you want to mirror look the same at the slice level.

# format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
0. c0t0d0 <ST320011A cyl 38790 alt 2 hd 16 sec 63>
/pci@1f,0/ide@d/dad@0,0
1. c0t2d0 <DEFAULT cyl 38790 alt 2 hd 16 sec 63>
/pci@1f,0/ide@d/dad@2,0
Specify disk (enter its number): 0
selecting c0t0d0
[disk formatted, no defect list found]
/dev/dsk/c0t0d0s0 is part of active ZFS pool rpool. Please see zpool(1M).
/dev/dsk/c0t0d0s2 is part of active ZFS pool rpool. Please see zpool(1M).

FORMAT MENU:
disk       - select a disk
type       - select (define) a disk type
partition  - select (define) a partition table
current    - describe the current disk
format     - format and analyze the disk
repair     - repair a defective sector
show       - translate a disk address
label      - write label to the disk
analyze    - surface analysis
defect     - defect list management
backup     - search for backup labels
verify     - read and display labels
save       - save new disk/partition definitions
volname    - set 8-character volume name
!<cmd>     - execute <cmd>, then return
quit
format> ver

Primary label contents:

Volume name = <        >
ascii name  = <ST320011A cyl 38790 alt 2 hd 16 sec 63>
pcyl        = 38792
ncyl        = 38790
acyl        =    2
nhead       =   16
nsect       =   63
Part      Tag    Flag     Cylinders         Size            Blocks
0       root    wm       0 - 38789       18.64GB    (38790/0/0) 39100320
1 unassigned    wm       0                0         (0/0/0)            0
2     backup    wm       0 - 38789       18.64GB    (38790/0/0) 39100320
3 unassigned    wm       0                0         (0/0/0)            0
4 unassigned    wm       0                0         (0/0/0)            0
5 unassigned    wm       0                0         (0/0/0)            0
6 unassigned    wm       0                0         (0/0/0)            0
7 unassigned    wm       0                0         (0/0/0)            0

format> disk 1
selecting c0t2d0
[disk formatted, no defect list found]
format>

Notice that both disks have cylinders 0 - 38789, so the labels match and the disks look good.
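If the second disk had not already carried a matching SMI label, the usual trick is to copy the VTOC over from the boot disk. A minimal sketch, assuming the same device names as above:

# prtvtoc /dev/rdsk/c0t0d0s2 | fmthard -s - /dev/rdsk/c0t2d0s2

prtvtoc prints the label from the whole-disk slice (s2), and fmthard -s - writes that same table onto the target disk. With the labels in agreement, we can attach the second disk.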

# zpool attach rpool c0t0d0s0 c0t2d0s0
#
# zpool status
pool: rpool
state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
scrub: resilver in progress for 0h0m, 3.28% done, 0h24m to go
config:

NAME          STATE     READ WRITE CKSUM
rpool         ONLINE       0     0     0
  mirror      ONLINE       0     0     0
    c0t0d0s0  ONLINE       0     0     0
    c0t2d0s0  ONLINE       0     0     0

errors: No known data errors

Wow, that was easy; no need to set up metadevices or state databases or anything like that. The word "mirror" in the zpool status output means the disks are mirrored, not striped. If you have multiple ZFS pools you can run zpool status POOLNAME to look at just one of them.
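If you are new to ZFS, the distinction is worth spelling out: zpool attach is what builds a mirror, while zpool add would stripe a new top-level device into the pool. A quick sketch of the difference, using a made-up spare disk c0t3d0s0 for illustration:

# zpool attach rpool c0t0d0s0 c0t2d0s0
# zpool add rpool c0t3d0s0

The attach form puts c0t2d0s0 into a mirror with c0t0d0s0, which is what we just did. The add form would create a second striped vdev, which is not what we want here; ZFS should refuse to do it on a root pool anyway.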

Now let's try to destroy the disk, ha, ha, ha!

# dd if=/dev/random of=/dev/rdsk/c0t2d0s0
969+0 records in
969+0 records out

That dd run overwrote the start of c0t2d0s0, including the ZFS labels, with random data. If that disk sat under a conventional RAID tool such as Solstice DiskSuite (SDS) or even hardware RAID, the damage could end up on both disks: those products mirror at the block level with no checksums, so a resync can happily copy the random blocks back onto c0t0d0s0. ZFS checksums every block, so it always knows which side of the mirror holds the good data.
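Because every block is checksummed, you can also ask ZFS to verify the entire pool on demand. A scrub walks all of the data and repairs anything that fails its checksum using the good side of the mirror:

# zpool scrub rpool
# zpool status rpool

The scrub: line in the status output shows the progress, then reports the completion time and error count once it finishes.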

# zpool status
pool: rpool
state: DEGRADED
status: One or more devices could not be opened.  Sufficient replicas exist for
the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
see: http://www.sun.com/msg/ZFS-8000-2Q
scrub: resilver in progress for 0h1m, 10.52% done, 0h12m to go
config:

NAME          STATE     READ WRITE CKSUM
rpool         DEGRADED     0     0     0
  mirror      DEGRADED     0     0     0
    c0t0d0s0  ONLINE       0     0     0
    c0t2d0s0  UNAVAIL      0     0     0  cannot open

errors: No known data errors

Oh happy day: ZFS is already working on fixing, or resilvering, the disk that was written over with random data.

# zpool status
pool: rpool
state: DEGRADED
status: One or more devices could not be opened.  Sufficient replicas exist for
the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
see: http://www.sun.com/msg/ZFS-8000-2Q
scrub: resilver completed after 0h0m with 0 errors on Tue Apr 14 12:55:48 2009
config:

NAME          STATE     READ WRITE CKSUM
rpool         DEGRADED     0     0     0
  mirror      DEGRADED     0     0     0
    c0t0d0s0  ONLINE       0     0     0
    c0t2d0s0  UNAVAIL      0     0     0  cannot open

errors: No known data errors

Even though the resilver completed, the device still shows UNAVAIL: ZFS is angry because we wrote random data over it. Bring the disk back online so ZFS resilvers it again, and then reinstall the ZFS boot block, since zpool attach does not install one for us and we want to be able to boot from either half of the mirror.

# zpool online rpool c0t2d0s0

# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t2d0s0
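Finally, it is worth proving that the machine really can boot from the second disk. That is done from the OpenBoot ok prompt. The device path below is an assumption based on the format output earlier (/pci@1f,0/ide@d/dad@2,0, which OBP typically exposes as disk@2,0), so adjust it for your hardware:

ok nvalias rootmirror /pci@1f,0/ide@d/disk@2,0
ok boot rootmirror

You can also set boot-device so OBP falls back to the mirror automatically if the primary disk fails:

ok setenv boot-device disk rootmirror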

