Adding a mirror disk to a ZFS pool aborts with a core dump: workaround

Thursday, December 6, 2012 at 8:39 AM


root@node:/ > zpool add pool mirror c5t600508B40006AB5F0000900002230000d0 c5t600508B40006B1440000700006090000d0

Abort (core dumped)
root@node:/ >

Setting NOINUSE_CHECK=1 bypasses the device in-use check where zpool crashes, allowing the add to complete:

root@node:/ > NOINUSE_CHECK=1 zpool add eqptpool mirror c5t600508B40006AB5F0000900002230000d0 c5t600508B40006B1440000700006090000d0

root@node:/ >

VxFS file system mounting issue

Wednesday, October 10, 2012 at 7:43 AM
node# mount /u001


vxfs mount: /dev/vx/dsk/rootdg/vol1 is corrupted. needs checking

We tried fsck, but it was not successful and we were unable to mount the /local file system.

node#

Solution:
Use format >> analyze >> read to repair the block 5125 error on disk c3t2d0, then re-run the file system check and mount:

# fsck -F vxfs -y /dev/vx/rdsk/rootdg/vol1

# mount /local

One-way mirror Needs maintenance state and storage connectivity loss issue

at 7:38 AM

If you encounter a one-way mirror issue with a storage disk, where the disk itself is healthy but the submirror stays in the Needs maintenance state, the procedure below clears the state and helps avoid the problem in the future.


Configuration and submirror status:



d8: Mirror

Submirror 2: d28
State: Needs maintenance
Pass: 1
Read option: roundrobin (default)
Write option: parallel (default)
Size: 1715358144 blocks


d28: Submirror of d8


State: Needs maintenance


Invoke: after replacing "Maintenance" components: metareplace d8 c2t0d1s0
Size: 1715365632 blocks
Stripe 0:
Device Start Block Dbase State Hot Spare
c2t0d1s0 0 No Last Erred
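On a host with many metadevices, eyeballing metastat output for broken components is error-prone. A minimal sketch that filters for the two failure states shown above (plain awk; the sample input here is taken from the listing above, where on a live system you would pipe `metastat` directly):

```shell
# Scan metastat output for components that need attention.
# Sample input is the listing above; in practice you would run:
#   metastat | awk '/Needs maintenance|Last Erred/'
metastat_out='d28: Submirror of d8
State: Needs maintenance
Device Start Block Dbase State Hot Spare
c2t0d1s0 0 No Last Erred'

printf '%s\n' "$metastat_out" | awk '/Needs maintenance|Last Erred/ { print }'
```

This prints only the lines flagging trouble, so a cron job can mail a non-empty result.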

# umount /local

# umount /test2


If the file systems cannot be unmounted, you will have to reboot into single-user mode before continuing with the commands below.

# fsck -F ufs -y /dev/md/rdsk/d215

# fsck -F ufs -y /dev/md/rdsk/d230

# metaclear d215

# metaclear d230

# metaclear -rf d8


# metainit d8 1 1 c2t0d1s0

# metainit d215 -p d9 -o 87666101 -b 743486015 -o 853202938 -b 76379648 -o 930764284 -b 182452224 -o 1117410814 -b 85983232
# metainit d230 -p d9 -o 46833323 -b 3774156 -o 843685879 -b 3807232
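For reference, each -o/-b pair on a soft-partition metainit line defines one extent (start offset and block count), and the soft partition's total size is the sum of the -b values. A quick sanity check with awk, using the d215 command line above as sample input:

```shell
# Sum the block counts (-b values) in a soft-partition metainit line.
# The total should match the Size line metastat reports for the device.
cmd='metainit d215 -p d9 -o 87666101 -b 743486015 -o 853202938 -b 76379648 -o 930764284 -b 182452224 -o 1117410814 -b 85983232'

printf '%s\n' "$cmd" |
  awk '{ for (i = 1; i <= NF; i++) if ($i == "-b") sum += $(i+1) } END { print sum }'
```

Comparing this total against the pre-failure metastat size catches a mistyped extent before any data is written.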


# mount /local

# mount /test2


d8: Mirror

Submirror 2: d28
State: Okay
Pass: 1
Read option: roundrobin (default)
Write option: parallel (default)
Size: 1715358144 blocks


d28: Submirror of d8

State: Okay
Size: 1715365632 blocks
Stripe 0:
Device Start Block Dbase State Hot Spare
c2t0d1s7 0 No Okay

Soft Partition error and recovery

Friday, July 13, 2012 at 7:18 AM
Error State:

d304: Soft Partition

Component: d7
State: Errored
Size: 6291456 blocks
Extent Start Block Block count
0 104867779 6291456

Action to clear the Error:
# metarecover d20 -p -m



d20: Soft Partition metadb configuration is valid


WARNING: You are about to overwrite portions of d20 with soft partition metadata. The extent headers will be written to match the existing metadb configuration. If the device was not previously setup with this configuration, data loss may result.


Are you sure you want to do this (yes/no)? yes


d20: Soft Partitions recovered from metadb

# metastat d304


d304: Soft Partition
Component: d20
State: Okay
Size: 6291456 blocks
Extent Start Block Block count
0 104867779 6291456
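A quick way to double-check a recovered soft partition is to verify that its extent block counts add up to the reported size. A sketch over a saved metastat listing (plain awk; the sample input mirrors the d304 output above):

```shell
# Cross-check a soft partition listing: the sum of the extent
# "Block count" column should equal the reported Size.
metastat_out='d304: Soft Partition
Component: d20
State: Okay
Size: 6291456 blocks
Extent Start Block Block count
0 104867779 6291456'

printf '%s\n' "$metastat_out" | awk '
  /^Size:/ { size = $2 }
  $1 ~ /^[0-9]+$/ && NF == 3 { sum += $3 }
  END { if (sum == size) print "OK"; else print "MISMATCH" }'
```

A MISMATCH here would indicate the metarecover rebuild did not reproduce the original extent map.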




d20: Mirror
Submirror 0: d17
State: Okay
Submirror 1: d27
State: Okay
Pass: 1
Read option: roundrobin (default)
Write option: parallel (default)
Size: 286678272 blocks



d17: Submirror of d20
State: Okay
Size: 286678272 blocks
Stripe 0: (interlace: 32 blocks)
Device Start Block Dbase State Hot Spare
c1t2d0s0 0 No Okay
c1t3d0s0 10176 No Okay



d27: Submirror of d20
State: Okay
Size: 286678272 blocks
Stripe 0: (interlace: 32 blocks)
Device Start Block Dbase State Hot Spare
c1t4d0s0 0 No Okay
c1t5d0s0 10176 No Okay

#


Error Message: metainit: node: "d74": unit not found

Thursday, February 23, 2012 at 2:15 AM
node:/root# metainit -s nfsdg d74 -p d80 100g
metainit: node: "d74": unit not found
node:/root#
Identify the devfsadm process ID and kill it.

node:/root# ps -ef | grep -i devfsadm
root 3099 1472 0 10:38:14 pts/1 0:00 grep -i devfsadm
root 15066 1 0 11:25:41 ? 58:46 /usr/lib/devfsadm/devfsadmd
node:/root# kill -9 15066
node:/root#
Now try again; it should work:
node:/# metainit -s nfsdg d74 -p d80 100g
d74: Soft Partition is setup
node:/#
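The ps | grep | kill sequence can be made a little more robust by extracting the PID with awk so the grep process itself is never matched. A sketch, using the ps output above as sample input:

```shell
# Extract the devfsadmd PID from ps -ef output, skipping the grep line
# (only the daemon's command ends in "devfsadmd").
# Sample input is the transcript above; live: ps -ef | awk '/devfsadmd$/ ...'
ps_out='root 3099 1472 0 10:38:14 pts/1 0:00 grep -i devfsadm
root 15066 1 0 11:25:41 ? 58:46 /usr/lib/devfsadm/devfsadmd'

pid=$(printf '%s\n' "$ps_out" | awk '/devfsadmd$/ { print $2 }')
echo "$pid"
```

On Solaris, `pgrep -x devfsadmd` achieves the same without parsing ps output.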

Migrating Solaris 8 physical server to Solaris zone

Thursday, February 16, 2012 at 4:20 AM
Step 1: Preparation
Solaris 10 with the latest update. Download and install the Solaris 8 Migration Assistant packages:

SUNWs8brandr  Solaris 8 Migration Assistant: solaris8 brand support (Root)
SUNWs8brandu  Solaris 8 Migration Assistant: solaris8 brand support (Usr)
SUNWs8p2v     Solaris 8 p2v Tool

Step 2: Create a flash archive on the Solaris 8 server:
node# flarcreate -S -n solaris8 solaris8.flar
Step 3: Set up a Solaris 8 zone

node# zonecfg -z solaris8
solaris8: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:solaris8> create -t SUNWsolaris8
zonecfg:solaris8> set zonepath=/export/solaris8
zonecfg:solaris8> add net
zonecfg:solaris8:net> set address=192.168.1.155/24
zonecfg:solaris8:net> set physical=bge0
zonecfg:solaris8:net> end
zonecfg:solaris8> commit
zonecfg:solaris8> exit
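If the zone may need to be rebuilt, the same configuration can be kept in a command file and replayed with zonecfg's -f option, which makes the setup repeatable (a sketch; the file name solaris8.cfg is arbitrary):

```
create -t SUNWsolaris8
set zonepath=/export/solaris8
add net
set address=192.168.1.155/24
set physical=bge0
end
commit
```

Apply it with: node# zonecfg -z solaris8 -f solaris8.cfg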
Step 4: Install the Solaris 8 zone from the flash archive

node# zoneadm -z solaris8 install -u -a /export/solaris8.flar
Log File: /var/tmp/solaris8.install.13597.log
Source: /export/solaris8.flar
Installing: This may take several minutes…
Postprocessing: This may take several minutes…
WARNING: zone did not finish booting.
Result: Installation completed successfully.
Log File: /export/solaris8/root/var/log/solaris8.install.13597.log

Step 5: Verify

node# uname -a
SunOS solaris8 5.8 Generic_Virtual sun4u sparc SUNW,Sun-Fire-V490
