sftp error ld.so.1: sftp: fatal: libgcc_s.so.1: open failed: No such file or directory

Tuesday, December 21, 2010 at 12:08 AM
sftp Error message

#sftp
ld.so.1: sftp: fatal: libgcc_s.so.1: open failed: No such file or directory
Killed
#

The ldd utility lists the dynamic dependencies of an executable:

#ldd /local/opt/SSH/product/4.3p2/sbin/sshd
libpam.so.1 => /usr/lib/libpam.so.1
libdl.so.1 => /usr/lib/libdl.so.1
libresolv.so.2 => /usr/lib/libresolv.so.2
libcrypto.so.0.9.8 => /usr/local/lib/libcrypto.so.0.9.8
librt.so.1 => /usr/lib/librt.so.1
libz.so => /usr/lib/libz.so
libsocket.so.1 => /usr/lib/libsocket.so.1
libnsl.so.1 => /usr/lib/libnsl.so.1
libc.so.1 => /usr/lib/libc.so.1
libcmd.so.1 => /usr/lib/libcmd.so.1
libgcc_s.so.1 => (file not found)
libaio.so.1 => /usr/lib/libaio.so.1
libmd5.so.1 => /usr/lib/libmd5.so.1
libmp.so.2 => /usr/lib/libmp.so.2
libscf.so.1 => /usr/lib/libscf.so.1
libdoor.so.1 => /usr/lib/libdoor.so.1
libuutil.so.1 => /usr/lib/libuutil.so.1
libgen.so.1 => /usr/lib/libgen.so.1
libm.so.2 => /usr/lib/libm.so.2
/platform/SUNW,Sun-Fire-880/lib/libc_psr.so.1
/platform/SUNW,Sun-Fire-880/lib/libmd5_psr.so.1
#

The symbolic link /usr/local/lib/libgcc_s.so.1 is missing; it should point to the source path /local/opt/SSH/product/4.3p2/lib/libgcc_s.so.1.


#ls -l /usr/local/lib/libgcc_s.so.1
/usr/local/lib/libgcc_s.so.1: No such file or directory
#

Create the symbolic link:

#ln -s /local/opt/SSH/product/4.3p2/lib/libgcc_s.so.1 /usr/local/lib/libgcc_s.so.1


# ls -l /usr/local/lib/libgcc_s.so.1
lrwxrwxrwx 1 root root 46 Dec 21 08:40 /usr/local/lib/libgcc_s.so.1 -> /local/opt/SSH/product/4.3p2/lib/libgcc_s.so.1
#
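
An alternative to the symbolic link, if you would rather not touch /usr/local/lib, is to add the OpenSSH library directory to the runtime linker's default search path with crle (a sketch, assuming the same install path as above; crle -u updates the existing configuration):

#crle -u -l /local/opt/SSH/product/4.3p2/lib
#crle (run with no arguments to print the current configuration and confirm the new directory)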

Verify the dynamic dependencies again:

#ldd /local/opt/SSH/product/4.3p2/sbin/sshd
libpam.so.1 => /usr/lib/libpam.so.1
libdl.so.1 => /usr/lib/libdl.so.1
libresolv.so.2 => /usr/lib/libresolv.so.2
libcrypto.so.0.9.8 => /usr/local/lib/libcrypto.so.0.9.8
librt.so.1 => /usr/lib/librt.so.1
libz.so => /usr/lib/libz.so
libsocket.so.1 => /usr/lib/libsocket.so.1
libnsl.so.1 => /usr/lib/libnsl.so.1
libc.so.1 => /usr/lib/libc.so.1
libcmd.so.1 => /usr/lib/libcmd.so.1
libgcc_s.so.1 => /usr/local/lib/libgcc_s.so.1
libaio.so.1 => /usr/lib/libaio.so.1
libmd5.so.1 => /usr/lib/libmd5.so.1
libmp.so.2 => /usr/lib/libmp.so.2
libscf.so.1 => /usr/lib/libscf.so.1
libdoor.so.1 => /usr/lib/libdoor.so.1
libuutil.so.1 => /usr/lib/libuutil.so.1
libgen.so.1 => /usr/lib/libgen.so.1
libm.so.2 => /usr/lib/libm.so.2
/platform/SUNW,Sun-Fire-880/lib/libc_psr.so.1
/platform/SUNW,Sun-Fire-880/lib/libmd5_psr.so.1
#

#sftp
usage: sftp [-1Cv] [-B buffer_size] [-b batchfile] [-F ssh_config]
[-o ssh_option] [-P sftp_server_path] [-R num_requests]
[-S program] [-s subsystem | sftp_server] host
sftp [[user@]host[:file [file]]]
sftp [[user@]host[:dir[/]]]
sftp -b batchfile [user@]host
#

#sftp node2
Connecting to node2...
exec: /usr/local/bin/ssh: No such file or directory
Connection closed
#

#ls -l /usr/local/bin/ssh
/usr/local/bin/ssh: No such file or directory
#

Check the location of the ssh and scp binaries:

#which scp ssh
/local/opt/SSH/product/4.3p2/bin/scp
/usr/local/bin/ssh
#

The easy workaround is to make /usr/local/bin/ssh a valid path: create a symbolic link /usr/local/bin/ssh pointing to the source path.

#ln -s /local/opt/SSH/product/4.3p2/bin/ssh /usr/local/bin/ssh

#ls -la /usr/local/bin/ssh
lrwxrwxrwx 1 root root 36 Dec 21 08:40 /usr/local/bin/ssh -> /local/opt/SSH/product/4.3p2/bin/ssh
#

#sftp node2
Connecting to node2...
The authenticity of host 'node2 (192.168.1.3)' can't be established.
RSA key fingerprint is 6a:51:92:3a:4e:07:8d:dc:01:e8:63:10:e5:45:46:a5.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'node2,192.168.1.3' (RSA) to the list of known hosts.
root@node2's password:
sftp>

Adding EMC disk and Increasing the Cluster 3.0 file system (Veritas volume manager)

Friday, December 17, 2010 at 5:22 AM

- Check that the disk is declared in /kernel/drv/sd.conf - On both nodes (a quick check is sketched after this list).

- Have the SAN admin perform the LUN masking for both nodes.

WARNING: Select the correct Storage system, FA port and Volume ID !

(For instance:
STORAGE SYSTEM: …2683
FA PORT: 3BB
Volume ID: 024 [ i.e. hex 24 = 36 decimal ] )

- Tick the boxes for node1 and node2, for each adapter (HBA).

- Do NOT forget, for each Symmetrix, to activate the masking:
"Masking" / "VCMDB Management" / "Make active".
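
A quick way to check the /kernel/drv/sd.conf declarations (a sketch; the target value below is only an example, use the target/LUN pairs provided by the SAN team). An entry of the form name="sd" class="scsi" target=X lun=Y; must exist for every target/LUN pair, on both nodes:

# grep "target=9" /kernel/drv/sd.conf
# update_drv -f sd (re-read sd.conf after adding entries; otherwise plan a reconfiguration boot)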


#devfsadm - On node #1

When finished:

#devfsadm - On node #2

#scdidadm -L | grep tXdY
(X = SYMM number. Y = LUN number)
DID devices are not seen yet.

#scgdevs - On node #1

When terminated, enter the following command on node#2

# ps -ef | grep scgdevs

Wait for this process to terminate before going to the next step!

# scdidadm -L | grep tXdY
DID devices are now seen.

# format | grep tXdY - On both nodes
+ Enter Control-D to exit the format command.

Check that the disk is seen on both nodes !

# format => to "label" the disk, and check that its size is correct - On one node

PowerPath commands: On both nodes

# powermt display dev=all | grep tXdY (new device is not visible yet)
# powercf -q (or -i for interactive)
# powermt config
# powermt display dev=all | grep tXdY (new device is now visible)


# vxdisk -o alldgs list | grep tXdY
(X = SYMM number. Y = LUN number)
new device is not yet visible by VxVM.

# vxdctl enable => On both nodes

# vxdisk -o alldgs list | grep tXdY
Only one controller is seen, because PowerPath presents a single controller.
Check that the disk is in “error” state ! (Otherwise, this means that this disk could be in use !)

No action required regarding DMP.

(With PowerPath 3.0.3, DMP has to stay enabled. In previous versions of PowerPath, DMP had to be disabled.)

Configure the disks that will be in the new Disk Group for use with VERITAS Volume Manager: => On one node.

node1# /etc/vx/bin/vxdisksetup -i c3t9d120 format=sliced
node1# /etc/vx/bin/vxdisksetup -i c3t9… (continue for other disks of SYMM9)
node1# /etc/vx/bin/vxdisksetup -i c3t11d131 format=sliced
node1# /etc/vx/bin/vxdisksetup -i c3t11… (continue for other disks of SYMM11)


To add disks in the DG, use the following command lines:

This is an example for a Disk Group "DG_name", made up of a mirrored Logical Volume "volume_name".

Add the DG disks for SYMM09: => On one node !

sym11
node1#vxdg -g DG_name adddisk DG_name_m_08=emcpower553
node1#vxdg -g DG_name adddisk DG_name_m_09=emcpower558
node1#vxdg -g DG_name adddisk DG_name_m_10=emcpower557

Add the DG disks for SYMM11: => On one node !

sym19
node1#vxdg -g DG_name adddisk DG_name_08=emcpower554
node1#vxdg -g DG_name adddisk DG_name_09=emcpower555
node1#vxdg -g DG_name adddisk DG_name_10=emcpower556


node1# vxprint -g DG_name (Check the DG configuration)

node1# scconf -c -D name=DG_name,sync


For every added disk in the Disk Group, declare the disk in NOHOTUSE mode:

node1#vxedit -g DG_name set nohotuse=on DG_name_m_08
node1#vxedit -g DG_name set nohotuse=on DG_name_m_09
node1#vxedit -g DG_name set nohotuse=on DG_name_m_10

node1#vxedit -g DG_name set nohotuse=on DG_name_08
node1#vxedit -g DG_name set nohotuse=on DG_name_09
node1#vxedit -g DG_name set nohotuse=on DG_name_10


node1# vxprint -g DG_name (Check the DG configuration)

node1# scconf -c -D name=DG_name,sync


In this example, we grow the logical volume volume_name by adding two new disks to each side of the mirror (in the output below the file system grows from roughly 10 GB to roughly 16 GB).


node1:root> df -k /filesystem
Filesystem kbytes used avail capacity Mounted on
/dev/vx/dsk/DG_name/volume_name
10268396 8923420 1252376 88% /filesystem
node1:root>

Get the real size of the disk:

node1:root> vxassist -g DG_name maxsize layout=mirror
Maximum volume size: 33607680 (16410Mb)
node1:root>

node1:root> vxdisk -g DG_name list DG_name_m_08 | grep public
public: slice=4 offset=0 len=8380800 disk_offset=3840
node1:root>


Create the subdisks on the new disks (the subdisk length comes from the public region "len" shown above):

sym11
node1#vxmake -g DG_name sd DG_name_m_08-01 DG_name_m_08,0,8380800
node1#vxmake -g DG_name sd DG_name_m_09-01 DG_name_m_09,0,4190400

sym19
node1#vxmake -g DG_name sd DG_name_08-01 DG_name_08,0,8380800
node1#vxmake -g DG_name sd DG_name_09-01 DG_name_09,0,4190400


Associate the subdisks with the existing plexes volume_name-01 and volume_name-04:

node1#vxsd -g DG_name assoc volume_name-01 DG_name_m_08-01
node1#vxsd -g DG_name assoc volume_name-01 DG_name_m_09-01

node1#vxsd -g DG_name assoc volume_name-04 DG_name_08-01
node1#vxsd -g DG_name assoc volume_name-04 DG_name_09-01

Grow the volume and the file system by the required size:

node1#vxresize -F ufs -g DG_name volume_name +12571200
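
The +12571200 figure is simply the combined length, in 512-byte sectors (about 6 GB), of the two subdisks just attached to each plex; a quick check:

node1# expr 8380800 + 4190400
12571200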

node1:root> df -k /u663
Filesystem kbytes used avail capacity Mounted on
/dev/vx/dsk/DG_name/volume_name
16455210 8929556 7433054 55% /u663
node1:root>

The vxprint output now shows the new subdisks attached to both data plexes:

v volume_name fsgen ENABLED 33434176 - ACTIVE - -
pl volume_name-01 volume_name ENABLED 33434880 - ACTIVE - -
sd DG_name_m_05-01 volume_name-01 ENABLED 8380800 0 - - -
sd DG_name_m_06-01 volume_name-01 ENABLED 8380800 8380800 - - -
sd DG_name_m_07-01 volume_name-01 ENABLED 2053440 16761600 - - -
sd DG_name_m_01-02 volume_name-01 ENABLED 2048640 18815040 - - -
sd DG_name_m_08-01 volume_name-01 ENABLED 8380800 20863680 - - -
sd DG_name_m_09-01 volume_name-01 ENABLED 4190400 29244480 - - -
pl volume_name-02 volume_name ENABLED LOGONLY - ACTIVE - -
sd DG_name_05-05 volume_name-02 ENABLED 64 LOG - - -
pl volume_name-04 volume_name ENABLED 33434880 - ACTIVE - -
sd DG_name_05-01 volume_name-04 ENABLED 8378880 0 - - -
sd DG_name_02-02 volume_name-04 ENABLED 2047680 8378880 - - -
sd DG_name_06-01 volume_name-04 ENABLED 8380800 10426560 - - -
sd DG_name_02-03 volume_name-04 ENABLED 7680 18807360 - - -
sd DG_name_01-02 volume_name-04 ENABLED 2048640 18815040 - - -
sd DG_name_08-01 volume_name-04 ENABLED 8380800 20863680 - - -
sd DG_name_09-01 volume_name-04 ENABLED 4190400 29244480 - - -


When finished:

• Check that the DG has the proper configuration (mirroring between SYMM9 and SYMM11):

node1# vxprint -g DG_name

• Check that the file system has been increased by the right size:

node1# df -k /filesystem

• Resync the Cluster DG:

node1# scconf -c -D name=DG_name,sync

#scgdevs - On node #1

When terminated, enter the following command on node #2:

#ps -ef | grep scgdevs
Wait for this process to terminate before going to the next step.

Configuring NEW LUNs:

Sunday, December 5, 2010 at 12:20 AM
First, check the system messages for the FC HBAs (qlc) and their WWNs, and list the multipathed logical units:

egrep 'qlc|WWN' /var/adm/messages
mpathadm list lu

# format < /dev/null
Searching for disks...done


AVAILABLE DISK SELECTIONS:
0. c1t0d0
/pci@8,600000/SUNW,qlc@4/fp@0,0/ssd@w2100000c506b2fca,0
1. c1t1d0
/pci@8,600000/SUNW,qlc@4/fp@0,0/ssd@w2100000c506b39cf,0
Specify disk (enter its number):

# cfgadm -o show_FCP_dev -al
Ap_Id Type Receptacle Occupant Condition
c1 fc-private connected configured unknown
c1::2100000c506b2fca,0 disk connected configured unknown
c1::2100000c506b39cf,0 disk connected configured unknown
c3 fc-fabric connected unconfigured unknown

spdma501:# cfgadm -c configure c3
Nov 16 17:32:25 spdma501 last message repeated 54 times
Nov 16 17:32:26 spdma501 scsi: WARNING: /pci@8,700000/SUNW,qlc@2/fp@0,0/ssd@w50060482ccaae5a3,48 (ssd2):
Nov 16 17:32:26 spdma501 corrupt label - wrong magic number
Nov 16 17:32:26 spdma501 scsi: WARNING: /pci@8,700000/SUNW,qlc@2/fp@0,0/ssd@w50060482ccaae5a3,47 (ssd3):
Nov 16 17:32:26 spdma501 corrupt label - wrong magic number
Nov 16 17:32:26 spdma501 scsi: WARNING: /pci@8,700000/SUNW,qlc@2/fp@0,0/ssd@w50060482ccaae5a3,46 (ssd4):
Nov 16 17:32:26 spdma501 corrupt label - wrong magic number
Nov 16 17:32:26 spdma501 scsi: WARNING: /pci@8,700000/SUNW,qlc@2/fp@0,0/ssd@w50060482ccaae5a3,45 (ssd5):
Nov 16 17:32:26 spdma501 corrupt label - wrong magic number
Nov 16 17:32:26 spdma501 scsi: WARNING: /pci@8,700000/SUNW,qlc@2/fp@0,0/ssd@w50060482ccaae5a3,44 (ssd6):
Nov 16 17:32:26 spdma501 corrupt label - wrong magic number
Nov 16 17:32:26 spdma501 scsi: WARNING: /pci@8,700000/SUNW,qlc@2/fp@0,0/ssd@w50060482ccaae5a3,43 (ssd7):
Nov 16 17:32:26 spdma501 corrupt label - wrong magic number
Nov 16 17:32:26 spdma501 scsi: WARNING: /pci@8,700000/SUNW,qlc@2/fp@0,0/ssd@w50060482ccaae5a3,42 (ssd8):
Nov 16 17:32:26 spdma501 corrupt label - wrong magic number
Nov 16 17:32:26 spdma501 scsi: WARNING: /pci@8,700000/SUNW,qlc@2/fp@0,0/ssd@w50060482ccaae5a3,41 (ssd9):
Nov 16 17:32:26 spdma501 corrupt label - wrong magic number
Nov 16 17:32:26 spdma501 scsi: WARNING: /pci@8,700000/SUNW,qlc@2/fp@0,0/ssd@w50060482ccaae5a3,40 (ssd10):
Nov 16 17:32:26 spdma501 corrupt label - wrong magic number

spdma501:# cfgadm -c configure c5
Nov 16 17:32:55 spdma501 last message repeated 5 times
Nov 16 17:32:59 spdma501 scsi: WARNING: /pci@8,600000/SUNW,qlc@1/fp@0,0/ssd@w50060482ccaae5bc,48 (ssd14):
Nov 16 17:32:59 spdma501 corrupt label - wrong magic number
Nov 16 17:32:59 spdma501 scsi: WARNING: /pci@8,600000/SUNW,qlc@1/fp@0,0/ssd@w50060482ccaae5bc,47 (ssd15):
Nov 16 17:32:59 spdma501 corrupt label - wrong magic number
Nov 16 17:32:59 spdma501 scsi: WARNING: /pci@8,600000/SUNW,qlc@1/fp@0,0/ssd@w50060482ccaae5bc,46 (ssd16):
Nov 16 17:32:59 spdma501 corrupt label - wrong magic number
Nov 16 17:32:59 spdma501 scsi: WARNING: /pci@8,600000/SUNW,qlc@1/fp@0,0/ssd@w50060482ccaae5bc,45 (ssd17):
Nov 16 17:32:59 spdma501 corrupt label - wrong magic number
Nov 16 17:32:59 spdma501 scsi: WARNING: /pci@8,600000/SUNW,qlc@1/fp@0,0/ssd@w50060482ccaae5bc,44 (ssd18):
Nov 16 17:32:59 spdma501 corrupt label - wrong magic number
Nov 16 17:32:59 spdma501 scsi: WARNING: /pci@8,600000/SUNW,qlc@1/fp@0,0/ssd@w50060482ccaae5bc,43 (ssd19):
Nov 16 17:32:59 spdma501 corrupt label - wrong magic number
Nov 16 17:32:59 spdma501 scsi: WARNING: /pci@8,600000/SUNW,qlc@1/fp@0,0/ssd@w50060482ccaae5bc,42 (ssd20):

c3t50060482CCAAE5A3d61: configured with capacity of 17.04GB
c3t50060482CCAAE5A3d62: configured with capacity of 17.04GB
c3t50060482CCAAE5A3d63: configured with capacity of 17.04GB
c3t50060482CCAAE5A3d64: configured with capacity of 17.04GB
c3t50060482CCAAE5A3d65: configured with capacity of 17.04GB


AVAILABLE DISK SELECTIONS:
0. c1t0d0
/pci@8,600000/SUNW,qlc@4/fp@0,0/ssd@w2100000c506b2fca,0
1. c1t1d0
/pci@8,600000/SUNW,qlc@4/fp@0,0/ssd@w2100000c506b39cf,0
2. c3t50060482CCAAE5A3d61
/pci@8,700000/SUNW,qlc@2/fp@0,0/ssd@w50060482ccaae5a3,3d
3. c3t50060482CCAAE5A3d62
/pci@8,700000/SUNW,qlc@2/fp@0,0/ssd@w50060482ccaae5a3,3e
4. c3t50060482CCAAE5A3d63
/pci@8,700000/SUNW,qlc@2/fp@0,0/ssd@w50060482ccaae5a3,3f
5. c3t50060482CCAAE5A3d64
/pci@8,700000/SUNW,qlc@2/fp@0,0/ssd@w50060482ccaae5a3,40
6. c3t50060482CCAAE5A3d65
/pci@8,700000/SUNW,qlc@2/fp@0,0/ssd@w50060482ccaae5a3,41
7. c3t50060482CCAAE5A3d66
/pci@8,700000/SUNW,qlc@2/fp@0,0/ssd@w50060482ccaae5a3,42
8. c3t50060482CCAAE5A3d67
/pci@8,700000/SUNW,qlc@2/fp@0,0/ssd@w50060482ccaae5a3,43
9. c3t50060482CCAAE5A3d68
/pci@8,700000/SUNW,qlc@2/fp@0,0/ssd@w50060482ccaae5a3,44
10. c3t50060482CCAAE5A3d69
/pci@8,700000/SUNW,qlc@2/fp@0,0/ssd@w50060482ccaae5a3,45
11. c3t50060482CCAAE5A3d70
/pci@8,700000/SUNW,qlc@2/fp@0,0/ssd@w50060482ccaae5a3,46
Specify disk (enter its number):

IF YOU DON'T SEE THE NEW LUNS IN FORMAT, RUN devfsadm !!!!

# /usr/sbin/devfsadm

Label the new disks !!!!

# cd /tmp

# cat format.cmd (create this two-line command file first; format will run it against each disk)
label
quit

(The grep "^c" picks up the "cXtYdZ: configured with capacity ..." lines that format prints for newly discovered, unlabeled disks.)
# for disk in `format < /dev/null 2> /dev/null | grep "^c" | cut -d: -f1`
do
format -s -f /tmp/format.cmd $disk
echo "labeled $disk ....."
done
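
To spot-check that a disk actually received a label, read its VTOC from the backup slice (a sketch using one of the new devices from the format listing above):

# prtvtoc /dev/rdsk/c3t50060482CCAAE5A3d61s2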

SAN Stuff for Solaris

at 12:18 AM
To verify whether an HBA is connected to a fabric or not:
# /usr/sbin/luxadm -e port

Found path to 4 HBA ports

/devices/pci@1e,600000/SUNW,qlc@3/fp@0,0:devctl CONNECTED
/devices/pci@1e,600000/SUNW,qlc@3,1/fp@0,0:devctl NOT CONNECTED
/devices/pci@1e,600000/SUNW,qlc@4/fp@0,0:devctl CONNECTED
/devices/pci@1e,600000/SUNW,qlc@4,1/fp@0,0:devctl NOT CONNECTED



Your SAN administrator will ask for the WWNs for Zoning. Here are some steps I use to get that information:
# prtconf -vp | grep wwn
port-wwn: 210000e0.8b1d8d7d
node-wwn: 200000e0.8b1d8d7d
port-wwn: 210100e0.8b3d8d7d
node-wwn: 200000e0.8b3d8d7d
port-wwn: 210000e0.8b1eaeb0
node-wwn: 200000e0.8b1eaeb0
port-wwn: 210100e0.8b3eaeb0
node-wwn: 200000e0.8b3eaeb0
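
The SAN team usually wants the WWNs without the dot that prtconf inserts; a small cleanup sketch:

# prtconf -vp | grep port-wwn | sed 's/\.//g'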


Or you may use fcinfo, if installed.
# fcinfo hba-port
HBA Port WWN: 210000e08b8600c8
OS Device Name: /dev/cfg/c11
Manufacturer: QLogic Corp.
Model: 375-3108-xx
Type: N-port
State: online
Supported Speeds: 1Gb 2Gb
Current Speed: 2Gb
Node WWN: 200000e08b8600c8
HBA Port WWN: 210100e08ba600c8
OS Device Name: /dev/cfg/c12
Manufacturer: QLogic Corp.
Model: 375-3108-xx
Type: N-port
State: online
Supported Speeds: 1Gb 2Gb
Current Speed: 2Gb
Node WWN: 200100e08ba600c8
HBA Port WWN: 210000e08b86a1cc
OS Device Name: /dev/cfg/c5
Manufacturer: QLogic Corp.
Model: 375-3108-xx
Type: N-port
State: online
Supported Speeds: 1Gb 2Gb
Current Speed: 2Gb
Node WWN: 200000e08b86a1cc
HBA Port WWN: 210100e08ba6a1cc
OS Device Name: /dev/cfg/c6
Manufacturer: QLogic Corp.
Model: 375-3108-xx
Type: N-port
State: online
Supported Speeds: 1Gb 2Gb
Current Speed: 2Gb
Node WWN: 200100e08ba6a1cc



Here are some commands you can use for QLogic Adapters:
# modinfo | grep qlc
76 7ba9e000 cdff8 282 1 qlc (SunFC Qlogic FCA v20060630-2.16)

# prtdiag | grep qlc
pci 66 PCI5 SUNW,qlc-pci1077,2312 (scsi-+
okay /ssm@0,0/pci@18,600000/SUNW,qlc@1
pci 66 PCI5 SUNW,qlc-pci1077,2312 (scsi-+
okay /ssm@0,0/pci@18,600000/SUNW,qlc@1,1
pci 33 PCI2 SUNW,qlc-pci1077,2312 (scsi-+
okay /ssm@0,0/pci@19,700000/SUNW,qlc@1
pci 33 PCI2 SUNW,qlc-pci1077,2312 (scsi-+
okay /ssm@0,0/pci@19,700000/SUNW,qlc@1,1

# luxadm qlgc

Found Path to 4 FC100/P, ISP2200, ISP23xx Devices

Opening Device: /devices/ssm@0,0/pci@19,700000/SUNW,qlc@1,1/fp@0,0:devctl
Detected FCode Version: ISP2312 Host Adapter Driver: 1.14.09 03/08/04

Opening Device: /devices/ssm@0,0/pci@19,700000/SUNW,qlc@1/fp@0,0:devctl
Detected FCode Version: ISP2312 Host Adapter Driver: 1.14.09 03/08/04

Opening Device: /devices/ssm@0,0/pci@18,600000/SUNW,qlc@1,1/fp@0,0:devctl
Detected FCode Version: ISP2312 Host Adapter Driver: 1.14.09 03/08/04

Opening Device: /devices/ssm@0,0/pci@18,600000/SUNW,qlc@1/fp@0,0:devctl
Detected FCode Version: ISP2312 Host Adapter Driver: 1.14.09 03/08/04
Complete


# luxadm -e dump_map /devices/ssm@0,0/pci@19,700000/SUNW,qlc@1,1/fp@0,0:devctl
Pos Port_ID Hard_Addr Port WWN Node WWN Type
0 1f0112 0 5006048accab4f8d 5006048accab4f8d 0x0 (Disk device)
1 1f011f 0 5006048accab4e0d 5006048accab4e0d 0x0 (Disk device)
2 1f012e 0 5006048acc7034cd 5006048acc7034cd 0x0 (Disk device)
3 1f0135 0 5006048accb4fc0d 5006048accb4fc0d 0x0 (Disk device)
4 1f02ef 0 50060163306043b6 50060160b06043b6 0x0 (Disk device)
5 1f06ef 0 5006016b306043b6 50060160b06043b6 0x0 (Disk device)
6 1f0bef 0 5006016330604365 50060160b0604365 0x0 (Disk device)
7 1f19ef 0 5006016b30604365 50060160b0604365 0x0 (Disk device)
8 1f0e00 0 210100e08ba6a1cc 200100e08ba6a1cc 0x1f (Unknown Type,Host Bus Adapter)


# prtpicl -v
.
.
SUNW,qlc (scsi-fcp, 7f0000066b) <--- go to the QLogic website to get the model number
:_fru_parent (7f0000dc86H)
:DeviceID 0x1
:UnitAddress 1
:vendor-id 0x1077
:device-id 0x2312
:revision-id 0x2
:subsystem-vendor-id 0x1077
:subsystem-id 0x10a
:min-grant 0x40
:max-latency 0
:cache-line-size 0x10
:latency-timer 0x40

To Determine what HBA is installed in a Solaris server

at 12:03 AM
The prtpicl command outputs information to accurately determine the make and model of an HBA.

The QLogic HBAs have a PCI vendor identifier of 1077. Search through the output of prtpicl -v for the number 1077; a section similar to the following is displayed:

QLGC,qla (scsi, 44000003ac)
:DeviceID 0x4
:UnitAddress 4
:vendor-id 0x1077
:device-id 0x2300
:revision-id 0x1
:subsystem-vendor-id 0x1077
:subsystem-id 0x9
:min-grant 0x40
:max-latency 0
:cache-line-size 0x10
:latency-timer 0x40
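
To avoid scrolling through the full prtpicl -v output, the relevant properties can be filtered with egrep (a sketch; it pulls the vendor-id, device-id and subsystem lines for every PCI device, and QLogic entries show vendor-id 0x1077):

# prtpicl -v | egrep "vendor-id|device-id|subsystem"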

The subsystem-id value identifies the HBA model; in the example above, subsystem-id 0x9 corresponds to the QLA2300/QLA2310. Use this chart to look up the model:


Vendor HBA model VendorID DeviceID SubsysVendorID Subsys Device ID
QLogic QCP2340 1077 2312 1077 109
QLogic QLA200 1077 6312 1077 119
QLogic QLA210 1077 6322 1077 12F
QLogic QLA2300/QLA2310 1077 2310 1077 9
QLogic QLA2340 1077 2312 1077 100
QLogic QLA2342 1077 2312 1077 101
QLogic QLA2344 1077 2312 1077 102
QLogic QLE2440 1077 2422 1077 145
QLogic QLA2460 1077 2422 1077 133
QLogic QLA2462 1077 2422 1077 134
QLogic QLE2360 1077 2432 1077 117
QLogic QLE2362 1077 2432 1077 118
QLogic QLE2440 1077 2432 1077 147
QLogic QLE2460 1077 2432 1077 137
QLogic QLE2462 1077 2432 1077 138
QLogic QSB2340 1077 2312 1077 104
QLogic QSB2342 1077 2312 1077 105
Sun SG-XPCI1FC-QLC 1077 6322 1077 132
Sun 6799A 1077 2200A 1077 4082
Sun SG-XPCI1FC-QF2/x6767A 1077 2310 1077 106
Sun SG-XPCI2FC-QF2/x6768A 1077 2312 1077 10A
Sun X6727A 1077 2200A 1077 4083
Sun SG-XPCI1FC-QF4 1077 2422 1077 140
Sun SG-XPCI2FC-QF4 1077 2422 1077 141
Sun SG-XPCIE1FC-QF4 1077 2432 1077 142
Sun SG-XPCIE2FC-QF4 1077 2432 1077 143

HA NFS Sun Cluster Setup

Saturday, December 4, 2010 at 10:58 AM

Goal: a dual-node NFS failover cluster that shares two concatenated SVM volumes.

Two nodes (hostnames node1 and node2) with Solaris 10 01/06, patches, Sun Cluster, the NFS agent for Sun Cluster, and VxFS installed.
Both nodes are connected to the FC SAN storage; eight LUNs are mapped to each node.

Configure SVM
On both nodes:
- Create a 25 MB partition on the boot disk (slice 7)
- Create the SVM state database replicas:
metadb -afc 3 c0d0s7 (c0t0d0s7 on SPARC)
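
To confirm the replicas on each node (an optional check):

metadb -i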

On one node (node1) - Create the disk sets:

metaset -s nfs1 -a -h node1 node2
metaset -s nfs1 -t -f
metaset -s nfs1 -a /dev/did/rdsk/d2 /dev/did/rdsk/d3 /dev/did/rdsk/d4 /dev/did/rdsk/d5
metainit -s nfs1 d1 4 1 /dev/did/rdsk/d2s0 1 /dev/did/rdsk/d3s0 1 /dev/did/rdsk/d4s0 1 /dev/did/rdsk/d5s0
metastat -s nfs1 -p >> /etc/lvm/md.tab
metaset -s nfs2 -a -h node2 node1
metaset -s nfs2 -t -f
metaset -s nfs2 -a /dev/did/rdsk/d6 /dev/did/rdsk/d7 /dev/did/rdsk/d8 /dev/did/rdsk/d9
metainit -s nfs2 d1 4 1 /dev/did/rdsk/d6s0 1 /dev/did/rdsk/d7s0 1 /dev/did/rdsk/d8s0 1 /dev/did/rdsk/d9s0
metastat -s nfs2 -p >> /etc/lvm/md.tab
scp /etc/lvm/md.tab node2:/tmp/md.tab
ssh node2 'cat /tmp/md.tab >> /etc/lvm/md.tab'

- Create VxFS file systems on the shared devices:

mkfs -F vxfs /dev/md/nfs1/rdsk/d1
mkfs -F vxfs /dev/md/nfs2/rdsk/d1

On both nodes - Create the directories

mkdir -p /global/nfs1
mkdir -p /global/nfs2

- Add the mount entries to the vfstab file

cat >> /etc/vfstab << EOF
/dev/md/nfs1/dsk/d1 /dev/md/nfs1/rdsk/d1 /global/nfs1 vxfs 2 no noatime
/dev/md/nfs2/dsk/d1 /dev/md/nfs2/rdsk/d1 /global/nfs2 vxfs 2 no noatime
EOF
(mount-at-boot "no" because we'll use the HAStoragePlus resource type)

- Add logical hostnames :

cat >> /etc/hosts << EOF
10.1.1.1 log-name1
10.1.1.2 log-name2
EOF

On one node (node1) - Mount the metavolumes and create the PathPrefix directories:

mount /global/nfs1
mount /global/nfs2

mkdir -p /global/nfs1/share
mkdir -p /global/nfs2/share

Configure HA NFS
On one node (node1) - Register the resource types:

scrgadm -a -t SUNW.HAStoragePlus
scrgadm -a -t SUNW.nfs

- Create failover resource groups :

scrgadm -a -g nfs-rg1 -h node1,node2 -y PathPrefix=/global/nfs1 -y Failback=true
scrgadm -a -g nfs-rg2 -h node2,node1 -y PathPrefix=/global/nfs2 -y Failback=true

- Add logical hostname resources to the resource groups :

scrgadm -a -j nfs-lh-rs1 -L -g nfs-rg1 -l log-name1
scrgadm -a -j nfs-lh-rs2 -L -g nfs-rg2 -l log-name2

- Create dfstab file for each NFS resource :

mkdir -p /global/nfs1/SUNW.nfs /global/nfs1/share
mkdir -p /global/nfs2/SUNW.nfs /global/nfs2/share
echo 'share -F nfs -o rw /global/nfs1/share' > /global/nfs1/SUNW.nfs/dfstab.share1
echo 'share -F nfs -o rw /global/nfs2/share' > /global/nfs2/SUNW.nfs/dfstab.share2

- Configure device groups :

scconf -c -D name=nfs1,nodelist=node1:node2,failback=enabled
scconf -c -D name=nfs2,nodelist=node2:node1,failback=enabled

- Create HAStoragePlus resources :

scrgadm -a -j nfs-hastp-rs1 -g nfs-rg1 -t SUNW.HAStoragePlus -x FilesystemMountPoints=/global/nfs1 -x AffinityOn=True
scrgadm -a -j nfs-hastp-rs2 -g nfs-rg2 -t SUNW.HAStoragePlus -x FilesystemMountPoints=/global/nfs2 -x AffinityOn=True

- Share :

share -F nfs -o rw /global/nfs1/share
share -F nfs -o rw /global/nfs2/share

- Bring the groups online :

scswitch -Z -g nfs-rg1
scswitch -Z -g nfs-rg2

- Create NFS resources :

scrgadm -a -j share1 -g nfs-rg1 -t SUNW.nfs -y Resource_dependencies=nfs-hastp-rs1
scrgadm -a -j share2 -g nfs-rg2 -t SUNW.nfs -y Resource_dependencies=nfs-hastp-rs2

- Change the probe interval for each NFS resource to a different value so that each probe runs at a different time (see InfoDoc 84817):

scrgadm -c -j share1 -y Thorough_probe_interval=130
scrgadm -c -j share2 -y Thorough_probe_interval=140

- Change the number of NFS threads: on each node, edit /opt/SUNWscnfs/bin/nfs_start_daemons and replace
DEFAULT_NFSDCMD="/usr/lib/nfs/nfsd -a 16"
with
DEFAULT_NFSDCMD="/usr/lib/nfs/nfsd -a 1024"

- Enable NFS resources :

scswitch -e -j share1
scswitch -e -j share2

- Switch resource groups to check the cluster :

scswitch -z -h node2 -g nfs-rg1
scswitch -z -h node2 -g nfs-rg2
scswitch -z -h node1 -g nfs-rg1
scswitch -z -h node1 -g nfs-rg2
scswitch -z -h node2 -g nfs-rg2
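
After each switch, check the resource-group and resource status (a sketch; on Sun Cluster 3.1 this is reported by scstat):

scstat -g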
________________________________________
Configure IPMP
On the node node1:

cat > /etc/hostname.bge0 << eof
node1 netmask + broadcast + group sc_ipmp0 up \
addif 10.1.1.5 netmask + broadcast + -failover -standby deprecated up
eof
cat > /etc/hostname.bge1 << eof
10.1.1.6 netmask + broadcast + group sc_ipmp0 -failover -standby deprecated up
eof

On the node node2

cat > /etc/hostname.bge0 << eof
node2 netmask + broadcast + group sc_ipmp0 up \
addif 10.1.1.7 netmask + broadcast + -failover -standby deprecated up
eof
cat > /etc/hostname.bge1 << eof
10.1.1.8 netmask + broadcast + group sc_ipmp0 -failover -standby deprecated up
eof
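
Once the interfaces are plumbed (after a reboot, or after applying the same settings with ifconfig), verify the IPMP group on each node (a sketch): both bge0 and bge1 should show "groupname sc_ipmp0", and the test addresses should carry the NOFAILOVER/DEPRECATED flags.

ifconfig -a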
________________________________________

Configure XNTP

On the node node1
Edit /etc/inet/ntp.conf.cluster
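
The ntp.conf.cluster file shipped with Sun Cluster lists peer entries for up to 16 private hostnames; on a two-node cluster the edit usually amounts to keeping the entries for the existing nodes and removing or commenting out the rest (a sketch, assuming the default clusternodeN-priv private hostnames):

peer clusternode1-priv prefer
peer clusternode2-priv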
________________________________________
Fix according to InfoDoc 85773 :
Replace the "exit 0" line in /etc/rc2.d/S77scpostconfig.sh with "return"
________________________________________

Patches:
sparc      x86        Description
120500-06  120501-06  Sun Cluster 3.1: Core Patch for Solaris 10
119374-13  119375-13  SunOS 5.10: sd and ssd patch
119130-16  119131-16  SunOS 5.10: Sun Fibre Channel Device Drivers
119715-10  119716-10  SunOS 5.10: scsi_vhci driver patch
120182-02  120183-02  SunOS 5.10: Sun Fibre Channel Host Bus Adapter Library
120222-08  120223-07  SunOS 5.10: Emulex-Sun LightPulse Fibre Channel Adapter driver
119302-02  120111-02  VRTSvxfs 4.1MP1: Maintenance Patch for File System 4.1

VXVM recover volume (I/O error)

Friday, December 3, 2010 at 10:20 AM
Recovering a volume whose plex has an I/O error. The commands below need the disk group, plex, and volume names; the DG_name / volume_name placeholders follow the convention used in the posts above:

# vxmend -g DG_name -o force off volume_name-01
# vxmend -g DG_name on volume_name-01
# vxmend -g DG_name fix clean volume_name-01
# vxvol -g DG_name start volume_name
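
Before and after the recovery, check the plex and volume states (a quick sketch, using the same DG_name / volume_name placeholders):

# vxprint -g DG_name -ht volume_name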

Meaning of the “Fault LED” in amber

Wednesday, December 1, 2010 at 11:04 PM

When the Fault LED flashes on and off, a problem has occurred that is fatal to the server. Circumstances that cause the Fault LED to flash include the following:

- The speed of one of the fans inside the server is too low.
- The temperature inside the server's enclosure is too high. (By default, this causes the server to shut down. For information about configuring the server not to shut down in this condition, see Appendix C.)
- The voltage on one of the server's output supply rails is too high. (By default, this causes the server to shut down. For information about configuring the server not to shut down in this condition, see Appendix C.)
- The temperature inside the CPU is too high. (This causes the server to shut down.)
