Adding EMC disk and Increasing the Cluster 3.0 file system (Veritas volume manager)

Friday, December 17, 2010 at 5:22 AM

- Check that the disk is declared in /kernel/drv/sd.conf - On both nodes.
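A minimal sketch of this check, using a sample file in place of /kernel/drv/sd.conf (the target/LUN pair below is illustrative, not from this procedure):

```shell
# Sample stand-in for /kernel/drv/sd.conf; the real check greps the real file.
cat > /tmp/sd.conf.sample <<'EOF'
name="sd" class="scsi" target=9 lun=120;
name="sd" class="scsi" target=11 lun=131;
EOF

# One matching line means the target/LUN pair is declared.
# Run the equivalent grep against /kernel/drv/sd.conf on both nodes.
grep -c 'target=9 lun=120' /tmp/sd.conf.sample
```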

- Perform the LUN masking for both nodes from the SAN admin

WARNING: Select the correct Storage system, FA port and Volume ID !

(For instance:
STORAGE SYSTEM: …2683
FA PORT: 3BB
Volume ID: 024 [ ie 36 in decimal ] )
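The Symmetrix Volume ID is hexadecimal, which is why 024 reads as 36 in decimal. A quick conversion in the shell:

```shell
# Hex Volume ID -> decimal: 0x024 = 36
printf '%d\n' 0x024
```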

- Tick the boxes for node1 and node2, for each adapter (HBA).

- Do NOT forget, for each Symmetrix, to do:
“Masking” / “VCMDB Management” / “Make active”.


#devfsadm - On node #1

When finished:

#devfsadm - On node #2

#scdidadm -L | grep tXdY
(X = SYMM number. Y = LUN number)
DID devices are not seen yet.

#scgdevs - On node #1

When it has terminated, enter the following command on node #2:

# ps -ef | grep scgdevs

Wait for this process to terminate before going to the next step!
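The wait step above can be scripted instead of polled by hand; a hedged sketch (pgrep returns non-zero once no scgdevs process remains, which ends the loop):

```shell
# Poll until no scgdevs process is left, then continue.
while pgrep scgdevs >/dev/null 2>&1; do
    sleep 5
done
echo "scgdevs finished"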

# scdidadm -L | grep tXdY
DID devices are now seen.

# format | grep tXdY - On both nodes
+ Enter “Control-D” to exit the format command.

Check that the disk is seen on both nodes !

# format => to “label” the disk, and check that its size is correct – On one node

PowerPath commands: On both nodes

# powermt display dev=all | grep tXdY (new device is not visible yet)
# powercf -q (or -i for interactive)
# powermt config
# powermt display dev=all | grep tXdY (new device is now visible)


# vxdisk -o alldgs list | grep tXdY
(X = SYMM number. Y = LUN number)
new device is not yet visible by VxVM.

# vxdctl enable => On both nodes

# vxdisk -o alldgs list | grep tXdY
Only one controller is seen, because PowerPath presents each device through a single controller.
Check that the disk is in the “error” state! (Otherwise, the disk could already be in use!)

No action is required regarding DMP.

(With PowerPath 3.0.3, DMP has to stay enabled. In previous versions of PowerPath, DMP had to be disabled.)

Configure the disks that will be in the new Disk Group for use with VERITAS Volume Manager: => On one node.

node1# /etc/vx/bin/vxdisksetup -i c3t9d120 format=sliced
node1# /etc/vx/bin/vxdisksetup -i c3t9… (continue for other disks of SYMM9)
node1# /etc/vx/bin/vxdisksetup -i c3t11d131 format=sliced
node1# /etc/vx/bin/vxdisksetup -i c3t11… (continue for other disks of SYMM11)
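The repeated vxdisksetup lines can be generated with a loop. The sketch below is a dry run: it only prints the commands for review (device names are the examples from above; remove "echo" to actually execute them):

```shell
# Dry-run: print one vxdisksetup command per device.
for dev in c3t9d120 c3t11d131; do
    echo /etc/vx/bin/vxdisksetup -i "$dev" format=sliced
done
```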


To add disks in the DG, use the following command lines:

This is an example given for a Disk Group “DG_name”, made up of a mirrored Logical Volume “Volume_name”.

Add the DG disks for SYMM09: => On one node !

node1# vxdg -g DG_name adddisk DG_name_m_08=emcpower553
node1# vxdg -g DG_name adddisk DG_name_m_09=emcpower558
node1# vxdg -g DG_name adddisk DG_name_m_10=emcpower557

Add the DG disks for SYMM11: => On one node !

node1# vxdg -g DG_name adddisk DG_name_08=emcpower554
node1# vxdg -g DG_name adddisk DG_name_09=emcpower555
node1# vxdg -g DG_name adddisk DG_name_10=emcpower556
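The six adddisk commands follow one pattern, so they can be generated from a list of disk-media-name/device pairs. A dry-run sketch (pairs taken from the example above; "echo" keeps it read-only):

```shell
# Dry-run: print one vxdg adddisk command per (media name, device) pair.
while read dm dev; do
    echo vxdg -g DG_name adddisk "${dm}=${dev}"
done <<'EOF'
DG_name_m_08 emcpower553
DG_name_m_09 emcpower558
DG_name_m_10 emcpower557
DG_name_08 emcpower554
DG_name_09 emcpower555
DG_name_10 emcpower556
EOF
```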


node1# vxprint -g DG_name (Check the DG configuration)

node1# scconf -c -D name=DG_name,sync


For every added disk in the Disk Group, declare the disk in NOHOTUSE mode:

node1# vxedit -g DG_name set nohotuse=on DG_name_m_08
node1# vxedit -g DG_name set nohotuse=on DG_name_m_09
node1# vxedit -g DG_name set nohotuse=on DG_name_m_10

node1# vxedit -g DG_name set nohotuse=on DG_name_08
node1# vxedit -g DG_name set nohotuse=on DG_name_09
node1# vxedit -g DG_name set nohotuse=on DG_name_10
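As with adddisk, the vxedit lines can be generated in a loop; this dry-run sketch prints one command per disk media name (remove "echo" to apply):

```shell
# Dry-run: one vxedit nohotuse command per disk media name.
for dm in DG_name_m_08 DG_name_m_09 DG_name_m_10 \
          DG_name_08 DG_name_09 DG_name_10; do
    echo vxedit -g DG_name set nohotuse=on "$dm"
done
```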


node1# vxprint -g DG_name (Check the DG configuration)

node1# scconf -c -D name=DG_name,sync


In this example, we want to grow the logical volume volume_name by adding two new disks to each plex; the worked numbers below add 12571200 sectors (about 6 GB) per plex.


node1:root> df -k /filesystem
Filesystem kbytes used avail capacity Mounted on
/dev/vx/dsk/DG_name/volume_name
10268396 8923420 1252376 88% /filesystem
node1:root>

Get the actual usable size of the disks:

node1:root> vxassist -g DG_name maxsize layout=mirror
Maximum volume size: 33607680 (16410Mb)
node1:root>

node1:root> vxdisk -g DG_name list DG_name_m_08 | grep public
public: slice=4 offset=0 len=8380800 disk_offset=3840
node1:root>
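The VxVM sizes above are in 512-byte sectors, so dividing by 2048 gives megabytes; this reproduces the figures printed by vxassist and vxdisk:

```shell
# 512-byte sectors -> MB: divide by 2048.
echo $((33607680 / 2048))   # vxassist maxsize -> 16410 MB, matching "(16410Mb)"
echo $((8380800 / 2048))    # public-region length of DG_name_m_08 -> 4092 MB
```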


Create a subdisk on each new disk:

node1# vxmake -g DG_name sd DG_name_m_08-01 DG_name_m_08,0,8380800
node1# vxmake -g DG_name sd DG_name_m_09-01 DG_name_m_09,0,4190400

node1# vxmake -g DG_name sd DG_name_08-01 DG_name_08,0,8380800
node1# vxmake -g DG_name sd DG_name_09-01 DG_name_09,0,4190400


Associate the subdisks with the existing plexes volume_name-01 and volume_name-04:

node1# vxsd -g DG_name assoc volume_name-01 DG_name_m_08-01
node1# vxsd -g DG_name assoc volume_name-01 DG_name_m_09-01

node1# vxsd -g DG_name assoc volume_name-04 DG_name_08-01
node1# vxsd -g DG_name assoc volume_name-04 DG_name_09-01

Grow the FS and volume to the required size:

node1# vxresize -F ufs -g DG_name volume_name +12571200
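The +12571200 passed to vxresize is simply the sum of the two new subdisk lengths added to each plex:

```shell
# Growth per plex, in 512-byte sectors: the two subdisks created above.
echo $((8380800 + 4190400))   # -> 12571200, the vxresize argument
```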

node1:root> df -k /u663
Filesystem kbytes used avail capacity Mounted on
/dev/vx/dsk/DG_name/volume_name
16455210 8929556 7433054 55% /u663
node1:root>
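As a sanity check on the df output: sectors divided by 2 gives KB, and the growth reported by df is consistent with the added subdisks (slightly less, due to filesystem overhead):

```shell
# 512-byte sectors -> KB: divide by 2.
echo $((12571200 / 2))            # sectors added, as KB
echo $((16455210 - 10268396))     # df growth in KB (after minus before)
```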

v volume_name fsgen ENABLED 33434176 - ACTIVE - -
pl volume_name-01 volume_name ENABLED 33434880 - ACTIVE - -
sd DG_name_m_05-01 volume_name-01 ENABLED 8380800 0 - - -
sd DG_name_m_06-01 volume_name-01 ENABLED 8380800 8380800 - - -
sd DG_name_m_07-01 volume_name-01 ENABLED 2053440 16761600 - - -
sd DG_name_m_01-02 volume_name-01 ENABLED 2048640 18815040 - - -
sd DG_name_m_08-01 volume_name-01 ENABLED 8380800 20863680 - - -
sd DG_name_m_09-01 volume_name-01 ENABLED 4190400 29244480 - - -
pl volume_name-02 volume_name ENABLED LOGONLY - ACTIVE - -
sd DG_name_05-05 volume_name-02 ENABLED 64 LOG - - -
pl volume_name-04 volume_name ENABLED 33434880 - ACTIVE - -
sd DG_name_05-01 volume_name-04 ENABLED 8378880 0 - - -
sd DG_name_02-02 volume_name-04 ENABLED 2047680 8378880 - - -
sd DG_name_06-01 volume_name-04 ENABLED 8380800 10426560 - - -
sd DG_name_02-03 volume_name-04 ENABLED 7680 18807360 - - -
sd DG_name_01-02 volume_name-04 ENABLED 2048640 18815040 - - -
sd DG_name_08-01 volume_name-04 ENABLED 8380800 20863680 - - -
sd DG_name_09-01 volume_name-04 ENABLED 4190400 29244480 - - -


When finished:

• Check that the DG has the proper configuration (mirroring between SYMM9 and SYMM11):

node1# vxprint -g DG_name

• Check that the filesystem has been increased by the right size:

node1# df -k /filesystem

• Resync the Cluster DG:

node1# scconf -c -D name=DG_name,sync

#scgdevs - On node #1

When terminated, enter the following command on node #2:

#ps -ef | grep scgdevs
Wait for this process to terminate before going to the next step.
