HA NFS Sun Cluster Setup

Saturday, December 4, 2010 at 10:58 AM

Goal: a dual-node NFS failover cluster that shares two concatenated SVM volumes

2 nodes (hostnames: node1, node2) running Solaris 10 1/06, with patches, Sun Cluster, the Sun Cluster NFS agent, and VxFS installed.
Both nodes are connected to the FC SAN storage; 8 storage LUNs are mapped to each node.

Configure SVM
On both nodes
Create a 25 MB partition on the boot disk (slice 7)
Create the SVM state database replicas
metadb -afc 3 c0d0s7 (c0t0d0s7 on SPARC)

On one node (node1) - Create the disk sets :

metaset -s nfs1 -a -h node1 node2
metaset -s nfs1 -t -f
metaset -s nfs1 -a /dev/did/rdsk/d2 /dev/did/rdsk/d3 /dev/did/rdsk/d4 /dev/did/rdsk/d5
metainit -s nfs1 d1 4 1 /dev/did/rdsk/d2s0 1 /dev/did/rdsk/d3s0 1 /dev/did/rdsk/d4s0 1 /dev/did/rdsk/d5s0
metastat -s nfs1 -p >> /etc/lvm/md.tab
metaset -s nfs2 -a -h node2 node1
metaset -s nfs2 -t -f
metaset -s nfs2 -a /dev/did/rdsk/d6 /dev/did/rdsk/d7 /dev/did/rdsk/d8 /dev/did/rdsk/d9
metainit -s nfs2 d1 4 1 /dev/did/rdsk/d6s0 1 /dev/did/rdsk/d7s0 1 /dev/did/rdsk/d8s0 1 /dev/did/rdsk/d9s0
metastat -s nfs2 -p >> /etc/lvm/md.tab
scp /etc/lvm/md.tab node2:/tmp/md.tab
ssh node2 'cat /tmp/md.tab >> /etc/lvm/md.tab'
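The two disk-set blocks above differ only in the set name, the node order, and the DID devices. As a sketch, the repetition can be factored into a helper that prints the command sequence for review before it is run; the gen_diskset function and its dry-run (print-only) behavior are an addition here, not part of the original procedure:

```shell
#!/bin/sh
# Print (not run) the metaset/metainit sequence for one disk set.
# Usage: gen_diskset <setname> <primary-node> <secondary-node> <did> [<did> ...]
gen_diskset() {
    set_name=$1; primary=$2; secondary=$3; shift 3
    echo "metaset -s $set_name -a -h $primary $secondary"
    echo "metaset -s $set_name -t -f"
    dids=""
    slices=""
    for d in "$@"; do
        dids="$dids /dev/did/rdsk/$d"
        slices="$slices 1 /dev/did/rdsk/${d}s0"
    done
    echo "metaset -s $set_name -a$dids"
    # N-way concatenation: <nstripes> followed by one "1 <slice>" pair per stripe
    echo "metainit -s $set_name d1 ${#}${slices}"
    echo "metastat -s $set_name -p >> /etc/lvm/md.tab"
}

gen_diskset nfs1 node1 node2 d2 d3 d4 d5
gen_diskset nfs2 node2 node1 d6 d7 d8 d9
```

Piping the output through a shell (or pasting it) reproduces the commands above exactly.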

- Create VxFS file systems on the shared devices :

mkfs -F vxfs /dev/md/nfs1/rdsk/d1
mkfs -F vxfs /dev/md/nfs2/rdsk/d1

On both nodes - Create the directories

mkdir -p /global/nfs1
mkdir -p /global/nfs2

- Add the mount entries to the vfstab file (mount-at-boot is "no" because the file systems will be managed by the HAStoragePlus resources) :

cat >> /etc/vfstab << EOF
/dev/md/nfs1/dsk/d1 /dev/md/nfs1/rdsk/d1 /global/nfs1 vxfs 2 no noatime
/dev/md/nfs2/dsk/d1 /dev/md/nfs2/rdsk/d1 /global/nfs2 vxfs 2 no noatime
EOF
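If this step may be re-run, the append can be guarded so entries are not duplicated. A sketch; the VFSTAB variable and the scratch-file default are additions so the snippet can be tried safely before pointing it at the real /etc/vfstab:

```shell
#!/bin/sh
# Idempotent vfstab append: add each entry only if its mount point is
# not already present. VFSTAB defaults to a scratch file for a dry run;
# set VFSTAB=/etc/vfstab on the nodes to apply for real.
VFSTAB=${VFSTAB:-./vfstab.demo}

add_vfstab_entry() {
    entry=$1
    mntpt=$(echo "$entry" | awk '{print $3}')
    if ! grep -q "[[:space:]]$mntpt[[:space:]]" "$VFSTAB" 2>/dev/null; then
        echo "$entry" >> "$VFSTAB"
    fi
}

add_vfstab_entry '/dev/md/nfs1/dsk/d1 /dev/md/nfs1/rdsk/d1 /global/nfs1 vxfs 2 no noatime'
add_vfstab_entry '/dev/md/nfs2/dsk/d1 /dev/md/nfs2/rdsk/d1 /global/nfs2 vxfs 2 no noatime'
```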

- Add the logical hostnames to /etc/hosts on both nodes (substitute the real IP addresses) :

cat >> /etc/hosts << EOF
<ip-address-1> log-name1
<ip-address-2> log-name2
EOF

On one node (node1) - Mount the metavolumes and create the PathPrefix directories :

mount /global/nfs1
mount /global/nfs2

mkdir -p /global/nfs1/share
mkdir -p /global/nfs2/share

Configure HA NFS

On one node (node1) - Register the resource types :

scrgadm -a -t SUNW.HAStoragePlus
scrgadm -a -t SUNW.nfs

- Create failover resource groups :

scrgadm -a -g nfs-rg1 -h node1,node2 -y PathPrefix=/global/nfs1 -y Failback=true
scrgadm -a -g nfs-rg2 -h node2,node1 -y PathPrefix=/global/nfs2 -y Failback=true

- Add logical hostname resources to the resource groups :

scrgadm -a -j nfs-lh-rs1 -L -g nfs-rg1 -l log-name1
scrgadm -a -j nfs-lh-rs2 -L -g nfs-rg2 -l log-name2

- Create a dfstab file for each NFS resource (the filename suffix must match the NFS resource name created below, e.g. dfstab.share1 for resource share1) :

mkdir -p /global/nfs1/SUNW.nfs /global/nfs1/share
mkdir -p /global/nfs2/SUNW.nfs /global/nfs2/share
echo 'share -F nfs -o rw /global/nfs1/share' > /global/nfs1/SUNW.nfs/dfstab.share1
echo 'share -F nfs -o rw /global/nfs2/share' > /global/nfs2/SUNW.nfs/dfstab.share2
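The step above can be written as a loop over both shares. A sketch; the PREFIX variable is an addition so the files can be staged in a scratch directory before writing under /global for real:

```shell
#!/bin/sh
# Create the SUNW.nfs directory and dfstab for each share. PREFIX
# defaults to the current directory so the step can be staged safely;
# set PREFIX= (empty) on the nodes to write under /global directly.
PREFIX=${PREFIX-.}

for n in 1 2; do
    dir="$PREFIX/global/nfs$n/SUNW.nfs"
    mkdir -p "$dir" "$PREFIX/global/nfs$n/share"
    # One dfstab per NFS resource; the suffix must equal the resource name.
    echo "share -F nfs -o rw /global/nfs$n/share" > "$dir/dfstab.share$n"
done
```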

- Configure device groups :

scconf -c -D name=nfs1,nodelist=node1:node2,failback=enabled
scconf -c -D name=nfs2,nodelist=node2:node1,failback=enabled

- Create HAStoragePlus resources :

scrgadm -a -j nfs-hastp-rs1 -g nfs-rg1 -t SUNW.HAStoragePlus -x FilesystemMountPoints=/global/nfs1 -x AffinityOn=True
scrgadm -a -j nfs-hastp-rs2 -g nfs-rg2 -t SUNW.HAStoragePlus -x FilesystemMountPoints=/global/nfs2 -x AffinityOn=True

- Share :

share -F nfs -o rw /global/nfs1/share
share -F nfs -o rw /global/nfs2/share

- Bring the groups online :

scswitch -Z -g nfs-rg1
scswitch -Z -g nfs-rg2

- Create NFS resources :

scrgadm -a -j share1 -g nfs-rg1 -t SUNW.nfs -y Resource_dependencies=nfs-hastp-rs1
scrgadm -a -j share2 -g nfs-rg2 -t SUNW.nfs -y Resource_dependencies=nfs-hastp-rs2

- Change the probe interval for each NFS resource to a different value so that each probe runs at a different time (see InfoDoc 84817) :

scrgadm -c -j share1 -y Thorough_probe_interval=130
scrgadm -c -j share2 -y Thorough_probe_interval=140

- Change the number of NFS threads - on each node, in the file /opt/SUNWscnfs/bin/nfs_start_daemons, change

DEFAULT_NFSDCMD="/usr/lib/nfs/nfsd -a 16"

to

DEFAULT_NFSDCMD="/usr/lib/nfs/nfsd -a 1024"
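The edit can be scripted with sed. A sketch; the F variable and the scratch-copy default are additions so it can be tried before touching the real script:

```shell
#!/bin/sh
# Raise the nfsd thread count in the Sun Cluster NFS start script.
# F defaults to a scratch copy holding the relevant line; on the nodes,
# set F=/opt/SUNWscnfs/bin/nfs_start_daemons to edit the real script.
F=${F:-./nfs_start_daemons.demo}
[ -f "$F" ] || echo 'DEFAULT_NFSDCMD="/usr/lib/nfs/nfsd -a 16"' > "$F"

# Rewrite to a temp file first, then move into place.
sed 's/nfsd -a 16"/nfsd -a 1024"/' "$F" > "$F.new" && mv "$F.new" "$F"
```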

- Enable NFS resources :

scswitch -e -j share1
scswitch -e -j share2

- Switch resource groups to check the cluster :

scswitch -z -h node2 -g nfs-rg1
scswitch -z -h node2 -g nfs-rg2
scswitch -z -h node1 -g nfs-rg1
scswitch -z -h node1 -g nfs-rg2
scswitch -z -h node2 -g nfs-rg2
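The switch sequence above can be driven by a loop. This sketch prints the commands rather than running them, so the order can be checked first; the run() wrapper and the dry-run form are additions, not from the original:

```shell
#!/bin/sh
# Dry run of the failover test: print each scswitch step in order.
# Drop the echo in run() to actually move the resource groups.
run() { echo "$@"; }

for step in \
    "node2 nfs-rg1" \
    "node2 nfs-rg2" \
    "node1 nfs-rg1" \
    "node1 nfs-rg2" \
    "node2 nfs-rg2"
do
    set -- $step                      # split "node group" into $1 $2
    run scswitch -z -h "$1" -g "$2"
done
```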
Configure IPMP

On the node node1

cat > /etc/hostname.bge0 << EOF
node1 netmask + broadcast + group sc_ipmp0 up \
addif netmask + broadcast + -failover -standby deprecated up
EOF

cat > /etc/hostname.bge1 << EOF
netmask + broadcast + group sc_ipmp0 -failover -standby deprecated up
EOF

On the node node2

cat > /etc/hostname.bge0 << EOF
node2 netmask + broadcast + group sc_ipmp0 up \
addif netmask + broadcast + -failover -standby deprecated up
EOF

cat > /etc/hostname.bge1 << EOF
netmask + broadcast + group sc_ipmp0 -failover -standby deprecated up
EOF
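Each addif clause and the bge1 file need an IPMP test address in front of the netmask keyword. For reference, a complete pair of hostname files might look like the fragment below; the node1-test and node1-test2 names are hypothetical /etc/hosts entries for the test addresses, not values from this setup:

```
# /etc/hostname.bge0 on node1 (data address plus a non-failover test address;
# node1-test is a hypothetical /etc/hosts name)
node1 netmask + broadcast + group sc_ipmp0 up \
addif node1-test netmask + broadcast + -failover -standby deprecated up

# /etc/hostname.bge1 on node1 (test address only; node1-test2 is hypothetical)
node1-test2 netmask + broadcast + group sc_ipmp0 -failover -standby deprecated up
```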

Configure XNTP

On the node node1
Edit /etc/inet/ntp.conf.cluster
Fix according to InfoDoc 85773 :
Replace the "exit 0" line in /etc/rc2.d/S77scpostconfig.sh with "return"

Applied patches (SPARC and x86 patch IDs omitted) :

- Sun Cluster 3.1: Core Patch for Solaris 10
- SunOS 5.10: sd and ssd patch
- SunOS 5.10: Sun Fibre Channel Device Drivers
- SunOS 5.10: scsi_vhci driver patch
- SunOS 5.10: Sun Fibre Channel Host Bus Adapter Library
- SunOS 5.10: Emulex-Sun LightPulse Fibre Channel Adapter driver
- VRTSvxfs 4.1MP1: Maintenance Patch for File System 4.1

