RAID mirroring for V490 with StorEdge 3510

Saturday, August 8, 2009 at 9:01 AM

Hardware Setup and Assumptions

This guide is intended primarily to get the basic Solaris 9 Operating Environment set up and to configure the FC3510 array for use. It stops short of setting up the partitioning scheme used by ON-Center, but leaves the system in a state where this is easily done as the next step.
This guide assumes that the FC3510 only has a single controller, and is connected to the host with a single Fibre Channel HBA. The HBA should be connected to Port 0 of the FC3510 controller.

Operating System Setup

This section guides you through the installation and configuration of Solaris 9.

Operating System Installation
Install Solaris 9 on the server. Keep the following in mind:

  • Just install Solaris - don't worry about Extra Value Software, the Software Companion, or any extra products.
  • Select "Entire Distribution plus OEM support" as the software group to install.
  • Allocate only system partitions during the install (/, /var, and swap). Choose Manual Layout and use the example below.
  • Use only the first disk for system partitions, as we will encapsulate and mirror these to the second disk later with Solaris Volume Manager.
  • The system should have two internal physical disks. Leave the bulk of the free space on the first disk unallocated. We can use this space for additional soft partitions later, if needed.

Example disk configuration for a system with two 146GB drives (this is just the first disk, we will set up the second disk after install):

Part      Tag          Size
Slice 0   root          30.00GB   # /
Slice 1   swap           8.00GB   # swap space
Slice 2   backup       146.35GB   # whole disk
Slice 3   var           10.00GB   # /var
Slice 4   unassigned     8.00GB   # dump device
Slice 5   unassigned     0
Slice 6   unassigned    90.00GB   # soft partitions mirror
Slice 7   unassigned    34.78MB   # SVM state database

  • Configure networking on the server as appropriate (a minimal example follows this list).
  • Configure the network-based Service Processor for future administration. This is highly recommended.
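
As a reference for the networking step above, a minimal static configuration on Solaris 9 is done with a few flat files that take effect on the next boot. The interface name (ce0), hostname, and addresses below are placeholders for this example only:

# echo myhost > /etc/hostname.ce0

# echo "192.168.1.10 myhost" >> /etc/hosts

# echo 192.168.1.1 > /etc/defaultrouter

# echo "192.168.1.0 255.255.255.0" >> /etc/netmasks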

Disk Configuration

We use Solaris Volume Manager (formerly Solstice DiskSuite) to provide redundancy and management for our system. For this description, the first disk (primary mirror) is c1t0d0 and the second disk (secondary mirror) is c1t1d0.

  • Duplicate the slice layout of the boot disk on the drive that will be its mirror:

# prtvtoc /dev/rdsk/c1t0d0s2 | fmthard -s - /dev/rdsk/c1t1d0s2

  • Configure the new dedicated dump device. This is done so that we don't have to depend on a working swap mirror to retrieve a kernel core dump:

# dumpadm -d /dev/dsk/c1t0d0s4
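
Running dumpadm with no arguments afterwards prints the current crash dump configuration, so you can confirm that the dedicated dump device took effect:

# dumpadm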

  • Initialize the state database replicas for Solaris Volume Manager. This gives us four state replicas, two per disk:

# metadb -a -f -c 2 c1t0d0s7

# metadb -a -c 2 c1t1d0s7
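
You can confirm that all four replicas exist, along with their status flags, by listing them:

# metadb -i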

  • Create the /, swap, and /var volumes as follows:

# metainit -f d10 1 1 c1t0d0s0

# metainit -f d11 1 1 c1t0d0s1

# metainit -f d13 1 1 c1t0d0s3

# metainit d20 1 1 c1t1d0s0

# metainit d21 1 1 c1t1d0s1

# metainit d23 1 1 c1t1d0s3

# metainit d0 -m d10

# metainit d1 -m d11

# metainit d3 -m d13

  • Set up the system to boot from the mirror:

# metaroot d0
  • Change the entries for swap and /var in /etc/vfstab to point to their new locations (/dev/md/dsk/d1, /dev/md/dsk/d3). Don't forget to change the /dev/rdsk entries to /dev/md/rdsk as well.
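
For reference, the updated swap and /var lines in /etc/vfstab should end up looking roughly like this (assuming the d1 and d3 mirrors defined above):

/dev/md/dsk/d1   -                 -      swap   -   no   -
/dev/md/dsk/d3   /dev/md/rdsk/d3   /var   ufs    1   no   -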
  • Write down the device path of the second disk in the mirror, shown as the symlink target in the output below:

# ls -l /dev/rdsk/c1t1d0s0

lrwxrwxrwx 1 root root 47 Feb 22 11:38 /dev/rdsk/c1t1d0s0 -> ../../devices/pci@1c,600000/scsi@2/sd@1,0:a,raw

  • Halt the system:

# init 0

  • Create OpenBoot PROM aliases for the bootable disks. Some controllers require you to replace sd with disk; you can verify this with the show-disks command before running the nvalias commands. For the V210 and the V240, it appears we should make this substitution. Also, remove ,raw from the end of the device name. You can also look at the output of devalias for examples; in particular, look at disk0 and disk1.

ok nvalias bootdisk /pci@1c,600000/scsi@2/disk@0,0:a

ok nvalias mirrdisk /pci@1c,600000/scsi@2/disk@1,0:a

ok setenv boot-device bootdisk mirrdisk

ok boot

  • Once booted, attach the submirrors to complete the mirroring:

# metattach d0 d20

# metattach d1 d21

# metattach d3 d23
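
The newly attached submirrors will resync in the background; you can check on the progress with metastat, which reports a percentage for any resync in progress:

# metastat d0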

  • Mirror the remaining free space on the disks for future soft partitions. Later, you can allocate soft partitions from d6 for whatever you like:

# metainit d16 1 1 c1t0d0s6

# metainit d26 1 1 c1t1d0s6

# metainit d6 -m d16

# metattach d6 d26

  • For example, to allocate a 10g soft partition that would be mounted at /data:

# metainit d100 -p d6 10g

# newfs -m 0 /dev/md/dsk/d100

# mkdir /data

  • Then, edit /etc/vfstab to include the new mountpoint and set it to mount during boot.
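
The new /etc/vfstab entry for this example would look something like this:

/dev/md/dsk/d100   /dev/md/rdsk/d100   /data   ufs   2   yes   -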

Patching

Now we will install the Solaris 9 Recommended and Security patches, last updated Feb 21, 2007. Download 9_Recommended_Security-20070221.zip. Once it's on the system, run the following commands to install:

# unzip -d 9_Recommended_Security-20070221 9_Recommended_Security-20070221.zip

... unzip output ...

# cd 9_Recommended_Security-20070221/9_Recommended

# ./install_cluster

Answer `y' to start installing patches.
Many of the patches will fail to install, but this is (usually) because the patch has already been applied or is not needed on the system. Check the patch log at /var/sadm/install_data/Solaris_9_Recommended_Patch_Cluster_log for details of the process.
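
If you want to check whether a particular patch from the cluster is installed, showrev lists every patch applied to the system (the patch ID below is a placeholder; substitute the one you are interested in):

# showrev -p | grep <patch-id>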

FC3510 Setup

Here we install the software required to access the FC3510 storage array, and then we configure it.

SAN 4.4.12 Installation

Next we install the SAN software, which includes drivers for the Fibre Channel host adapter.

Download SAN 4.4.12: SAN_4.4.12_install_it.tar.Z. Once it's on the system, run the following commands to install:

# zcat SAN_4.4.12_install_it.tar.Z | tar xvf -

... tar output ...

# cd SAN_4.4.12_install_it

# ./install_it

Answer `y' to start installing the software.
If the installation succeeds, reboot the system:

# shutdown -y -i6 -g0

It would be wise to watch the system console while the system reboots, in case any errors related to the newly installed software appear during boot.

StorEdge 3000 Family Software Installation

The Sun StorEdge 3000 Family Software includes the command line utility (sccli) used to manage the external array.
Download the Sun StorEdge 3000 Family Software:
2.2_sw_solaris-sparc.zip and 2.3_smis_provider.zip. Once on the system, run the following commands to install:

# unzip -d 2.2_sw_solaris-sparc 2.2_sw_solaris-sparc.zip

... unzip output ...

# pkgadd -d 2.2_sw_solaris-sparc/solaris/sparc

Answer `all' to install all packages, and then `y' to any questions.

# unzip -d 2.3_smis_provider 2.3_smis_provider.zip

... unzip output ...

# pkgadd -d 2.3_smis_provider

Answer `all' to install all packages, and then `y' to any questions.

FC3510 Firmware Upgrade

Now we will upgrade the firmware on the FC3510.
Download patch 113723-15:
113723-15.zip. Decompress the patch with unzip and read the section entitled "Patch Installation Instructions" inside of README.113723-15. This README documents the upgrade steps better than I could do here.

Array Configuration

We need to delete any existing LUN mappings, and then delete the logical drives themselves. Run sccli to enter the configuration tool. It should connect to the 3510 automatically.
First, display the current LUN mappings (this is an example and may not match what you see):

sccli> show lun-maps

Ch  Tgt  LUN  ld/lv  ID-Partition  Assigned  Filter Map
--------------------------------------------------------
 0   40    0  ld0    1A6C4238-00   Primary

For each LUN mapping, run unmap Ch.Tgt.LUN. Do this starting with the highest numbered LUNs and work your way down to 0. For example, using the above output:

sccli> unmap 0.40.0

sccli>
Exit sccli and run the following:

# devfsadm -Cv

... output regarding device changes, if any ...

Now restart sccli, and display the logical drives:

sccli> show logical-drives

LD   LD-ID     Size     Assigned  Type   Disks  Spare  Failed  Status
------------------------------------------------------------------------
ld0  1A6C4238  58.59GB  Primary   RAID0  2      0      0       Good
     Write-Policy Default   StripeSize 128KB

Delete each logical drive by running delete logical-drive LD. For example:

sccli> delete logical-drive ld0

This operation will result in the loss of all data on the logical drive.

Are you sure? y

sccli: ld0: deleted logical drive

Now we have a clean array, ready for a new configuration. We have six 73GB drives in each array; we will configure five of them as a RAID5 set, with the sixth disk as a spare. This should give us about 292GB of usable storage. Note that the following examples were captured on a system with only five disks, so the sizes shown will be smaller than on a six-disk system. Make sure that your configuration uses all but one disk for the RAID5 array and the last disk as a spare.

First, we configure host channel 0. This setting should be the same as the default configuration, but we'll make sure:

sccli> configure channel 0 host pid 40 --reset

sccli: shutting down controller...

sccli: controller is shut down

sccli: resetting controller...

sccli: controller has been reset

Now we will set the cache parameters for the array. Since this is a single-controller array, we need to make sure we're using write-through caching, as write-back caching is dangerous without a redundant controller. Also, we set the array to optimize for random access:

sccli> set cache-parameters random write-through

Changes will not take effect until controller is reset

Do you want to reset the controller now? y

sccli: resetting controller...

sccli: controller has been reset

Now we're ready to create our logical disk. Type the following commands into sccli to display the disks in the system, configure a RAID5 logical disk, and configure a global spare drive. Remember, make sure that your configuration uses all but 1 disk for the RAID5 array and the last disk for a spare:

sccli> show disks

Ch   Id  Size     Speed  LD    Status  IDs                                        Rev
----------------------------------------------------------------------------------------

2(3) 0 68.37GB 200MB NONE FRMT FUJITSU MAT3073F SUN72G 0602 S/N 000513B02RF7 WWNN 500000E010FC3EF0

2(3) 1 68.37GB 200MB NONE FRMT FUJITSU MAT3073F SUN72G 0602 S/N 000512B02DYP WWNN 500000E010F8CF60

2(3) 2 68.37GB 200MB NONE FRMT FUJITSU MAT3073F SUN72G 0602 S/N 000512B02E3S WWNN 500000E010F8D410

2(3) 3 68.12GB 200MB NONE NEW FUJITSU MAT3073F SUN72G 0602 S/N 000513B02RN8 WWNN 500000E010FC8500

2(3) 4 68.37GB 200MB NONE FRMT FUJITSU MAT3073F SUN72G 0602 S/N 000514B02VRY WWNN 500000E010FE8100

sccli> create logical-drive raid5 2.0,2.1,2.2,2.3 primary global-spare 2.4

sccli> map ld0 0.40.0

sccli> show disks

Ch   Id  Size     Speed  LD    Status      IDs                                    Rev
----------------------------------------------------------------------------------------

2(3) 0 68.37GB 200MB ld0 ONLINE FUJITSU MAT3073F SUN72G 0602 S/N 000513B02RF7 WWNN 500000E010FC3EF0

2(3) 1 68.37GB 200MB ld0 ONLINE FUJITSU MAT3073F SUN72G 0602 S/N 000512B02DYP WWNN 500000E010F8CF60

2(3) 2 68.37GB 200MB ld0 ONLINE FUJITSU MAT3073F SUN72G 0602 S/N 000512B02E3S WWNN 500000E010F8D410

2(3) 3 68.37GB 200MB ld0 ONLINE FUJITSU MAT3073F SUN72G 0602 S/N 000513B02RN8 WWNN 500000E010FC8500

2(3) 4 68.37GB 200MB GLOBAL STAND-BY FUJITSU MAT3073F SUN72G 0602 S/N 000514B02VRY WWNN 500000E010FE8100

sccli> show logical-drives

LD   LD-ID     Size      Assigned  Type   Disks  Spare  Failed  Status
------------------------------------------------------------------------
ld0  7D1F7008  204.35GB  Primary   RAID5  4      1      0       Good I
     Write-Policy Default   StripeSize 32KB

sccli> show map

We are now done with the array configuration. Exit sccli and run:

# devfsadm -Cv

... output regarding device changes, if any ...

Host Configuration

We will now configure the drive array for access from the host system.

Slice Setup

Now, when you run format, you should see the new device and should be able to configure it:

# format

Searching for disks...done

c2t40d0: configured with capacity of 204.34GB

AVAILABLE DISK SELECTIONS:

0. c1t0d0
   /pci@8,600000/SUNW,qlc@2/fp@0,0/ssd@w21000011c63f6cd7,0
1. c1t1d0
   /pci@8,600000/SUNW,qlc@2/fp@0,0/ssd@w2100000c50010967,0
2. c2t40d0
   /pci@9,600000/SUNW,qlc@1,1/fp@0,0/ssd@w216000c0ff88655a,0

Specify disk (enter its number): 2

selecting c2t40d0

[disk formatted]

Disk not labeled. Label it now? y

... format menu, choose `p' and `p' again ...

And there's the disk, right on target 40 where we put it. You should now see the slices in the new LUN. Reconfigure the slices so that all of the space is given to slice 0 (i.e., it matches the s2 backup slice). It should look something like this:

partition> p

Current partition table (unnamed):

Total disk cylinders available: 52723 + 2 (reserved cylinders)

Part      Tag    Flag     Cylinders         Size            Blocks
  0 unassigned    wm       0 - 52722      204.34GB    (52723/0/0) 428532544
  1 unassigned    wm       0                 0         (0/0/0)            0
  2     backup    wu       0 - 52722      204.34GB    (52723/0/0) 428532544
  3 unassigned    wm       0                 0         (0/0/0)            0
  4 unassigned    wm       0                 0         (0/0/0)            0
  5 unassigned    wm       0                 0         (0/0/0)            0
  6 unassigned    wm       0                 0         (0/0/0)            0
  7 unassigned    wm       0                 0         (0/0/0)            0

Label the disk and exit the format utility.
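
Labeling and quitting typically looks something like this:

partition> label

Ready to label disk, continue? y

partition> quit

format> quit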

Solaris Volume Manager Setup

We need to add another set of SVM state replicas to the new disk and create a simple SVM metadevice (a one-way concatenation) so we can allocate soft partitions from it:

# metadb -a -c 2 c2t40d0s0

# metainit d7 1 1 c2t40d0s0

At this point we should have two SVM devices suitable for soft partition allocation: d6 and d7. d6 has the extra space from the disks internal to the server, and d7 has the entire RAID5 array from the FC3510. For example, to allocate a 50GB soft partition from the RAID5 array (we'll call it d101), you would run the following command:

# metainit d101 -p d7 50g

You could then run newfs on /dev/md/dsk/d101 and otherwise treat it as a standard block device.
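
For example, a hypothetical /array filesystem on that soft partition could be created and mounted like this (add a matching /etc/vfstab entry if it should mount at boot):

# newfs /dev/md/rdsk/d101

# mkdir /array

# mount /dev/md/dsk/d101 /array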
