Failed root disk replacement in SVM

Wednesday, December 21, 2011 at 4:46 AM
Step 1: Back up the current SDS configuration:
[root]# metastat -p >> /etc/lvm/md.tab
Remove the replicas stored on this disk:
[root]# metadb -d c1t0d0s4
Step 2: Detach and delete sub-mirror d11 (swap):
[root]# metadetach d1 d11 && metaclear d11
Step 3: Disk removal:
[root]# luxadm remove_device /dev/rdsk/c1t0d0s2
WARNING!!! Please ensure that no file systems are mounted on these device(s). All data on these devices should have been backed up.
The list of devices which will be removed is:
1: Device name: /dev/rdsk/c1t0d0s2 Node WWN: 2000000c504fd050
Device Type: Disk device
Device Paths: /dev/rdsk/c1t0d0s2
Please verify the above list of devices and then enter c or <CR> to Continue or q to Quit. [Default: c]: c
stopping: /dev/rdsk/c1t0d0s2.... Done
offlining: /dev/rdsk/c1t0d0s2.... Done
The drives are now off-line and spun down.
Physically remove the disk and press the Return key.
Hit Return after removing the device(s).
Note: The disk has to be physically removed before pressing "ENTER" when running luxadm remove_device; otherwise, picld will notify the kernel that the drive was not removed and the WWN of this drive will remain in the loop -- and the FC-AL subsystem may get confused, especially if recommended patches are out of date.
[root]# devfsadm -C -c disk
Step 4: Insertion of the new disk drive:
<<< physical insertion >>>
[root]# devfsadm -v
Step 5: Partitioning (copy the VTOC from the surviving mirror disk):
[root]# prtvtoc /dev/rdsk/c1t1d0s2 | fmthard -s - /dev/rdsk/c1t0d0s2
Step 6: Adding SDS replicas:
[root]# metadb -ac 3 c1t0d0s4
Step 7: Resynchronization:
[root]# metainit d11 && metattach d1 d11
[root]# for i in 0 3 6 ; do metareplace -e d$i c1t0d0s$i ; done
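Once the resync completes, the mirror and replica state can be checked (a quick verification sketch, using the metadevice names from this example):
[root]# metastat | egrep -i 'state|resync'
[root]# metadb -i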

How to Remove and Replace a Sun Storage 3510/3511 FC Array Battery

Tuesday, December 6, 2011 at 4:29 AM
PROBLEM OVERVIEW
Defective Battery
WHAT STATE SHOULD THE SYSTEM BE IN TO BE READY TO PERFORM THE RESOLUTION ACTIVITY?
Write cache should be disabled prior to replacing the battery to prevent data loss if a power event occurs before the replacement battery is charged.

WHAT ACTION DOES THE CUSTOMER NEED TO TAKE?
Check for amber LED on battery, check CLI battery status and location (upper or lower slot). From SC CLI: sccli> show battery
Prepare array for battery replacement
1. To determine status of write cache on array, access array Main Menu from telnet.


2. Select the appropriate mode and press Return

3. Clear any pop-up messages and select Disable pop-ups. On the upper right of the Main Menu, Cache Status is displayed; the value to its right indicates the percentage of controller cache that differs from what is saved to disk. "Clean" means all cache has been saved to disk.

4. From telnet Main Menu, disable cache:
a. Select View and edit Configuration parameters
b. Select Caching Parameters. Note cache state (enabled or disabled).
c. Select Write-Back Cache Enable? > No
d. Disable Write-back cache? > Yes
From sccli, disable cache:
e. sccli -o (out-of-band) or sccli /dev/dsk/c#t#d#s2 (in-band)
f. Check cache parameters: sccli> show cache-parameters
g. Disable cache: sccli> set cache-parameters write-through

5. Verify write-through cache status. sccli> show cache-parameters
Note: Verify the "cache" status LED on the RAID controller is off

6. Remove ethernet and serial cable from battery module

7. Unscrew and disengage thumbscrews and cables on battery module and use thumbscrews to pull out the battery module

8. Insert new battery module and tighten thumbscrews until finger tight.
WARNING: Do not force or apply excessive pressure when inserting battery module

9. Re-attach cables

10. Using CLI (sccli), check battery status and set the in-service date:
sccli> show battery-status
sccli> show battery-status -u
sccli> show battery-status
If second battery needs to be replaced, repeat steps 6-10

11. Confirm controller redundancy. sccli> show redundancy

12. Restore write cache to original settings. To enable Write-Back Cache:
a. Select View and edit Configuration parameters
b. Select Caching Parameters
c. Select Write-Back Cache Disabled
d. Enable Write-Back Cache? --> YES
From sccli: sccli> set cache-parameters write-back
Note: Write cache will re-enable once the battery can protect the on-board cache. Verify the battery is charging (blinking green light on the battery). The battery status LED may remain amber for up to 30 minutes.

13. Verify host access

14. Restart the Configuration Services agent to prevent expired-battery messages from recurring. From the host, type:
/etc/init.d/ssagent stop
/etc/init.d/ssagent start

VCS Interview Questions

Sunday, November 20, 2011 at 2:11 AM
1. How do you check the status of VERITAS Cluster Server aka VCS ?
Ans: hastatus -sum

2. Which is the main config file for VCS and where it is located?
Ans: main.cf is the main configuration file for VCS and it is located in /etc/VRTSvcs/conf/config.

3. Which command will you use to check the syntax of the main.cf?
Ans: hacf -verify /etc/VRTSvcs/conf/config

4. How will you check the status of individual resources of a VCS cluster?
Ans: hares -state

5. What is a service group in VCS ?
Ans: A service group is made up of the resources, and the dependency links between them, that are required to maintain the HA of an application.
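For illustration, a service group with an IP resource could be built from the command line like this (a sketch with hypothetical names websg, webip, node1 and node2):
# haconf -makerw
# hagrp -add websg
# hagrp -modify websg SystemList node1 0 node2 1
# hagrp -modify websg AutoStartList node1
# hares -add webip IP websg
# hares -modify webip Device hme0
# hares -modify webip Address 192.168.1.10
# haconf -dump -makero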

6. What is the use of halink command ?
Ans: halink is used to link the dependencies of the resources

7. What is the difference between switchover and failover ?
Ans: Switchover is a manual task whereas failover is automatic. You switch over a service group from one cluster node to another for planned events such as a scheduled shutdown or reboot. Failover moves the service group to another node automatically when the node faults, for example when the system hangs or the VCS heartbeat links go down because of some disaster.

8. What is the use of hagrp command ?
Ans: hagrp is used for doing administrative actions on service groups like online, offline, switch etc.
9. How to switchover the service group in VCS ?
Ans: hagrp -switch <service_group> -to <system>

10. How to online the service groups in VCS ?
Ans: hagrp -online <service_group> -sys <system>

Linux Interview Questions

at 2:09 AM
1.What is the best RAID level?
RAID 0 for performance
RAID 5 for High availability
RAID 6 for even better availability if the budget allows

2.What is MAC address and How to check the MAC address in linux?
A MAC address is a Media Access Control address. It is a unique address assigned to almost all networking hardware such as Ethernet cards, routers, etc.
Most layer 2 network protocols use one of three numbering spaces which are designed to be globally unique.

Linux Command to see MAC address:
ifconfig is used to configure network interfaces.
$ /sbin/ifconfig | grep HWaddr

Output: eth0 Link encap:Ethernet HWaddr 00:0F:EA:91:04:07

OR
$ /sbin/ifconfig
eth0 Link encap:Ethernet HWaddr 00:0F:EA:91:04:07 <<< THIS IS THE MAC ADDRESS
inet addr:192.168.1.1 Bcast:192.168.1.255 Mask:255.255.255.0
inet6 addr: fe80::20f:eaff:fe91:407/64 Scope:Link

OR as a root user type following command:
# grep eth0 /var/log/dmesg

eth0: RealTek RTL8139 at 0xc000, 00:0f:ea:91:04:07, IRQ 18 <<< the second field from the end of this line is the MAC address
eth0: Identified 8139 chip type 'RTL-8100B/8139D'
eth0: link up, 100Mbps, full-duplex, lpa 0x45E1

3. How to assign a permanent IP to a client which is presently using DHCP in Linux?
/sbin/ifconfig eth0 192.168.10.1 netmask 255.255.255.0 broadcast 192.168.10.255
In this command we are assigning 192.168.10.1 IP to ethernet interface(NIC card) eth0.

Also, in a Red Hat Linux terminal you can type the command "setup" to launch a wizard-style interface in which you can choose Network configuration and configure the IP.

You can use the GUI tool /usr/bin/neat - Gnome GUI network administration tool. It handles all interfaces and configures for both static assignment as well as dynamic assignment using DHCP.
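Note that an address set with ifconfig alone does not survive a reboot; on Red Hat-style systems a persistent static IP is normally set in the interface config file. A minimal sketch, assuming eth0 and the addressing used above:

/etc/sysconfig/network-scripts/ifcfg-eth0:
DEVICE=eth0
BOOTPROTO=static
IPADDR=192.168.10.1
NETMASK=255.255.255.0
ONBOOT=yes

Then restart networking with: # service network restart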

4. How to exclude some ip address range in DHCP?
To exclude a range of IP addresses in a subnet, split the subnet's address range into two ranges.
Example:
subnet 1.1.1.0 netmask 255.255.255.0
{
range 1.1.1.10 1.1.1.15
range 1.1.1.21 1.1.1.40
}

So in the above example the IPs 1.1.1.16 - 1.1.1.20 are automatically excluded from the list.

5. What is the default serial number format in DNS ?
Serial numbers are based on ISO dates (YYYYMMDDnn). Every time the data in the zone database is changed, the serial number must be increased so that the slave servers know the zone has changed.
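For example, the SOA record of a zone might carry a date-based serial (illustrative names and values):

example.com.  IN  SOA  ns1.example.com. admin.example.com. (
        2011112001  ; serial (YYYYMMDDnn)
        3600        ; refresh
        900         ; retry
        604800      ; expire
        86400 )     ; negative-cache TTL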

6. How to pull the data to a secondary NIS server from the master server?
ypxfr is a pull command which runs on each slave server to make that server import a map from the master NIS server.

7. What file needs to be changed on an NIS client if you are moving the machine from one subnetwork to another subnetwork?
/etc/yp.conf

8. How to see memory usage?
Commands "top" and "free -m"

9. How to increase a filesystem?
Grow the underlying partition or logical volume first (for example with fdisk or lvextend), then grow the filesystem itself (for example with resize2fs for ext2/ext3).
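A minimal LVM-based sketch, assuming a hypothetical volume group vg0 and logical volume lv_data carrying an ext3 filesystem:

# lvextend -L +5G /dev/vg0/lv_data   # grow the logical volume by 5 GB
# resize2fs /dev/vg0/lv_data         # grow the ext3 filesystem to fill the new space
(unmount the filesystem first if the running kernel does not support online resize)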

Solaris Interview Questions

at 2:07 AM
1) What files control user administration?
A) /etc/passwd file: 7 Fields: loginid:x:userid:groupid:comment:homedir:shell
/etc/shadow: 9 Fields: loginid:password:lastchng:min:max:warn:inactive:expire
/etc/group : 4 Fields : groupname:password:groupid:username list

2) What does the "pwconv" command do?
A) It updates the /etc/shadow file with information from /etc/passwd file.

3) Where are the failed login attempts to the system logged?
A) /var/adm/loginlog (We need to create this file as it does not exist by default)

4) Which command shows the users currently logged in to system?
A) who ( It reads the information from /var/adm/utmpx file)

5) Which command will show detailed information about a user?
A) finger -m <username>

6) Which command displays all login and logouts?
A) last (It reads the information from /var/adm/wtmpx file)

7) What is the "StickyBit" file permission?
A) Sticky Bit permission protects files within a publicly writable directory.
A file in a directory set with the sticky bit can be deleted only by the owner of the file, the owner of the directory, or the root user.

8) How is ACL (Access Control Lists) implemented?
A) 8.1) "getfacl" command : To display the ACL on a file.
Syntax : getfacl <filename>

8.2) setfacl command : To set the ACL on a file
Syntax : setfacl <options> <acl_entries> <filename>

8.3) setfacl -m command : To modify ACL entries
Syntax : setfacl -m <acl_entries> <filename>

8.4) setfacl -s command : Remove old ACL entries and replace with new ones.
Syntax : setfacl -s <acl_entries> <filename>

8.5) setfacl -d command : Delete ACL entries
Syntax : setfacl -d <acl_entries> <filename>
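For example, to grant a hypothetical user john read/write access to file1 and then display the result:
# setfacl -m user:john:rw-,mask:rw- file1
# getfacl file1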

9) Imp "root(/)" subdirectories and their purpose :
9.1) / : Root of overall file system.
9.2) /bin : Symbolic link to /usr/bin. Stores standard system commands and binary files.
9.3) /dev : Primary location for "logical" device names
9.4) /devices : Primary location for "physical" device names
9.5) /etc : Contains host specific system admin config files
9.6) /export : Default directory for commonly shared filesystems.
9.7) /home : Default directory / mount point for user's home directory
9.8) /kernel : Directory of platform independent loadable kernel file
9.9) /mnt : Temporary mount point for file systems
9.10) /opt : Default directory for add on packages
9.11) /sbin : Executables used in booting process and file recovery
9.12) /tmp : Temporary files
9.13) /usr : Mount point for /usr file system
9.14) /var : Directory for varying files, temporary logging or status files

10) What are the different disk slices?
Slice Name Function
0 / Root's system files
1 swap Swap area
2 Entire Disk
5 /opt Optional Software
6 /usr System Exe's
7 /export/home User's file and directories

11) Which command displays the system configuration information?
A) prtconf

12) Which command is used to configure newly attached hardware ?
A) devfsadm -c <device_class> (where <device_class> can be disk, tape, port, etc.)

13) What are the different types of "file systems" in Solaris?
A) There are 3 Types of file system :
13.A.1) Disk based : ufs (standard unix), hsfs (cd-rom), pcfs (Floppy)
Or udf (DVD and CD Rom)
13.A.2) Distributed : NFS (enables sharing of files between many types of n/w)
13.A.3) Pseudo : tmpfs (temporary), swapfs , fdfs, procfs

14) What is a "boot block"?
A) The bootstrap program (bootblk) is found in the next 15 disk sectors. Only the "root" file system has an active boot block, although the space is allocated for boot block at the beginning of each file system.

15) What is "superblock"?
A) Each file system is described by its superblock, which is contained in the 16 disk sectors following the boot block. It contains :
· No. of data blocks
· No.of cylinder groups
· Size of data block fragment
· Description of hardware
· Name of mount point
· File system state flag ( clean , stable , active , logging or unknown)

16) How will you repair the main superblock if it gets corrupted?
A) Every file system has backup superblock at block no.32, which can be given to fsck to repair the main superblock.
# fsck -o b=32 /dev/rdsk/c0t0d0s0

17) How to create new file systems ?
A) newfs /dev/rdsk/c0t0d0s0

18) How will you restore /etc/vfstab file if it gets corrupted?
A) Step 1 : Insert Solaris CD 1 of 2
Step 2 : Go to single user mode : ok boot cdrom -s
Step 3 : Run "fsck" on /(root) partition : # fsck /dev/rdsk/c0t0d0s0
Step 4 : Mount /(root) file system on /a directory to gain access to file system
# mount /dev/dsk/c0t0d0s0 /a
Step 5 : Set & export TERM variable
# TERM=sun
# export TERM
Step 6 : Edit /etc/vfstab file and remove the incorrect entry : # vi /a/etc/vfstab
Step 7 : Unmount the file system : # cd / ; # umount /a and reboot the system.

19) How will you share user's home directory?
A) Step 1 : Login as root and verify mountd daemon is running
# ps -ef | grep mountd
Step 2 : If the daemon is not running start it :
# /etc/init.d/nfs.server start
Step 3 : List all shared filesystems
# share
Step 4 : Edit the /etc/dfs/dfstab file and add :
share -F nfs /export/home
Step 5 : Share the file systems in the /etc/dfs/dfstab file :
# shareall -F nfs
Step 6 : Verify that the home directory is shared.
# share

20) What does /etc/inittab file contain ?
A) The /etc/inittab contains the systems default run level, processes to start/monitor
or restart. It also contains the actions to be taken when run level changes.
The /etc/inittab file is in the following format : id:rstate:action:process
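For example, a typical Solaris /etc/inittab contains entries such as:
is:3:initdefault:
sc:234:respawn:/usr/lib/saf/sac -t 300
(the first sets the default run level to 3; the second respawns the service access controller at run levels 2, 3 and 4)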

21) How will you use "shutdown" command?
A) # shutdown -i0 -g300 -y

22) How will you check the OBP version of your system ?
A) Use " banner" command at the ok prompt

23) Explain the Solaris Boot process?
A) 23.1) Boot PROM Phase : Runs POST to verify system hardware and memory, then loads the "bootblk" primary boot program.
23.2) Boot Program Phase : "bootblk" finds "ufsboot" and loads it into memory; ufsboot then loads the kernel.
23.3) Kernel Initialization Phase : Loads modules using "ufsboot",
creates user processes and starts the /sbin/init process.
23.4) Initialization Phase : Starts the "rc" scripts. These scripts check and mount file systems, start various processes and perform system maintenance tasks.

24) Backup And Restore :
Full backup : # ufsdump 0uf /dev/rmt/0 /
Where 0 indicates a full backup, f specifies the path of the backup device, and u updates the dumpdates file.
Restore : # ufsrestore if /dev/rmt/0

25) How to temporary disable user's login.
A) Log in as "root"
B) Create /etc/nologin file
# vi /etc/nologin
C) Include a message
D) Close and Save the file.

26) What does 'Probe' command do?
A) probe-scsi-all : Lists all internal and external SCSI devices
B) probe-ide-all : Lists all IDE devices

27) How to find whether a system is configured for 32-bit or 64-bit?
A) # isainfo -v

28) How to activate Ethernet card ?
A) # ifconfig qfe0 plumb

29) How will you assign ip address to system?
A) # ifconfig qfe0 192.168.0.1 netmask 255.255.255.0 up

30) How will you check current ip configuration?
A) # ifconfig -a

31) How will you set a default router ?
A) Add the router's IP address to the /etc/defaultrouter file.

32) How to remove all current routes and assign 192.168.1.100 as default router?
A) # route flush
# route add default 192.168.1.100

33) How to change the network settings ?
A) # sys-unconfig

34) What all does the NVRAM store?
A) Ethernet Address / Host ID / Time of Day (TOD) clock and EEPROM Parameters

35) Where are all the port numbers stored?
A) Port numbers are stored in /etc/services

36) Where is the eeprom utility stored ?
A) /usr/sbin/eeprom

37) Some important NIS commands :
1) # ypcat hosts : Prints info from the hosts database
2) # ypmatch host1 hosts : Matches individual host entries
3) # ypmatch user1 passwd
4) # ypwhich : Returns the NIS master server.

38) Controlling the tape drive ?
1) mt -f /dev/rmt/0n : 'n' indicates no rewind.

39) What are the network utilities ?
1) snoop : To capture network packets & display contents
2) netstat -i : Displays the state of the network interfaces
3) ndd command : Set & examine kernel parameters, namely the TCP/IP drivers.

40) Network Configuration :
1) /etc/resolv.conf : Contains Internet domain name, name server and search order.
2) /etc/nsswitch.conf : Specifies information source from files, NIS, NIS+ or DNS
3) /etc/hostname.<interface> (e.g. hme0, eri0, le0) : IPv4 host
4) /etc/nodename : Local node name (IPv6 host)
5) /etc/inet/hosts : Host name file (/etc/hosts links to this file)
6) /etc/inet/netmasks : TCP/IP subnet masks
7) /etc/inet/protocols : Network protocols
8) /etc/inet/services : Network service name & port numbers
9) /etc/notrouter : Create this file to prevent in.routed or in.rdiscd from starting at boot time
10) /etc/inet/inetd.conf : Internet super daemon config file
11) To change hostname / ip address :

/etc/hostname.<interface> (e.g. hme0, le0)
/etc/nodename
/etc/inet/hosts
/etc/net/*/hosts
/etc/defaultrouter
/etc/resolv.conf

41) How do you configure interfaces at boot time?
A) The /etc/rcS.d/S30network.sh script is run each time the system is booted. It uses the ifconfig utility to configure each interface with its IP address and other network info. It searches /etc for files called hostname.xxn, where xx is the interface type and n is the instance of the interface.
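For example, to have an interface (say hme0) configured at boot with a static address, illustrative entries such as these would be created (myhost and the addresses are hypothetical):
# echo "myhost" > /etc/hostname.hme0
# echo "192.168.1.10   myhost" >> /etc/inet/hosts
# echo "192.168.1.0    255.255.255.0" >> /etc/inet/netmasks
(the name in /etc/hostname.hme0 must resolve via /etc/inet/hosts)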

VXVM interview questions

at 1:43 AM
1. Name the mandatory disk group in VxVM 3.5 ? How will you configure VxVM in 3.5 ?
ANS: rootdg is the mandatory disk group in VxVM 3.5. vxinstall is the command to configure VxVM; it will create the disk groups, initialize the disks and add them to the group.

2. How will you create private and shared disk group using VxVM ?
ANS: For Private DG:
Command: vxdg init <diskgroup> <disk1 disk2 ...>

For Shared DG:
Command: vxdg -s init <diskgroup> <disk1 disk2 disk3>

3. Which are the different layouts for volumes in VxVM ?
ANS: mirror, stripe, concat (default one), raid5, stripe-mirror, mirror-stripe.

4. What is the basic difference between private disk group and shared disk group ?
ANS: Private DG: The DG which is only visible for the host on which you have created it, if the host is a part of cluster, the private DG will not be visible to the other cluster nodes.
Shared DG: The DG which is sharable and visible to the other cluster nodes.

5. How will you add new disk to the existing disk group ?
ANS: Run vxdiskadm command, which will open menu driven program to do various disk operations, select add disks option or you can use another command vxdiskadd.

6. How will you grow/shrink the volume/file system ? What is the meaning of growby and growto options ? What is the meaning on shrinkto and shrinkby options ?
ANS: vxassist command is used to do all volume administration, following is the description and syntax.

Growby option: This will grow your file system by adding the new size to the existing file system.

Growto option: This will grow your file system as per the new size. This WILL NOT ADD new size to the existing one.

Shrinkby option: This will shrink your file system by reducing new size from existing file system.

Shrinkto option: This will shrink your file system to the new absolute size. This WILL NOT REDUCE the file system by a relative amount.

Command:
vxassist -g <diskgroup> [growto|growby|shrinkto|shrinkby] <volume> <length>
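For example (hypothetical disk group datadg and volume vol01):
# vxassist -g datadg growby vol01 2g     (grow the volume by 2 GB)
# vxassist -g datadg growto vol01 20g    (grow the volume to 20 GB total)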

7. How will you setup and unsetup disks explicitly using VxVM ?
ANS: You can use /etc/vx/bin/vxdiskunsetup to unsetup the disk, and /etc/vx/bin/vxdisksetup to set up the disk.

8. How will you list the disks, which are in different disk groups ?
ANS: vxdisk list is the command that lists the disks from the DGs which are currently imported; you can check the same using the vxprint command too. The vxdisk -o alldgs list command lists all the disks which are in different DGs.

9. What is the private region in VxVM ?
ANS: Private region stores the structured VxVM information, it also stores the disk ID and disk geometry. In short words it has metadata of the disk.

10. If, vxdisk list command gives you disks status as "error", what steps you will follow to make the respective disks online ?
ANS: If you faced this issue because of a fabric disconnection, then simply do vxdisk scandisks; otherwise unsetup the disk using /etc/vx/bin/vxdiskunsetup and set up the disk again using /etc/vx/bin/vxdisksetup, this will definitely help! [ /etc/vx/bin/vxdiskunsetup will remove the private region from the disk and destroy data; back up the data before using this option ]

Disk test with dd command

Tuesday, October 11, 2011 at 5:59 AM
#dd if=/dev/dsk/c2t0d0 of=/dev/null bs=1024k
#dd if=/dev/dsk/c2t1d0 of=/dev/null bs=1024k

Check the Memory and Pages detail

Friday, September 30, 2011 at 12:14 AM
#kstat -c pages | egrep "(mem|pages)"
name: system_pages class: pages
availrmem 4162512
freemem 3722148
pagesfree 3722148
pageslocked 4035816
pagestotal 8213592
physmem 8235220

Find the block error in system messages

Tuesday, September 27, 2011 at 7:25 AM
# cd /var/adm; less messages/messages.1 | grep 'Error Block' | cut -d ' ' -f 12 | sort | uniq -c

Solaris Container Vs LDOM

Saturday, September 10, 2011 at 11:27 AM
Solaris Containers
------------------
No special hardware required
Single OS image
Sub-CPU resource granularity
Shared kernel, memory, file systems (configuration, resources and management)
Solaris only (excluding Linux branded zone on x86)
CPUs can be shared
Works on all systems
Virtually unlimited partitioning (max is 8191 non-global zones)
Single system patch level
Most admin operations can be applied to all containers in a single operation
Very little performance overhead for zone infrastructure


LDoms
-----
Sun4v systems only
Multiple OS images
Multiples of CPU granularity
Dedicated kernel, memory, file systems
Can support other OSes
CPUs can not be shared (CPUs here refers to a strand/thread)
Currently available on Tx000, T5xy0 only
Partitioning limited to number of CPUs
Multiple and different patch and release levels possible
Each LDom must be fully managed separately

Identify UFS fstyp (Multitera Byte)

Tuesday, August 30, 2011 at 11:52 PM
Non-Multitera Byte File system
node1:# fstyp -v /dev/vx/dsk/dg/vol1 | head -5
ufs
magic 11954 format dynamic time Wed Aug 31 08:38:41 2011
sblkno 16 cblkno 24 iblkno 32 dblkno 800
sbsize 2048 cgsize 8192 cgoffset 32 cgmask 0xffffffe0
ncg 21399 size 1073690626 blocks 1056913459

Multitera Byte File system

node1:# fstyp -v /dev/vx/dsk/dg/vol2 | head -5
ufs
magic decade format dynamic time Wed Aug 31 08:43:11 2011
sblkno 2 cblkno 3 iblkno 4 dblkno 8
sbsize 8192 cgsize 8192 cgoffset 4 cgmask 0xf

Apache Tomcat Config in Sun Cluster 3.2

Sunday, August 14, 2011 at 11:52 AM
Register the SUNW.gds and SUNW.HAStoragePlus resource type

clresourcetype register SUNW.gds SUNW.HAStoragePlus

Create a failover resource group for the SharedAddress resource

clresourcegroup create shared-ip-rg

Create the SharedAddress resource

clressharedaddress create -g shared-ip-rg -h log-apache shared-ip-res

Online the SharedAddress resource group

clresourcegroup online -M shared-ip-rg

Create the resource group for the scalable service

clresourcegroup create -p Maximum_primaries=2 -p Desired_primaries=2 -p RG_dependencies=shared-ip-rg apache-tom-Scalable-rg

Create a resource for the Apache Tomcat Disk Storage if it is not in the root file system

clresource create -g apache-tom-Scalable-rg -t SUNW.HAStoragePlus -p FilesystemMountPoints=/global/apache apache-tom-has-res

Enable the failover resource group that now includes the Apache Tomcat Disk Storage and Logical Hostname resources

clresourcegroup online -M apache-tom-Scalable-rg

Apache Configuration in Sun Cluster 3.2

at 4:11 AM
Failover resource group

clrg create shared-ip-rg

clrssa create -g shared-ip-rg -h log-apache shared-ip-res

clrg online -eM shared-ip-rg

Scalable resource group

clrt register SUNW.apache

clrg create -p Maximum_primaries=2 -p Desired_primaries=2 -p RG_dependencies=shared-ip-rg apache-rg

clrs create -g apache-rg -t SUNW.apache -p Bin_dir=/usr/apache2/bin -p Resource_dependencies=shared-ip-res -p Scalable=True -p Port_list=80/tcp apache-res

clrg online -eM apache-rg

clrs set -p Load_balancing_weights=4@1,3@2 apache-res

Investigate with truss command for explorer hang

Tuesday, June 14, 2011 at 8:34 AM
/usr/bin/truss -eflda -p <explorer_pid> -rall -wall -vall -fall -o /tmp/truss.explorer.out

or

/usr/bin/truss -eflda -rall -wall -vall -fall -o /tmp/truss.explorer.out /opt/SUNWexplo/bin/explorer -w default

VXVM: Resolving duplicate disk/device entries in "vxdisk list" or vxdisksetup

Friday, May 6, 2011 at 8:13 AM
vxdisk list
c1t0d0s2 sliced -- error
c1t0d0d2 sliced -- error
-- root-disk rootdg removed was:c1t0d0s2

1. Remove c1t0d0s2 entries from vxvm control
vxdisk -f rm c1t0d0s2

2.Remove the disk c1t0d0s2 using luxadm

luxadm remove_device c1t0d0s2
luxadm remove_device /dev/rdsk/c1t0d0s2

Pull the disk out as per the luxadm instructions.

3.Run command "devfsadm -C"
4. Run command "vxdctl enable"
5. luxadm -e offline /dev/dsk/c1t0d0s2

6. Run command "devfsadm -C"
7. Run command "vxdctl enable"

8. You need to use "luxadm insert_device" to replace the failed disk device.
[Once the disk have been replaced, use "vxdctl enable" and "vxdiskadm" option 5 after syncing with the remaining mirror.]

Result:

#vxdisk list

Now both the O/S device tree and VXVM are in a clean state corresponding to disk c1t0d0s2.

HBA cards in Solaris 8, 9 and 10

Tuesday, May 3, 2011 at 4:57 AM
bash-2.03# luxadm probe
No Network Array enclosures found in /dev/es

Logical Path:/dev/rdsk/c5t50060E800475D109d6s2
Node WWN:50070e800475e108 Device Type:Disk device
Logical Path:/dev/rdsk/c5t50060E800475D109d7s2

HBA card WWN

# prtconf -vp | grep wwn
port-wwn: 2100001b.3202f94b
node-wwn: 2000001b.3202f94b
port-wwn: 210000e0.8b90e795
node-wwn: 200000e0.8b90e795

#prtconf -vp | more


For Solaris 8 and 9:

Run the following script to determine the WWNs of the HBAs that are currently being utilized:
#!/bin/sh
for i in `cfgadm | grep fc-fabric | awk '{print $1}'`
do
    dev="`cfgadm -lv $i | grep devices | awk '{print $NF}'`"
    wwn="`luxadm -e dump_map $dev | grep 'Host Bus' | awk '{print $4}'`"
    echo "$i: $wwn"
done

To show link status of card

bash-2.03# luxadm -e port

Found path to 2 HBA ports

/devices/ssm@0,0/pci@18,700000/SUNW,qlc@2/fp@0,0:devctl CONNECTED
/devices/ssm@0,0/pci@19,700000/SUNW,qlc@2/fp@0,0:devctl CONNECTED

To see the WWNs (using the address given to you by the previous commands):

it is the last entry that is flagged as a Host Bus Adapter, so the HBA port WWN here is 2100001b3205e828

bash-2.03# luxadm -e dump_map /devices/ssm@0,0/pci@18,700000/SUNW,qlc@2/fp@0,0:devctl
Pos Port_ID Hard_Addr Port WWN Node WWN Type
0 642113 0 50070e800475e108 50070e800475e108 0x0 (Disk device)
1 643f13 0 550070e800475e108 50070e800475e108 0x0 (Disk device)
2 643913 0 2100001b3205e828 2000001b3205e828 0x1f (Unknown Type,Host Bus Adapter)

SAN Foundation Software versions display as such

bash-2.03# modinfo | grep SunFC
38 102bcd25 209b8 150 1 fcp (SunFC FCP v20070703-1.98)
39 102d4071 855c - 1 fctl (SunFC Transport v20070703-1.41)
42 102ead69 164e0 149 1 fp (SunFC Port v20070703-1.60)
44 10300a79 cd574 153 1 qlc (SunFC Qlogic FCA v20070212-2.19)

To show Sun/Qlogic HBA’s

bash-2.03# luxadm qlgc

Found Path to 2 FC100/P, ISP2200, ISP23xx Devices

Opening Device: /devices/ssm@0,0/pci@18,700000/SUNW,qlc@2/fp@0,0:devctl
Detected FCode Version: ISP2312 Host Adapter fcode version 1.16 11/15/06

Complete

To show all vendor HBA’s

bash-2.03# luxadm fcode_download -p

Found Path to 0 FC/S Cards
Complete

Found Path to 0 FC100/S Cards
Complete

Found Path to 2 FC100/P, ISP2200, ISP23xx Devices

Opening Device: /devices/ssm@0,0/pci@18,700000/SUNW,qlc@2/fp@0,0:devctl
Detected FCode Version: ISP2312 Host Adapter fcode version 1.16 11/15/06

Opening Device: /devices/ssm@0,0/pci@19,700000/SUNW,qlc@2/fp@0,0:devctl
Detected FCode Version: ISP2312 Host Adapter fcode version 1.16 11/15/06
Complete

Found Path to 0 JNI1560 Devices.
Complete

Found Path to 0 Emulex Devices.
Complete

#fcinfo hba-port
HBA Port WWN: 10000000c98e6a99
OS Device Name: /dev/cfg/c1
Manufacturer: Emulex
Model: LPe11000-S
Firmware Version: 2.82a4 (Z3D2.82A4)
FCode/BIOS Version: Boot:5.02a1 Fcode:1.50a9
Serial Number: 0999VM0-09320010LF
Driver Name: emlxs
Driver Version: 2.50o (2010.01.08.09.45)
Type: unknown
State: offline
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: not established
Node WWN: 20000000c98e6a99
HBA Port WWN: 10000000c98e6a86
OS Device Name: /dev/cfg/c2
Manufacturer: Emulex
Model: LPe11000-S
Firmware Version: 2.82a4 (Z3D2.82A4)
FCode/BIOS Version: Boot:5.02a1 Fcode:1.50a9
Serial Number: 0999VM0-09320010LK
Driver Name: emlxs
Driver Version: 2.50o (2010.01.08.09.45)
Type: unknown
State: offline
Supported Speeds: 1Gb 2Gb 4Gb
Current Speed: not established
Node WWN: 20000000c98e6a86

zfs

Thursday, April 14, 2011 at 12:10 AM
Adding the disk to the existing pool

zpool add eqptpool mirror c5t600508B4000909E00000C00007760000d0 c5t600508B4000900A70000800007820000d0
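To confirm the new mirror was attached (a quick check using the pool name from this example):

zpool status eqptpool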

SUN Cluster Quick Reference commands

Saturday, April 9, 2011 at 3:06 AM
Shut down a resource group:
scswitch -F -g [RESOURCE_GROUP_NAME]
Bring up a resource group:
scswitch -Z -g [RESOURCE_GROUP_NAME]
Move failover resource group to node_name:
scswitch -z -g [RESOURCE_GROUP_NAME] -h [NODE_NAME]
Restart resource group:
scswitch -R -h [NODE_NAME] -g [RESOURCE_GROUP_NAME]
Evacuate all resources from node_name:
scswitch -S -h [NODE_NAME]
Disable resource:
scswitch -n -j [RESOURCE]
Enable resource:
scswitch -e -j [RESOURCE]
Clear STOP_FAILED on resource:
scswitch -c -j [RESOURCE] -h [NODE_NAME] -f STOP_FAILED
Disable resource’s fault monitor:
scswitch -n -M -j [RESOURCE]
Enable resource’s fault monitor:
scswitch -e -M -j [RESOURCE]
Lists currently configured DID’s:
scdidadm -L
Put a new device under cluster control:
scgdevs
Displays status of the cluster, resources, resource groups, etc.:
scstat
Display useful setup info about cluster nodes, cluster transport, disksets, etc.:
scconf -p -v

No output shows for df -h and /etc/mnttab file empty

Friday, March 25, 2011 at 10:47 AM
No output for the below command

df -h
more /etc/mnttab

mount the mnttab file
mount -F mntfs mnttab /etc/mnttab

Now try
df -h
more /etc/mnttab

Running Linux applications in Solaris Linux branded zones

Saturday, February 19, 2011 at 8:40 PM
While playing around with the latest version of Nevada this week, I decided to see how well Linux branded zones work. In case you're not following the Sun development efforts, Linux branded zones allow you to run Linux ELF executables unmodified on Solaris hosts. This is pretty interesting, and I definitely wanted to take this technology for a test drive. After reading through the documentation in the brandz community, I BFU'ed my Nevada machine to the latest nightly build, and installed the packages listed on the brandz download page. Since brandz currently only supports CentOS 3.0 - 3.7 and the Linux 2.4 kernel series, I first had to download the three CentOS 3.7 iso images (branded zones currently don't support CentOS 3.8 without some hacking):

$ cd /home/matty/CentOS

$ wget http://www.gtlib.gatech.edu/pub/centos/3.7/isos/i386/CentOS-3.7-i386-bin1of3.iso

$ wget http://www.gtlib.gatech.edu/pub/centos/3.7/isos/i386/CentOS-3.7-i386-bin2of3.iso

$ wget http://www.gtlib.gatech.edu/pub/centos/3.7/isos/i386/CentOS-3.7-i386-bin3of3.iso

After I retrieved the ISO images, I needed to create a branded zone. Creating Linux branded zones is a piece of cake, and is accomplished by running the zonecfg utility with the “-z” option and a name to assign to your zone, and then specifying one or more parameters inside the zone configuration shell. Here is the configuration I used with my test zone:

$ zonecfg -z centostest

centostest: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:centostest> create -t SUNWlx
zonecfg:centostest> add net
zonecfg:centostest:net> set physical=ni0
zonecfg:centostest:net> set address=192.168.1.25
zonecfg:centostest:net> end
zonecfg:centostest> set zonepath=/zones/centostest
zonecfg:centostest> set autoboot=true
zonecfg:centostest> verify
zonecfg:centostest> commit
zonecfg:centostest> exit
This zone configuration is pretty basic. It contains one network interface (when you boot the zone, a virtual interface is configured on that interface with the address passed to the address attribute), a location to store the zone data, and it is configured to automatically boot when the system is bootstrapped. Next I needed to install the CentOS binaries in the zone. To install the CentOS 3.7 binaries in the new zone I created, I ran the zoneadm utility with the ‘install’ option, and passed the directory with the CentOS ISO images as an argument:

$ zoneadm -z centostest install -v -d /home/matty/CentOS

Verbose output mode enabled.
Installing zone "centostest" at root "/zones/centostest"
Attempting ISO-based install from directory:
"/home/matty/CentOS"
Checking possible ISO
"/home/matty/CentOS/CentOS-3.7-i386-bin1of3.iso"...
added as lofi device "/dev/lofi/1"
Attempting mount of device "/dev/lofi/1"
on directory "/tmp/lxisos/iso.1"... succeeded.
Checking possible ISO
"/home/matty/CentOS/CentOS-3.7-i386-bin2of3.iso"...
added as lofi device "/dev/lofi/2"
Attempting mount of device "/dev/lofi/2"
on directory "/tmp/lxisos/iso.2"... succeeded.
Checking possible ISO
"/home/matty/CentOS/CentOS-3.7-i386-bin3of3.iso"...
added as lofi device "/dev/lofi/3"
Attempting mount of device "/dev/lofi/3"
on directory "/tmp/lxisos/iso.3"... succeeded.
Checking for distro "/usr/lib/brand/lx/distros/centos35.distro"...
Checking iso file mounted at "/tmp/lxisos/iso.1"...
read discinfo file "/tmp/lxisos/iso.1/.discinfo"
ISO "/tmp/lxisos/iso.1": Serial "1144177644.47"
Release "CentOS [Disc Set 1144177644.47]" Disc 1
Checking iso file mounted at "/tmp/lxisos/iso.2"...
read discinfo file "/tmp/lxisos/iso.2/.discinfo"
ISO "/tmp/lxisos/iso.2": Serial "1144177644.47"
Release "CentOS [Disc Set 1144177644.47]" Disc 2
Checking iso file mounted at "/tmp/lxisos/iso.3"...
read discinfo file "/tmp/lxisos/iso.3/.discinfo"
ISO "/tmp/lxisos/iso.3": Serial "1144177644.47"
Release "CentOS [Disc Set 1144177644.47]" Disc 3
Checking for distro "/usr/lib/brand/lx/distros/centos36.distro"...
Checking iso file mounted at "/tmp/lxisos/iso.1"...
read discinfo file "/tmp/lxisos/iso.1/.discinfo"
ISO "/tmp/lxisos/iso.1": Serial "1144177644.47"
Release "CentOS [Disc Set 1144177644.47]" Disc 1
Checking iso file mounted at "/tmp/lxisos/iso.2"...
read discinfo file "/tmp/lxisos/iso.2/.discinfo"
ISO "/tmp/lxisos/iso.2": Serial "1144177644.47"
Release "CentOS [Disc Set 1144177644.47]" Disc 2
Checking iso file mounted at "/tmp/lxisos/iso.3"...
read discinfo file "/tmp/lxisos/iso.3/.discinfo"
ISO "/tmp/lxisos/iso.3": Serial "1144177644.47"
Release "CentOS [Disc Set 1144177644.47]" Disc 3
Checking for distro "/usr/lib/brand/lx/distros/centos37.distro"...
Checking iso file mounted at "/tmp/lxisos/iso.1"...
read discinfo file "/tmp/lxisos/iso.1/.discinfo"
ISO "/tmp/lxisos/iso.1": Serial "1144177644.47"
Release "CentOS [Disc Set 1144177644.47]" Disc 1
Added ISO "/tmp/lxisos/iso.1" as disc 1
Checking iso file mounted at "/tmp/lxisos/iso.2"...
read discinfo file "/tmp/lxisos/iso.2/.discinfo"
ISO "/tmp/lxisos/iso.2": Serial "1144177644.47"
Release "CentOS [Disc Set 1144177644.47]" Disc 2
Added ISO "/tmp/lxisos/iso.2" as disc 2
Checking iso file mounted at "/tmp/lxisos/iso.3"...
read discinfo file "/tmp/lxisos/iso.3/.discinfo"
ISO "/tmp/lxisos/iso.3": Serial "1144177644.47"
Release "CentOS [Disc Set 1144177644.47]" Disc 3
Added ISO "/tmp/lxisos/iso.3" as disc 3
Installing distribution 'CentOS [Disc Set 1144177644.47]'...
Installing cluster 'desktop'
Installing zone miniroot.
Installing miniroot from ISO image 1 (of 3)
RPM source directory: "/tmp/lxisos/iso.1/RedHat/RPMS"
Attempting to expand 30 RPM names...
Installing RPM "SysVinit-2.85-4.4.i386.rpm" to miniroot at
"/zones/centostest"...
Installing RPM "basesystem-8.0-2.centos.0.noarch.rpm" to miniroot at
"/zones/centostest"...
Installing RPM "bash-2.05b-41.5.centos.0.i386.rpm" to miniroot at
"/zones/centostest"...
Installing RPM "beecrypt-3.0.1-0.20030630.i386.rpm" to miniroot at
"/zones/centostest"...
Installing RPM "bzip2-libs-1.0.2-11.EL3.4.i386.rpm" to miniroot at
"/zones/centostest"...
Installing RPM "coreutils-4.5.3-28.i386.rpm" to miniroot at
"/zones/centostest"...
Installing RPM "elfutils-0.94-1.i386.rpm" to miniroot at
"/zones/centostest"...
Installing RPM "elfutils-libelf-0.94-1.i386.rpm" to miniroot at
"/zones/centostest"...
Installing RPM "filesystem-2.2.1-3.centos.1.i386.rpm" to miniroot at
"/zones/centostest"...
Installing RPM "glibc-2.3.2-95.39.i586.rpm" to miniroot at
"/zones/centostest"...
Installing RPM "glibc-common-2.3.2-95.39.i386.rpm" to miniroot at
"/zones/centostest"...
Installing RPM "gpm-1.19.3-27.2.i386.rpm" to miniroot at
"/zones/centostest"...
Installing RPM "initscripts-7.31.30.EL-1.centos.1.i386.rpm" to miniroot at
"/zones/centostest"...
Installing RPM "iptables-1.2.8-12.3.i386.rpm" to miniroot at
"/zones/centostest"...
Installing RPM "iptables-ipv6-1.2.8-12.3.i386.rpm" to miniroot at
"/zones/centostest"...
Installing RPM "kernel-utils-2.4-8.37.14.i386.rpm" to miniroot at
"/zones/centostest"...
Installing RPM "laus-libs-0.1-70RHEL3.i386.rpm" to miniroot at
"/zones/centostest"...
Installing RPM "libacl-2.2.3-1.i386.rpm" to miniroot at
"/zones/centostest"...
Installing RPM "libattr-2.2.0-1.i386.rpm" to miniroot at
"/zones/centostest"...
Installing RPM "libgcc-3.2.3-54.i386.rpm" to miniroot at
"/zones/centostest"...
Installing RPM "libtermcap-2.0.8-35.i386.rpm" to miniroot at
"/zones/centostest"...
Installing RPM "ncurses-5.3-9.4.i386.rpm" to miniroot at
"/zones/centostest"...
Installing RPM "pam-0.75-67.i386.rpm" to miniroot at
"/zones/centostest"...
Installing RPM "popt-1.8.2-24_nonptl.i386.rpm" to miniroot at
"/zones/centostest"...
Installing RPM "rpm-4.2.3-24_nonptl.i386.rpm" to miniroot at
"/zones/centostest"...
Installing RPM "rpm-libs-4.2.3-24_nonptl.i386.rpm" to miniroot at
"/zones/centostest"...
Installing RPM "setup-2.5.27-1.noarch.rpm" to miniroot at
"/zones/centostest"...
Installing RPM "termcap-11.0.1-17.1.noarch.rpm" to miniroot at
"/zones/centostest"...
Installing RPM "zlib-1.1.4-8.1.i386.rpm" to miniroot at
"/zones/centostest"...
Installing RPM "centos-release-3-7.1.i386.rpm" to miniroot at
"/zones/centostest"...
Setting up the initial lx brand environment.
System configuration modifications complete!
Duplicating miniroot; this may take a few minutes...

Booting zone miniroot...
Miniroot zone setup complete.

Installing zone 'centostest' from ISO image 1.
RPM source directory: "/zones/centostest/root/iso/RedHat/RPMS"
Attempting to expand 667 RPM names...
Installing 433 RPM packages; this may take several minutes...

Preparing... ##################################################
libgcc ##################################################
setup ##################################################
filesystem ##################################################
hwdata ##################################################
redhat-menus ##################################################
mailcap ##################################################
XFree86-libs-data ##################################################
basesystem ##################################################
gnome-mime-data ##################################################

[.....]
After the brandz installer finished installing the CentOS 3.7 RPMs, I used the zoneadm ‘boot’ option to start the zone:

$ zoneadm -z centostest boot

To view the console output while the zone was booting, I immediately fired up the zlogin utility to console into the new Linux branded zone, and ran a few commands to see what the environment looked like after the zone was booted:

$ zlogin -C centostest

[Connected to zone 'centostest' console] [ OK ]
Activating swap partitions: [ OK ]
Checking filesystems [ OK ]
Mounting local filesystems: [ OK ]
Enabling swap space: [ OK ]
modprobe: Can't open dependencies file /lib/modules/2.4.21/modules.dep (No such file or directory)
INIT: Entering runlevel: 3
Entering non-interactive startup
Starting sysstat: [ OK ]
Starting system logger: [ OK ]
Starting kernel logger: [ OK ]
Starting automount: No Mountpoints Defined[ OK ]
Starting cups: [ OK ]
Starting sshd:[ OK ]
Starting crond: [ OK ]
Starting atd: [ OK ]
Rotating KDC list [ OK ]

CentOS release 3.7 (Final)
Kernel 2.4.21 on an i686

centostest login: root
$ uname -a

Linux centos 2.4.21 BrandZ fake linux i686 i686 i386 GNU/Linux
$ cat /proc/cpuinfo

processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 6
model name : Intel Celeron(r)
stepping : 5
cpu MHz : 1662.136
cache size : 2048 KB
fdiv_bug : no
hlt_bug : no
coma_bug : no
fpu : yes
fpu_exception : yes
cpuid level : 10
flags : fpu pse tsc msr mce cx8 sep mtrr pge cmov mmx fxsr sse sse2 ss
Yum works swell in a branded zone, and most of the tools you typically use work out of the box. Linux branded zones are wicked cool, and I can see tons of uses for them. Some folks are dead set on running Linux instead of Solaris, which means they can't take advantage of things like ZFS, FMA and DTrace. If you need to better understand your application and the way it interacts with the system, or if you want to take advantage of the stability the Solaris kernel brings to production systems, you can fire up a branded zone and run your application transparently on a Solaris system.

Build a Guest Domain

at 8:30 PM
A guest domain is made up of the following components:


CPU

MAU (Cryptographic Thread)
Memory
Networking
Storage
The control domain will partition CPU threads as VCPU's for the guest domain. Each CPU core has an MAU for cryptographic processing. Only one logical domain using the CPU threads in a core can have control over this thread. So it's important to decide if your guest domain will require one. Memory is partitioned in 8K segments. Networking is handled by connecting a virtual network interface to a virtual switch in one of the service domains. In our example, I configured each physical interface as a separate virtual switch in the control/service (a.k.a. primary) domain. Storage can come from a wide variety of sources:


Local Disk

SAN LUN
Virtual Disk Image File
Disk Slice
ZFS Volume
The T2000 for example has four drive bays that could be used, but obviously that doesn't leave us with a lot of flexibility or space. SAN storage can be used with greater flexibility since it's remote and can easily be migrated or replicated. It's possible to create a sparse file and use it as a virtual disk. This has the advantage of being stored on local disk, SAN, or even NAS. The fact that files can be used opens the door for very flexible options. Using a disk slice is also possible, but it can not be used for jumpstart installation. One could create ZFS volumes and use them as storage for logical domains as well; however, they can not be used for jumpstart installation either, though they make for easy allocation of storage for applications. You can even take SAN LUN's and create a ZFS pool and export it into a logical domain. For our example, I'll use two virtual disk image files created on a ZFS file system and use SVM mirroring:) The following will be configured:


4 x VCPU's
1 x MAU
4GB's RAM

2 x 10GB Virtual Disk Image Files
2 x Network Ports


# ldm add-domain ldom1
# ldm add-vcpu 4 ldom1
# ldm add-mau 1 ldom1
# ldm add-memory 4G ldom1
# mkfile 10g /ldoms/vdisk1_10gb.img
# mkfile 10g /ldoms/vdisk2_10gb.img
# ldm add-vdiskserverdevice /ldoms/vdisk1_10gb.img vdisk1@primary-vds0
# ldm add-vdiskserverdevice /ldoms/vdisk2_10gb.img vdisk2@primary-vds0
# ldm add-vdisk vdisk1 vdisk1@primary-vds0 ldom1
# ldm add-vdisk vdisk2 vdisk2@primary-vds0 ldom1
# ldm add-vnet vnet0 primary-vsw0 ldom1
# ldm add-vnet vnet1 primary-vsw2 ldom1
# ldm set-variable auto-boot\?=false ldom1
# ldm set-variable local-mac-address\?=true ldom1
# ldm set-variable boot-device=/virtual-devices@100/channel-devices@200/disk@0 ldom1
# ldm bind-domain ldom1
# ldm start ldom1


So with the above commands we allocated the vcpu's, mau, and the memory. Then created the virtual disk image files, added them as virtual disk devices to the primary domain's VDS service, and finally added them as virtual disks to the guest domain. Then attached virtual networks, set auto-boot to false in the OBP (yes, that's right, each logical domain gets its own OBP), set local-mac-address to true, and set the default boot device. Finally we've bound the configuration and started the guest domain. So what do we get?


# ldm list-bindings ldom1
Name: ldom1
State: active
Flags: transition
OS:
Util: 0.2%
Uptime: 1d 6h 43m
Vcpu: 4
vid pid util strand
0 4 0.7% 100%
1 5 0.1% 100%
2 6 0.1% 100%
3 7 0.0% 100%
Mau: 1
mau cpuset (4, 5, 6, 7)
Memory: 4G
real-addr phys-addr size
0x4800000 0x104800000 4G
Vars: auto-boot?=false
boot-device=/virtual-devices@100/channel-devices@200/disk@0
local-mac-address?=true
Vldcc: vldcc0 [Domain Services]
service: primary-vldc0 @ primary
[LDC: 0x0]
Vnet: vnet0 [LDC: 0x2]
mac-addr=0:14:4f:fb:c4:ef
service: primary-vsw0 @ primary
[LDC: 0x1]
Vnet: vnet1 [LDC: 0xd]
mac-addr=0:14:4f:fb:24:b6
service: primary-vsw2 @ primary
[LDC: 0xc]
Vdisk: vdisk1 vdisk1@primary-vds0
service: primary-vds0 @ primary
[LDC: 0x17]
Vdisk: vdisk2 vdisk2@primary-vds0
service: primary-vds0 @ primary
[LDC: 0x18]
Vcons: [via LDC:25]
ldom1@primary-vcc0 [port:5000]


As you can see, everything that's been previously configured is listed. Some important things to note are the MAC addresses for the network interfaces (which are assigned automatically) and the Vcons port for the console. So now we can jumpstart our domain:


# telnet localhost 5000
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.

Connecting to console "ldom1" in group "ldom1" ....
Press ~? for control options ..

Sun Fire T200, No Keyboard
Copyright 2007 Sun Microsystems, Inc. All rights reserved.
OpenBoot 4.26.0.build_07, 4096 MB memory available, Serial #66831599.
Ethernet address 0:14:4f:fb:c4:ef, Host ID: 83fbc4ef.



{0} ok show-nets
a) /virtual-devices@100/channel-devices@200/network@1
b) /virtual-devices@100/channel-devices@200/network@0
q) NO SELECTION
Enter Selection, q to quit: a
/virtual-devices@100/channel-devices@200/network@1 has been selected.
Type ^Y ( Control-Y ) to insert it in the command line.
e.g. ok nvalias mydev ^Y
for creating devalias mydev for /virtual-devices@100/channel-devices@200/network@1
{0} ok boot /virtual-devices@100/channel-devices@200/network@1 - install
Boot device: /virtual-devices@100/channel-devices@200/network@1 File and args:
- install
Requesting Internet Address for 0:14:4f:fb:24:b6
Requesting Internet Address for 0:14:4f:fb:24:b6
Requesting Internet Address for 0:14:4f:fb:24:b6
Requesting Internet Address for 0:14:4f:fb:24:b6
Requesting Internet Address for 0:14:4f:fb:24:b6
Requesting Internet Address for 0:14:4f:fb:24:b6
Requesting Internet Address for 0:14:4f:fb:24:b6
SunOS Release 5.10 Version Generic_118833-33 64-bit
Copyright 1983-2006 Sun Microsystems, Inc. All rights reserved.
Use is subject to license terms.
WARNING: machine_descrip_update: new MD has the same generation (1) as the old MD
whoami: no domain name
Configuring devices.
Using RPC Bootparams for network configuration information.
Attempting to configure interface vnet1...
Configured interface vnet1
Attempting to configure interface vnet0...
Skipped interface vnet0
Setting up Java. Please wait...
Extracting windowing system. Please wait...
Beginning system identification...
Searching for configuration file(s)...
...
So after the guest domain is finished jumpstarting, we can take a look around.


# psrinfo -vp
The physical processor has 4 virtual processors (0-3)
UltraSPARC-T1 (cpuid 0 clock 1000 MHz)
# psrinfo -v
Status of virtual processor 0 as of: 04/05/2007 22:17:04
on-line since 04/05/2007 22:16:15.
The sparcv9 processor operates at 1000 MHz,
and has a sparcv9 floating point processor.
Status of virtual processor 1 as of: 04/05/2007 22:17:04
on-line since 04/05/2007 22:16:16.
The sparcv9 processor operates at 1000 MHz,
and has a sparcv9 floating point processor.
Status of virtual processor 2 as of: 04/05/2007 22:17:04
on-line since 04/05/2007 22:16:16.
The sparcv9 processor operates at 1000 MHz,
and has a sparcv9 floating point processor.
Status of virtual processor 3 as of: 04/05/2007 22:17:04
on-line since 04/05/2007 22:16:16.
The sparcv9 processor operates at 1000 MHz,
and has a sparcv9 floating point processor.
# prtdiag -v
System Configuration: Sun Microsystems sun4v Sun Fire T200
Memory size: 4096 Megabytes

========================= CPUs ===============================================

CPU CPU
Location CPU Freq Implementation Mask
------------ ----- -------- ------------------- -----
MB/CMP0/P0 0 1000 MHz SUNW,UltraSPARC-T1
MB/CMP0/P1 1 1000 MHz SUNW,UltraSPARC-T1
MB/CMP0/P2 2 1000 MHz SUNW,UltraSPARC-T1
MB/CMP0/P3 3 1000 MHz SUNW,UltraSPARC-T1


========================= IO Configuration =========================

IO
Location Type Slot Path Name Model
----------- ----- ---- --------------------------------------------- ------------------------- ---------

========================= HW Revisions =======================================

System PROM revisions:
----------------------
OBP 4.26.0.build_07 2007/02/14 19:20

IO ASIC revisions:
------------------
Location Path Device Revision
-------------------- ---------------------------------------- ------------------------------ ---------
# df -h
Filesystem size used avail capacity Mounted on
/dev/md/dsk/d0 7.8G 2.2G 5.5G 30% /
/devices 0K 0K 0K 0% /devices
ctfs 0K 0K 0K 0% /system/contract
proc 0K 0K 0K 0% /proc
mnttab 0K 0K 0K 0% /etc/mnttab
swap 5.1G 1.1M 5.1G 1% /etc/svc/volatile
objfs 0K 0K 0K 0% /system/object
/platform/SUNW,Sun-Fire-T200/lib/libc_psr/libc_psr_hwcap1.so.1
7.8G 2.2G 5.5G 30% /platform/sun4v/lib/libc_psr.so.1
/platform/SUNW,Sun-Fire-T200/lib/sparcv9/libc_psr/libc_psr_hwcap1.so.1
7.8G 2.2G 5.5G 30% /platform/sun4v/lib/sparcv9/libc_psr.so.1
fd 0K 0K 0K 0% /dev/fd
swap 1.6G 0K 1.6G 0% /tmp
swap 5.1G 32K 5.1G 1% /var/run
# metastat
d1: Mirror
Submirror 0: d11
State: Okay
Submirror 1: d21
State: Okay
Pass: 1
Read option: roundrobin (default)
Write option: parallel (default)
Size: 4194600 blocks (2.0 GB)

d11: Submirror of d1
State: Okay
Size: 4194600 blocks (2.0 GB)
Stripe 0:
Device Start Block Dbase State Reloc Hot Spare
c0d0s1 0 No Okay No


d21: Submirror of d1
State: Okay
Size: 4194600 blocks (2.0 GB)
Stripe 0:
Device Start Block Dbase State Reloc Hot Spare
c0d1s1 0 No Okay No


d0: Mirror
Submirror 0: d10
State: Okay
Submirror 1: d20
State: Okay
Pass: 1
Read option: roundrobin (default)
Write option: parallel (default)
Size: 16644000 blocks (7.9 GB)

d10: Submirror of d0
State: Okay
Size: 16644000 blocks (7.9 GB)
Stripe 0:
Device Start Block Dbase State Reloc Hot Spare
c0d0s0 0 No Okay No


d20: Submirror of d0
State: Okay
Size: 16644000 blocks (7.9 GB)
Stripe 0:
Device Start Block Dbase State Reloc Hot Spare
c0d1s0 0 No Okay No


Device Relocation Information:
Device Reloc Device ID
c0d1 No -
c0d0 No -
# ifconfig -a
lo0: flags=2001000849 mtu 8232 index 1
inet 127.0.0.1 netmask ff000000
vnet0: flags=9040843 mtu 1500 index 2
inet 192.168.1.2 netmask ffffff00 broadcast 192.168.1.255
groupname ipmp1
ether 0:14:4f:fb:c4:ef
vnet0:1: flags=1000843 mtu 1500 index 2
inet 192.168.1.1 netmask ffffff00 broadcast 192.168.1.255
vnet1: flags=9040843 mtu 1500 index 3
inet 192.168.1.3 netmask ffffff00 broadcast 192.168.1.255
groupname ipmp1
ether 0:14:4f:fb:24:b6
# uptime
10:20pm up 5 min(s), 1 user, load average: 0.02, 0.11, 0.06

So now we have a guest domain running Solaris 10 Update 3, with SVM mirrored boot drives that are really sparse files, IPMP on virtual NICs, four CPU's, 4GB's RAM, etc

Oracle Solaris 11 Express Released!

at 8:23 PM
Solaris 11 Express has finally been released! This has been a long time in coming and I'm very excited to finally see this day. Just so that folks are clear, this is a full express release with support for developers, system administrators, evaluators, enthusiasts, etc. on x86 and SPARC! It is totally free to use as long as it is not used on production. As you can see on the main link above, Oracle is selling a full suite of support for Solaris 11 Express, if you are looking for support or to use it in production. Oracle is dead serious about Solaris, so make no mistake about it!

Needless to say, I'll be busy downloading and upgrading my systems to this release. I'll make some additional blog posts once I have things in place to take it for a full spin on both x86 and SPARC. I'll leverage my Ultra 20, some VirtualBox instances, and some LDoms to make things interesting!

LDOM Installation

at 8:19 PM
Before you begin, the following is required:


sun4v based server (SunFire T1000/T2000, Sun Netra T2000, or Sun Netra CP3060 Blade).
Solaris 10 Update 3 (HW 11/06) or Solaris Express (Build 57 or higher) installed.
Logical Domains 1.0 Early Access
The first step is to install the firmware included with the LDOM software bundle. The firmware will contain the ALOM CMT, POST, OBP, and hypervisor updates. You must load the corresponding firmware for your platform. There are two methods for doing this. You can download the firmware to the ALOM CMT using FTP or you can upload it from your currently installed Solaris instance. The latter is much simpler:)


# cd Firmware/tools
# ./sysfwdownload ../Sun_System_Firmware-6_4_0_build_07-Sun_Fire_T2000.bin

.......... (10%).......... (20%).......... (30%).......... (40%).......... (51%)
.......... (61%).......... (71%).......... (81%).......... (92%)........ (100%)

Download completed successfully.


This will upload the firmware to your ALOM CMT. Make sure that you upload the corresponding firmware for your platform. Now you need to shutdown your Solaris instance:


# shutdown -y -g0 -i5


Now you can upgrade the firmware from the ALOM CMT console:


sc> showkeyswitch
Keyswitch is in the NORMAL position.
sc>
SC Alert: Host system has shut down.
flashupdate -s 127.0.0.1

SC Alert: System poweron is disabled.
................................................................................
................................................................................
......

Update complete. Reset device to use new software.

SC Alert: SC firmware was reloaded
sc> resetsc
Are you sure you want to reset the SC [y/n]? y

The firmware is now updated and the SC has been reset. Once it is done resetting, verify the version of the firmware:


sc> showhost
Sun-Fire-T2000 System Firmware 6.4.0_build_07 2007/02/14 22:07

Host flash versions:
Hypervisor 1.4.0_build_07 2007/02/14 21:52
OBP 4.26.0.build_07 2007/02/14 19:20
POST 4.26.0.build_07 2007/02/14 19:51

The version should match the version info in the firmware bin file name. Now you can power on your server and proceed to the installation of the LDOM software. Depending on the OS you are running, you may have to apply the patches that are included in the Patches directory first.

For example, if you are running Solaris 10 Update 3, you will need to install 118833-36 and reboot. Then you'll have to install patches 125043-01 and T124921-02, then reboot. This is not required if you are running build 57 or higher of Nevada (OpenSolaris, Solaris Express, etc.).

Now it's time to install the LDOM software for what will become the control domain. The software package includes JASS to secure the control domain. Remember, the control domain is similar to the SC on a Sun Fire 15K. You don't want it to be used for anything other than administering the platform. You can install the SUNWjass and SUNWldm package with the install-ldm script under the Install directory. Or you can install them manually. If you already have secured the control domain, you may not need JASS, it's up to you:)


# Install/install-ldm
Welcome to the LDoms installer.

You are about to install the domain manager package that will enable
you to create, destroy and control other domains on your system. Given the capabilities of the domain manager, you can now change the security configuration of this Solaris instance using the Solaris Security Toolkit.
Select a security profile from this list:
a) Hardened Solaris configuration for LDoms (recommended)
b) Standard Solaris configuration
c) Your custom-defined Solaris security configuration profile
Enter a, b, or c [a]: a
The changes made by selecting this option can be undone through the
Solaris Security Toolkit’s undo feature. This can be done with the
'/opt/SUNWjass/bin/jass-execute -u' command.
Installing LDoms and Solaris Security Toolkit packages.

Installation of <SUNWldm> was successful.
...
Verifying that all packages are fully installed. OK.
Enabling services: svc:/ldoms/ldmd:default
Running Solaris Security Toolkit 4.2.0 driver ldm_control-secure.driver.
...
Solaris Security Toolkit hardening executed successfully; log file
/var/opt/SUNWjass/run//jass-install-log.txt. It will not
take effect until the next reboot. Before rebooting, make sure SSH or
the serial line is setup for use after the reboot.
Then reboot your control domain.
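For reference, a manual install without the wrapper script would look roughly like this (a sketch only; it assumes both packages live under the bundle's Product directory, and it skips the JASS hardening that install-ldm offers):

# cd Product                  # path assumed: the Product directory of the LDOM bundle
# pkgadd -d . SUNWldm.v       # the Logical Domain Manager
# pkgadd -d . SUNWjass        # optional: the Solaris Security Toolkit
# svcadm enable svc:/ldoms/ldmd:default

With install-ldm the ldmd service is enabled for you; installing by hand, you enable it yourself as shown.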

Introduction to LDOM's

at 8:14 PM
Logical domains are discrete instances of the Solaris OE running independently within a virtualized environment. Each logical domain has its own virtual cpu, memory, OBP, console, networking, storage, and I/O components. These components are configured with a combination of different technologies.


sun4v Platform Hypervisor
Logical Domain Management Software
Solaris OE
The hypervisor provides the mechanism for masking and virtualizing the resources on the platform. It is a lightweight software layer built into the ALOM CMT firmware, and it also helps abstract the low-level hardware details from the logical domains.

The logical domain management software is the nexus for control and configuration of the hypervisor. This software provides a CLI for controlling and configuring the resources that define each logical domain. Only one logical domain can run the management software; this logical domain is known as the "primary" or control domain. More about the different LDOM types in a moment.

The Solaris OE provides support for the sun4v platform, dynamic reconfiguration, and virtual devices. At this time, you need Solaris 10 Update 3 (11/06) or Nevada build 57. It's not possible to use Solaris 9 or below for LDOM's, as the platform support is not there.
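A quick way to confirm both requirements on the target box (commands only; your output will obviously differ):

# uname -m                    # must report sun4v
# head -1 /etc/release        # Solaris 10 11/06 or newer, or snv_57 and up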

There are four types of LDOM's that can be created:


Control Domain
Service Domain
I/O Domain
Guest Domain
The control domain is the first installed LDOM, or instance of Solaris, on the platform. This LDOM contains the Logical Domain Management (SUNWldm) software for managing the platform. It is from this LDOM that all of the hardware platform specifics are visible and configurable. Control and configuration of the platform are communicated through LDC's (Logical Domain Channels); it is through this mechanism that configuration, virtual device, and virtual service communications are relayed.

A service domain is an LDOM that has control over one or more PCI-E controllers. It consists of an instance of the Solaris OE; no additional software is required, as the control domain configures the virtualized devices and services within a service domain. The service domain then services the I/O for these virtualized components for guest domains to use. The service domain has direct control over the hardware under its PCI-E controller. There are only two PCI-E controllers in a Sun Fire T2000, so only two service domains are configurable, one of which must also be the control domain. It is important to remember that the control domain is itself a service domain. If a second service domain is created, this is called a split PCI-E configuration. More about that later.
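As a rough preview of that split (a sketch only; pci@780 and pci@7c0 are the two bus names a Sun Fire T2000 typically exposes, and ldg1 is a hypothetical second domain):

# ldm list-bindings primary        # note which pci busses the primary currently owns
# ldm remove-io pci@7c0 primary    # give up one PCI-E controller (bus name assumed)
# ldm add-config split-pci         # save the new configuration, then reboot the primary
# ldm add-io pci@7c0 ldg1          # hand the freed controller to the second domain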

An I/O domain is exactly like a service domain, except that none of its devices or services are virtualized for guest domains. This is useful if you have an application that requires direct access to a PCI-E device for performance or some other reason.

A guest domain is a consumer of virtualized devices and services, meaning that it does not virtualize any devices or services for other domains. It is independent of other guest domains, but it is dependent upon the service domains that provide its virtual devices and services. A guest domain consists of its own instance of the Solaris OE. This is where your applications will typically live, since consuming resources in the control or service domains affects the platform as a whole.

While a fully configured Sun Fire T2000 has a total of 32 CPU threads, it's probably not a good idea to create 32 LDOM's, as this would leave the control and service domains underpowered.

The next post will be about the installation of the firmware, patches, and Logical Domain Management software.

Configuring the Control Domain (LDOM)

at 8:13 PM
Now it's time to configure the resources for your control domain! The first step is to make sure that the required SMF services are running:


# svcs -a | grep ldom
online Mar_20 svc:/ldoms/ldmd:default
online Mar_20 svc:/ldoms/vntsd:default


The ldmd service is responsible for controlling the platform, and the vntsd service provides the virtual terminal service for the consoles of your logical domains. If they are not running, enable them.
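Enabling them is just a matter of svcadm, using the FMRIs shown in the svcs output above:

# svcadm enable svc:/ldoms/ldmd:default
# svcadm enable svc:/ldoms/vntsd:default
# svcs -a | grep ldom              # both should now report online

You should then be able to run the /opt/SUNWldm/bin/ldm command: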


# /opt/SUNWldm/bin/ldm list
Name State Flags Cons VCPU Memory Util Uptime
primary active -t-cv SP 32 32G 0.8% 3d 16h 27m

As you can see, all 32 vcpu's and all of the memory are assigned to the primary (a.k.a. control) domain. We must free up these resources and create the basic infrastructure to support guest domains.


# /opt/SUNWldm/bin/ldm add-vdiskserver primary-vds0 primary
# /opt/SUNWldm/bin/ldm add-vconscon port-range=5000-5100 primary-vcc0 \
primary
# /opt/SUNWldm/bin/ldm add-vswitch net-dev=e1000g0 primary-vsw0 primary
# /opt/SUNWldm/bin/ldm add-vswitch net-dev=e1000g1 primary-vsw1 primary
# /opt/SUNWldm/bin/ldm add-vswitch net-dev=e1000g2 primary-vsw2 primary
# /opt/SUNWldm/bin/ldm add-vswitch net-dev=e1000g3 primary-vsw3 primary
# /opt/SUNWldm/bin/ldm set-mau 1 primary
# /opt/SUNWldm/bin/ldm set-vcpu 4 primary
# /opt/SUNWldm/bin/ldm set-memory 4G primary


The above creates the virtual disk server for servicing storage, the virtual console concentrator (ports 5000-5100), a virtual switch for each physical network port, one crypto unit (MAU), 4 vcpu's, and 4 GB of memory for the primary domain. This sets up enough resources for the primary domain, which acts as both the control domain and a service domain for the platform.
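Before saving anything, it doesn't hurt to sanity-check what was just created (commands only; output not reproduced here):

# /opt/SUNWldm/bin/ldm list-services primary   # should list primary-vds0, primary-vcc0 and primary-vsw0 through primary-vsw3
# /opt/SUNWldm/bin/ldm list-bindings primary   # shows the vcpu, memory and mau now bound to primary

Now we need to store this configuration into the ALOM CMT and reboot.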


# /opt/SUNWldm/bin/ldm list-config
factory-default [current]
# /opt/SUNWldm/bin/ldm add-config initial
# /opt/SUNWldm/bin/ldm list-config
factory-default [current]
initial [next]
# shutdown -y -g0 -i6


This stores the configuration on the ALOM CMT and activates it at the reboot. When the control domain comes back up, you'll notice that the available CPU and memory have changed:


# ldm list primary
Name State Flags Cons VCPU Memory Util Uptime
primary active -t-cv SP 4 4G 0.9% 3d 16h 39m
# psrinfo -vp
The physical processor has 4 virtual processors (0-3)
UltraSPARC-T1 (cpuid 0 clock 1000 MHz)
# psrinfo -v
Status of virtual processor 0 as of: 04/02/2007 11:00:03
on-line since 03/09/2007 23:53:23.
The sparcv9 processor operates at 1000 MHz,
and has a sparcv9 floating point processor.
Status of virtual processor 1 as of: 04/02/2007 11:00:03
on-line since 03/09/2007 23:53:27.
The sparcv9 processor operates at 1000 MHz,
and has a sparcv9 floating point processor.
Status of virtual processor 2 as of: 04/02/2007 11:00:03
on-line since 03/09/2007 23:53:27.
The sparcv9 processor operates at 1000 MHz,
and has a sparcv9 floating point processor.
Status of virtual processor 3 as of: 04/02/2007 11:00:03
on-line since 03/09/2007 23:53:27.
The sparcv9 processor operates at 1000 MHz,
and has a sparcv9 floating point processor.
# prtdiag -v | grep -i mem
Memory size: 4096 Megabytes

Now we are ready to create our first guest domain! Watch out for the next post.
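To give a flavor of what that will involve, here is a rough preview only; ldg1, the backing file path, and the sizes are all hypothetical and will be covered properly in the next post:

# /opt/SUNWldm/bin/ldm add-domain ldg1
# /opt/SUNWldm/bin/ldm add-vcpu 4 ldg1
# /opt/SUNWldm/bin/ldm add-memory 2G ldg1
# /opt/SUNWldm/bin/ldm add-vnet vnet0 primary-vsw0 ldg1
# /opt/SUNWldm/bin/ldm add-vdsdev /ldoms/ldg1.img vol1@primary-vds0   # backing file path assumed
# /opt/SUNWldm/bin/ldm add-vdisk vdisk0 vol1@primary-vds0 ldg1
# /opt/SUNWldm/bin/ldm bind-domain ldg1
# /opt/SUNWldm/bin/ldm start-domain ldg1
# telnet localhost 5000            # console via vntsd, using a port from the vcc range above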

root passwd change permission denied

Friday, February 11, 2011 at 7:31 AM
Error Message:

# passwd root
New Password:
Re-enter new Password:
Permission denied
#

# grep passwd /etc/nsswitch.conf
passwd: files nis
#

Because the passwd entry in /etc/nsswitch.conf includes nis, the passwd command attempts to change the password through NIS, which is refused on this host. Restrict the change to the local files repository instead:

# passwd -r files
passwd: Changing password for root
New Password:
Re-enter new Password:
passwd: password successfully changed for root
#

Clearing the unavailable disk device path and changing the boot disk path

Tuesday, January 11, 2011 at 10:46 PM


1) The old c1t0d0s* device links still exist...

node# ls -lart /dev/rdsk/c*s2
lrwxrwxrwx 1 root root 47 Sep 22 2005 /dev/rdsk/c1t1d0s2 -> ../../devices/pci@1c,600000/scsi@2/sd@1,0:c,raw
lrwxrwxrwx 1 root root 47 Sep 22 2005 /dev/rdsk/c1t0d0s2 -> ../../devices/pci@1c,600000/scsi@2/sd@0,0:c,raw
lrwxrwxrwx 1 root root 46 Sep 22 2005 /dev/rdsk/c0t0d0s2 -> ../../devices/pci@1e,600000/ide@d/sd@0,0:c,raw
lrwxrwxrwx 1 root other 47 Dec 31 12:26 /dev/rdsk/c1t2d0s2 -> ../../devices/pci@1c,600000/scsi@2/sd@2,0:c,raw
node#

2) The stale c1t0d0s* device files can be removed with "devfsadm -Cv". Below, the "-s" option is added first to preview the actions without actually removing anything...

node# devfsadm -Cvs
devfsadm[2811]: verbose: removing file: /devices/pci@1c,600000/scsi@2/sd@0,0:a
devfsadm[2811]: verbose: removing file: /dev/dsk/c1t0d0s0
devfsadm[2811]: verbose: removing file: /devices/pci@1c,600000/scsi@2/sd@0,0:b
devfsadm[2811]: verbose: removing file: /dev/dsk/c1t0d0s1
devfsadm[2811]: verbose: removing file: /devices/pci@1c,600000/scsi@2/sd@0,0:c
devfsadm[2811]: verbose: removing file: /dev/dsk/c1t0d0s2
devfsadm[2811]: verbose: removing file: /devices/pci@1c,600000/scsi@2/sd@0,0:d
devfsadm[2811]: verbose: removing file: /dev/dsk/c1t0d0s3
devfsadm[2811]: verbose: removing file: /devices/pci@1c,600000/scsi@2/sd@0,0:e
devfsadm[2811]: verbose: removing file: /dev/dsk/c1t0d0s4
devfsadm[2811]: verbose: removing file: /devices/pci@1c,600000/scsi@2/sd@0,0:f
devfsadm[2811]: verbose: removing file: /dev/dsk/c1t0d0s5
devfsadm[2811]: verbose: removing file: /devices/pci@1c,600000/scsi@2/sd@0,0:g
devfsadm[2811]: verbose: removing file: /dev/dsk/c1t0d0s6
devfsadm[2811]: verbose: removing file: /devices/pci@1c,600000/scsi@2/sd@0,0:h
devfsadm[2811]: verbose: removing file: /dev/dsk/c1t0d0s7
devfsadm[2811]: verbose: removing file: /devices/pci@1c,600000/scsi@2/sd@0,0:a,raw
devfsadm[2811]: verbose: removing file: /dev/rdsk/c1t0d0s0
devfsadm[2811]: verbose: removing file: /devices/pci@1c,600000/scsi@2/sd@0,0:b,raw
devfsadm[2811]: verbose: removing file: /dev/rdsk/c1t0d0s1
devfsadm[2811]: verbose: removing file: /devices/pci@1c,600000/scsi@2/sd@0,0:c,raw
devfsadm[2811]: verbose: removing file: /dev/rdsk/c1t0d0s2
devfsadm[2811]: verbose: removing file: /devices/pci@1c,600000/scsi@2/sd@0,0:d,raw
devfsadm[2811]: verbose: removing file: /dev/rdsk/c1t0d0s3
devfsadm[2811]: verbose: removing file: /devices/pci@1c,600000/scsi@2/sd@0,0:e,raw
devfsadm[2811]: verbose: removing file: /dev/rdsk/c1t0d0s4
devfsadm[2811]: verbose: removing file: /devices/pci@1c,600000/scsi@2/sd@0,0:f,raw
devfsadm[2811]: verbose: removing file: /dev/rdsk/c1t0d0s5
devfsadm[2811]: verbose: removing file: /devices/pci@1c,600000/scsi@2/sd@0,0:g,raw
devfsadm[2811]: verbose: removing file: /dev/rdsk/c1t0d0s6
devfsadm[2811]: verbose: removing file: /devices/pci@1c,600000/scsi@2/sd@0,0:h,raw
devfsadm[2811]: verbose: removing file: /dev/rdsk/c1t0d0s7

3) Now remove the old c1t0d0s* device files for real using "devfsadm -Cv":

node# devfsadm -Cv
devfsadm[12183]: verbose: removing file: /devices/pci@1c,600000/scsi@2/sd@0,0:a
devfsadm[12183]: verbose: removing file: /dev/dsk/c1t0d0s0
devfsadm[12183]: verbose: removing file: /devices/pci@1c,600000/scsi@2/sd@0,0:b
devfsadm[12183]: verbose: removing file: /dev/dsk/c1t0d0s1
devfsadm[12183]: verbose: removing file: /devices/pci@1c,600000/scsi@2/sd@0,0:c
devfsadm[12183]: verbose: removing file: /dev/dsk/c1t0d0s2
devfsadm[12183]: verbose: removing file: /devices/pci@1c,600000/scsi@2/sd@0,0:d
devfsadm[12183]: verbose: removing file: /dev/dsk/c1t0d0s3
devfsadm[12183]: verbose: removing file: /devices/pci@1c,600000/scsi@2/sd@0,0:e
devfsadm[12183]: verbose: removing file: /dev/dsk/c1t0d0s4
devfsadm[12183]: verbose: removing file: /devices/pci@1c,600000/scsi@2/sd@0,0:f
devfsadm[12183]: verbose: removing file: /dev/dsk/c1t0d0s5
devfsadm[12183]: verbose: removing file: /devices/pci@1c,600000/scsi@2/sd@0,0:g
devfsadm[12183]: verbose: removing file: /dev/dsk/c1t0d0s6
devfsadm[12183]: verbose: removing file: /devices/pci@1c,600000/scsi@2/sd@0,0:h
devfsadm[12183]: verbose: removing file: /dev/dsk/c1t0d0s7
devfsadm[12183]: verbose: removing file: /devices/pci@1c,600000/scsi@2/sd@0,0:a,raw
devfsadm[12183]: verbose: removing file: /dev/rdsk/c1t0d0s0
devfsadm[12183]: verbose: removing file: /devices/pci@1c,600000/scsi@2/sd@0,0:b,raw
devfsadm[12183]: verbose: removing file: /dev/rdsk/c1t0d0s1
devfsadm[12183]: verbose: removing file: /devices/pci@1c,600000/scsi@2/sd@0,0:c,raw
devfsadm[12183]: verbose: removing file: /dev/rdsk/c1t0d0s2
devfsadm[12183]: verbose: removing file: /devices/pci@1c,600000/scsi@2/sd@0,0:d,raw
devfsadm[12183]: verbose: removing file: /dev/rdsk/c1t0d0s3
devfsadm[12183]: verbose: removing file: /devices/pci@1c,600000/scsi@2/sd@0,0:e,raw
devfsadm[12183]: verbose: removing file: /dev/rdsk/c1t0d0s4
devfsadm[12183]: verbose: removing file: /devices/pci@1c,600000/scsi@2/sd@0,0:f,raw
devfsadm[12183]: verbose: removing file: /dev/rdsk/c1t0d0s5
devfsadm[12183]: verbose: removing file: /devices/pci@1c,600000/scsi@2/sd@0,0:g,raw
devfsadm[12183]: verbose: removing file: /dev/rdsk/c1t0d0s6
devfsadm[12183]: verbose: removing file: /devices/pci@1c,600000/scsi@2/sd@0,0:h,raw
devfsadm[12183]: verbose: removing file: /dev/rdsk/c1t0d0s7
node#

4) Check the current device status; the c1t0d0s* entries are gone...

node# ls -lart /dev/rdsk/c*s2
lrwxrwxrwx 1 root root 47 Sep 22 2005 /dev/rdsk/c1t1d0s2 -> ../../devices/pci@1c,600000/scsi@2/sd@1,0:c,raw
lrwxrwxrwx 1 root root 46 Sep 22 2005 /dev/rdsk/c0t0d0s2 -> ../../devices/pci@1e,600000/ide@d/sd@0,0:c,raw
lrwxrwxrwx 1 root other 47 Dec 31 12:26 /dev/rdsk/c1t2d0s2 -> ../../devices/pci@1c,600000/scsi@2/sd@2,0:c,raw
node#

5) The EEPROM is still pointing to the hardware path of c1t0d0 as the primary boot device...

node# eeprom | egrep '^(boot|nvram|devalias)'
boot-command=boot
boot-file: data not available.
boot-device=primary secondary
nvramrc=devalias primary /pci@1c,600000/scsi@2/disk@0,0:a
devalias secondary /pci@1c,600000/scsi@2/disk@1,0:a
node#

6) So reset the devalias entries so that primary points to c1t1d0 and secondary points to c1t2d0...

node# eeprom 'nvramrc=devalias primary /pci@1c,600000/scsi@2/disk@1,0:a devalias secondary /pci@1c,600000/scsi@2/disk@2,0:a'

node# eeprom | egrep '^(boot|nvram|devalias)'
boot-command=boot
boot-file: data not available.
boot-device=primary secondary
nvramrc=devalias primary /pci@1c,600000/scsi@2/disk@1,0:a devalias secondary /pci@1c,600000/scsi@2/disk@2,0:a
node#
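One extra check worth doing (not part of the original steps): the devalias lines in nvramrc are only executed at boot if the OBP evaluates nvramrc at all, i.e. if use-nvramrc? is true:

node# eeprom 'use-nvramrc?'          # should report true for the aliases to be created at boot
node# eeprom 'use-nvramrc?=true'     # only needed if the previous command reports false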

7) Install the UFS boot block on the newly replaced c1t2d0 disk...

node# installboot /usr/platform/SUNW,Sun-Fire-V240/lib/fs/ufs/bootblk /dev/rdsk/c1t2d0s0
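As a final sanity check (not part of the original steps; the alias names are the ones set in step 6), confirm the boot order still references the aliases, and test-boot the replaced disk from the OBP at a convenient time:

node# eeprom boot-device
boot-device=primary secondary

ok boot secondary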
