
Monday, September 28, 2009

Sun Cluster Data Service for ClearCase. ClearCase installation

Chaos and Order are not enemies, only opposites.
Richard Garriott

From my point of view, ClearCase brings exactly the needed level of Order into the Chaos of software development. You can find a lot of negative comments about it on the Internet, but the quality of ClearCase seems to be strongly related to the qualification of the personnel responsible for it. I love ClearCase and have never had a problem with it. Let's finally install it.

Actually, all steps are described in Rational ClearCase configuration for High Availability; I just have to follow them carefully.
  1. Installation.
    root@[both]:/ # mount kars:sol /mnt
    root@[both]:/ # cd /mnt/cluster/cc/7.0.1/sun5/clearcase/install/
    root@[both]:/mnt/cluster/cc/7.0.1/sun5/clearcase/install # ./install_release
    
    1. Local Install:            Install occurs on the local host.
    2. Full-copy: Regular installation, with no links to this release area.
             User specified:Install into: /opt/rational
      3 : ClearCase Full Function Installation
      9 : ClearCase MultiSite Full Function Installation
      f : Finish selection
    Selection number(s)>> 3 9 f
    ClearCase (Atria) Licensing License Server Host[cclic]:karblade/karu60
    ClearCase Registry Server Host[ccreg]:karblade/karu60
    ClearCase Registry Backup Host(s)[Unknown]:
    ClearCase Registry Region[server]:karc
    Do you wish to exit the install to allow you to plan a VOB migration (yes, no, quit)[yes]?no
    
    ***************************************************************************
    >> Summary of installation selections
    ***************************************************************************
            ClearCase (Atria) Licensing License server host is karblade/karu60
            Install into: /opt/rational
            Install method:local
            Install model:full
            Registry backup host(s): Unknown
            Registry host is karblade/karu60
            Registry region: karc
            Release area pathname:/mnt/cluster/cc/7.0.1/sun5
             
             Depending on the type of installation and components selected,
             the disk space required could be as much as 325 Megabytes.
             Please consult the Installation Guide for disk space 
             requirements for each kind of installation.
             
             The interactive portion of the installation is complete.
             If you choose to continue, the previously listed components
             will be installed/updated.
    
             Upon completion,  the installation status will indicate whether
             there were problems, and provide reminders of post-installation
             steps.
    
             This WILL include stopping all currently running ClearCase Product Family
             software.
             This WILL NOT require a reboot of the system.
             
    **** Enter 'quit' or 'no' to abort the installation ****
    **** Enter 'yes' or press <Return> to continue ****
    
    Continue installation?(yes, no, quit)[yes]:
    
    
    root@[both]:/mnt/cluster/cc/7.0.1/sun5/clearcase/install # cd
    root@[both]:/ # umount /mnt 
    
  2. Deactivate startup.
    root@[both]:/ # /etc/init.d/clearcase stop
    root@[both]:/ # rm /etc/rc2.d/S77clearcase /etc/rc0.d/K35clearcase
    root@[both]:/ # mv /etc/init.d/clearcase /etc/init.d/cl.clearcase
    
  3. Define the logical hostname karc-cc as a ClearCase alternate hostname.
    root@[both]:/ # echo karc-cc > /var/adm/rational/clearcase/config/alternate_hostnames
    
  4. Move the registry configuration onto the shared FS.
    (on karblade only)
    root@karblade:/ # cldg switch -n karblade ccset
    root@karblade:/ # mount /local/cc
    root@karblade:/ # cp -rp /var/adm/rational/clearcase/rgy /local/cc/rgy
    (on both)
    root@[both]:/ # mv /var/adm/rational/clearcase/rgy /var/adm/rational/clearcase/rgy.old
    root@[both]:/ # ln -s /local/cc/rgy /var/adm/rational/clearcase/rgy
    
  5. Every host should have its own license set, but the license host should be the same (a quick sanity check follows this list).
    root@karblade:/ # vi /var/adm/rational/clearcase/license.db
    root@karblade:/ # echo karc-cc > /var/adm/rational/clearcase/config/license_host
    
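The quick sanity check promised in step 5 (my addition, not part of the IBM instructions): on both nodes the registry path should resolve to the shared location, and the license host should be the logical hostname karc-cc:

root@[both]:/ # ls -l /var/adm/rational/clearcase/rgy
root@[both]:/ # cat /var/adm/rational/clearcase/config/license_host
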
Finally, we can start with the Data Service for ClearCase.

Thursday, September 24, 2009

Sun Cluster Data Service for ClearCase. Sun Cluster Software


"Νενικήκαμεν" (Nenikékamen, 'We have won.')
“Rejoice! We conquer!”
Probably Eukles (but officially Pheidippides)


I did not die. But I am also not allowed to announce "Nenikékamen", because I ran only a half marathon this Sunday.
Installation:
root@karblade:/ # mount kars:sol/cluster/inst /mnt
root@karblade:/ # cd /mnt/Solaris_sparc/
root@karblade:/mnt/Solaris_sparc # ./installer

Install
Sun Cluster 3.2 1/09
Sun Cluster Agents 3.2 1/09
     Sun Cluster HA for NFS
All Shared Components
Monitoring Console 1.0 Update 1
Check 'Configure later'

root@karblade:/mnt/Solaris_sparc # cd
root@karblade:/ # umount /mnt
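Before running the configuration, a quick check of what the installer actually put on the system (my addition, not part of the installer dialog; package names vary by release, so I just grep):

root@karblade:/ # pkginfo | grep -i cluster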
Configuration:
root@karblade:/ # scinstall

1) Create a new cluster or add a cluster node

1) Create a new cluster

Do you want to continue (yes/no) [yes]?  

1) Typical

What is the name of the cluster you want to establish?  karc

Node name (Control-D to finish):  karu60
Node name (Control-D to finish):  ^D

This is the complete list of nodes:
karblade
karu60

Is it correct (yes/no) [yes]?  

Select the first cluster transport adapter for "karblade":

1) ce0
2) ce1
3) hme0
4) Other

Option:  1

Will this be a dedicated cluster transport adapter (yes/no) [yes]?  

Searching for any unexpected network traffic on "ce0" ... 

Verification completed. No traffic was detected over a 10 second 
sample period.

Select the second cluster transport adapter for "karblade":

1) ce0
2) ce1
3) hme0
4) Other

Option:  2

Will this be a dedicated cluster transport adapter (yes/no) [yes]?  

Searching for any unexpected network traffic on "ce1" ... done
Verification completed. No traffic was detected over a 10 second 
sample period.

Do you want to disable automatic quorum device selection (yes/no) [no]?  

Is it okay to create the new cluster (yes/no) [yes]?  

Interrupt cluster creation for cluster check errors (yes/no) [no]?  
After reboot, check quorum and device configuration:
root@karblade:/ # clq status

=== Cluster Quorum ===

--- Quorum Votes Summary ---

Needed   Present   Possible
------   -------   --------
2        3         3


--- Quorum Votes by Node ---

Node Name       Present       Possible       Status
---------       -------       --------       ------
karu60          1             1              Online
karblade        1             1              Online


--- Quorum Votes by Device ---

Device Name       Present      Possible      Status
-----------       -------      --------      ------
d4                1            1             Online

root@karblade:/ # cldev show

=== DID Device Instances ===                   

DID Device Name:                                /dev/did/rdsk/d1
Full Device Path:                                karu60:/dev/rdsk/c0t0d0
Replication:                                     none
default_fencing:                                 global

DID Device Name:                                /dev/did/rdsk/d2
Full Device Path:                                karu60:/dev/rdsk/c0t1d0
Replication:                                     none
default_fencing:                                 global

DID Device Name:                                /dev/did/rdsk/d3
Full Device Path:                                karu60:/dev/rdsk/c0t6d0
Replication:                                     none
default_fencing:                                 global

DID Device Name:                                /dev/did/rdsk/d4
Full Device Path:                                karblade:/dev/rdsk/c3t1d0
Full Device Path:                                karu60:/dev/rdsk/c1t1d0
Replication:                                     none
default_fencing:                                 global

DID Device Name:                                /dev/did/rdsk/d5
Full Device Path:                                karblade:/dev/rdsk/c3t4d0
Full Device Path:                                karu60:/dev/rdsk/c1t4d0
Replication:                                     none
default_fencing:                                 global

DID Device Name:                                /dev/did/rdsk/d6
Full Device Path:                                karblade:/dev/rdsk/c0t1d0
Replication:                                     none
default_fencing:                                 global

DID Device Name:                                /dev/did/rdsk/d7
Full Device Path:                                karblade:/dev/rdsk/c0t0d0
Replication:                                     none
default_fencing:                                 global
Solaris Volume Manager configuration:
root@karblade:/ # metaset -s ccset -a -h karblade karu60
root@karblade:/ # cldg switch -n karblade ccset
root@karblade:/ # metaset -s ccset -a /dev/did/rdsk/d4 /dev/did/rdsk/d5
root@karblade:/ # metaset -s ccset

Set name = ccset, Set number = 1

Host                Owner
  karblade           Yes
  karu60             

Driv Dbase

d4   Yes  

d5   Yes  
root@karblade:/ # cat <<EOTAB >/etc/lvm/md.tab
> # d0 starts as a one-way mirror on d10; d20 is attached as the second submirror below
> ccset/d0 -m ccset/d10
>   ccset/d10 1 1 /dev/did/rdsk/d4s0
>   ccset/d20 1 1 /dev/did/rdsk/d5s0
> EOTAB
root@karblade:/ # metainit -s ccset -a
ccset/d10: Concat/Stripe is setup
ccset/d20: Concat/Stripe is setup
ccset/d0: Mirror is setup
root@karblade:/ # metattach -s ccset ccset/d0 ccset/d20
ccset/d0: submirror ccset/d20 is attached
root@karblade:/ # metastat -s ccset
ccset/d0: Mirror
    Submirror 0: ccset/d10
      State: Okay         
    Submirror 1: ccset/d20
      State: Resyncing    
    Resync in progress: 0 % done
    Pass: 1
    Read option: roundrobin (default)
    Write option: parallel (default)
    Size: 35323904 blocks (16 GB)

ccset/d10: Submirror of ccset/d0
    State: Okay         
    Size: 35323904 blocks (16 GB)
    Stripe 0:
        Device   Start Block  Dbase        State Reloc Hot Spare
        d4s0            0     No            Okay   No  


ccset/d20: Submirror of ccset/d0
    State: Resyncing    
    Size: 35358848 blocks (16 GB)
    Stripe 0:
        Device   Start Block  Dbase        State Reloc Hot Spare
        d5s0            0     No            Okay   No  

Device Relocation Information:
Device   Reloc  Device ID
d5   No         -
d4   No         -
root@karblade:/ # newfs /dev/md/ccset/rdsk/d0
/dev/md/ccset/rdsk/d0: Unable to find Media type. Proceeding with system determined parameters.
/dev/md/ccset/rdsk/d0: Unable to find Media type. Proceeding with system determined parameters.
newfs: construct a new file system /dev/md/ccset/rdsk/d0: (y/n)? y
/dev/md/ccset/rdsk/d0: Unable to find Media type. Proceeding with system determined parameters.
/dev/md/ccset/rdsk/d0: Unable to find Media type. Proceeding with system determined parameters.
Warning: 4096 sector(s) in last cylinder unallocated
/dev/md/ccset/rdsk/d0:  35323904 sectors in 5750 cylinders of 48 tracks, 128 sectors
        17248.0MB in 360 cyl groups (16 c/g, 48.00MB/g, 5824 i/g)
super-block backups (for fsck -F ufs -o b=#) at:
 32, 98464, 196896, 295328, 393760, 492192, 590624, 689056, 787488, 885920,
Initializing cylinder groups:
......
super-block backups for last 10 cylinder groups at:
 34410272, 34508704, 34607136, 34705568, 34804000, 34902432, 35000864,
 35099296, 35197728, 35296160

On both nodes:

root@[both]:/ # mkdir -p /local/cc
root@[both]:/ # cat <<ELOVF >>/etc/vfstab
> # mount-at-boot is "no": the cluster decides where /local/cc gets mounted
> /dev/md/ccset/dsk/d0 /dev/md/ccset/rdsk/d0 /local/cc ufs 2 no logging
> ELOVF
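A quick smoke test before handing the file system over to ClearCase does not hurt (my addition; karblade still owns ccset after the cldg switch above):

root@karblade:/ # mount /local/cc
root@karblade:/ # df -h /local/cc
root@karblade:/ # umount /local/cc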
Time to start with ClearCase installation.

Wednesday, September 16, 2009

Sun Cluster Data Service for ClearCase. Solaris

Mit trivialen Science-Fiction-Romanen hat "Solaris" nichts gemeinsam.
("Solaris" has nothing common with trivial Science-Fiction.)
Comment on German about Roman Solaris by Stanisław Lem

"Solaris" raises deep questions about science and the nature of human consciousness,
and lets the viewer draw his own conclusions.
IMDb user comment about the film Solaris (1972) by Andrei Tarkovsky


I like Stanisław Lem's Solaris. I read most of his science fiction books back in the late '80s, in Russian. But Solaris did not impress me at the time (I was 14); The Invincible, for example, was much more interesting for a teenager. Solaris really impressed me in 2002, when I read it for the second time. Unfortunately, it stayed my favorite Lem book only for a little while, because right after Solaris I read his book of philosophical essays, "Summa Technologiae", and it has held the first position in my list of favorite books ever since. Written in 1964, "Summa" remains as relevant today as ever. It is a big shame that an English translation still does not exist! There are only some chapters translated by Frank Prengel.

But back to Solaris. Sun Solaris, I mean. Automating Solaris OS installation was part of my job during the last 8 years, so I did not expect any difficulties with it. The only thing that needs attention is the disk layout (a JumpStart sketch follows the list):
  • 32 MB on the root disk for Solaris Volume Manager replicas
  • 512 MB /globaldevices slice
  • 2 GB swap space (the cluster software needs more swap)
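Since the machines are installed over the network anyway (see below), this layout can be captured in a JumpStart profile. A minimal sketch; the slice numbers and the software group are my own choice, not mandated by the cluster documentation:

# hypothetical JumpStart profile matching the layout above
install_type    initial_install
system_type     standalone
partitioning    explicit
cluster         SUNWCXall
filesys         rootdisk.s1     2048    swap
filesys         rootdisk.s3     512     /globaldevices
filesys         rootdisk.s7     32      unnamed
filesys         rootdisk.s0     free    /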
I put the Solaris U7 DVD into the karblade drive and ... faced the first problem: the DVD drive does not work. I've seen this situation many times in the lab - what can you expect from an optical drive that has not been used for years? I did not want to pull it out and clean it, so I decided to set up a boot and install server on my Gentoo workstation instead. When I started with karu60, I realized that this was the right choice - the Ultra 60 (I had forgotten) has only a CD-ROM drive.

The second problem was the on-board Ethernet NIC on karu60. It works, but when I tried to use it as a boot device, a strange error was raised. A brief search on the Internet suggested a corrupted EEPROM. No matter - I plugged the cable into one of the Gigabit NICs and moved it back after the installation.

Operating system post-configuration:
  1. Put the missing entries in /etc/hosts (the loghost entry should refer to the local IP address)
    192.168.10.41   karblade
    192.168.10.42   karu60
    192.168.10.46   karc-cc
    
  2. Prepare the ssh interconnection between both machines
    root@[both]:/ # vi /etc/ssh/sshd_config (set PermitRootLogin yes)
    root@[both]:/ # svcadm refresh ssh
    root@[both]:/ # mkdir ~/.ssh
    root@[both]:/ # chmod 700 ~/.ssh
    root@[both]:/ # cd ~/.ssh
    
    on karblade
    root@karblade://.ssh # ssh-keygen -t dsa
    root@karblade://.ssh # cat id_dsa.pub > authorized_keys
    root@karblade://.ssh # chmod 600 authorized_keys
    root@karblade://.ssh # scp * karu60:/.ssh/
    
  3. Switch the public network connection back to hme0 on karu60.
    root@karu60:/ # cd /etc
    root@karu60:/etc # mv hostname.ce1 hostname.hme0
    root@karu60:/etc # ifconfig hme0 plumb
    root@karu60:/etc # ifconfig ce1 unplumb
    (plug in public network patch cord back in on-board connector)
    root@karu60:/etc # ifconfig hme0 inet 192.168.10.42 netmask + broadcast + up
    Setting netmask of hme0 to 255.255.255.0
    root@karu60:/ # ping karblade
    karblade is alive
    
  4. Add entries to /etc/system on both hosts
    root@[both]:/ # cat <<EOS >> /etc/system
    > * recommended when ce (Cassini) adapters are used as cluster interconnects
    > set ce:ce_taskq_disable=1
    > * lofs must be excluded when Sun Cluster HA for NFS is used
    > exclude:lofs
    > * rstchown=0 relaxes chown restrictions (the ClearCase setup recommends this)
    > set rstchown=0
    > EOS
    
  5. Disable NFSv4 on both hosts
    root@[both]:/ # echo NFS_SERVER_VERSMAX=3 >> /etc/default/nfs
    
  6. Create Solaris Volume Manager replicas (re-checked after the reboot; see below)
    root@karblade:/ # metadb -af -c 3 c0t0d0s7
    root@karu60:/ # metadb -af -c 3 c0t0d0s7
    
  7. Set some environment variables.
    root@[both]:/ # cat <<EOP >> /etc/profile
    > if [ -n "$PATH" ]; then
    >   PATH=/usr/cluster/bin:$PATH
    > else
    >   PATH=/usr/cluster/bin
    > fi
    > 
    > if [ -n "$MANPATH" ]; then
    >   MANPATH=/usr/cluster/man:$MANPATH
    > else
    >   MANPATH=/usr/cluster/man:/usr/man
    > fi
    > 
    > PAGER=less
    > export PATH MANPATH PAGER
    > EOP
    
  8. Reboot with reconfiguration. (I installed the OS with the StorEdge switched off; now we need the disk entries in devfs.)
    root@[both]:/ # touch /reconfigure
    (switch StorEdge on)
    root@[both]:/ # init 6
    
  9. Check the disks
    root@karblade:/ # ls -la /dev/dsk/*s2
    lrwxrwxrwx   1 root     root          38 Sep 16 11:58 /dev/dsk/c0t0d0s2 -> ../../devices/pci@1f,0/ide@d/dad@0,0:c
    lrwxrwxrwx   1 root     root          37 Sep 16 11:58 /dev/dsk/c0t1d0s2 -> ../../devices/pci@1f,0/ide@d/sd@1,0:c
    lrwxrwxrwx   1 root     root          57 Sep 16 13:35 /dev/dsk/c3t1d0s2 -> ../../devices/pci@1f,0/pci@5/pci@1/SUNW,isptwo@4/sd@1,0:c
    lrwxrwxrwx   1 root     root          57 Sep 16 13:35 /dev/dsk/c3t4d0s2 -> ../../devices/pci@1f,0/pci@5/pci@1/SUNW,isptwo@4/sd@4,0:c
    
    root@karu60:/ # ls -la /dev/dsk/*s2
    lrwxrwxrwx   1 root     root          41 Sep 16 12:24 /dev/dsk/c0t0d0s2 -> ../../devices/pci@1f,4000/scsi@3/sd@0,0:c
    lrwxrwxrwx   1 root     root          41 Sep 16 12:24 /dev/dsk/c0t1d0s2 -> ../../devices/pci@1f,4000/scsi@3/sd@1,0:c
    lrwxrwxrwx   1 root     root          41 Sep 16 12:24 /dev/dsk/c0t6d0s2 -> ../../devices/pci@1f,4000/scsi@3/sd@6,0:c
    lrwxrwxrwx   1 root     root          43 Sep 16 13:36 /dev/dsk/c1t1d0s2 -> ../../devices/pci@1f,4000/scsi@3,1/sd@1,0:c
    lrwxrwxrwx   1 root     root          43 Sep 16 13:36 /dev/dsk/c1t4d0s2 -> ../../devices/pci@1f,4000/scsi@3,1/sd@4,0:c
    
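One more optional check after the reconfiguration reboot (my addition): the three state database replicas created in step 6 should still be listed, without error flags:

root@[both]:/ # metadb
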
Next: Sun Cluster Software Installation

Tuesday, September 15, 2009

Sun Cluster Data Service for ClearCase. Prepare

If builders built buildings the way programmers wrote programs, the first woodpecker to come along would destroy civilization.
Weinberg's Second Law

Building a SPARC-based Solaris Cluster at home is not a trivial task, simply because of the lack of hardware. I already had almost everything except additional Ethernet PCI adapters. But, thanks to Werner and Alex, I got two nice Dual Gigabit Ethernet PCI adapters for my tests.

Cluster name: karc
Logical hostname: karc-cc

Hardware:
  • SunBlade 100 (karblade)
  • Sun Ultra 60 (karu60)
  • Sun StorEdge MultiPack UltraSCSI Drive Box
  • 2 x 18 GB SCSI hard disks
  • PCI UltraSCSI host adapter for Blade 100
  • 2 x SCSI 68-68 pin cables
  • 2 x Dual Gigabit Ethernet PCI adapters for cluster interconnects
  • 2 x Gigabit Patch Cords
IP addresses and hostnames:
  • 192.168.10.41   karblade
  • 192.168.10.42   karu60
  • 192.168.10.46   karc-cc
Software:
  • Solaris 10 U7 5/09
  • Sun Cluster 3.2 1/09
  • IBM Rational ClearCase 7.0.1.0 IFIX01
Next:  Sun Cluster Data Service for ClearCase. Hardware Configuration

Monday, September 14, 2009

Sun Cluster Data Service for ClearCase. Prehistory

"I don't expect wetware to work as logically as software."
(C) AI Jane
ClearCase as a highly available cluster service? "Good idea!" we thought in early 2001. But closer analysis and tests with Sun Cluster showed that ClearCase is not cluster-aware, and we had to walk away from this "good idea". Actually, we did not miss much - ClearCase was really stable over the years, and (at least as far as I remember) we never had any availability problems.

Times changed: Rational Software was purchased by IBM, Sun Cluster was rebranded to Solaris Cluster, our project was outsourced, and I left the company. But a few days ago I visited former colleagues and we discussed old times. Later that evening I looked around on the Internet and ... times really have changed ... I found the Support Policy for High Availability clustering and ClearCase on the official IBM site.

Unfortunately, support for Solaris Cluster exists only in the form of the not yet implemented change request RATLC00585651. But the main step is done - ClearCase is now cluster-aware. And the topic Configuring Rational ClearCase for high availability in the ClearCase Administrator's Guide gives enough information for a new round of analysis and tests with Solaris Cluster.

I am currently looking for new projects and have a bit of free time. Why not try? "Good idea!" Let's see how far I can go this time.

Next:  Sun Cluster Data Service for ClearCase. Prepare