Oracle: install RAC 11g on RedHat 5.8 with VMware

From the Wiki of Romain RUDIGER

This is not an installation how-to, just my notes from my first RAC installation and configuration.

The servers:

  • lab-1a
  • lab-1b

The network configuration is:

  • lab-1a.novalan.priv 192.168.0.55
  • lab-1a-vip.novalan.priv 192.168.0.57
  • lab-1b.novalan.priv 192.168.0.56
  • lab-1b-vip.novalan.priv 192.168.0.58
  • lab-1-scan.novalan.priv 192.168.0.60 .61 .62
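
Name resolution for these addresses must be in place before the Grid install. A minimal sketch of the matching host entries follows (written to a scratch file here; the `-priv` interconnect names are hypothetical). Note that the SCAN name should resolve to its three addresses via DNS round-robin, which /etc/hosts cannot provide, so only the node, VIP, and private entries belong there:

```shell
# Sketch of /etc/hosts entries for the lab (written to a scratch file);
# the SCAN addresses are deliberately NOT listed -- they live in DNS.
cat > /tmp/hosts.rac <<'EOF'
192.168.0.55  lab-1a.novalan.priv      lab-1a
192.168.0.57  lab-1a-vip.novalan.priv  lab-1a-vip
192.168.0.56  lab-1b.novalan.priv      lab-1b
192.168.0.58  lab-1b-vip.novalan.priv  lab-1b-vip
10.1.1.1      lab-1a-priv
10.1.1.2      lab-1b-priv
EOF
grep -c 'novalan.priv' /tmp/hosts.rac   # 4 public/VIP entries
```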

Prepare the servers

Network configuration:

for srv in lab-1a lab-1b; do echo $srv; ssh $srv "grep -Ev '^#' /etc/sysconfig/network-scripts/ifcfg-eth*; ip ad sh | grep eth0"; done
lab-1a
/etc/sysconfig/network-scripts/ifcfg-eth0:DEVICE=eth0
/etc/sysconfig/network-scripts/ifcfg-eth0:BOOTPROTO=dhcp
/etc/sysconfig/network-scripts/ifcfg-eth0:HWADDR=00:0C:29:7C:24:7D
/etc/sysconfig/network-scripts/ifcfg-eth0:ONBOOT=yes
/etc/sysconfig/network-scripts/ifcfg-eth0:0:DEVICE=eth0:0
/etc/sysconfig/network-scripts/ifcfg-eth0:0:BOOTPROTO=static
/etc/sysconfig/network-scripts/ifcfg-eth0:0:IPADDR=10.1.1.1
/etc/sysconfig/network-scripts/ifcfg-eth0:0:NETMASK=255.255.255.252
/etc/sysconfig/network-scripts/ifcfg-eth0:0:NETWORK=10.1.1.0
/etc/sysconfig/network-scripts/ifcfg-eth0:0:BROADCAST=10.1.1.3
/etc/sysconfig/network-scripts/ifcfg-eth0:0:ONBOOT=yes
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
    inet 192.168.0.55/24 brd 192.168.0.255 scope global eth0
    inet 10.1.1.1/30 brd 10.1.1.3 scope global eth0:0
lab-1b
/etc/sysconfig/network-scripts/ifcfg-eth0:DEVICE=eth0
/etc/sysconfig/network-scripts/ifcfg-eth0:BOOTPROTO=dhcp
/etc/sysconfig/network-scripts/ifcfg-eth0:HWADDR=00:0C:29:DF:A9:46
/etc/sysconfig/network-scripts/ifcfg-eth0:ONBOOT=yes
/etc/sysconfig/network-scripts/ifcfg-eth0:0:DEVICE=eth0:0
/etc/sysconfig/network-scripts/ifcfg-eth0:0:BOOTPROTO=static
/etc/sysconfig/network-scripts/ifcfg-eth0:0:IPADDR=10.1.1.2
/etc/sysconfig/network-scripts/ifcfg-eth0:0:NETMASK=255.255.255.252
/etc/sysconfig/network-scripts/ifcfg-eth0:0:NETWORK=10.1.1.0
/etc/sysconfig/network-scripts/ifcfg-eth0:0:BROADCAST=10.1.1.3
/etc/sysconfig/network-scripts/ifcfg-eth0:0:ONBOOT=yes
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
    inet 192.168.0.56/24 brd 192.168.0.255 scope global eth0
    inet 10.1.1.2/30 brd 10.1.1.3 scope global eth0:0

Install additional packages:

for srv in lab-1a lab-1b; do echo $srv; ssh $srv "yum install -y gcc.x86_64 gcc-c++.x86_64 glibc-devel.x86_64 glibc-headers kernel-headers libstdc++-devel.x86_64 glibc-headers.x86_64 sysstat.x86_64 elfutils-libelf-devel.i386 libaio-devel.x86_64"; done

Extend the shared memory FS:

for srv in lab-1a lab-1b; do ssh $srv "umount /dev/shm; mount -t tmpfs shmfs -o size=1500m /dev/shm"; done
root@nas:/data# for srv in lab-1a lab-1b; do ssh $srv "grep tmpfs /etc/fstab; sed --in-place -r \"s/tmpfs[[:space:]]+defaults/tmpfs   size=1500m/\" /etc/fstab; grep tmpfs /etc/fstab"; done
tmpfs                   /dev/shm                tmpfs   defaults        0 0
tmpfs                   /dev/shm                tmpfs   size=1500m        0 0
tmpfs                   /dev/shm                tmpfs   defaults        0 0
tmpfs                   /dev/shm                tmpfs   size=1500m        0 0
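
After remounting, the new size should be visible on each node; a quick check (the ~1.5G value is specific to this lab):

```shell
# Show the total size of /dev/shm as currently mounted; after the remount
# above this should report roughly 1.5G on each node.
df -h /dev/shm | awk 'NR==2 {print $2}'
```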

Customize the kernel parameters:

for srv in lab-1a lab-1b; do ssh $srv "echo \"
kernel.sem = 250 32000 100 128
kernel.shmmni = 4096
net.ipv4.ip_local_port_range = 32768 61000
net.core.rmem_default = 4194304
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
fs.aio-max-nr = 1048576
fs.file-max = 6815744\" >> /etc/sysctl.conf; sysctl -p"; done
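
The values applied above can be cross-checked against the running kernel; a small sketch (a dotted sysctl name maps to a slash-separated path under /proc/sys):

```shell
# Read back a few of the parameters set above straight from /proc/sys
# (equivalent to `sysctl -n <name>`); the dot-to-slash mapping is generic.
for p in kernel.shmmni fs.file-max fs.aio-max-nr; do
    printf '%s = %s\n' "$p" "$(cat /proc/sys/$(echo $p | tr . /))"
done
```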

Change the user limits for dba group:

for srv in lab-1a lab-1b; do ssh $srv "echo \"@dba soft nproc 2047
@dba hard nproc 16384
@dba soft nofile 4096
@dba hard nofile 65536\" >> /etc/security/limits.conf"; done
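
Once a dba-group member logs back in, the new limits apply to its shell and can be read back with ulimit:

```shell
# Soft limits for the current shell; for a dba user these should match
# the limits.conf entries above (nofile 4096, nproc 2047).
ulimit -n    # max open files
ulimit -u    # max user processes
```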

Create the dba group and the oracle user (password=oracle). Also set up the SSH keys (required by the OUI):

for srv in lab-1a lab-1b; do ssh $srv "groupadd -g 200 dba; useradd -u 200 -g dba -p \\\$1\\\$F4/Jz5QE\\\$/BuAWExpu3UmAPz3BJbEg1 oracle"; done
for srv in lab-1a lab-1b; do scp set_ssh_key.sh oracle@$srv:; ssh oracle@$srv "./set_ssh_key.sh lab-1a lab-1b"; done
oracle@lab-1a's password:
set_ssh_key.sh                                                                                                             100% 2965     2.9KB/s   00:00
oracle@lab-1a's password:
Public key not exist, generating it...
Generating public/private dsa key pair.
Created directory '/home/oracle/.ssh'.
Your identification has been saved in /home/oracle/.ssh/id_dsa.
Your public key has been saved in /home/oracle/.ssh/id_dsa.pub.
The key fingerprint is:
f3:f9:a3:7d:6b:7d:9b:ad:8b:d7:c1:ed:67:37:fe:61 oracle@lab-1A
INFO-1-Processing lab-1a...
Warning: Permanently added 'lab-1a,192.168.0.55' (RSA) to the list of known hosts.
Permission denied, please try again.
Permission denied, please try again.
Permission denied (publickey,gssapi-with-mic,password).
INFO-1-lab-1a processed.
INFO-2-Processing lab-1b...
Warning: Permanently added 'lab-1b,192.168.0.56' (RSA) to the list of known hosts.
Permission denied, please try again.
Permission denied, please try again.
Permission denied (publickey,gssapi-with-mic,password).
INFO-2-lab-1b processed.
INFO-Number of processed server(s): 2
INFO-Number of sucessful processed server(s): 2
oracle@lab-1b's password:
set_ssh_key.sh                                                                                                             100% 2965     2.9KB/s   00:00
oracle@lab-1b's password:
Public key not exist, generating it...
Generating public/private dsa key pair.
Created directory '/home/oracle/.ssh'.
Your identification has been saved in /home/oracle/.ssh/id_dsa.
Your public key has been saved in /home/oracle/.ssh/id_dsa.pub.
The key fingerprint is:
10:13:98:0b:79:ba:f3:3b:66:10:be:fa:36:06:2a:65 oracle@lab-1B
INFO-1-Processing lab-1a...
Warning: Permanently added 'lab-1a,192.168.0.55' (RSA) to the list of known hosts.
Permission denied, please try again.
Permission denied, please try again.
Permission denied (publickey,gssapi-with-mic,password).
INFO-1-lab-1a processed.
INFO-2-Processing lab-1b...
Warning: Permanently added 'lab-1b,192.168.0.56' (RSA) to the list of known hosts.
Permission denied, please try again.
Permission denied, please try again.
Permission denied (publickey,gssapi-with-mic,password).
INFO-2-lab-1b processed.
INFO-Number of processed server(s): 2
INFO-Number of sucessful processed server(s): 2
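
The crypted string passed to `useradd -p` above is an MD5-crypt hash; it can be regenerated with openssl (the salt is random, so the exact hash differs on every run, which is also why the `$` signs need the heavy escaping inside the ssh command):

```shell
# Produce an MD5-crypt hash of the password "oracle" for `useradd -p`;
# the output always has the form $1$<salt>$<hash>.
openssl passwd -1 oracle
```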

Set up or disable the time service:

for srv in lab-1a lab-1b; do echo $srv; ssh $srv "grep -E '^server ' /etc/ntp.conf; sed --in-place -r \"s/OPTIONS=\\\"/OPTIONS=\\\"-x /\" /etc/sysconfig/ntpd; grep OPTIONS /etc/sysconfig/ntpd; chkconfig ntpd on; service ntpd restart; sleep 4; ntpq -p"; done
lab-1a
server 192.168.0.254
server 127.127.1.0
10 Nov 21:07:10 ntpdate[8395]: adjust time server 192.168.0.254 offset 0.121783 sec
Starting ntpd: [  OK  ]
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 qube.novalan.pr 94.23.243.53     3 u    3   64    1    1.147  120.802   0.001
 LOCAL(0)        .LOCL.          10 l    2   64    1    0.000    0.000   0.001
lab-1b
server 192.168.0.254
server 127.127.1.0
10 Nov 21:07:14 ntpdate[7845]: adjust time server 192.168.0.254 offset 0.202162 sec
Starting ntpd: [  OK  ]
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 qube.novalan.pr 94.23.243.53     3 u    3   64    1    1.492  201.279   0.001
 LOCAL(0)        .LOCL.          10 l    2   64    1    0.000    0.000   0.001

Set up the Grid home and Oracle base directories:

for srv in lab-1a lab-1b; do echo $srv; ssh $srv "mkdir -p /app/grid/11.2.0; mkdir -p /app/oracle; chown -R oracle:dba /app/"; done


Install the ASM packages:

for srv in lab-1a lab-1b; do ssh $srv "rpm -Uvh /mnt/appz/Oracle/oracleasm*"; done
warning: /mnt/appz/Oracle/oracleasm-2.6.18-308.el5-2.0.5-1.el5.x86_64.rpm: Header V3 DSA signature: NOKEY, key ID 1e5e0159
Preparing...                ##################################################
oracleasm-support           ##################################################
oracleasm-2.6.18-308.el5    ##################################################
oracleasmlib                ##################################################
warning: /mnt/appz/Oracle/oracleasm-2.6.18-308.el5-2.0.5-1.el5.x86_64.rpm: Header V3 DSA signature: NOKEY, key ID 1e5e0159
Preparing...                ##################################################
oracleasm-support           ##################################################
oracleasm-2.6.18-308.el5    ##################################################
oracleasmlib                ##################################################

List of the devices:

root@nas:~# for srv in lab-1a lab-1b; do echo $srv; ssh $srv "fdisk -l | grep Disk"; done
lab-1a
Disk /dev/sda: 48.3 GB, 48318382080 bytes
Disk /dev/sdb: 2147 MB, 2147483648 bytes
Disk /dev/sdc: 10.7 GB, 10737418240 bytes
lab-1b
Disk /dev/sda: 48.3 GB, 48318382080 bytes
Disk /dev/sdb: 943 MB, 943718400 bytes
Disk /dev/sdc: 1073 MB, 1073741824 bytes
  • sda is the local disk
  • sdb and sdc are the shared disks (iSCSI on a FreeNAS):
      • 2 GB for the Oracle Cluster Registry and the voting data
      • 10 GB for the database

Create a partition on each shared device like this:

fdisk /dev/sdb
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-1024, default 1): 1
Last cylinder or +size or +sizeM or +sizeK (1-1024, default 1024):
Using default value 1024
Command (m for help): w

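The same dialog can also be fed non-interactively; a sketch (destructive, so it is guarded to be a no-op when the device is absent):

```shell
# Replay the fdisk answers above on stdin: new (n) primary (p) partition 1,
# default first and last cylinder, then write (w). Only runs if /dev/sdb
# actually exists as a block device -- here it is the lab's shared iSCSI disk.
if [ -b /dev/sdb ]; then
    printf 'n\np\n1\n\n\nw\n' | fdisk /dev/sdb
fi
```
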
Configure the ASM library on each node:

oracleasm configure -i
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver.  The following questions will determine whether the driver is
loaded on boot and what permissions it will have.  The current values
will be shown in brackets ('[]').  Hitting <ENTER> without typing an
answer will keep that current value.  Ctrl-C will abort.

Default user to own the driver interface []: oracle
Default group to own the driver interface []: dba
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]:
Writing Oracle ASM library driver configuration: done

/usr/sbin/oracleasm init
Mark the disks as usable by ASM:
/usr/sbin/oracleasm createdisk OCRVOTING /dev/sdb1
Writing disk header: done
Instantiating disk: done
/usr/sbin/oracleasm createdisk DATA /dev/sdc1
Writing disk header: done
Instantiating disk: done

List ASM handled disks:

/usr/sbin/oracleasm listdisks
DATA
OCRVOTING

On the other node, scan for the newly created devices:

/usr/sbin/oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
Instantiating disk "OCRVOTING"
Instantiating disk "DATA"

Install cvuqdisk in order to allow OUI to check the shared devices:

for srv in lab-1a lab-1b; do echo $srv; ssh $srv "export CVUQDISK_GRP=dba; rpm -iv /mnt/appz/Oracle/RDBMS-11.2.0.3_Linux64bit/grid/rpm/cvuqdisk-1.0.9-1.rpm; rpm -q cvuqdisk"; done
lab-1a
Preparing packages for installation...
cvuqdisk-1.0.9-1
lab-1b
Preparing packages for installation...
cvuqdisk-1.0.9-1

Check the nodes with Cluster Verification Utility

cd /mnt/appz/Oracle/Cluster-Verification-Utility/bin
./cluvfy stage -pre crsinst -n lab-1a,lab-1b -r 11gR2

Install Oracle Grid Infrastructure and Oracle Real Application Clusters

Uncompress the Grid package and run OUI:

cd /mnt/appz/Oracle/RDBMS-11.2.0.3_Linux64bit/
unzip p10404530_112030_Linux-x86-64_3of7.zip
cd grid/
./runInstaller
Oracle Grid Infrastructure 11g 11.2.0.3-installer-Step01.png
Oracle Grid Infrastructure 11g 11.2.0.3-installer-Step02.png
Oracle Grid Infrastructure 11g 11.2.0.3-installer-Step03.png
Oracle Grid Infrastructure 11g 11.2.0.3-installer-Step04.png
Oracle Grid Infrastructure 11g 11.2.0.3-installer-Step04-eth.png
Oracle Grid Infrastructure 11g 11.2.0.3-installer-Step05.png
Oracle Grid Infrastructure 11g 11.2.0.3-installer-Step06.png
Oracle Grid Infrastructure 11g 11.2.0.3-installer-Step08.png
Oracle Grid Infrastructure 11g 11.2.0.3-installer-Step09.png
Oracle Grid Infrastructure 11g 11.2.0.3-installer-Step09-1.png
Oracle Grid Infrastructure 11g 11.2.0.3-installer-Step09-2.png

At the end of Step 9, you need to run the root.sh script: it installs oraenv, dbhome, and coraenv, generates the Oracle Local Registry, and adds the Clusterware entries to the inittab.

On the first node:

[root@lab-1A ~]# /app/grid/11.2.0/root.sh
Performing root user operation for Oracle 11g

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /app/grid/11.2.0

Enter the full pathname of the local bin directory: [/usr/local/bin]:
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...

Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /app/grid/11.2.0/crs/install/crsconfig_params
Creating trace directory
OLR initialization - successful
  root wallet
  root wallet cert
  root cert export
  peer wallet
  profile reader wallet
  pa wallet
  peer wallet keys
  pa wallet keys
  peer cert request
  pa cert request
  peer cert
  pa cert
  peer root cert TP
  profile reader root cert TP
  pa root cert TP
  peer pa cert TP
  pa peer cert TP
  profile reader pa cert TP
  profile reader peer cert TP
  peer user cert
  pa user cert
Adding Clusterware entries to inittab
CRS-2672: Attempting to start 'ora.mdnsd' on 'lab-1a'
CRS-2676: Start of 'ora.mdnsd' on 'lab-1a' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'lab-1a'
CRS-2676: Start of 'ora.gpnpd' on 'lab-1a' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'lab-1a'
CRS-2672: Attempting to start 'ora.gipcd' on 'lab-1a'
CRS-2676: Start of 'ora.cssdmonitor' on 'lab-1a' succeeded
CRS-2676: Start of 'ora.gipcd' on 'lab-1a' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'lab-1a'
CRS-2672: Attempting to start 'ora.diskmon' on 'lab-1a'
CRS-2676: Start of 'ora.diskmon' on 'lab-1a' succeeded
CRS-2676: Start of 'ora.cssd' on 'lab-1a' succeeded

ASM created and started successfully.

Disk Group OCRVOTING created successfully.

clscfg: -install mode specified
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4256: Updating the profile
Successful addition of voting disk 5659546298e64f91bf1cc3d696ebd4af.
Successfully replaced voting disk group with +OCRVOTING.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   5659546298e64f91bf1cc3d696ebd4af (ORCL:OCRVOTING) [OCRVOTING]
Located 1 voting disk(s).
CRS-2672: Attempting to start 'ora.asm' on 'lab-1a'
CRS-2676: Start of 'ora.asm' on 'lab-1a' succeeded
CRS-2672: Attempting to start 'ora.OCRVOTING.dg' on 'lab-1a'
CRS-2676: Start of 'ora.OCRVOTING.dg' on 'lab-1a' succeeded
CRS-2672: Attempting to start 'ora.registry.acfs' on 'lab-1a'
CRS-2676: Start of 'ora.registry.acfs' on 'lab-1a' succeeded
Configure Oracle Grid Infrastructure for a Cluster ... succeeded

On the second node:

[root@lab-1B ~]# /app/grid/11.2.0/root.sh
Performing root user operation for Oracle 11g

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /app/grid/11.2.0

Enter the full pathname of the local bin directory: [/usr/local/bin]:
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...

Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /app/grid/11.2.0/crs/install/crsconfig_params
Creating trace directory
OLR initialization - successful
Adding Clusterware entries to inittab
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node lab-1a, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
Oracle Grid Infrastructure 11g 11.2.0.3-installer-Step09-3.png
Oracle Grid Infrastructure 11g 11.2.0.3-installer-Step10.png

Check the install

Looking at the process:

 1825 ?        Ss     0:00 /bin/sh /etc/init.d/init.ohasd run
 1855 ?        Ssl    0:02 /app/grid/11.2.0/bin/ohasd.bin reboot
 2287 ?        S      0:00 [oks_wkq]
 2331 ?        S      0:00 [acfsioerrlog]
 2332 ?        S<     0:00 [acfs_bast0]
 2333 ?        S<     0:00 [acfs_bast1]
 2334 ?        S<     0:00 [acfs_bast2]
 2335 ?        S<     0:00 [acfs_bast3]
 2336 ?        S<     0:00 [acfs_bast4]
 2337 ?        S<     0:00 [acfs_bast5]
 2338 ?        S<     0:00 [acfs_bast6]
 2339 ?        S<     0:00 [acfs_bast7]
 2600 ?        Ssl    0:02 /app/grid/11.2.0/bin/orarootagent.bin
 2855 ?        D      0:00 [OksPlogThread]
 3359 ?        Ssl    0:00 /app/grid/11.2.0/bin/oraagent.bin
 3371 ?        Ssl    0:00 /app/grid/11.2.0/bin/mdnsd.bin
 3384 ?        Ssl    0:00 /app/grid/11.2.0/bin/gpnpd.bin
 3404 ?        SLl    0:00 /app/grid/11.2.0/bin/cssdmonitor
 3407 ?        Sl     0:00 /app/grid/11.2.0/bin/gipcd.bin
 3427 ?        SLl    0:00 /app/grid/11.2.0/bin/cssdagent
 3441 ?        SLl    0:02 /app/grid/11.2.0/bin/ocssd.bin
 3545 ?        Sl     0:00 /app/grid/11.2.0/bin/octssd.bin
 3568 ?        SLl    0:01 /app/grid/11.2.0/bin/osysmond.bin
 3663 ?        Ss     0:00 asm_pmon_+ASM1
 3665 ?        Ss     0:00 asm_psp0_+ASM1
 3667 ?        Ss     0:00 asm_vktm_+ASM1
 3671 ?        Ss     0:00 asm_gen0_+ASM1
 3673 ?        Ss     0:00 asm_diag_+ASM1
 3675 ?        Ss     0:00 asm_ping_+ASM1
 3677 ?        Ss     0:00 asm_dia0_+ASM1
 3679 ?        Ss     0:00 asm_lmon_+ASM1
 3681 ?        Ss     0:00 asm_lmd0_+ASM1
 3683 ?        Ss     0:00 asm_lms0_+ASM1
 3687 ?        Ss     0:00 asm_lmhb_+ASM1
 3689 ?        Ss     0:00 asm_mman_+ASM1
 3691 ?        Ss     0:00 asm_dbw0_+ASM1
 3696 ?        Ss     0:00 asm_lgwr_+ASM1
 3699 ?        Ss     0:00 asm_ckpt_+ASM1
 3702 ?        Ss     0:00 asm_smon_+ASM1
 3704 ?        Ss     0:00 asm_rbal_+ASM1
 3706 ?        Ss     0:00 asm_gmon_+ASM1
 3708 ?        Ss     0:00 asm_mmon_+ASM1
 3710 ?        Ss     0:00 asm_mmnl_+ASM1
 3712 ?        Ss     0:00 asm_lck0_+ASM1
 6802 ?        Ss     0:00 asm_pz99_+ASM1
 3716 ?        Ss     0:00 oracle+ASM1 (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))
 3729 ?        Ssl    0:02 /app/grid/11.2.0/bin/crsd.bin reboot
 3745 ?        Ssl    0:00 /app/grid/11.2.0/bin/evmd.bin
 3825 ?        S      0:00  \_ /app/grid/11.2.0/bin/evmlogger.bin -o /app/grid/11.2.0/evm/log/evmlogger.info -l /app/grid/11.2.0/evm/log/evmlogger.log
 3748 ?        Ss     0:00 oracle+ASM1_ocr (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))
 3750 ?        Ss     0:00 asm_asmb_+ASM1
 3752 ?        Ss     0:00 oracle+ASM1_asmb_+asm1 (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))
 4133 ?        Ssl    0:00 /app/grid/11.2.0/bin/oraagent.bin
 4160 ?        Ss     0:00 oracle+ASM1 (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))
 4457 ?        Ssl    0:02 /app/grid/11.2.0/bin/orarootagent.bin
 4486 ?        Ss     0:00 /app/grid/11.2.0/opmn/bin/ons -d
 4487 ?        Sl     0:00  \_ /app/grid/11.2.0/opmn/bin/ons -d
 4586 ?        Ssl    0:00 /app/grid/11.2.0/bin/tnslsnr LISTENER_SCAN2 -inherit
 4600 ?        Ssl    0:00 /app/grid/11.2.0/bin/tnslsnr LISTENER_SCAN3 -inherit
 4633 ?        Ssl    0:00 /app/grid/11.2.0/bin/scriptagent.bin
 4653 ?        Sl     0:07 /app/grid/11.2.0/jdk/jre//bin/java -server -Xcheck:jni -Xms128M -Xmx384M -Djava.awt.headless=true -Ddisable.checkForUpdate=true -D
 5125 ?        SLl    0:00 /app/grid/11.2.0/bin/ologgerd -M -d /app/grid/11.2.0/crf/db/lab-1a
 5699 ?        Ssl    0:00 /app/grid/11.2.0/bin/tnslsnr LISTENER -inherit
Process Description
bin/ohasd.bin Oracle High Availability Services (spawn gpnpd, GIPC, mDNS, GNS)
gpnpd.bin Grid Plug and Play (gpnpd): GPNPD provides access to the Grid Plug and Play profile, and coordinates updates to the profile among the nodes of the cluster to ensure that all of the nodes have the most recent profile.
bin/gipcd.bin Grid Interprocess Communication (GIPC): A helper daemon for the communications infrastructure. Currently has no functionality; to be activated in a later release.
bin/mdnsd.bin Multicast Domain Name Service (mDNS)
bin/orarootagent.bin Oracle Root Agent (orarootagent): A specialized oraagent process that helps crsd manage resources owned by root, such as the network, and the Grid virtual IP address.
bin/oraagent.bin Oracle Agent (oraagent): Extends clusterware to support Oracle-specific requirements and complex resources. Runs server callout scripts when FAN events occur. This process was known as RACG in Oracle Clusterware 11g release 1 (11.1).
bin/ocssd.bin
/app/grid/11.2.0/bin/cssdagent
/app/grid/11.2.0/bin/cssdmonitor
Cluster Synchronization Service (CSS): Manages the cluster configuration by controlling which nodes are members of the cluster and by notifying members when a node joins or leaves the cluster. If you are using certified third-party clusterware, then the CSS process interfaces with your clusterware to manage node membership information.

The cssdagent process monitors the cluster and provides I/O fencing. This service formerly was provided by Oracle Process Monitor Daemon (oprocd), also known as OraFenceService on Windows. A cssdagent failure results in Oracle Clusterware restarting the node.

bin/crsd.bin Cluster Ready Services (CRS): The primary program for managing high availability operations in a cluster. The CRS daemon (crsd) manages cluster resources based on the configuration information that is stored in OCR for each resource.

This includes start, stop, monitor, and failover operations. The crsd process generates events when the status of a resource changes. When you have Oracle RAC installed, the crsd process monitors the Oracle database instance, listener, and so on, and automatically restarts these components when a failure occurs.

bin/octssd.bin Cluster Time Synchronization Service (CTSS): Provides time management in a cluster for Oracle Clusterware.
bin/evmd.bin
bin/evmlogger.bin
Event Management (EVM): A background process that publishes events that Oracle Clusterware creates.
opmn/bin/ons Oracle Notification Service (ONS): A publish and subscribe service for communicating Fast Application Notification (FAN) events.
+ASM1 The first ASM database instance. On the other node, we will find instance 2 (+ASM2).

Look at the listeners:

Node A:
 4586 ?        Ssl    0:00 /app/grid/11.2.0/bin/tnslsnr LISTENER_SCAN2 -inherit
 4600 ?        Ssl    0:00 /app/grid/11.2.0/bin/tnslsnr LISTENER_SCAN3 -inherit
 5699 ?        Ssl    0:00 /app/grid/11.2.0/bin/tnslsnr LISTENER -inherit
Node B:
 7437 ?        Ssl    0:00 /app/grid/11.2.0/bin/tnslsnr LISTENER_SCAN1 -inherit
 8305 ?        Ssl    0:00 /app/grid/11.2.0/bin/tnslsnr LISTENER -inherit
  • 4586 4600 7437: SCAN listeners
  • 5699 8305: node listeners (serving the ASM instances)

Check the IP:

Node A:
sudo /sbin/ip -f inet ad sh eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
    inet 192.168.0.55/24 brd 192.168.0.255 scope global eth0
    inet 10.1.1.1/30 brd 10.1.1.3 scope global eth0:0
    inet 169.254.52.143/16 brd 169.254.255.255 scope global eth0:1
    inet 192.168.0.57/24 brd 192.168.0.255 scope global secondary eth0:2
    inet 192.168.0.62/24 brd 192.168.0.255 scope global secondary eth0:4
    inet 192.168.0.60/24 brd 192.168.0.255 scope global secondary eth0:5
Node B:
sudo /sbin/ip -f inet ad sh eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
    inet 192.168.0.56/24 brd 192.168.0.255 scope global eth0
    inet 10.1.1.2/30 brd 10.1.1.3 scope global eth0:0
    inet 169.254.116.24/16 brd 169.254.255.255 scope global eth0:1
    inet 192.168.0.61/24 brd 192.168.0.255 scope global secondary eth0:2
    inet 192.168.0.58/24 brd 192.168.0.255 scope global secondary eth0:3
  • OUI detected the three IPs registered for lab-1-scan.novalan.priv and bound a SCAN listener to each of them.
  • Each node's VIP is also up (.57 on lab-1a, .58 on lab-1b).

Listeners configuration:

[oracle@lab-1A ~]$ cat /app/grid/11.2.0/network/admin/listener.ora
LISTENER=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER))))            # line added by Agent
LISTENER_SCAN3=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN3))))                # line added by Agent
LISTENER_SCAN2=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN2))))                # line added by Agent
LISTENER_SCAN1=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN1))))                # line added by Agent
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN1=ON                # line added by Agent
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN2=ON                # line added by Agent
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER_SCAN3=ON                # line added by Agent
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER=ON              # line added by Agent

[oracle@lab-1A ~]$ cat /app/grid/11.2.0/network/admin/endpoints_listener.ora
LISTENER_LAB-1A=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=lab-1A-vip)(PORT=1521))(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.0.55)(PORT=1521)(IP=FIRST))))            # line added by Agent

Stop / Start / Status

Status of the nodes:

srvctl status srvpool -a
Server pool name: Free
Active servers count: 2
Active server names: lab-1a,lab-1b
NAME=lab-1a STATE=ONLINE
NAME=lab-1b STATE=ONLINE
Server pool name: Generic
Active servers count: 0
Active server names:

Status of the node applications:

srvctl status nodeapps
VIP lab-1A-vip is enabled
VIP lab-1A-vip is running on node: lab-1a
VIP lab-1B-vip is enabled
VIP lab-1B-vip is running on node: lab-1b
Network is enabled
Network is running on node: lab-1a
Network is running on node: lab-1b
GSD is disabled
GSD is not running on node: lab-1a
GSD is not running on node: lab-1b
ONS is enabled
ONS daemon is running on node: lab-1a
ONS daemon is running on node: lab-1b

Status of the ASM instances and the OCRVOTING disk group:

srvctl status asm -a
ASM is running on lab-1b,lab-1a
ASM is enabled.
srvctl status diskgroup -g OCRVOTING
Disk Group OCRVOTING is running on lab-1b,lab-1a

Status of the SCAN listeners:

srvctl status scan
SCAN VIP scan1 is enabled
SCAN VIP scan1 is running on node lab-1b
SCAN VIP scan2 is enabled
SCAN VIP scan2 is running on node lab-1a
SCAN VIP scan3 is enabled
SCAN VIP scan3 is running on node lab-1a

Status of the CRS services:

crsctl check cluster -all
**************************************************************
lab-1a:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
lab-1b:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************

Complete CRS status:

crsctl status resource -w "TYPE co 'ora'" -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.LISTENER.lsnr
               ONLINE  ONLINE       lab-1a
               ONLINE  ONLINE       lab-1b
ora.OCRVOTING.dg
               ONLINE  ONLINE       lab-1a
               ONLINE  ONLINE       lab-1b
ora.asm
               ONLINE  ONLINE       lab-1a                   Started
               ONLINE  ONLINE       lab-1b                   Started
ora.gsd
               OFFLINE OFFLINE      lab-1a
               OFFLINE OFFLINE      lab-1b
ora.net1.network
               ONLINE  ONLINE       lab-1a
               ONLINE  ONLINE       lab-1b
ora.ons
               ONLINE  ONLINE       lab-1a
               ONLINE  ONLINE       lab-1b
ora.registry.acfs
               ONLINE  ONLINE       lab-1a
               ONLINE  ONLINE       lab-1b
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       lab-1b
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       lab-1a
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       lab-1a
ora.cvu
      1        ONLINE  ONLINE       lab-1a
ora.lab-1a.vip
      1        ONLINE  ONLINE       lab-1a
ora.lab-1b.vip
      1        ONLINE  ONLINE       lab-1b
ora.oc4j
      1        ONLINE  ONLINE       lab-1a
ora.scan1.vip
      1        ONLINE  ONLINE       lab-1b
ora.scan2.vip
      1        ONLINE  ONLINE       lab-1a
ora.scan3.vip
      1        ONLINE  ONLINE       lab-1a

Display the OCR state:

ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          3
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       2736
         Available space (kbytes) :     259384
         ID                       :  259084113
         Device/File Name         : +OCRVOTING
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

         Cluster registry integrity check succeeded

         Logical corruption check bypassed due to non-privileged user

Stop the cluster stack on node B:

[root@lab-1B ~]# /app/grid/11.2.0/bin/crsctl stop crs
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'lab-1b'
CRS-2673: Attempting to stop 'ora.crsd' on 'lab-1b'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'lab-1b'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'lab-1b'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN1.lsnr' on 'lab-1b'
CRS-2673: Attempting to stop 'ora.registry.acfs' on 'lab-1b'
CRS-2677: Stop of 'ora.registry.acfs' on 'lab-1b' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'lab-1b'
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'lab-1b' succeeded
CRS-2673: Attempting to stop 'ora.lab-1b.vip' on 'lab-1b'
CRS-2677: Stop of 'ora.LISTENER_SCAN1.lsnr' on 'lab-1b' succeeded
CRS-2673: Attempting to stop 'ora.scan1.vip' on 'lab-1b'
CRS-2677: Stop of 'ora.lab-1b.vip' on 'lab-1b' succeeded
CRS-2672: Attempting to start 'ora.lab-1b.vip' on 'lab-1a'
CRS-2677: Stop of 'ora.scan1.vip' on 'lab-1b' succeeded
CRS-2672: Attempting to start 'ora.scan1.vip' on 'lab-1a'
CRS-2677: Stop of 'ora.asm' on 'lab-1b' succeeded
CRS-2676: Start of 'ora.lab-1b.vip' on 'lab-1a' succeeded
CRS-2676: Start of 'ora.scan1.vip' on 'lab-1a' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN1.lsnr' on 'lab-1a'
CRS-2676: Start of 'ora.LISTENER_SCAN1.lsnr' on 'lab-1a' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'lab-1b'
CRS-2677: Stop of 'ora.net1.network' on 'lab-1b' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'lab-1b' has completed
CRS-2677: Stop of 'ora.crsd' on 'lab-1b' succeeded
CRS-2673: Attempting to stop 'ora.mdnsd' on 'lab-1b'
CRS-2673: Attempting to stop 'ora.crf' on 'lab-1b'
CRS-2673: Attempting to stop 'ora.ctssd' on 'lab-1b'
CRS-2673: Attempting to stop 'ora.evmd' on 'lab-1b'
CRS-2673: Attempting to stop 'ora.asm' on 'lab-1b'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'lab-1b'
CRS-2677: Stop of 'ora.crf' on 'lab-1b' succeeded
CRS-2677: Stop of 'ora.evmd' on 'lab-1b' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'lab-1b' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'lab-1b' succeeded
CRS-2677: Stop of 'ora.asm' on 'lab-1b' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'lab-1b'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'lab-1b' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'lab-1b'
CRS-2677: Stop of 'ora.cssd' on 'lab-1b' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'lab-1b'
CRS-2677: Stop of 'ora.gipcd' on 'lab-1b' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'lab-1b'
CRS-2677: Stop of 'ora.drivers.acfs' on 'lab-1b' succeeded
CRS-2677: Stop of 'ora.gpnpd' on 'lab-1b' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'lab-1b' has completed
CRS-4133: Oracle High Availability Services has been stopped.
[oracle@lab-1A ~]$ crsctl status resource -w "TYPE co 'ora'" -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.LISTENER.lsnr
               ONLINE  ONLINE       lab-1a
ora.OCRVOTING.dg
               ONLINE  ONLINE       lab-1a
ora.asm
               ONLINE  ONLINE       lab-1a                   Started
ora.gsd
               OFFLINE OFFLINE      lab-1a
ora.net1.network
               ONLINE  ONLINE       lab-1a
ora.ons
               ONLINE  ONLINE       lab-1a
ora.registry.acfs
               ONLINE  ONLINE       lab-1a
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       lab-1a
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       lab-1a
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       lab-1a
ora.cvu
      1        ONLINE  ONLINE       lab-1a
ora.lab-1a.vip
      1        ONLINE  ONLINE       lab-1a
ora.lab-1b.vip
      1        ONLINE  INTERMEDIATE lab-1a                   FAILED OVER
ora.oc4j
      1        ONLINE  ONLINE       lab-1a
ora.scan1.vip
      1        ONLINE  ONLINE       lab-1a
ora.scan2.vip
      1        ONLINE  ONLINE       lab-1a
ora.scan3.vip
      1        ONLINE  ONLINE       lab-1a

This command stops everything on node B (listeners, VIPs, Clusterware services, ASM). You can see that the other node takes over all the resources owned by node B.
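The failover can also be spotted by parsing the resource table instead of reading it. A small sketch against a saved excerpt of the `crsctl status resource -t` output (the sample fragment below is illustrative):

```shell
# Illustrative excerpt of 'crsctl status resource -t' output
cat > /tmp/crs_status.out <<'EOF'
ora.lab-1a.vip
      1        ONLINE  ONLINE       lab-1a
ora.lab-1b.vip
      1        ONLINE  INTERMEDIATE lab-1a                   FAILED OVER
EOF

# Print the name of every resource whose state is INTERMEDIATE,
# i.e. resources running in degraded/failed-over mode
awk '/^ora\./ {res=$1} /INTERMEDIATE/ {print res}' /tmp/crs_status.out
```

Against the table above this prints `ora.lab-1b.vip`, the VIP that failed over to node A.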

START Clusterware on node B:

[root@lab-1B ~]# /app/grid/11.2.0/bin/crsctl start crs
CRS-4123: Oracle High Availability Services has been started.
[root@lab-1B ~]# /app/grid/11.2.0/bin/crsctl status resource -w "TYPE co 'ora'" -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.LISTENER.lsnr
               ONLINE  ONLINE       lab-1a
               ONLINE  ONLINE       lab-1b
ora.OCRVOTING.dg
               ONLINE  ONLINE       lab-1a
               ONLINE  ONLINE       lab-1b
ora.asm
               ONLINE  ONLINE       lab-1a                   Started
               ONLINE  ONLINE       lab-1b                   Started
ora.gsd
               OFFLINE OFFLINE      lab-1a
               OFFLINE OFFLINE      lab-1b
ora.net1.network
               ONLINE  ONLINE       lab-1a
               ONLINE  ONLINE       lab-1b
ora.ons
               ONLINE  ONLINE       lab-1a
               ONLINE  ONLINE       lab-1b
ora.registry.acfs
               ONLINE  ONLINE       lab-1a
               ONLINE  ONLINE       lab-1b
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       lab-1b
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       lab-1a
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       lab-1a
ora.cvu
      1        ONLINE  ONLINE       lab-1a
ora.lab-1a.vip
      1        ONLINE  ONLINE       lab-1a
ora.lab-1b.vip
      1        ONLINE  ONLINE       lab-1b
ora.oc4j
      1        ONLINE  ONLINE       lab-1a
ora.scan1.vip
      1        ONLINE  ONLINE       lab-1b
ora.scan2.vip
      1        ONLINE  ONLINE       lab-1a
ora.scan3.vip
      1        ONLINE  ONLINE       lab-1a
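`crsctl start crs` returns before the stack is fully up, so the resources reappear gradually. A generic polling sketch (the `wait_for` helper is mine, not an Oracle tool; on the cluster the command to poll would be the `crsctl check crs` from this install's grid home):

```shell
# Poll a status command until it succeeds or a number of tries is exhausted.
wait_for() {
    local tries=$1; shift
    local i=0
    while [ "$i" -lt "$tries" ]; do
        if "$@" >/dev/null 2>&1; then
            echo "ready after $((i+1)) attempt(s)"
            return 0
        fi
        i=$((i+1))
        sleep 1
    done
    echo "timed out"
    return 1
}

# Stand-in demo; on the cluster you would run:
#   wait_for 60 /app/grid/11.2.0/bin/crsctl check crs
wait_for 3 true
```

With `true` as the probe it reports "ready after 1 attempt(s)"; with the real crsctl check it loops until CRS answers.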

Install a database instance

Before starting the OUI, check your environment:

[oracle@lab-1A ~]$ env | grep ORA
ORACLE_SID=+ASM
ORACLE_BASE=/app/oracle
ORACLE_HOME=/app/grid/11.2.0
[oracle@lab-1A ~]$ export ORACLE_HOME=/app/oracle/product/11.2.0/
[oracle@lab-1A ~]$ export ORACLE_BASE=/app/oracle/
[oracle@lab-1A ~]$ unset ORACLE_SID
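A small guard for these settings can save a failed installer launch. A hedged sketch (the `check_env` helper and the throwaway values are illustrative, not part of the install):

```shell
# Sanity-check the environment before launching the OUI:
# ORACLE_HOME must exist and ORACLE_SID must not still point at ASM.
check_env() {
    [ -d "$ORACLE_HOME" ] || { echo "ORACLE_HOME missing: $ORACLE_HOME"; return 1; }
    case "$ORACLE_SID" in
        +ASM*) echo "ORACLE_SID still set to $ORACLE_SID"; return 1 ;;
    esac
    echo "environment OK"
}

# Demo with throwaway values; on the server the real paths are
# the ones exported just above.
ORACLE_HOME=$(mktemp -d)
unset ORACLE_SID
check_env
```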

Create the DATA disk group with '/app/grid/11.2.0/bin/asmca':

Oracle Grid Infrastructure 11g 11.2.0.3-installer-ASMCA-Step01.png
Oracle Grid Infrastructure 11g 11.2.0.3-installer-ASMCA-Step02.png
Oracle Grid Infrastructure 11g 11.2.0.3-installer-ASMCA-Step03.png
Oracle Grid Infrastructure 11g 11.2.0.3-installer-ASMCA-Step04.png

You can now install the database; the OUI will detect the RAC configuration:

Oracle Grid Infrastructure 11g 11.2.0.3-installer-DB-Step05.png
Oracle Grid Infrastructure 11g 11.2.0.3-installer-DB-Step07.png
Oracle Grid Infrastructure 11g 11.2.0.3-installer-DB-Step07-1.png
Oracle Grid Infrastructure 11g 11.2.0.3-installer-DB-Step09.png

Check the database install

Set up the user profile (you can put this in ~/.bash_profile):

ORACLE_HOME=/app/oracle/product/11.2.0
ORACLE_BASE=/app/oracle
ORACLE_UNQNAME=orcl
# Derive the instance number from the running pmon process name
# (e.g. ora_pmon_orcl1 -> 1); the trailing greps drop this pipeline's
# own grep and sed processes from the ps listing
INSTANCE_NUMBER=$(ps ax -o cmd | grep pmon_$ORACLE_UNQNAME | grep -v grep | sed "s/ora_pmon_${ORACLE_UNQNAME}//" | grep -v sed)
if [[ -z $INSTANCE_NUMBER ]]; then
        echo "No instance found for the database $ORACLE_UNQNAME."
else
        ORACLE_SID=orcl${INSTANCE_NUMBER}
        export ORACLE_HOME ORACLE_BASE ORACLE_SID ORACLE_UNQNAME
fi
export PATH=$PATH:$ORACLE_HOME/bin
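The instance-number extraction in this profile can be exercised in isolation by feeding it a canned process name instead of the live `ps` listing (the pmon line below is a stand-in for what `ps ax -o cmd` shows on a node running instance 1):

```shell
# Simulated 'ps ax -o cmd' line for an instance named orcl1
ps_output="ora_pmon_orcl1"
ORACLE_UNQNAME=orcl

# Same sed stripping as in the profile: ora_pmon_orcl1 -> 1
INSTANCE_NUMBER=$(echo "$ps_output" | sed "s/ora_pmon_${ORACLE_UNQNAME}//")
echo "ORACLE_SID=${ORACLE_UNQNAME}${INSTANCE_NUMBER}"
# prints ORACLE_SID=orcl1
```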

Display the tnsnames.ora configuration of this DB:

cat /app/oracle/product/11.2.0/network/admin/tnsnames.ora
# tnsnames.ora Network Configuration File: /app/oracle/product/11.2.0/network/admin/tnsnames.ora
# Generated by Oracle configuration tools.

ORCL =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = lab-1-scan)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = orcl.novalan.priv)
    )
  )

Connect to the database through the SCAN listener:

[oracle@lab-1B ~]$ /app/oracle/product/11.2.0/bin/sqlplus system/Oracle123@lab-1-scan.novalan.priv:1521/orcl.novalan.priv

SQL*Plus: Release 11.2.0.3.0 Production on Sun Nov 11 19:59:12 2012

Copyright (c) 1982, 2011, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> set linesize 150
SQL> column HOST_NAME format a20
SQL> select INSTANCE_NUMBER, INSTANCE_NAME, HOST_NAME, VERSION, STATUS from v$instance;

INSTANCE_NUMBER INSTANCE_NAME    HOST_NAME            VERSION           STATUS
--------------- ---------------- -------------------- ----------------- ------------
              1 orcl1            lab-1A               11.2.0.3.0        OPEN

SQL> column FILE_NAME format a50
SQL> select FILE_NAME, TABLESPACE_NAME, BYTES from dba_data_files;

FILE_NAME                                          TABLESPACE_NAME                     BYTES
-------------------------------------------------- ------------------------------ ----------
+DATA/orcl/datafile/users.259.799097971            USERS                             5242880
+DATA/orcl/datafile/undotbs1.258.799097971         UNDOTBS1                        131072000
+DATA/orcl/datafile/sysaux.257.799097969           SYSAUX                          597688320
+DATA/orcl/datafile/system.256.799097969           SYSTEM                          754974720
+DATA/orcl/datafile/example.264.799098081          EXAMPLE                         328335360
+DATA/orcl/datafile/undotbs2.265.799098277         UNDOTBS2                         26214400

6 rows selected.

In OEM:

Oracle Grid Infrastructure 11g 11.2.0.3-installer-DB-OEM-Monitor.png
Oracle Grid Infrastructure 11g 11.2.0.3-installer-DB-OEM-Map.png