Implementation of 11gR2 RAC on Linux

Procedure for creating a two-node 11g RAC database. This note provides step-by-step instructions to install and configure an Oracle 11g R2 RAC database on CentOS, including the steps needed to configure Linux for Oracle. The walkthrough uses a two-node cluster with SCSI disks shared between the nodes. It assumes that the hardware is in place, the OS is installed, the network (public and private) is configured, and the shared disks are available.

Our configuration uses the following:

Software and Hardware Requirements

Hardware Requirements

  • Oracle requires at least 1.5 gigabytes (GB) of physical memory (the commands after this list show how to verify memory, swap, and /tmp space).
  • Swap space must be equal to the amount of RAM allocated to the system.
  • Oracle's temporary space (/tmp) must be at least 1 GB in size.
  • A monitor that supports a resolution of 1024 x 768, so the Oracle Universal Installer (OUI) displays correctly.
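A quick sketch of how to verify the memory, swap, and /tmp requirements on CentOS before starting:

#grep MemTotal /proc/meminfo
#grep SwapTotal /proc/meminfo
#free -m
#df -h /tmp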

The following table describes the disk space required for an Oracle installation:

Table 1-1: Minimum Disk Space Requirements

Software Installation Location        Size Required
Grid Infrastructure home              At least 4.5 GB
Oracle Database home                  At least 4 GB
Shared storage disk space             Size of the database and Fast Recovery Area (depends on the requirement)

Two machines, each with:

  • 1 CPU
  • 8 GB memory
  • 100 GB local disk with OS
  • 3 x 10 GB shared disks for RAC

Machine Names

rac1 and rac2

Software

OS: CentOS

Oracle Clusterware 11g

ASMLib RPMs matching the OS kernel version

Oracle 11g R2 RAC database software

This guide is divided into three main parts.

Part I: Configure Linux for Oracle

Part II: Prepare the Shared Disks

Part III: Install Oracle Software.

Network Requirements

  • It is recommended that each node have at least two network interface cards (NICs): one NIC for the public network and one NIC for the private network, to ensure high availability of the Oracle RAC cluster.
  • Public and private interface names must be the same on all nodes. For example, if eth0 is used as the public interface on node one, all other nodes must use eth0 as the public interface (see the interface check after this list).
  • All public interfaces for each node should be able to communicate with all nodes within the cluster.
  • All private interfaces for each node should be able to communicate with all nodes within the cluster.
  • The hostname of each node must follow the RFC 952 standard (hostnames that include an underscore ("_") are not permitted).
  • Each node in the cluster requires the following IP addresses:
  • One public IP address
  • One private IP address
  • One virtual IP address
  • Three single client access name (SCAN) addresses for the cluster
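Before continuing, the interface names and addresses can be confirmed on each node; a minimal sketch, assuming eth0 is the public interface and eth1 the private interface as in this configuration:

#hostname
#ip addr show eth0
#ip addr show eth1
#cat /etc/sysconfig/network-scripts/ifcfg-eth0
#cat /etc/sysconfig/network-scripts/ifcfg-eth1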

Operating System Requirements

  • Red Hat Enterprise Linux 5.x AS x86_64
  • Oracle Linux 5.x AS x86_64
  • CentOS

Configuring DNS Servers (if using DNS and SCAN)

1. Configuration to be done on the DNS server

dns> #rpm -qa | grep cache (verify the caching-nameserver package is installed)
#rpm -qa | grep bind (verify the bind packages are installed)
#cd /var/named/chroot/var/named
named> #ls for* (forward lookup zone file)
#ls rev* (reverse lookup zone file)
#cd
#service named start
#service named stop
#service named start
#service named restart (restart named after any change to the zone files)
#dig dns.oracle.com (to check that DNS is working properly)
#cd /var/named/chroot/var/named
#cat for (review the forward zone file)
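For reference, a minimal sketch of what the forward zone file might contain; the zone name (oracle.com), serial number, and addresses are assumptions taken from the IPs used later in this note, and the SCAN name must resolve to three addresses:

$TTL 86400
@   IN  SOA dns.oracle.com. root.oracle.com. (
        2012010101 ; serial
        3600       ; refresh
        1800       ; retry
        604800     ; expire
        86400 )    ; minimum
@              IN  NS  dns.oracle.com.
dns            IN  A   192.168.233.224
rac1           IN  A   192.168.233.40
rac2           IN  A   192.168.233.41
cluster-scan   IN  A   192.168.233.101
cluster-scan   IN  A   192.168.233.102
cluster-scan   IN  A   192.168.233.103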

2. Configuration to be done on the RAC servers
rac1> #hostname
#vi /etc/resolv.conf
nameserver 192.168.233.224
:wq
#ssh 192.168.233.224

rac2> #vi /etc/resolv.conf
nameserver 192.168.233.224
:wq
rac1> #nslookup cluster-scan (the SCAN name defined on the DNS server)
#nslookup rac2
#nslookup dns.oracle.com
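If DNS is working, the SCAN lookup should return all three SCAN addresses. Illustrative output (server and address values are assumptions matching this note's configuration):

#nslookup cluster-scan
Server:         192.168.233.224
Address:        192.168.233.224#53
Name:   cluster-scan.oracle.com
Address: 192.168.233.101
Name:   cluster-scan.oracle.com
Address: 192.168.233.102
Name:   cluster-scan.oracle.com
Address: 192.168.233.103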

Installation

Step 1: (Node 1)    Step 2: (Node 2) same as Step 1
#cat /etc/passwd (or) #cat /etc/shadow (check for an existing oracle user)
#userdel -r oracle
#cat /etc/group
#groupdel oinstall
#groupdel dba

Step 3: (Node 1)    Step 4: (Node 2) same as Step 3
#groupadd -g 501 oinstall
#groupadd -g 502 dba
#groupadd -g 503 oper
#groupadd -g 504 asmadmin
#groupadd -g 506 asmdba
#groupadd -g 507 asmoper
#useradd -u 501 -g oinstall -G asmadmin,asmdba,asmoper grid
#passwd grid
#useradd -u 502 -g oinstall -G dba,asmdba oracle
#passwd oracle
#id grid
#id oracle
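The id commands should show both users with oinstall as their primary group; illustrative output based on the UIDs and GIDs created above:

#id grid
uid=501(grid) gid=501(oinstall) groups=501(oinstall),504(asmadmin),506(asmdba),507(asmoper)
#id oracle
uid=502(oracle) gid=501(oinstall) groups=501(oinstall),502(dba),506(asmdba)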

Step 5: (Node 1)    Step 6: (Node 2) same as Step 5
Creating the directory structures for the grid home, database home, Oracle base, and Oracle inventory

#mkdir -p /u01/app/grid
#chown -R grid:oinstall /u01/app/grid
#chmod -R 775 /u01/app/grid
#mkdir -p /u01/product/11.2.0/grid_home
#chown -R grid:oinstall /u01/product/11.2.0/grid_home
#chmod -R 775 /u01/product/11.2.0/grid_home

Note: The grid home must not be located under the grid base.

#mkdir -p /u01/app/oracle
#chown -R oracle:oinstall /u01/app/oracle
#chmod -R 775 /u01/app/oracle
#mkdir -p /u01/app/oracle/product/11.2.0/db_home
#chown -R oracle:oinstall /u01/app/oracle/product/11.2.0/db_home
#chmod -R 775 /u01/app/oracle/product/11.2.0/db_home

Creating directory for inventory
#mkdir -p /u01/app/oraInventory
#chown -R grid:oinstall /u01/app/oraInventory
#chmod -R 775 /u01/app/oraInventory

Step 7: (Node 1)
Configuring Kernel Parameters

Note: If the OS is Red Hat, the kernel parameters must be configured as described in the Oracle installation documentation; if the OS is OEL, the kernel parameters are already set by default.
#cat /etc/sysctl.conf | more

#vi /etc/sysctl.conf
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmall = 2097152
kernel.shmmax = 1054504960
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048586
:wq

#/sbin/sysctl -p

#scp /etc/sysctl.conf rac2:/etc/sysctl.conf

Add the following line to the "/etc/pam.d/login" file, if it does not already exist.

session required pam_limits.so

Change the setting of SELinux to permissive by editing the "/etc/selinux/config" file, making sure the SELINUX flag is set as follows.

SELINUX=permissive

Alternatively, this change can be made using the GUI tool (System > Administration > Security Level and Firewall): click on the SELinux tab and set it to Permissive.

If you have the Linux firewall enabled, you will need to disable or configure it. The following is an example of disabling the firewall.

# service iptables stop

# chkconfig iptables off
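A quick sketch of how to confirm these settings took effect, run as root on both nodes:

#/sbin/sysctl -a | grep -E 'shmmax|shmall|file-max'
#getenforce (should return Permissive)
#service iptables status (should report that the firewall is stopped)
#chkconfig --list iptables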

Step 8: (Node 1) Configuring shell limits

#vi /etc/security/limits.conf
grid soft nproc 2047
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
:wq

#cd /etc/security
#scp limits.conf rac2:/etc/security/limits.conf
#cd

Step 9: (Node 1) Configuring /etc/profile
#vi /etc/profile
if [ $USER = "oracle" ] || [ $USER = "grid" ]; then
if [ $SHELL = "/bin/ksh" ]; then
ulimit -p 16384
ulimit -n 65536
else
ulimit -u 16384 -n 65536
fi
umask 022
fi
:wq!

#cd /etc
#scp profile rac2:/etc/profile

Step 10: (Node 1) Configuring /etc/hosts
rac1>vi /etc/hosts
127.0.0.1 localhost.localdomain localhost
192.168.233.224 dns dns.oracle.com (if using a DNS server)

***PUBLIC IPS***
192.168.233.40 rac1 rac1.oracle.com
192.168.233.41 rac2 rac2.oracle.com

***PRIVATE IPS***
192.168.233.50 rac1-priv
192.168.233.51 rac2-priv

***VIRTUAL IPS***
192.168.233.60 rac1-vip
192.168.233.61 rac2-vip

***SCAN IPS***
192.168.233.101 cluster-scan
192.168.233.102 cluster-scan
192.168.233.103 cluster-scan

#cd /etc/
#scp hosts rac2:/etc/hosts
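Basic connectivity over the public and private networks can then be verified from each node (a sketch; the VIP and SCAN addresses will not respond until Grid Infrastructure is up):

rac1> #ping -c 2 rac2
#ping -c 2 rac2-priv
rac2> #ping -c 2 rac1
#ping -c 2 rac1-priv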

Step 11: (Node 1)
Creating the required number of partitions on the shared storage for the ASM disk groups

#fdisk -l
#fdisk /dev/sdb
p (print the current partition table)
n (new partition)
p (primary)
1
enter (accept the default first cylinder)
+2g (2 GB primary partition)
n
e (extended partition)
2
enter
enter (use the remaining space for the extended partition)
n
l (logical partition)
enter
+10g
n
l
enter
+10g
n
l
enter
+10g
p (verify the partition layout)
w (write the partition table and exit)
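With the sequence above, /dev/sdb1 is the 2 GB primary partition, /dev/sdb2 the extended partition, and /dev/sdb5, /dev/sdb6 and /dev/sdb7 the 10 GB logical partitions used for the ASM disks later on. The layout can be confirmed with:

#fdisk -l /dev/sdb
#ls /dev/sdb*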

Step 12: (Node 1)    Step 13: (Node 2)
#partprobe (run on both nodes so the new partition table is re-read)

Note: If you are implementing ASM using the ASMLib interface, on Red Hat Linux you need to download the ASM RPMs that match the kernel version.

Note: On OEL all the ASM RPMs are installed by default except oracleasmlib, which must be installed manually (see the package sketch below).
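As a sketch, the ASMLib stack normally consists of three packages, and the kernel-driver package must match the output of uname -r (the file names below are placeholders; the exact versions vary with the kernel):

#uname -r
#rpm -ivh oracleasm-support-*.rpm
#rpm -ivh oracleasm-`uname -r`-*.rpm
#rpm -ivh oracleasmlib-*.rpm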

Add Required RPMs.

Oracle 11.2.0.3 requires extra RPMs from the install media.

The following command should load all needed RPMs for 11.2.0.3 Grid and Database.

[root@rac1 and rac2 ~]# cd "/media/CentOS_6.3_Final/Packages/"

rpm -ivh compat-libstdc++-33-3.2.3-69.el6.*.rpm \
elfutils-devel-0.152-1.el6.x86_64.rpm \
elfutils-libelf-devel-0.152-1.el6.x86_64.rpm \
gcc-c++-4.4.6-4.el6.x86_64.rpm \
glibc-2.12-1.80.el6.i686.rpm \
glibc-devel-2.12-1.80.el6.i686.rpm \
libaio-devel-0.3.107-10.el6.x86_64.rpm \
libaio-0.3.107-10.el6.i686.rpm \
libgcc-4.4.6-4.el6.i686.rpm \
libstdc++-devel-4.4.6-4.el6.x86_64.rpm \
libtool-ltdl-2.2.6-15.5.el6.i686.rpm \
nss-softokn-freebl-3.12.9-11.el6.i686.rpm \
readline-6.0-4.el6.i686.rpm \
ncurses-libs-5.7-3.20090208.el6.i686.rpm \
libcap-2.16-5.5.el6.i686.rpm \
libattr-2.4.44-7.el6.i686.rpm \
compat-libcap1-1.10-1.*.rpm

rac1> #rpm -qa | grep oracle
#uname -r
#cd /opt
opt> #rpm -ivh oracleasmlib*
#scp -r oracleasmlib* rac2:/opt

(Node 2)
rac2> #rpm -qa | grep oracle
#cd /opt
opt> #ls
#rpm -ivh oracleasmlib*

Step 14: (Node 1)
Install the cvuqdisk (Cluster Verification Utility disk) RPM from the grid media

#cd /opt
opt> #ls
#cd grid/
#ls
#cd rpm
#ls
#rpm -ivh cvuqdisk*.rpm
#scp cvuqdisk*.rpm rac2:/opt

(Node 2)
rac2> #cd /opt
opt> #rpm -ivh cvuqdisk*

Step 15-16: (Node 1 and Node 2)

rac1 & rac2> #oracleasm configure -i

:grid
:asmadmin
:y
:y
:done
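For reference, the interactive dialog looks roughly like this (prompt wording may differ slightly between ASMLib versions):

#oracleasm configure -i
Default user to own the driver interface []: grid
Default group to own the driver interface []: asmadmin
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done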

Step 17:(Node 1)
# oracleasm init

Step 18: (Node 1)
rac1> #fdisk -l
#oracleasm createdisk VOL1 /dev/sdb5
#oracleasm createdisk VOL2 /dev/sdb6
#oracleasm createdisk VOL3 /dev/sdb7
#oracleasm scandisks
#oracleasm listdisks

Step 19: (Node 2)
rac2> #oracleasm scandisks
#oracleasm listdisks
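On both nodes, oracleasm listdisks should now report the three volumes; illustrative output:

#oracleasm listdisks
VOL1
VOL2
VOL3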

Step 20:(Node 1)

Installing Oracle Grid Infrastructure:
#su - grid
$cd /opt
$ls
$cd grid
$ls
$./runcluvfy.sh stage -post hwos -n rac1,rac2 -verbose
$./runInstaller
.skip software updates
next
.Install and configure oracle grid infrastructure for cluster
next
next
.Advanced installation
next
next
cluster name: lnx-cluster
scan name:cluster-scan
scan port:1521
uncheck configure DNS
next
click on add
Hostname:rac2
virtual IP name:rac2-vip
click on ok
click on ssh connectivity
OS password :racdba
click on setup
click on ok
click on test
click on ok
next
select eth0
choose public
select eth1
choose private
select virbr0
choose do not use
next
.choose asm
next
Disk group name : ASM_DG_CRS
.external redundancy
select on disk path
next
specify password:racdba
click on yes
next
.Do not use Intelligent Platform Management Interface (IPMI)
next
next
software location:/u01/product/11.2.0/grid_home
next
Inventory Directory: /u01/app/oraInventory
next
click on ignore all
next
INSTALL
rac1> #/u01/app/oraInventory/orainstRoot.sh (run on both nodes when prompted)
rac1> #/u01/product/11.2.0/grid_home/root.sh (run on rac1 first, then on rac2)
click on ok
click on next
click on close
Note: If the Cluster Verification Utility step fails at the end of the installation, it can be ignored (known bug).

Note: If root.sh fails on the 2nd node with an error such as "ASM failed" or "rootcrs.pl failed",
apply the following workaround on all nodes:

step 1: # vi /etc/sysconfig/oracleasm
ORACLEASM_SCANORDER="dm"
ORACLEASM_SCANEXCLUDE="sd"

step 2: Restart ASMLib on all nodes except the 1st node
#/etc/init.d/oracleasm restart

step 3: Deconfigure root.sh (on all nodes except the 1st node)
#/u01/product/11.2.0/grid_home/crs/install/rootcrs.pl -verbose -deconfig -force

step 4: Run root.sh again on the 2nd node
#cd /u01/product/11.2.0/grid_home
#./root.sh
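Once root.sh has completed successfully on both nodes, the state of the clusterware stack can be checked as the grid user; a sketch using standard 11.2 commands and the grid home path used in this note:

$/u01/product/11.2.0/grid_home/bin/crsctl check cluster -all
$/u01/product/11.2.0/grid_home/bin/crsctl stat res -t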

Step 21: Installing Oracle Binaries
# su - oracle
$ xhost +
$vncserver
$cd /opt
$ls
$cd database
$ls
$sh runInstaller
.Uncheck "I wish to receive security updates"
next
click on yes
skip software updates
next
.Install database software only
next
.Oracle Real Application Clusters database installation
click on SSH connectivity
oracle password : oracle
click on setup
click on ok
click on test
click on ok
next
next
.enterprise edition
next
oracle base: /u01/app/oracle
oracle_home: /u01/app/oracle/product/11.2.0/db_home
next
next
Ignore all
next
Install
rac1> #/u01/app/oracle/product/11.2.0/db_home/root.sh (run on both nodes when prompted)
click on ok
click on close

Step 22: Configuring the grid user's bash profile (Node 1)
#ssh rac1
#su - grid
$vi .bash_profile
export ORACLE_HOME=/u01/product/11.2.0/grid_home
export PATH=$ORACLE_HOME/bin:$PATH:$HOME/bin
:wq
$. .bash_profile
$scp .bash_profile rac2:/home/grid

Step 23: Configuring ASMCA (Node 1)
rac1>$ asmca
click on create
disk group name :ASM_DG_DATA
.external redundancy
select one disk path
click on ok
click on create
disk group name :ASM_DG_FRA
.external redundancy
select one disk path
click on ok
click on exit
Yes
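The new disk groups can be confirmed from the grid user's environment; a short sketch, assuming the local ASM instance on node 1 is named +ASM1 (the default naming in 11.2):

$export ORACLE_SID=+ASM1
$asmcmd lsdg
$srvctl status asm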

Step 24: Configuring bash profile (Node 1)

rac1> #su - oracle
rac1> $vi .bash_profile
export ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_home
export PATH=$ORACLE_HOME/bin:$PATH:$HOME/bin
:wq
$. .bash_profile
$scp .bash_profile rac2:/home/oracle

Step 25: Creating the database with DBCA (Node 1)

rac1> $dbca
.oracle rac database
next
.create database
next
next
Global database name:hrms
select all
next
next
next
.use the same administrative password
password: racdba
next
Yes
.use common location for all datafiles
Database file location:
next
specify asm password:
.enable archiving
click on browse
.select ASM_DG_FRA
click on ok
click on edit archive mode parameters
next ----next----next---next
finish
OK (this may take more than 30 minutes)
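After DBCA completes, the RAC database can be verified from either node as the oracle user; a sketch using the database name created above:

$srvctl status database -d hrms
$srvctl config database -d hrms
$srvctl status nodeapps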
