Building an Oracle 11g RAC Environment on VirtualBox

Installation Environment

Host operating system: Windows 10
Virtual machines (VirtualBox): two Oracle Linux R6 U7 x86_64 guests
Oracle Database software: Oracle 11gR2
Cluster software: Oracle Grid Infrastructure 11gR2

The installation media downloaded earlier turned out to be faulty and cost quite a bit of time, so the seven zip files were downloaded again; only the first three were actually needed. The seven archives are:

 

p102025301120——Linux-x86-64_1of7.zip             database installation media

p102025301120——Linux-x86-64_2of7.zip             database installation media

p102025301120——Linux-x86-64_3of7.zip             grid installation media

p102025301120——Linux-x86-64_4of7.zip             client installation media

p102025301120——Linux-x86-64_5of7.zip             gateways installation media

p102025301120——Linux-x86-64_6of7.zip             examples

p102025301120——Linux-x86-64_7of7.zip             deinstall

Pay close attention to the SWAP size; about 1.5 times the physical memory is recommended.

Shared storage: ASM

[root@rac1 ~]# lsb_release -a
LSB Version:    :base-4.0-amd64:base-4.0-noarch:core-4.0-amd64:core-4.0-noarch:graphics-4.0-amd64:graphics-4.0-noarch:printing-4.0-amd64:printing-4.0-noarch
Distributor ID: OracleServer
Description:    Oracle Linux Server release 6.5
Release:        6.5
Codename:       n/a
[root@rac1 ~]# uname -r
3.8.13-16.2.1.el6uek.x86_64

Hardware requirements:
- Each server node needs at least two network interfaces: one for the public network and one for the private (interconnect/heartbeat) network.
- If you install the Oracle cluster software through the OUI, the interface names used for the public and private networks must be the same on every node. For example, if node1 uses eth0 as its public interface, node2 cannot use eth1 as its public interface.

IP configuration requirements:
DHCP is not used here; a static SCAN IP is assigned (the SCAN IP enables cluster load balancing and is assigned to a node by the cluster software as needed).
Each node gets a public IP, a virtual IP (VIP), and a private IP.
The public IP, VIP, and SCAN IP must be in the same subnet.

Example of manually configured IPs without GNS:

Identity        Home Node   Host Node                        Given Name   Type      Address
RAC1 Public     RAC1        RAC1                             rac1         Public    192.168.177.101
RAC1 VIP        RAC1        RAC1                             rac1-vip     Public    192.168.177.201
RAC1 Private    RAC1        RAC1                             rac1-priv    Private   192.168.139.101
RAC2 Public     RAC2        RAC2                             rac2         Public    192.168.177.102
RAC2 VIP        RAC2        RAC2                             rac2-vip     Public    192.168.177.202
RAC2 Private    RAC2        RAC2                             rac2-priv    Private   192.168.139.102
SCAN IP         none        Selected by Oracle Clusterware   scan-ip      Virtual   192.168.177.110

  

2. Creating the Operating System

1. Oracle Linux 6.x
Disk partition of 20 GB; swap must not be smaller than memory, ideally at least 1.5 times the memory size.
When selecting packages, in addition to the packages included in the Base group, select the following:

  • Compatibility libraries
  • ftp server
  • gnome-desktop
  • x windows system
  • Development tools
  • Chinese support

Alternatively, the packages can be handled with the Oracle Linux preinstall (one-click) package, which automatically adjusts the Linux environment to meet Oracle's installation requirements.

2. Disable the firewall and SELinux (configure on both nodes, node1 and node2); otherwise the grid installation may hang at 65%.

[root@rac1 ~]# service iptables stop
iptables: Setting chains to policy ACCEPT: filter          [  OK  ]
iptables: Flushing firewall rules:                         [  OK  ]
iptables: Unloading modules:                               [  OK  ]
[root@rac1 ~]# chkconfig iptables off
[root@rac1 ~]#  sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
[root@rac1 ~]# setenforce 0
[root@rac1 ~]# 

 

3. Use the installation DVD as a local YUM repository:

mv /etc/yum.repos.d/CentOS-Base.repo CentOS-Base.repo.bak
vim /etc/yum.repos.d/CentOS_Media.repo

[c6-media]
name=CentOS-$releasever - Media
baseurl=file:///media/
gpgcheck=1
enabled=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6
yum clean all
yum makecache

On Oracle Linux:

[root@vmac6 ~]# cd /etc/yum.repos.d
[root@vmac6 yum.repos.d]# mv public-yum-ol6.repo public-yum-ol6.repo.bak
[root@vmac6 yum.repos.d]# touch public-yum-ol6.repo
[root@vmac6 yum.repos.d]# vim public-yum-ol6.repo
[oel6]
name = Enterprise Linux 6.3 DVD
baseurl=file:///media/OL6.3%20x86_64%20Disc%201%2020120626/Server
gpgcheck=0
enabled=1

4. Details:

When installing Oracle Linux, give each VM two network adapters: one Host-Only adapter for communication between the two VM nodes, and one NAT adapter for external connectivity; static IPs are assigned manually later. Plan at least 2.5 GB of memory and swap per host. Disk layout: /boot 500 MB, with the remaining space managed by LVM, of which 2.5 GB goes to swap and the rest to /.
The two Oracle Linux hosts are named rac1 and rac2.
Note that the two guest operating systems are best placed on different physical disks; otherwise I/O will be a bottleneck.

Check the memory and swap size:

[root@rac1 ~]# grep MemTotal /proc/meminfo
MemTotal:        2552560 kB
[root@rac1 ~]# grep SwapTotal /proc/meminfo
SwapTotal:       2621436 kB

If swap is too small, it can be extended as follows:

First work out the number of blocks from the size of the swap file you want to add, in MB: blocks = size_in_MB * 1024 (each block is 1 KB). For example, to add a 64 MB swap file: blocks = 64 * 1024 = 65536.

Then run:

#dd if=/dev/zero of=/swapfile bs=1024 count=65536
#mkswap /swapfile
#swapon /swapfile
#vi /etc/fstab
Add the line: /swapfile swap swap defaults 0 0
# cat /proc/swaps   or   # free -m      # check the swap size
# swapoff /swapfile                      # disable the added swap file if needed
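
As a generalization of the example above, a minimal sketch for an arbitrary size; the 2048 MB value and the /swapfile2 path are illustrative assumptions, not from the original:

SIZE_MB=2048                                                       # desired swap size in MB
dd if=/dev/zero of=/swapfile2 bs=1024 count=$((SIZE_MB * 1024))    # one block = 1 KB
mkswap /swapfile2
swapon /swapfile2
echo "/swapfile2 swap swap defaults 0 0" >> /etc/fstab             # make it persistent
free -m                                                            # verify the new swap is active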

 

5. Configure the network

(1) Configure the IP addresses
// The gateway here is determined by the virtualization network settings; eth0 connects to the external (public) network, eth1 is the private interconnect (heartbeat)
// On host rac1:
[root@rac1 ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth0 
IPADDR=192.168.177.101 
PREFIX=24 
GATEWAY=192.168.177.1 
DNS1=192.168.177.1

[root@rac1 ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth1 
IPADDR=192.168.139.101 
PREFIX=24

// On host rac2:
[root@rac2 ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth0 
IPADDR=192.168.177.102 
PREFIX=24 
GATEWAY=192.168.177.1 
DNS1=192.168.177.1

[root@rac2 ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth1 
IPADDR=192.168.139.102 
PREFIX=24

(2) Configure the hostname
// On host rac1:
[root@rac1 ~]# vi /etc/sysconfig/network 
NETWORKING=yes 
HOSTNAME=rac1 
GATEWAY=192.168.177.1 
NOZEROCONF=yes

// On host rac2:
[root@rac2 ~]# vi /etc/sysconfig/network 
NETWORKING=yes 
HOSTNAME=rac2 
GATEWAY=192.168.177.1 
NOZEROCONF=yes

(3) Configure /etc/hosts
Add the following entries on both rac1 and rac2:
[root@rac1 ~]# vi /etc/hosts 
192.168.177.101 rac1 
192.168.177.201 rac1-vip 
192.168.139.101 rac1-priv

192.168.177.102 rac2 
192.168.177.202 rac2-vip 
192.168.139.102 rac2-priv

192.168.177.110 scan-ip

6. Add users and groups, and create the installation directories

/usr/sbin/groupadd -g 1000 oinstall
/usr/sbin/groupadd -g 1020 asmadmin
/usr/sbin/groupadd -g 1021 asmdba
/usr/sbin/groupadd -g 1022 asmoper
/usr/sbin/groupadd -g 1031 dba
/usr/sbin/groupadd -g 1032 oper
useradd -u 1100 -g oinstall -G asmadmin,asmdba,asmoper,oper,dba grid
useradd -u 1101 -g oinstall -G dba,asmdba,oper oracle
mkdir -p /u01/app/11.2.0/grid
mkdir -p /u01/app/grid
mkdir /u01/app/oracle
chown -R grid:oinstall /u01
chown oracle:oinstall /u01/app/oracle
chmod -R 775 /u01/
[root@rac1 ~]# passwd grid
[root@rac1 ~]# passwd oracle
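
A quick way to confirm the users, groups, and directory ownership on both nodes (a small verification sketch):

id grid      # expect uid=1100, gid=oinstall, groups asmadmin,asmdba,asmoper,oper,dba
id oracle    # expect uid=1101, gid=oinstall, groups dba,asmdba,oper
ls -ld /u01/app/oracle /u01/app/11.2.0/grid   # owners should be oracle:oinstall and grid:oinstall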

7. Modify kernel parameters

[root@rac1 ~]# vi /etc/sysctl.conf   # already adjusted by the preinstall package

[root@rac1 ~]# vim /etc/security/limits.conf   # the grid entries below still need to be added
# grid-rdbms-server-11gR2-preinstall setting for nofile soft limit is 1024
grid   soft   nofile    1024
# grid-rdbms-server-11gR2-preinstall setting for nofile hard limit is 65536
grid   hard   nofile    65536
# grid-rdbms-server-11gR2-preinstall setting for nproc soft limit is 2047
grid   soft   nproc    2047
# grid-rdbms-server-11gR2-preinstall setting for nproc hard limit is 16384
grid   hard   nproc    16384
# grid-rdbms-server-11gR2-preinstall setting for stack soft limit is 10240KB
grid   soft   stack    10240
# grid-rdbms-server-11gR2-preinstall setting for stack hard limit is 32768KB
grid   hard   stack    32768
Configure /etc/pam.d/login:
[root@rac1 ~]# vi /etc/pam.d/login 
session required pam_limits.so 

8. Modify the user environment variables

[root@rac1 ~]# su - grid
[grid@rac1 ~]$ vim .bash_profile
export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_SID=+ASM1   # on RAC1; use +ASM2 on RAC2 (keep only one of these)
export ORACLE_BASE=/u01/app/grid
export ORACLE_HOME=/u01/app/11.2.0/grid
export PATH=/usr/sbin:$PATH
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
umask 022

[root@rac1 ~]# su - oracle
[oracle@rac1 ~]$ vim .bash_profile
export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_SID=orcl1   # on RAC1; use orcl2 on RAC2 (keep only one of these)
export ORACLE_UNQNAME=orcl
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/db_1
export TNS_ADMIN=$ORACLE_HOME/network/admin
export PATH=/usr/sbin:$PATH
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib

$ source .bash_profile    # reload so the settings take effect

9. Clone the second node and set up the shared storage

Adjust the network settings on machine 2:
cd /etc/udev/rules.d
vim 70-persistent-net.rules
Edit /etc/udev/rules.d/70-persistent-net.rules: copy the MAC addresses of eth2 and eth3 into the eth0 and eth1 entries and delete the original eth2/eth3 entries, making sure eth0 gets the public MAC and eth1 gets the private MAC (see the example rules file after these steps).
start_udev
Or simply reboot the machine.
Then change the IP configuration of eth0 and eth1 as shown earlier.
vim /etc/sysconfig/network    # change the hostname
vim /etc/hosts
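
A minimal example of what the cleaned-up 70-persistent-net.rules might look like after the clone; the MAC addresses below are placeholders and must be replaced with the ones VirtualBox actually assigned:

# eth0 -> public NIC (MAC of the cloned adapter 1)
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="08:00:27:aa:bb:01", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"
# eth1 -> private interconnect NIC (MAC of the cloned adapter 2)
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="08:00:27:aa:bb:02", ATTR{type}=="1", KERNEL=="eth*", NAME="eth1"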

Manually add several shared disks in VirtualBox and attach each of them to node 2 as well.

Create the storage

Then bind these disks with udev; the commands are in the script below.
 
cd /dev
ls -l sd*

Run the following script on both nodes to bind the shared disks:

for i in b c d e f g ;
do
echo "KERNEL==\"sd*\", BUS==\"scsi\", PROGRAM==\"/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/\$name\", RESULT==\"`/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/sd$i`\", NAME=\"asm-disk$i\", OWNER=\"grid\", GROUP=\"asmadmin\", MODE=\"0660\""      >> /etc/udev/rules.d/99-oracle-asmdevices.rules
done
 /sbin/start_udev

Check:

ls -l asm*
[root@rac1 dev]# ls -l asm*
brw-rw---- 1 grid asmadmin 8, 16 Apr 27 12:00 asm-diskb
brw-rw---- 1 grid asmadmin 8, 32 Apr 27 12:00 asm-diskc
brw-rw---- 1 grid asmadmin 8, 48 Apr 27 12:00 asm-diskd
brw-rw---- 1 grid asmadmin 8, 64 Apr 27 12:00 asm-diske
brw-rw---- 1 grid asmadmin 8, 80 Apr 27 12:00 asm-diskf
brw-rw---- 1 grid asmadmin 8, 96 Apr 27 12:00 asm-diskg

 

For reference, shared storage can be created in VMware as follows:

From the VMware installation directory, in a cmd prompt:

C:\Program Files (x86)\VMware\VMware Workstation>
vmware-vdiskmanager.exe -c -s 1000Mb -a lsilogic -t 2 F:\VMware\RAC\Sharedisk\ocr.vmdk
vmware-vdiskmanager.exe -c -s 1000Mb -a lsilogic -t 2 F:\VMware\RAC\Sharedisk\ocr2.vmdk
vmware-vdiskmanager.exe -c -s 1000Mb -a lsilogic -t 2 F:\VMware\RAC\Sharedisk\votingdisk.vmdk
vmware-vdiskmanager.exe -c -s 20000Mb -a lsilogic -t 2 F:\VMware\RAC\Sharedisk\data.vmdk
vmware-vdiskmanager.exe -c -s 10000Mb -a lsilogic -t 2 F:\VMware\RAC\Sharedisk\backup.vmdk

Example:

C:\Program Files (x86)\VMware\VMware Workstation>vmware-vdiskmanager.exe -c -s 2g -a lsilogic -t 2 d:\vpc\rac\share\ocr.vmdk
Creating disk 'd:\vpc\rac\share\ocr.vmdk'
  Create: 100% done.
Virtual disk creation successful.


C:\Program Files (x86)\VMware\VMware Workstation>vmware-vdiskmanager.exe -c -s 5g -a lsilogic -t 2 d:\vpc\rac\share\data.vmdk
Creating disk 'd:\vpc\rac\share\data.vmdk'
  Create: 100% done.
Virtual disk creation successful.

C:\Program Files (x86)\VMware\VMware Workstation>vmware-vdiskmanager.exe -c -s 2g -a lsilogic -t 2 d:\vpc\rac\share\fra.vmdk
Creating disk 'd:\vpc\rac\share\fra.vmdk'
  Create: 100% done.
Virtual disk creation successful.

Note: -a specifies the disk adapter type, and -t 2 creates a preallocated (fixed-size) disk file.

Here we created two 1 GB OCR disks, one 1 GB voting disk, one 20 GB data disk, and one 10 GB backup disk.
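
Since this guide actually runs on VirtualBox, the equivalent shared disks can be created with VBoxManage; a sketch, where the VM names rac1/rac2 and the "SATA" controller name are assumptions about this setup:

VBoxManage createhd --filename ocr.vmdk --size 1024 --format VMDK --variant Fixed
VBoxManage modifyhd ocr.vmdk --type shareable
VBoxManage storageattach rac1 --storagectl "SATA" --port 2 --device 0 --type hdd --medium ocr.vmdk --mtype shareable
VBoxManage storageattach rac2 --storagectl "SATA" --port 2 --device 0 --type hdd --medium ocr.vmdk --mtype shareable

Repeat for the remaining disks (voting, data, backup), using a different --port for each.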

10. Configure SSH user equivalence for the grid and oracle users
This is a crucial step. Although the official documentation states that the OUI can configure SSH automatically while installing GI and RAC, configuring the trust manually is preferable so that CVU can be used to check the configuration before installation.
[root@node1 ~]# su - grid
mkdir ~/.ssh
chmod 700 ~/.ssh
ssh-keygen -t rsa
ssh-keygen -t dsa
[root@node1 ~]# su - oracle
mkdir ~/.ssh
chmod 700 ~/.ssh
ssh-keygen -t rsa
ssh-keygen -t dsa

[root@node2 ~]# su - grid
mkdir ~/.ssh
chmod 700 ~/.ssh
ssh-keygen -t rsa
ssh-keygen -t dsa
[root@node2 ~]# su - oracle
mkdir ~/.ssh
chmod 700 ~/.ssh
ssh-keygen -t rsa
ssh-keygen -t dsa

Set up the trust on node 1:
[root@node1 ~]# su - grid
touch ~/.ssh/authorized_keys
cd ~/.ssh
ssh rac1 cat ~/.ssh/id_rsa.pub >> authorized_keys
ssh rac2 cat ~/.ssh/id_rsa.pub >> authorized_keys
ssh rac1 cat ~/.ssh/id_dsa.pub >> authorized_keys
ssh rac2 cat ~/.ssh/id_dsa.pub >> authorized_keys
[root@node1 ~]# su - oracle
touch ~/.ssh/authorized_keys
cd ~/.ssh
ssh rac1 cat ~/.ssh/id_rsa.pub >> authorized_keys
ssh rac2 cat ~/.ssh/id_rsa.pub >> authorized_keys
ssh rac1 cat ~/.ssh/id_dsa.pub >> authorized_keys
ssh rac2 cat ~/.ssh/id_dsa.pub >> authorized_keys

Copy the authorized-keys file holding the public keys from node1 to node2:
[root@node1 ~]# su - grid
scp ~/.ssh/authorized_keys rac2:~/.ssh/authorized_keys

[root@node1 ~]# su - oracle
scp ~/.ssh/authorized_keys rac2:~/.ssh/authorized_keys

Verify that SSH is configured correctly; run the following as both the grid and oracle users on both nodes (node1 and node2):
[root@node1 ~]# su - grid
Set the permissions of the authorized-keys file:
chmod 600 ~/.ssh/authorized_keys

Enable user equivalence:
exec /usr/bin/ssh-agent $SHELL
/usr/bin/ssh-add

Verify the SSH configuration:
ssh rac1 date
ssh rac2 date
ssh rac1-priv date
ssh rac2-priv date

[root@node1 ~]# su - oracle
Set the permissions of the authorized-keys file:
chmod 600 ~/.ssh/authorized_keys

Enable user equivalence:
exec /usr/bin/ssh-agent $SHELL
/usr/bin/ssh-add

Verify the SSH configuration:
ssh rac1 date
ssh rac2 date
ssh rac1-priv date
ssh rac2-priv date
If the date is printed without prompting for a password, SSH equivalence is configured correctly. These commands must be run on both nodes, and each command requires typing yes the first time it is executed.
If you skip running them, even with SSH equivalence configured the clusterware installation will fail with:
The specified nodes are not clusterable
This is because, after SSH is set up, each host must still be accessed once (answering yes) before access to the other servers is truly password-free and prompt-free.
Remember: the goal of SSH equivalence is that every node can SSH to every other node without a password.
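
The four date checks can also be wrapped in a small loop; run it as both grid and oracle on each node (sketch):

for h in rac1 rac2 rac1-priv rac2-priv; do ssh $h date; done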

Environment Configuration

Unless stated otherwise, the steps below must be performed on every node; all passwords are set to oracle.

1. Connect with SecureCRT

    • Backspace shows ^H garbage in sqlplus
      Options -> Session Options -> Terminal -> Emulation -> Mapped Keys -> Other mappings
      Check "Backspace sends delete"

    • Delete and Home do not work in vi
      Options -> Session Options -> Terminal -> Emulation
      Set Terminal to Linux
      Check "Select an alternate keyboard emulation" and choose Linux

  

 

3. Disable NTP and adjust the port-range parameter (configure on both nodes, node1 and node2)

Oracle recommends using the Oracle Cluster Time Synchronization Service, so NTP is stopped and removed:

[root@node1 ~]# service ntpd stop

[root@node1 ~]# chkconfig ntpd off

[root@node1 ~]# mv /etc/ntp.conf /etc/ntp.conf.old

[root@node1 ~]# rm -rf /var/run/ntpd.pid

 

4. Check the TCP/UDP port range

# cat /proc/sys/net/ipv4/ip_local_port_range

If it already shows 9000 65500, the following steps are unnecessary.

# echo 9000 65500 > /proc/sys/net/ipv4/ip_local_port_range

# vim /etc/sysctl.conf

# add this line:

# TCP/UDP port range

net.ipv4.ip_local_port_range = 9000 65500

# restart the network

# /etc/rc.d/init.d/network restart

Synchronize the time (configure on both nodes, node1 and node2):

[root@node1 ~]# date -s 23:29:00

[root@node1 ~]# ssh 192.168.7.12 date;date

[root@node1 ~]# clock -w

 

date -s 03/07/2017     # set the date to March 7, 2017

date -s 23:29:00       # set the time to 23:29:00

clock -w               # write the system time to the hardware (BIOS) clock

5. System file settings

(1) Kernel parameters:
[root@rac1 ~]# vi /etc/sysctl.conf
kernel.msgmnb = 65536 
kernel.msgmax = 65536 
kernel.shmmax = 68719476736 
kernel.shmall = 4294967296 
fs.aio-max-nr = 1048576 
fs.file-max = 6815744 
kernel.shmall = 2097152 
kernel.shmmax = 1306910720 
kernel.shmmni = 4096 
kernel.sem = 250 32000 100 128 
net.ipv4.ip_local_port_range = 9000 65500 
net.core.rmem_default = 262144 
net.core.rmem_max = 4194304 
net.core.wmem_default = 262144 
net.core.wmem_max = 1048576
net.ipv4.tcp_wmem = 262144 262144 262144 
net.ipv4.tcp_rmem = 4194304 4194304 4194304

Note: the later prerequisite checks required changing this value:
kernel.shmmax = 68719476736

Apply the kernel parameter changes:
[root@rac1 ~]# sysctl -p
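
If the prerequisite check later complains about kernel.shmmax, a common guideline is to set it to roughly half of physical memory; a sketch of computing and applying the value (the sizing rule is a general guideline, not taken from this article):

MEM_BYTES=$(awk '/MemTotal/ {print $2 * 1024}' /proc/meminfo)    # physical memory in bytes
SHMMAX=$((MEM_BYTES / 2))                                        # half of physical memory
sed -i "s/^kernel.shmmax = .*/kernel.shmmax = $SHMMAX/" /etc/sysctl.conf
sysctl -p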

These settings can also be adjusted by installing the preinstall package from the Oracle Linux media:
[root@rac1 Packages]# pwd
/mnt/cdrom/Packages
[root@rac1 Packages]# ll | grep preinstall
-rw-r--r-- 1 root root 15524 Dec 25 2012 oracle-rdbms-server-11gR2-preinstall-1.0-7.el6.x86_64.rpm

(2) Configure shell limits for the oracle and grid users
[root@rac1 ~]# vi /etc/security/limits.conf 
grid soft nproc 2047 
grid hard nproc 16384 
grid soft nofile 1024 
grid hard nofile 65536 
oracle soft nproc 2047 
oracle hard nproc 16384 
oracle soft nofile 1024 
oracle hard nofile 65536

(3) Configure /etc/pam.d/login
[root@rac1 ~]# vi /etc/pam.d/login 
session required pam_limits.so 

(4) Install the required packages

  1. binutils-2.20.51.0.2-5.11.el6 (x86_64) 
    compat-libcap1-1.10-1 (x86_64) 
    compat-libstdc++-33-3.2.3-69.el6 (x86_64) 
    compat-libstdc++-33-3.2.3-69.el6.i686 
    gcc-4.4.4-13.el6 (x86_64) 
    gcc-c++-4.4.4-13.el6 (x86_64) 
    glibc-2.12-1.7.el6 (i686) 
    glibc-2.12-1.7.el6 (x86_64) 
    glibc-devel-2.12-1.7.el6 (x86_64) 
    glibc-devel-2.12-1.7.el6.i686 
    ksh 
    libgcc-4.4.4-13.el6 (i686) 
    libgcc-4.4.4-13.el6 (x86_64) 
    libstdc++-4.4.4-13.el6 (x86_64) 
    libstdc++-4.4.4-13.el6.i686 
    libstdc++-devel-4.4.4-13.el6 (x86_64) 
    libstdc++-devel-4.4.4-13.el6.i686 
    libaio-0.3.107-10.el6 (x86_64) 
    libaio-0.3.107-10.el6.i686 
    libaio-devel-0.3.107-10.el6 (x86_64) 
    libaio-devel-0.3.107-10.el6.i686 
    make-3.81-19.el6 
    sysstat-9.0.4-11.el6 (x86_64)

    A local yum repository built from the installation DVD is used here; configure it first:
    [root@rac1 ~]# mount /dev/cdrom /mnt/cdrom/
    [root@rac1 ~]# vi /etc/yum.repos.d/dvd.repo 
    [dvd] 
    name=dvd 
    baseurl=file:///mnt/cdrom 
    gpgcheck=0 
    enabled=1 
    [root@rac1 ~]# yum clean all 
    [root@rac1 ~]# yum makecache 
    [root@rac1 ~]# yum install gcc gcc-c++ glibc* glibc-devel* ksh libgcc* libstdc++* libstdc++-devel* make sysstat

On the VirtualBox guests, you can instead run: yum install oracle-rdbms-server-11gR2-preinstall-1.0-6.el6

Reference: http://www.cnblogs.com/ld1977/articles/6767918.html

6. Configure the grid and oracle user environment variables

ORACLE_SID must be adjusted per node.
[root@rac1 ~]# su - grid 
[grid@rac1 ~]$ vi .bash_profile

export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_SID=+ASM1   # on RAC1; use +ASM2 on RAC2 (keep only one of these)
export ORACLE_BASE=/u01/app/grid
export ORACLE_HOME=/u01/app/11.2.0/grid
export PATH=/usr/sbin:$PATH
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
umask 022

Note that ORACLE_UNQNAME is the database (unique) name; when the database is created across multiple nodes, one instance is created per node, and ORACLE_SID is the name of the local database instance.

[root@rac1 ~]# su - oracle 
[oracle@rac1 ~]$ vi .bash_profile

export TMP=/tmp
export TMPDIR=$TMP
export ORACLE_SID=orcl1   # on RAC1; use orcl2 on RAC2 (keep only one of these)
export ORACLE_UNQNAME=orcl
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/db_1
export TNS_ADMIN=$ORACLE_HOME/network/admin
export PATH=/usr/sbin:$PATH
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib

$ source .bash_profile    # reload so the settings take effect

7. Configure SSH user equivalence for the grid and oracle users

This is a crucial step. Although the official documentation states that the OUI can configure SSH automatically while installing GI and RAC, configuring the trust manually is preferable so that CVU can be used to check the configuration before installation.

The procedure is as follows.
Generate keys on each node:
[root@node1 ~]# su - grid
mkdir ~/.ssh
chmod 700 ~/.ssh
ssh-keygen -t rsa
ssh-keygen -t dsa
[root@node1 ~]# su - oracle
mkdir ~/.ssh
chmod 700 ~/.ssh
ssh-keygen -t rsa
ssh-keygen -t dsa

Set up the trust on node 1:
[root@node1 ~]# su - grid
touch ~/.ssh/authorized_keys
cd ~/.ssh
ssh rac1 cat ~/.ssh/id_rsa.pub >> authorized_keys
ssh rac2 cat ~/.ssh/id_rsa.pub >> authorized_keys
ssh rac1 cat ~/.ssh/id_dsa.pub >> authorized_keys
ssh rac2 cat ~/.ssh/id_dsa.pub >> authorized_keys
[root@node1 ~]# su - oracle
touch ~/.ssh/authorized_keys
cd ~/.ssh
ssh rac1 cat ~/.ssh/id_rsa.pub >> authorized_keys
ssh rac2 cat ~/.ssh/id_rsa.pub >> authorized_keys
ssh rac1 cat ~/.ssh/id_dsa.pub >> authorized_keys
ssh rac2 cat ~/.ssh/id_dsa.pub >> authorized_keys

Copy the authorized-keys file holding the public keys from node1 to node2:
[root@node1 ~]# su - grid
scp ~/.ssh/authorized_keys rac2:~/.ssh/authorized_keys

[root@node1 ~]# su - oracle
scp ~/.ssh/authorized_keys rac2:~/.ssh/authorized_keys

Verify that SSH is configured correctly; run the following as both the grid and oracle users on both nodes (node1 and node2):
[root@node1 ~]# su - grid
Set the permissions of the authorized-keys file:
chmod 600 ~/.ssh/authorized_keys

Enable user equivalence:
exec /usr/bin/ssh-agent $SHELL
/usr/bin/ssh-add

Verify the SSH configuration:
ssh rac1 date
ssh rac2 date
ssh rac1-priv date
ssh rac2-priv date

[root@node1 ~]# su - oracle
Set the permissions of the authorized-keys file:
chmod 600 ~/.ssh/authorized_keys

Enable user equivalence:
exec /usr/bin/ssh-agent $SHELL
/usr/bin/ssh-add

Verify the SSH configuration:
ssh rac1 date
ssh rac2 date
ssh rac1-priv date
ssh rac2-priv date
If the date is printed without prompting for a password, SSH equivalence is configured correctly. These commands must be run on both nodes, and each command requires typing yes the first time it is executed.
If you skip running them, even with SSH equivalence configured the clusterware installation will fail with:
The specified nodes are not clusterable
This is because, after SSH is set up, each host must still be accessed once (answering yes) before access to the other servers is truly password-free and prompt-free.
Remember: the goal of SSH equivalence is that every node can SSH to every other node without a password.

Note: do not set a passphrase when generating the keys, the authorized_keys file must have permission 600, and the two nodes must each connect to the other via SSH at least once.

8. Configure the disks

Using ASM-managed storage requires raw/block devices; the shared disks were attached to both hosts earlier. There are three ways to present them: (1) with oracleasm (ASMLib); (2) via the /etc/udev/rules.d/60-raw.rules configuration file (binding through udev as character/raw devices); (3) with a script (binding through udev as block devices, which is faster than the character method and is the newest, recommended approach).

Before configuring the devices, partition the disks first:

fdisk /dev/sdb
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
Finally, save the changes with the w command.

Repeat these steps for the other disks to get the following partitions:
[root@rac1 ~]# ls /dev/sd*
/dev/sda /dev/sda1 /dev/sda2 /dev/sdb /dev/sdb1 /dev/sdc /dev/sdc1 /dev/sdd /dev/sdd1 /dev/sde /dev/sde1 /dev/sdf /dev/sdf1
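
Partitioning every shared disk interactively is tedious; a minimal non-interactive sketch that creates a single primary partition spanning each disk (assuming the disk letters b–f used above) is:

for d in b c d e f; do
  echo -e "n\np\n1\n\n\nw" | fdisk /dev/sd$d    # new primary partition 1, default start/end, write
done
partprobe                                        # make the kernel re-read the partition tables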
  

Adding raw devices (method 2; this step was not used here):

[root@rac1 ~]# vi /etc/udev/rules.d/60-raw.rules
ACTION=="add",KERNEL=="/dev/sdb1",RUN+='/bin/raw /dev/raw/raw1 %N"
ACTION=="add",ENV{MAJOR}=="8",ENV{MINOR}=="17",RUN+="/bin/raw /dev/raw/raw1 %M %m"
ACTION=="add",KERNEL=="/dev/sdc1",RUN+='/bin/raw /dev/raw/raw2 %N"
ACTION=="add",ENV{MAJOR}=="8",ENV{MINOR}=="33",RUN+="/bin/raw /dev/raw/raw2 %M %m"
ACTION=="add",KERNEL=="/dev/sdd1",RUN+='/bin/raw /dev/raw/raw3 %N"
ACTION=="add",ENV{MAJOR}=="8",ENV{MINOR}=="49",RUN+="/bin/raw /dev/raw/raw3 %M %m"
ACTION=="add",KERNEL=="/dev/sde1",RUN+='/bin/raw /dev/raw/raw4 %N"
ACTION=="add",ENV{MAJOR}=="8",ENV{MINOR}=="65",RUN+="/bin/raw /dev/raw/raw4 %M %m"
ACTION=="add",KERNEL=="/dev/sdf1",RUN+='/bin/raw /dev/raw/raw5 %N"
ACTION=="add",ENV{MAJOR}=="8",ENV{MINOR}=="81",RUN+="/bin/raw /dev/raw/raw5 %M %m"

KERNEL=="raw[1-5]",OWNER="grid",GROUP="asmadmin",MODE="660"

[root@rac1 ~]# start_udev 
Starting udev:                                             [  OK  ]
[root@rac1 ~]# ll /dev/raw/
total 0
crw-rw---- 1 grid asmadmin 162, 1 Apr 13 13:51 raw1
crw-rw---- 1 grid asmadmin 162, 2 Apr 13 13:51 raw2
crw-rw---- 1 grid asmadmin 162, 3 Apr 13 13:51 raw3
crw-rw---- 1 grid asmadmin 162, 4 Apr 13 13:51 raw4
crw-rw---- 1 grid asmadmin 162, 5 Apr 13 13:51 raw5
crw-rw---- 1 root disk     162, 0 Apr 13 13:51 rawctl

Note that there must be no stray spaces around the values in these rules, or they will fail. The resulting raw devices must end up owned by grid:asmadmin.

Method (3) (also not used here):

[root@rac1 ~]# for i in b c d e f ;
do
echo "KERNEL==\"sd*\", BUS==\"scsi\", PROGRAM==\"/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/\$name\", RESULT==\"`/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/sd$i`\", NAME=\"asm-disk$i\", OWNER=\"grid\", GROUP=\"asmadmin\", MODE=\"0660\"">> /etc/udev/rules.d/99-oracle-asmdevices.rules
done

[root@rac1 ~]# start_udev 
Starting udev:                                             [  OK  ]

[root@rac1 ~]# ll /dev/*asm*
brw-rw---- 1 grid asmadmin 8, 16 Apr 27 18:52 /dev/asm-diskb
brw-rw---- 1 grid asmadmin 8, 32 Apr 27 18:52 /dev/asm-diskc
brw-rw---- 1 grid asmadmin 8, 48 Apr 27 18:52 /dev/asm-diskd
brw-rw---- 1 grid asmadmin 8, 64 Apr 27 18:52 /dev/asm-diske
brw-rw---- 1 grid asmadmin 8, 80 Apr 27 18:52 /dev/asm-diskf

With this method, when creating the ASM disk groups later, set the Change Discovery Path to /dev/*asm*.

Installing ASM by following that approach did not succeed here, so the following method (ASMLib) was used instead:

[root@node1 ~]# cd /tmp/oracle
[root@node1 ~]# rpm -ivh kmod-oracleasm-2.0.6.rh1-3.el6.x86_64.rpm
[root@node1 ~]# rpm -ivh oracleasmlib-2.0.4-1.el6.x86_64.rpm 
[root@node1 ~]# rpm -Uvh oracleasm-support-2.1.8-1.el6.x86_64.rpm
[root@node1 ~]# rpm -ivh cvuqdisk-1.0.9-1.rpm

Installing kmod-oracleasm-2.0.6.rh1-3.el6.x86_64.rpm failed because of a kernel mismatch. Version 2.0.8, which installs on Oracle Linux 6.7, was found and downloaded from
http://rpm.pbone.net/index.php3/stat/4/idpl/30518374/dir/scientific_linux_6/com/kmod-oracleasm-2.0.8-5.el6_7.x86_64.rpm.html
Download mirrors:
mirror.switch.ch    kmod-oracleasm-2.0.8-5.el6_7.x86_64.rpm
ftp.rediris.es      kmod-oracleasm-2.0.8-5.el6_7.x86_64.rpm
ftp.pbone.net       kmod-oracleasm-2.0.8-5.el6_7.x86_64.rpm
ftp.icm.edu.pl      kmod-oracleasm-2.0.8-5.el6_7.x86_64.rpm
    

In my case each disk was added manually in the virtual machine manager; add them on both nodes:

After adding the disks, partition each new disk.

On node1:
[root@node1 ~]# fdisk /dev/sdb
m (help)
p (print the partition table)
n (new partition), then p, 1, and accept the default first and last cylinders
p (print again to verify)
w (write and save)
q (quit)

fdisk /dev/sdc
fdisk /dev/sdd
fdisk /dev/sde
fdisk /dev/sdf

Configure the ASM library (must be done on both nodes, node1 and node2):

[root@node1 ~]# oracleasm configure -i
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver.  The following questions will determine whether the driver is
loaded on boot and what permissions it will have.  The current values
will be shown in brackets ('[]').  Hitting <ENTER> without typing an
answer will keep that current value.  Ctrl-C will abort.

Default user to own the driver interface []: grid
Default group to own the driver interface []: asmadmin
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done

[root@node1 ~]# /usr/sbin/oracleasm init
Creating /dev/oracleasm mount point: /dev/oracleasm
Loading module "oracleasm": oracleasm
Mounting ASMlib driver filesystem: /dev/oracleasm

Create the ASM disks (only needs to be done on node1):

/usr/sbin/oracleasm createdisk VDK001 /dev/sdb1
/usr/sbin/oracleasm createdisk VDK002 /dev/sdc1
/usr/sbin/oracleasm createdisk VDK003 /dev/sdd1
/usr/sbin/oracleasm createdisk VDK004 /dev/sde1
/usr/sbin/oracleasm createdisk VDK005 /dev/sdf1
...........

[root@node1 oracle]# /usr/sbin/oracleasm createdisk VDK001 /dev/sdb1
Writing disk header: done
Instantiating disk: done
[root@node1 oracle]# /usr/sbin/oracleasm createdisk VDK002 /dev/sdc1
Writing disk header: done
Instantiating disk: done
[root@node1 oracle]# /usr/sbin/oracleasm createdisk VDK003 /dev/sdd1
Writing disk header: done
Instantiating disk: done
[root@node1 oracle]# /usr/sbin/oracleasm createdisk VDK004 /dev/sde1
Writing disk header: done
Instantiating disk: done
[root@node1 oracle]# /usr/sbin/oracleasm createdisk VDK005 /dev/sdf1
Writing disk header: done
Instantiating disk: done

Troubleshooting
Marking disk "VOL5" as an ASM disk: [FAILED]
            ----- This fails because the new partition has not been recognized yet; run /sbin/partprobe first, then create the disks again:
/etc/init.d/oracleasm createdisk VDK001 /dev/sdb1
/etc/init.d/oracleasm createdisk VDK002 /dev/sdc1
/etc/init.d/oracleasm createdisk VDK003 /dev/sdd1
/etc/init.d/oracleasm createdisk VDK004 /dev/sde1
/etc/init.d/oracleasm createdisk VDK005 /dev/sdf1

To delete an existing disk label:
/etc/init.d/oracleasm deletedisk VDK001

Load and scan the ASM disks (must be done on both nodes, node1 and node2):

/etc/init.d/oracleasm scandisks
/etc/init.d/oracleasm listdisks

[root@node1 ~]# /etc/init.d/oracleasm scandisks
Scanning the system for Oracle ASMLib disks:               [  OK  ]
[root@node1 ~]# /etc/init.d/oracleasm listdisks
VDK001
VDK002
VDK003
VDK004
VDK005

[root@node2 ~]# /etc/init.d/oracleasm scandisks
Scanning the system for Oracle ASMLib disks:               [  OK  ]
[root@node2 ~]# /etc/init.d/oracleasm listdisks
VDK001
VDK002
VDK003
VDK004
VDK005

If you hit a "Device ... is already labeled for ASM disk ..." error:
[root@node1 oracle]# /usr/sbin/oracleasm createdisk VDK004 /dev/sde1
Device "/dev/sde1" is already labeled for ASM disk "VDK001"
[root@node1 oracle]# /usr/sbin/oracleasm renamedisk -f /dev/sde1 VDK004
Writing disk header: done
Instantiating disk "VDK004": done

Installing the Grid Software

The packages libaio-0.3.105 (i386), compat-libstdc++-33-3.2.3 (i386), libaio-devel (i386), libgcc (i386), libstdc++ (i386), unixODBC (i386), unixODBC-devel (i386), and pdksh fail the prerequisite check; these failures can be ignored and the installation still goes through.
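
If you prefer to clear those warnings rather than ignore them, the 32-bit packages can usually be installed from the same yum source (pdksh is obsolete on OL6 and is covered by ksh); a sketch:

yum install -y libaio.i686 libaio-devel.i686 compat-libstdc++-33.i686 libgcc.i686 libstdc++.i686 unixODBC.i686 unixODBC-devel.i686 ksh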

Full pre-installation check (DNS-related failures can be ignored; run on node1 only):

[root@node1 ~]# su - grid
[root@node1 ~]# cd /tmp/oracle/grid
[root@node1 ~]# ./runcluvfy.sh comp nodecon -n oracle-rac1,oracle-rac2 -verbose

[grid@oraclerac1 grid]$ ./runcluvfy.sh comp nodecon -n oraclerac1,oraclerac2 -verbose

Verifying node connectivity 

Checking node connectivity...

Checking hosts config file...
  Node Name                             Status                  
  ------------------------------------  ------------------------
  oraclerac1                           passed                  
  oraclerac2                           passed                  

Verification of the hosts config file successful


Interface information for node "oraclerac1"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU   
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 eth0   192.168.7.11    192.168.7.0     0.0.0.0         192.168.7.1     08:00:27:17:68:C7 1500  
 eth2   172.16.16.1     172.16.16.0     0.0.0.0         192.168.7.1     08:00:27:00:27:D9 1500  


Interface information for node "oraclerac2"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU   
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 eth0   192.168.7.12    192.168.7.0     0.0.0.0         192.168.7.1     08:00:27:AB:1D:38 1500  
 eth2   172.16.16.2     172.16.16.0     0.0.0.0         192.168.7.1     08:00:27:7A:74:50 1500  


Check: Node connectivity of subnet "192.168.7.0"
  Source                          Destination                     Connected?      
  ------------------------------  ------------------------------  ----------------
  oraclerac1[192.168.7.11]       oraclerac2[192.168.7.12]       yes             
Result: Node connectivity passed for subnet "192.168.7.0" with node(s) oraclerac1,oraclerac2


Check: TCP connectivity of subnet "192.168.7.0"
  Source                          Destination                     Connected?      
  ------------------------------  ------------------------------  ----------------
  oraclerac1:192.168.7.11        oraclerac2:192.168.7.12        passed          
Result: TCP connectivity check passed for subnet "192.168.7.0"


Check: Node connectivity of subnet "172.16.16.0"
  Source                          Destination                     Connected?      
  ------------------------------  ------------------------------  ----------------
  oraclerac1[172.16.16.1]        oraclerac2[172.16.16.2]        yes             
Result: Node connectivity passed for subnet "172.16.16.0" with node(s) oraclerac1,oraclerac2


Check: TCP connectivity of subnet "172.16.16.0"
  Source                          Destination                     Connected?      
  ------------------------------  ------------------------------  ----------------
  oraclerac1:172.16.16.1         oraclerac2:172.16.16.2         passed          
Result: TCP connectivity check passed for subnet "172.16.16.0"


Interfaces found on subnet "192.168.7.0" that are likely candidates for VIP are:
oraclerac1 eth0:192.168.7.11
oraclerac2 eth0:192.168.7.12

Interfaces found on subnet "172.16.16.0" that are likely candidates for a private interconnect are:
oraclerac1 eth2:172.16.16.1
oraclerac2 eth2:172.16.16.2
Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "192.168.7.0".
Subnet mask consistency check passed for subnet "172.16.16.0".
Subnet mask consistency check passed.

Result: Node connectivity check passed


Verification of node connectivity was successful.
21. Install the grid software
[root@node1 ~]# export DISPLAY=:0.0
[root@node1 ~]# xhost +
access control disabled, clients can connect from any host
[root@node1 ~]# su - grid
[grid@node1 ~]$ xhost +
access control disabled, clients can connect from any host
[grid@oraclerac1 grid]$ ./runInstaller

Define the cluster name; set the SCAN Name to the scan-ip defined in /etc/hosts and deselect GNS.

The screen initially shows only the first node, rac1; click "Add" to add the second node, rac2.

Configure ASM: select the devices prepared earlier (raw1, raw2, raw3), with the redundancy level set to External (no redundancy). Since no redundancy is used, a single device would also do. These devices hold the OCR registry and the voting disk.

 

While installing grid, running /u01/app/11.2.0/grid/root.sh failed with an ohasd error:

Adding daemon to inittab
CRS-4124: Oracle High Availability Services startup failed.
CRS-4000: Command Start failed, or completed with errors.
ohasd failed to start: Inappropriate ioctl for device
ohasd failed to start at /u01/app/11.2.0/grid/crs/install/rootcrs.pl line 443.

  

For a solution, see http://www.cnblogs.com/ld1977/articles/6765341.html
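
A workaround that is widely reported for this error on Oracle Linux 6 (an assumption here, not taken from the linked article): init does not start ohasd in time, so its named pipe is never opened. While root.sh sits at "Adding daemon to inittab", open a second root session and run:

/bin/dd if=/var/tmp/.oracle/npohasd of=/dev/null bs=1024 count=1   # unblocks ohasd startup

root.sh should then continue normally.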

Check the log as prompted:

[grid@rac1 grid]$ vi /u01/app/oraInventory/logs/installActions2016-04-10_04-57-29PM.log
Search for the error in command mode: /ERROR
WARNING:
INFO: Completed Plugin named: Oracle Cluster Verification Utility
INFO: Checking name resolution setup for "scan-ip"...
INFO: ERROR:
INFO: PRVG-1101 : SCAN name "scan-ip" failed to resolve
INFO: ERROR:
INFO: PRVF-4657 : Name resolution setup check for "scan-ip" (IP address: 192.168.248.110) failed
INFO: ERROR:
INFO: PRVF-4664 : Found inconsistent name resolution entries for SCAN name "scan-ip"
INFO: Verification of SCAN VIP and Listener setup failed

The error log shows this is simply because DNS (resolv.conf) is not configured for the SCAN name; since the SCAN is resolved through /etc/hosts here, the error can be ignored.
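
To confirm the SCAN name does resolve through /etc/hosts (which is why the DNS-based check can be ignored), a quick check on both nodes (sketch):

getent hosts scan-ip   # should print the address configured in /etc/hosts for scan-ip
ping -c 1 scan-ip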

Location of the grid installation inventory:

At this point the grid clusterware installation is complete.

2. Resource checks after installing grid

Run the following commands as the grid user.
[root@rac1 ~]# su - grid

Check the CRS status:

[grid@rac1 ~]$ crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online

  

Check the Clusterware resources:

[grid@rac1 ~]$ crs_stat -t -v
Name           Type           R/RA   F/FT   Target    State     Host        
----------------------------------------------------------------------
ora....ER.lsnr ora....er.type 0/5    0/     ONLINE    ONLINE    rac1        
ora....N1.lsnr ora....er.type 0/5    0/0    ONLINE    ONLINE    rac1        
ora.OCR.dg     ora....up.type 0/5    0/     ONLINE    ONLINE    rac1        
ora.asm        ora.asm.type   0/5    0/     ONLINE    ONLINE    rac1        
ora.cvu        ora.cvu.type   0/5    0/0    ONLINE    ONLINE    rac1        
ora.gsd        ora.gsd.type   0/5    0/     OFFLINE   OFFLINE               
ora....network ora....rk.type 0/5    0/     ONLINE    ONLINE    rac1        
ora.oc4j       ora.oc4j.type  0/1    0/2    ONLINE    ONLINE    rac1        
ora.ons        ora.ons.type   0/3    0/     ONLINE    ONLINE    rac1        
ora....SM1.asm application    0/5    0/0    ONLINE    ONLINE    rac1        
ora....C1.lsnr application    0/5    0/0    ONLINE    ONLINE    rac1        
ora.rac1.gsd   application    0/5    0/0    OFFLINE   OFFLINE               
ora.rac1.ons   application    0/3    0/0    ONLINE    ONLINE    rac1        
ora.rac1.vip   ora....t1.type 0/0    0/0    ONLINE    ONLINE    rac1        
ora....SM2.asm application    0/5    0/0    ONLINE    ONLINE    rac2        
ora....C2.lsnr application    0/5    0/0    ONLINE    ONLINE    rac2        
ora.rac2.gsd   application    0/5    0/0    OFFLINE   OFFLINE               
ora.rac2.ons   application    0/3    0/0    ONLINE    ONLINE    rac2        
ora.rac2.vip   ora....t1.type 0/0    0/0    ONLINE    ONLINE    rac2        
ora.scan1.vip  ora....ip.type 0/0    0/0    ONLINE    ONLINE    rac1 

  

Check the cluster nodes:

[grid@rac1 ~]$ olsnodes -n
rac1    1
rac2    2

Check the Oracle TNS listener processes on both nodes:

[grid@rac1 ~]$ ps -ef|grep lsnr|grep -v 'grep'|grep -v 'ocfs'|awk '{print$9}'
LISTENER_SCAN1
LISTENER

  

Confirm Oracle ASM functionality for the Oracle Clusterware files:
If the OCR and voting disk files were installed on Oracle ASM, then, as the Grid Infrastructure installation owner, use the following command to confirm that the installed Oracle ASM instance is running:

[grid@rac1 ~]$ srvctl status asm -a
ASM is running on rac2,rac1
ASM is enabled.