Building an Oracle 11g RAC 64-bit Cluster on CentOS and VMware Workstation 10

1. Resource Preparation

Recently I set up Oracle 11g RAC on CentOS 5.4 in VMware Workstation 10 virtual machines and recorded the process. My first attempt was on CentOS 6.4, but it failed, mainly because the Oracle 11g RAC installation is missing some packages for the CentOS 6.4 kernel.

  This article is detailed; the problems encountered during the installation are also collected in their own chapter, the FAQ in Chapter 4.

  http://blog.chinaunix.net/xmlrpc.php?r=blog/article&id=4681351&uid=29655480

1.1.  Software Preparation

  • SecureCRT: used to connect to the Linux guests over SSH from the client machine
  • VMware Workstation 10:

VMware-workstation-full-10.0.1-1379776.exe


  • CentOS5.4: CentOS-5.4-x86_64-bin-DVD1.iso、CentOS-5.4-x86_64-bin-DVD2.iso
  • Oracle 11g: linux.x64_11gR2_database_1of2.zip, linux.x64_11gR2_database_2of2.zip, linux.x64_11gR2_grid.zip

                 http://public-yum.oracle.com

                 http://mirrors.163.com/centos/

                 https://www.centos.org/download/mirrors/ 

      http://download.chinaunix.net/download.php?id=30562&ResourceID=12271

  • Oracle ASMLib download:

        http://www.oracle.com/technetwork/server-storage/linux/downloads/rhel5-084877.html

 

1.2.  Hardware Preparation

  • Windows host environment:
  • Virtual machine environment:

2. Building the Environment

2.1. Create the Virtual Machines

 

2.1.1. Create Virtual Machine Node 1

  

2.1.2.  Create Virtual Machine Node 2

Same operations as for node 1.

 

2.2. Install the Operating System, CentOS 5.4

  Install CentOS on both virtual machines; this step is done while creating each VM node:

2.3. Configure the Shared Disks

2.3.1. Create the Shared Disks

  In cmd, change into the VMware Workstation 10.0 installation directory:

1. Create the disk that stores the Oracle Clusterware files (Oracle Cluster Registry and voting disk):

vmware-vdiskmanager.exe -c -s 4Gb -a lsilogic -t 2 "E:\SoftwareInstall\vmware\SharedDiskASM\ShareDiskOCR.vmdk"

2. Create the disks that store the Oracle shared data files:

vmware-vdiskmanager.exe -c -s 20Gb -a lsilogic -t 2 "E:\SoftwareInstall\vmware\SharedDiskASM\ShareDiskData01.vmdk"

vmware-vdiskmanager.exe -c -s 20Gb -a lsilogic -t 2 "E:\SoftwareInstall\vmware\SharedDiskASM\ShareDiskData02.vmdk"

 

3. Create the disk for the flash recovery area:

vmware-vdiskmanager.exe -c -s 5Gb -a lsilogic -t 2 "E:\SoftwareInstall\vmware\SharedDiskASM\ShareDiskFlash.vmdk"
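The four vmware-vdiskmanager invocations differ only in size and file name, so they can be generated from a small table instead of typed by hand. A minimal sketch; it only prints the commands (paste the output into cmd yourself), and the directory is the one assumed throughout this walkthrough:

```shell
#!/bin/sh
# Generate the vmware-vdiskmanager commands for the shared disks.
# DIR and the disk names mirror the examples above; adjust to your layout.
DIR='E:\SoftwareInstall\vmware\SharedDiskASM'
gen_disk_cmds() {
  # each entry: <size> <file name>
  printf '%s\n' \
    '4Gb  ShareDiskOCR.vmdk' \
    '20Gb ShareDiskData01.vmdk' \
    '20Gb ShareDiskData02.vmdk' \
    '5Gb  ShareDiskFlash.vmdk' |
  while read -r size name; do
    echo "vmware-vdiskmanager.exe -c -s $size -a lsilogic -t 2 \"$DIR\\$name\""
  done
}
gen_disk_cmds
```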

2.3.2. Edit the Virtual Machine Configuration Files

  Shut down both virtual machines, then open <vm-name>.vmx with a text editor in the VM's directory, e.g. E:\SoftwareInstall\vmware\linuxrac1, edit the *.vmx file directly, and append the following lines (on every VM node):

scsi1:1.deviceType = "disk"

scsi1:1.present = "TRUE"

scsi1:1.fileName = "E:\SoftwareInstall\vmware\SharedDiskASM\ShareDiskOCR.vmdk"

scsi1:1.mode = "independent-persistent"

scsi1:1.redo = ""

 

scsi1:2.deviceType = "disk"

scsi1:2.present = "TRUE"

scsi1:2.fileName = "E:\SoftwareInstall\vmware\SharedDiskASM\ShareDiskData01.vmdk"

scsi1:2.mode = "independent-persistent"

scsi1:2.redo = ""

 

scsi1:3.deviceType = "disk"

scsi1:3.present = "TRUE"

scsi1:3.fileName = "E:\SoftwareInstall\vmware\SharedDiskASM\ShareDiskData02.vmdk"

scsi1:3.mode = "independent-persistent"

scsi1:3.redo = ""

 

scsi1:4.deviceType = "disk"

scsi1:4.present = "TRUE"

scsi1:4.fileName = "E:\SoftwareInstall\vmware\SharedDiskASM\ShareDiskFlash.vmdk"

scsi1:4.mode = "independent-persistent"

scsi1:4.redo = ""

scsi1.pciSlotNumber = "37"

usb:0.present = "TRUE"

usb:0.deviceType = "hid"

usb:0.port = "0"

usb:0.parent = "-1"

  Note: no line in this file may be duplicated, or an error is reported; also, do not change the file's encoding (if you are prompted to save in another encoding such as Unicode, the pasted text is malformed and the lines must be typed in by hand). Finally, start the VMware program again (the VMware UI must be restarted) and check the Devices section of each node VM: the attached disks should be visible even before the VMs are powered on. Then power on and confirm again. If the disks are not shown while the VMs are powered off, the syntax written into the .vmx file is wrong; type it in manually rather than pasting.

  Of course you can also add the disks through the VMware GUI; note that the shared disks must be attached to SCSI 1, not SCSI 0. I used the GUI here. Either way, whether created graphically or from the command line, each disk added to the VM should have the properties shown in the figure below.

When selecting the disks, pick SCSI 1:1, SCSI 1:2, and so on; with four shared disks here that is SCSI 1:1, SCSI 1:2, SCSI 1:3, SCSI 1:4.
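The repetitive scsi1:N stanzas above can be generated rather than hand-typed, which avoids the duplicate-line errors the note warns about. A minimal sketch that emits the stanzas for the four disks used in this walkthrough; redirect its output and paste it into each node's .vmx:

```shell
#!/bin/sh
# Emit the scsi1:N .vmx stanzas for the shared disks described above.
DIR='E:\SoftwareInstall\vmware\SharedDiskASM'
gen_vmx() {
  n=1
  for disk in ShareDiskOCR ShareDiskData01 ShareDiskData02 ShareDiskFlash; do
    cat <<EOF
scsi1:$n.deviceType = "disk"
scsi1:$n.present = "TRUE"
scsi1:$n.fileName = "$DIR\\$disk.vmdk"
scsi1:$n.mode = "independent-persistent"
scsi1:$n.redo = ""

EOF
    n=$((n + 1))
  done
}
gen_vmx
```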

 

2.4. Install the JDK

2.4.1. Obtain the JDK

  Search Baidu for: JDK download

2.4.2. Upload the JDK

put E:\軟件安裝文件\jdk-8u11-linux-x64.rpm /home/linuxrac1/Downloads

put E:\軟件安裝文件\linux.x64_11gR2_database_1of2.zip /home/linuxrac1/Downloads

put E:\軟件安裝文件\linux.x64_11gR2_database_2of2.zip /home/linuxrac1/Downloads


2.4.3. Install the JDK

 

Install the JDK:

[linuxrac1@localhost Downloads]$ su root

Password:

[root@localhost Downloads]# rpm -ivh jdk-8u11-linux-x64.rpm

Preparing...          ###########################################[100%]

1:jdk                   ###########################################[100%]

Unpacking JAR files...

          rt.jar...

          jsse.jar...

          charsets.jar...

          tools.jar...

          localedata.jar...

          jfxrt.jar...

 

[root@localhost Downloads]#

 


 

2.4.4. Configure the JDK Environment Variables

 

Check the directory where the JDK was installed: /usr/java/

[root@linuxrac1 ~]# cd /usr/java/

[root@linuxrac1 java]# ll

total 16

lrwxrwxrwx 1 root root   16 Sep  3 18:40 default -> /usr/java/latest

drwxr-xr-x 8 root root 4096 Sep  3 18:40 jdk1.8.0_11

lrwxrwxrwx 1 root root   21 Sep  3 18:40 latest -> /usr/java/jdk1.8.0_11

[root@linuxrac1 java]# cd jdk1.8.0_11

[root@linuxrac1 jdk1.8.0_11]# ls

bin        javafx-src.zip  man          THIRDPARTYLICENSEREADME-JAVAFX.txt

COPYRIGHT  jre             README.html  THIRDPARTYLICENSEREADME.txt

db         lib             release

include    LICENSE         src.zip

[root@linuxrac1 jdk1.8.0_11]# pwd

/usr/java/jdk1.8.0_11

 

Edit /etc/profile:

[root@linuxrac1 ~]# cd /etc

[root@linuxrac1 etc]# ls profile

profile

[root@linuxrac1 etc]# vi profile

# /etc/profile

 

# System wide environment and startup programs, for login setup

# Functions and aliases go in /etc/bashrc

pathmunge () {

        if ! echo $PATH | /bin/egrep -q "(^|:)$1($|:)" ; then

           if [ "$2" = "after" ] ; then

              PATH=$PATH:$1

           else

              PATH=$1:$PATH

           fi

        fi

}

 

# ksh workaround

if [ -z "$EUID" -a -x /usr/bin/id ]; then

        EUID=`id -u`

        UID=`id -ru`

fi

 

# Path manipulation

# if [ "$EUID" = "0" ]; then

        pathmunge /sbin

        pathmunge /usr/sbin

        pathmunge /usr/local/sbin

# fi

# set environment by HondaHsu 2014

JAVA_HOME=/usr/java/jdk1.8.0_11

JRE_HOME=/usr/java/jdk1.8.0_11/jre

PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin

CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JRE_HOME/lib

export JAVA_HOME JRE_HOME PATH CLASSPATH

 

[root@linuxrac1 etc]# java -version

java version "1.8.0_11"

Java(TM) SE Runtime Environment (build 1.8.0_11-b12)

Java HotSpot(TM) 64-Bit Server VM (build 25.11-b03, mixed mode)
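The JAVA_HOME block above can also be appended to the profile non-interactively instead of hand-editing with vi. A minimal sketch; it writes to a scratch file rather than the real /etc/profile, and the JDK path is the install location shown above:

```shell
#!/bin/sh
# Append the JDK environment block to a profile file (scratch copy here;
# point PROFILE at /etc/profile to do it for real, as root).
PROFILE="$(mktemp)"
cat >> "$PROFILE" <<'EOF'
JAVA_HOME=/usr/java/jdk1.8.0_11
JRE_HOME=/usr/java/jdk1.8.0_11/jre
PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin
CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JRE_HOME/lib
export JAVA_HOME JRE_HOME PATH CLASSPATH
EOF
grep -q '^JAVA_HOME=' "$PROFILE" && echo "profile updated: $PROFILE"
```

Remember to `source /etc/profile` (or log in again) before checking `java -version`.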

2.5. Network Configuration

2.5.1. Configure the Network

  An Oracle RAC database uses both a public and a private network, so the networks and IP addresses must be planned. The table below lists the IP addresses, host names, and network types for the RAC database to be installed:

Rac1

  NIC     IP address       Netmask          Network type
  eth0    10.10.97.161     255.255.255.0    public
  eth1    192.168.2.116    255.255.255.0    private
  (vip)   10.10.97.181     255.255.255.0    virtual

Name resolution is via /etc/hosts:

#eth0 public

10.10.97.161  linuxrac1

10.10.97.167  linuxrac2

#eth1 private

192.168.2.116  linuxrac1-priv

192.168.2.216  linuxrac2-priv

#virtual

10.10.97.181  linuxrac1-vip

10.10.97.183  linuxrac2-vip

#scan

10.10.97.193  linuxrac-scan

 

 

Rac2

  NIC     IP address       Netmask          Network type
  eth0    10.10.97.167     255.255.255.0    public
  eth1    192.168.2.216    255.255.255.0    private
  (vip)   10.10.97.183     255.255.255.0    virtual

Name resolution is via /etc/hosts:

#eth0 public

10.10.97.161  linuxrac1

10.10.97.167  linuxrac2

#eth1 private

192.168.2.116  linuxrac1-priv

192.168.2.216  linuxrac2-priv

#virtual

10.10.97.181  linuxrac1-vip

10.10.97.183  linuxrac2-vip

#scan

10.10.97.193  linuxrac-scan

  The public and private IPs are configured on the network interfaces; the virtual IPs are not configured on a NIC.
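A quick consistency check that /etc/hosts contains every name the cluster needs (public, -priv, and -vip for each node, plus the SCAN name) catches typos like a missing -priv suffix early. A minimal sketch, run here against a scratch copy built from the planning tables above; point it at the real /etc/hosts on each node:

```shell
#!/bin/sh
# Verify that all RAC host names from the plan are present in a hosts file.
HOSTS="$(mktemp)"
cat > "$HOSTS" <<'EOF'
10.10.97.161  linuxrac1
10.10.97.167  linuxrac2
192.168.2.116  linuxrac1-priv
192.168.2.216  linuxrac2-priv
10.10.97.181  linuxrac1-vip
10.10.97.183  linuxrac2-vip
10.10.97.193  linuxrac-scan
EOF
check_hosts() {
  missing=0
  for name in linuxrac1 linuxrac2 linuxrac1-priv linuxrac2-priv \
              linuxrac1-vip linuxrac2-vip linuxrac-scan; do
    grep -qw "$name" "$1" || { echo "missing: $name"; missing=1; }
  done
  return $missing
}
check_hosts "$HOSTS" && echo "hosts file OK"
```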

 

  Bring the configured IPs into effect with ifdown and ifup:

[root@linuxrac2 Desktop]# ifdown eth0

Device state: 3 (disconnected)

[root@linuxrac2 Desktop]# ifdown eth1

Device state: 3 (disconnected)

[root@linuxrac2 Desktop]# ifup eth0

Active connection state: activating

Active connection path: /org/freedesktop/NetworkManager/ActiveConnection/2

state: activated

Connection activated

[root@linuxrac2 Desktop]# ifup eth1

Active connection state: activated

Active connection path: /org/freedesktop/NetworkManager/ActiveConnection/3

2.6. Install the Packages Oracle Requires

2.6.1. Check for the Required rpm Packages

 

[root@localhost /]# rpm -q binutils compat-libstdc++-33 elfutils-libelf elfutils-libelf-devel expat gcc

 

2.6.2. Prepare the Required Packages

  • Look for the missing dependency packages in the CentOS-5.4-x86_64-bin-DVD1\Packages folder of the Linux installation media:

 

  • Update with yum from the http://mirror.centos.org mirror: http://mirror.centos.org/centos/5/os/x86_64/CentOS/

 

  Alternatively, you can point the CentOS repositories under yum.repos.d at another mirror, e.g. the 163 mirror, as follows:

  1)     Back up the system's original CentOS-Base.repo file:

[root@localhost /]#cd /etc/yum.repos.d/

[root@localhost /]#cp -a CentOS-Base.repo CentOS-Base.repo.bak

  2)     Open the CentOS-Base.repo file with vi:

[root@localhost /]#vim CentOS-Base.repo

  Then press Insert to enter editing mode.

  3)     The modified CentOS-Base.repo file should read as follows:


# CentOS-Base.repo 

# This file uses a new mirrorlist system developed by Lance Davis for CentOS. 
# The mirror system uses the connecting IP address of the client and the 
# update status of each mirror to pick mirrors that are updated to and 
# geographically close to the client. You should use this for CentOS updates 
# unless you are manually picking other mirrors. 

# If the mirrorlist= does not work for you, as a fall back you can try the 
# remarked out baseurl= line instead. 

#

[base] 
name=CentOS-$releasever - Base 
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=os 
#baseurl=http://mirror.centos.org/centos/$releasever/os/$basearch/ 
baseurl=http://mirrors.163.com/centos/$releasever/os/$basearch/ 
gpgcheck=1 
#gpgkey=http://mirror.centos.org/centos/RPM-GPG-KEY-CentOS-5

gpgkey=http://mirrors.163.com/centos/RPM-GPG-KEY-CentOS-5

#released updates 
[updates] 
name=CentOS-$releasever - Updates 
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=updates 
#baseurl=http://mirror.centos.org/centos/$releasever/updates/$basearch/ 
baseurl=http://mirrors.163.com/centos/$releasever/updates/$basearch/ 
gpgcheck=1 
#gpgkey=http://mirror.centos.org/centos/RPM-GPG-KEY-CentOS-5

gpgkey=http://mirrors.163.com/centos/RPM-GPG-KEY-CentOS-5

#packages used/produced in the build but not released 
[addons] 
name=CentOS-$releasever - Addons 
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=addons 
#baseurl=http://mirror.centos.org/centos/$releasever/addons/$basearch/ 
baseurl=http://mirrors.163.com/centos/$releasever/addons/$basearch/ 
gpgcheck=1 
#gpgkey=http://mirror.centos.org/centos/RPM-GPG-KEY-CentOS-5

gpgkey=http://mirrors.163.com/centos/RPM-GPG-KEY-CentOS-5

#additional packages that may be useful 
[extras] 
name=CentOS-$releasever - Extras 
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=extras 
#baseurl=http://mirror.centos.org/centos/$releasever/extras/$basearch/ 
baseurl=http://mirrors.163.com/centos/$releasever/extras/$basearch/ 
gpgcheck=1 
#gpgkey=http://mirror.centos.org/centos/RPM-GPG-KEY-CentOS-5

gpgkey=http://mirrors.163.com/centos/RPM-GPG-KEY-CentOS-5

#additional packages that extend functionality of existing packages 
[centosplus] 
name=CentOS-$releasever - Plus 
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=centosplus 
#baseurl=http://mirror.centos.org/centos/$releasever/centosplus/$basearch/ 
baseurl=http://mirrors.163.com/centos/$releasever/centosplus/$basearch/ 
gpgcheck=1 
enabled=0 
#gpgkey=http://mirror.centos.org/centos/RPM-GPG-KEY-CentOS-5

gpgkey=http://mirrors.163.com/centos/RPM-GPG-KEY-CentOS-5

  To save: press Esc, then ':', then type 'wq'.

2.6.3. Upload to the Linux VMs

Upload the packages Oracle depends on via SecureCRT:

put E:\temp\rpms.zip /home/linuxrac2/Downloads

 

  Then unzip the uploaded archive:

[root@linuxrac2 Downloads]# unzip rpms.zip

2.6.4. Install the Required Packages

  For example:

[root@localhost rpms]# rpm -ivh pdksh-5.2.14-37.el5_8.1.x86_64.rpm

[root@localhost rpms]# rpm -ivh unixODBC-2.2.14-12.el6_3.x86_64.rpm

[root@localhost rpms]# rpm -ivh unixODBC-devel-2.2.14-12.el6_3.x86_64.rpm

[root@localhost rpms]# rpm -ivh gcc-4.4.7-3.el6.x86_64.rpm

 

  • When installing 64-bit Oracle, the 32-bit compat-libstdc++ is also required, but a direct rpm install fails on dependencies, and installing that many packages by hand is far too tedious. Worse, the names rpm reports do not map one-to-one onto package names.

[root@localhost  rpms]#rpm -ivh compat-libstdc++-33-3.2.3-69.el6.i686.rpm
error: Failed dependencies:
    libc.so.6 is needed by compat-libstdc++-33-3.2.3-69.el6.i686
    libc.so.6(GLIBC_2.0) is needed by compat-libstdc++-33-3.2.3-69.el6.i686
    libc.so.6(GLIBC_2.1) is needed by compat-libstdc++-33-3.2.3-69.el6.i686
    libc.so.6(GLIBC_2.1.3) is needed by compat-libstdc++-33-3.2.3-69.el6.i686
    libc.so.6(GLIBC_2.2) is needed by compat-libstdc++-33-3.2.3-69.el6.i686
    libc.so.6(GLIBC_2.3) is needed by compat-libstdc++-33-3.2.3-69.el6.i686
    libgcc_s.so.1 is needed by compat-libstdc++-33-3.2.3-69.el6.i686
    libgcc_s.so.1(GCC_3.0) is needed by compat-libstdc++-33-3.2.3-69.el6.i686
    libgcc_s.so.1(GCC_3.3) is needed by compat-libstdc++-33-3.2.3-69.el6.i686
    libgcc_s.so.1(GLIBC_2.0) is needed by compat-libstdc++-33-3.2.3-69.el6.i686
    libm.so.6 is needed by compat-libstdc++-33-3.2.3-69.el6.i686

 

  1)     Create a mount point for the installation DVD
      mkdir /media/cdrom
      This exact path is required, because it is the DVD path configured in the system.

  2)     Mount the DVD (for the device type and contents, see the description in CentOS-Media.repo)
      mount /dev/dvd  /media/cdrom/

  3)     Now yum can use it.
      Command format:
      yum --disablerepo=\* --enablerepo=c6-media [command]
      For example, to list package groups:
      yum --disablerepo=\* --enablerepo=c6-media grouplist
      Install the X11 group:
      yum --disablerepo=\* --enablerepo=c6-media groupinstall "X11"
      For a single package, e.g. perl:
      yum --disablerepo=\* --enablerepo=c6-media install perl

  • So we can use the command below instead; it pulls in all dependent packages automatically:

[root@localhost rpms]# yum --disablerepo=\* --enablerepo=c6-media install compat-libstdc++-33-3.2.3-69.el6.i686

  1. Download the repo file:

[root@localhost etc]# cd yum.repos.d

[root@localhost yum.repos.d]# wget http://mirrors.163.com/.help/CentOS6-Base-163.repo

 

  2. Back up and replace the system's repo file

[root@localhost yum.repos.d]# mv CentOS-Base.repo CentOS-Base.repo.bak

[root@localhost yum.repos.d]# mv CentOS6-Base-163.repo CentOS-Base.repo

  3. Refresh the yum repositories

[root@localhost yum.repos.d]# yum clean all

[root@localhost yum.repos.d]# yum makecache

[root@localhost yum.repos.d]# yum update
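Steps 1 through 3 above can be wrapped in one function so the backup and swap always happen together. A minimal sketch, demonstrated in a temporary directory with dummy files; point the directory argument at /etc/yum.repos.d to do it for real, then run the yum clean/makecache/update commands shown above:

```shell
#!/bin/sh
# Back up CentOS-Base.repo and put the downloaded mirror repo in its place.
swap_repo() {
  repo_dir=$1
  new_repo=$2
  cp -a "$repo_dir/CentOS-Base.repo" "$repo_dir/CentOS-Base.repo.bak"
  mv "$repo_dir/$new_repo" "$repo_dir/CentOS-Base.repo"
}

# Demonstration in a scratch directory (the file contents are placeholders):
REPO_DIR="$(mktemp -d)"
echo 'baseurl=http://mirror.centos.org/...' > "$REPO_DIR/CentOS-Base.repo"
echo 'baseurl=http://mirrors.163.com/...'  > "$REPO_DIR/CentOS6-Base-163.repo"
swap_repo "$REPO_DIR" CentOS6-Base-163.repo
```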

 

2.7. Configure Resources and Parameters

2.7.1. Change the Host Names

[root@linuxrac1 ~]# cd /etc/sysconfig

[root@linuxrac1 sysconfig]# vi network

NETWORKING=yes

NETWORKING_IPV6=no

HOSTNAME=linuxrac1

[root@linuxrac1 sysconfig]#

 

[root@linuxrac2 ~]# cd /etc/sysconfig

[root@linuxrac2 sysconfig]# vi network

NETWORKING=yes

NETWORKING_IPV6=no

HOSTNAME=linuxrac2

[root@linuxrac2 sysconfig]#

 

2.7.2. Configure Users, Groups, Directories, and Permissions

  Before creating the groups, check /etc/group and /etc/passwd to make sure every node uses the same uid and gid for each user and group (you can also pass an explicit id when creating a group, e.g. groupadd -g 501 oinstall).

  According to the plan:

  The Grid Infrastructure OS user is grid, with primary group oinstall and secondary groups asmadmin, asmdba, asmoper.

  The Oracle RAC OS user is oracle, with primary group oinstall and secondary groups dba, oper, asmdba.

[root@localhost /]# groupadd oinstall

[root@localhost /]# groupadd dba

[root@localhost /]# groupadd oper

[root@localhost /]# groupadd asmadmin

[root@localhost /]# groupadd asmdba

[root@localhost /]# groupadd asmoper

[root@localhost /]# useradd -g oinstall -G dba,asmdba,asmadmin,asmoper grid

[root@localhost /]# useradd -g oinstall -G dba,oper,asmdba oracle

[root@localhost /]# echo -n oracle|passwd --stdin grid

Changing password for user grid.

passwd: all authentication tokens updated successfully.

[root@localhost /]# echo -n oracle|passwd --stdin oracle

Changing password for user oracle.

passwd: all authentication tokens updated successfully.

[root@localhost /]# mkdir -p /u01/app/11.2.0/grid

[root@localhost /]# mkdir -p /u01/app/grid

[root@localhost /]# mkdir -p /u01/app/oracle

[root@localhost /]# chown grid:oinstall /u01/app/11.2.0/grid

[root@localhost /]# chown grid:oinstall /u01/app/grid

[root@localhost /]# chown -R oracle:oinstall /u01/app/oracle

[root@localhost /]# chmod -R 777 /u01/

[root@localhost /]# chown -R grid:oinstall /u01
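Since the text stresses that uids and gids must match on every node, it is easiest to generate the groupadd/useradd commands once with explicit ids and run the same script on each node. A minimal sketch; the id numbers (501 onward, 1100/1101) are assumptions for illustration, while the group memberships mirror the commands above. It only prints the commands, so they can be reviewed before piping to sh:

```shell
#!/bin/sh
# Emit groupadd/useradd commands with fixed ids so all nodes stay consistent.
gen_id_cmds() {
  gid=501
  for g in oinstall dba oper asmadmin asmdba asmoper; do
    echo "groupadd -g $gid $g"
    gid=$((gid + 1))
  done
  echo "useradd -u 1100 -g oinstall -G dba,asmdba,asmadmin,asmoper grid"
  echo "useradd -u 1101 -g oinstall -G dba,oper,asmdba oracle"
}
gen_id_cmds
```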

 

2.7.3. Modify the Kernel Parameters in /etc/sysctl.conf

[root@linuxrac1 etc]# vi sysctl.conf

# add parameter for oracle

fs.aio-max-nr = 1048576

fs.file-max = 6815744

kernel.shmall = 2097152

kernel.shmmax = 1073741824

kernel.shmmni = 4096

kernel.sem = 250 32000 100 128

net.ipv4.ip_local_port_range = 9000 65500

net.core.rmem_default = 262144

net.core.rmem_max = 4194304

net.core.wmem_default = 262144

net.core.wmem_max = 1048576
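Before running `sysctl -p`, it is worth checking that the fragment actually contains every parameter, since a missed line only surfaces much later in the installer's prerequisite checks. A minimal sketch, run against a scratch copy of the block above:

```shell
#!/bin/sh
# Verify that a sysctl.conf fragment contains all parameters listed above.
CONF="$(mktemp)"
cat > "$CONF" <<'EOF'
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmall = 2097152
kernel.shmmax = 1073741824
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
EOF
check_sysctl() {
  for key in fs.aio-max-nr fs.file-max kernel.shmall kernel.shmmax \
             kernel.shmmni kernel.sem net.ipv4.ip_local_port_range \
             net.core.rmem_default net.core.rmem_max \
             net.core.wmem_default net.core.wmem_max; do
    grep -q "^$key " "$1" || { echo "missing: $key"; return 1; }
  done
  echo "all parameters present"
}
check_sysctl "$CONF"
```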

 

2.7.4. Configure /etc/security/limits.conf

[root@linuxrac1]# vi /etc/security/limits.conf

#add parameter for oracle and grid

oracle soft nproc   2047

oracle hard nproc  16384

oracle soft nofile   1024

oracle hard nofile  65536

oracle soft stack   10240

grid soft nproc    2047

grid hard nproc   16384

grid soft nofile    1024

grid hard nofile   65536

grid soft stack    10240

 

2.7.5. Configure /etc/profile

[root@linuxrac1 etc]# vi profile

# for oracle

if [ $USER = "oracle" ] || [ $USER = "grid" ]; then

        if [ $SHELL = "/bin/ksh" ]; then

                ulimit -p 16384

                ulimit -n 65536

        else

                ulimit -u 16384 -n 65536

        fi

        umask 022

fi

#############################

export PATH=$PATH:/u01/app/11.2.0/grid/bin

#color of grep

alias grep='grep --color=auto'

2.8. Configure the User Environments

2.8.1. Configure Node RAC1

Configure the grid user's environment variables:

cat >> /home/grid/.bash_profile <<EOF

export TMP=/tmp;

export TMPDIR=\$TMP;

export ORACLE_HOSTNAME=linuxrac1;

export ORACLE_SID=+ASM1;

export ORACLE_BASE=/u01/app/grid;

export ORACLE_HOME=/u01/app/11.2.0/grid;

export NLS_DATE_FORMAT="yy-mm-dd HH24:MI:SS";

export PATH=\$ORACLE_HOME/bin:\$PATH;

export NLS_LANG=AMERICAN_AMERICA.ZHS16GBK;

EOF

 

Configure the oracle user's environment variables:

cat >> /home/oracle/.bash_profile <<EOF

export TMP=/tmp;

export TMPDIR=\$TMP;

export ORACLE_HOSTNAME=linuxrac1;

export ORACLE_BASE=/u01/app/oracle;

export ORACLE_HOME=\$ORACLE_BASE/product/11.2.0/db_1;

export ORACLE_UNQNAME=prod;

export ORACLE_SID=prod1;

export ORACLE_TERM=xterm;

export PATH=/usr/sbin:\$PATH;

export PATH=\$ORACLE_HOME/bin:\$PATH;

export LD_LIBRARY_PATH=\$ORACLE_HOME/lib:/lib:/usr/lib;

export CLASSPATH=\$ORACLE_HOME/JRE:\$ORACLE_HOME/jlib:\$ORACLE_HOME/rdbms/jlib;

export NLS_DATE_FORMAT="yyyy-mm-dd HH24:MI:SS";

export NLS_LANG=AMERICAN_AMERICA.ZHS16GBK;

EOF

2.8.2. Configure Node RAC2

Configure the grid user's environment variables:

cat >> /home/grid/.bash_profile <<EOF

export TMP=/tmp;

export TMPDIR=\$TMP;

export ORACLE_HOSTNAME=linuxrac2;

export ORACLE_SID=+ASM2;

export ORACLE_BASE=/u01/app/grid;

export ORACLE_HOME=/u01/app/11.2.0/grid;

export NLS_DATE_FORMAT="yy-mm-dd HH24:MI:SS";

export PATH=\$ORACLE_HOME/bin:\$PATH;

export NLS_LANG=AMERICAN_AMERICA.ZHS16GBK;

EOF

 


Configure the oracle user's environment variables:

cat >> /home/oracle/.bash_profile <<EOF

export TMP=/tmp;

export TMPDIR=\$TMP;

export ORACLE_HOSTNAME=linuxrac2;

export ORACLE_BASE=/u01/app/oracle;

export ORACLE_HOME=\$ORACLE_BASE/product/11.2.0/db_1;

export ORACLE_UNQNAME=prod;

export ORACLE_SID=prod2;

export ORACLE_TERM=xterm;

export PATH=/usr/sbin:\$PATH;

export PATH=\$ORACLE_HOME/bin:\$PATH;

export LD_LIBRARY_PATH=\$ORACLE_HOME/lib:/lib:/usr/lib;

export CLASSPATH=\$ORACLE_HOME/JRE:\$ORACLE_HOME/jlib:\$ORACLE_HOME/rdbms/jlib;

export NLS_DATE_FORMAT="yyyy-mm-dd HH24:MI:SS";

export NLS_LANG=AMERICAN_AMERICA.ZHS16GBK;

EOF
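The per-node profile blocks differ only in the node number (hostname, +ASMn, prodn), so they can be generated from one template instead of maintained twice. A minimal sketch for the grid user; `gen_grid_env 1` prints node 1's settings and `gen_grid_env 2` node 2's, and the oracle user's block can be parameterized the same way. The values mirror the blocks above:

```shell
#!/bin/sh
# Generate the grid user's .bash_profile block for a given node number.
gen_grid_env() {
  n=$1
  cat <<EOF
export TMP=/tmp
export TMPDIR=\$TMP
export ORACLE_HOSTNAME=linuxrac$n
export ORACLE_SID=+ASM$n
export ORACLE_BASE=/u01/app/grid
export ORACLE_HOME=/u01/app/11.2.0/grid
export NLS_DATE_FORMAT="yy-mm-dd HH24:MI:SS"
export PATH=\$ORACLE_HOME/bin:\$PATH
export NLS_LANG=AMERICAN_AMERICA.ZHS16GBK
EOF
}
gen_grid_env 1
```

On each node, append the matching output, e.g. `gen_grid_env 1 >> /home/grid/.bash_profile` on linuxrac1.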

2.9. Configure User Equivalence (Optional)

  With Oracle 11g R2, SSH user equivalence can also be configured during installation.

2.9.1. grid User Equivalence

1. Run all of the following as the grid user. On both nodes, create a .ssh directory in grid's home directory and set its permissions:

linuxrac1

[grid@linuxrac1 ~]$mkdir ~/.ssh

[grid@linuxrac1 ~]$chmod 755 ~/.ssh

[grid@linuxrac1 ~]$ssh-keygen -t rsa

Generating public/private rsa key pair.

Enter file in which to save the key (/home/grid/.ssh/id_rsa):

Enter passphrase (empty for no passphrase):

Enter same passphrase again:

Your identification has been saved in /home/grid/.ssh/id_rsa.

Your public key has been saved in /home/grid/.ssh/id_rsa.pub.

The key fingerprint is:

7a:7b:62:31:da:07:88:0d:22:46:46:28:d1:cc:87:e1 grid@linuxrac1

[grid@linuxrac1 ~]$ssh-keygen -t dsa

Generating public/private dsa key pair.

Enter file in which to save the key (/home/grid/.ssh/id_dsa):

Enter passphrase (empty for no passphrase):

Enter same passphrase again:

Your identification has been saved in /home/grid/.ssh/id_dsa.

Your public key has been saved in /home/grid/.ssh/id_dsa.pub.

The key fingerprint is:

19:3b:fc:23:85:8d:f4:58:7d:f6:fd:80:99:ce:f8:52 grid@linuxrac1

 

linuxrac2

[grid@linuxrac2 ~]$ mkdir ~/.ssh

[grid@linuxrac2 ~]$ chmod 755 ~/.ssh

[grid@linuxrac2 ~]$ ssh-keygen -t rsa

Generating public/private rsa key pair.

Enter file in which to save the key (/home/grid/.ssh/id_rsa):

Enter passphrase (empty for no passphrase):

Enter same passphrase again:

Your identification has been saved in /home/grid/.ssh/id_rsa.

Your public key has been saved in /home/grid/.ssh/id_rsa.pub.

The key fingerprint is:

69:8c:94:2b:2b:a4:8d:33:82:8f:b0:49:03:a1:1a:b9 grid@linuxrac2

 

[grid@linuxrac2 ~]$ ssh-keygen -t dsa

Generating public/private dsa key pair.

Enter file in which to save the key (/home/grid/.ssh/id_dsa):

Enter passphrase (empty for no passphrase):

Enter same passphrase again:

Your identification has been saved in /home/grid/.ssh/id_dsa.

Your public key has been saved in /home/grid/.ssh/id_dsa.pub.

The key fingerprint is:

1f:4d:e7:3f:c7:4d:d8:f0:55:f0:eb:c1:ea:74:93:24 grid@linuxrac2

 

All of the above used the defaults; just press Enter at every prompt.

 

linuxrac1

cat ~/.ssh/*.pub >> ~/.ssh/authorized_keys

ssh grid@linuxrac2 cat ~/.ssh/*.pub >> ~/.ssh/authorized_keys

ssh grid@linuxrac2 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

ssh grid@linuxrac2 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys

[grid@linuxrac1 ~]$ cd .ssh

[grid@linuxrac1 .ssh]$ ll

total 48

-rw-r--r-- 1 grid oinstall 2000 Sep 25 00:48 authorized_keys

-rw------- 1 grid oinstall  668 Sep 25 00:43 id_dsa

-rw-r--r-- 1 grid oinstall  604 Sep 25 00:43 id_dsa.pub

-rw------- 1 grid oinstall 1675 Sep 25 00:42 id_rsa

-rw-r--r-- 1 grid oinstall  396 Sep 25 00:42 id_rsa.pub

-rw-r--r-- 1 grid oinstall  404 Sep 25 00:48 known_hosts

 

linuxrac2

cat ~/.ssh/*.pub >> ~/.ssh/authorized_keys

ssh grid@linuxrac1 cat ~/.ssh/*.pub >> ~/.ssh/authorized_keys

ssh grid@linuxrac1 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

ssh grid@linuxrac1 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys

2. Establish equivalence; run on both nodes, rac1 and rac2:

[grid@linuxrac1 ~]$ exec ssh-agent $SHELL

[grid@linuxrac1 ~]$ ssh-add

Identity added: /home/grid/.ssh/id_rsa (/home/grid/.ssh/id_rsa)

Identity added: /home/grid/.ssh/id_dsa (/home/grid/.ssh/id_dsa)

[grid@linuxrac1 ~]$ ssh linuxrac1 date

[grid@linuxrac1 ~]$ ssh linuxrac1-priv date

[grid@linuxrac1 ~]$ ssh linuxrac2 date

[grid@linuxrac1 ~]$ ssh linuxrac2-priv date

ssh linuxrac1 date; ssh linuxrac2 date

[grid@linuxrac2 ~]$ exec ssh-agent $SHELL

[grid@linuxrac2 ~]$ ssh-add

Identity added: /home/grid/.ssh/id_rsa (/home/grid/.ssh/id_rsa)

Identity added: /home/grid/.ssh/id_dsa (/home/grid/.ssh/id_dsa)

[grid@linuxrac2 ~]$ ssh linuxrac1 date

[grid@linuxrac2 ~]$ ssh linuxrac1-priv date

[grid@linuxrac2 ~]$ ssh linuxrac2 date

[grid@linuxrac2 ~]$ ssh linuxrac2-priv date

2.9.2. oracle User Equivalence

Run all of the following as the oracle user.

linuxrac1

[oracle@linuxrac1 ~]$ mkdir ~/.ssh

[oracle@linuxrac1 ~]$ chmod 755 ~/.ssh

[oracle@linuxrac1 ~]$ ssh-keygen -t rsa

Generating public/private rsa key pair.

Enter file in which to save the key (/home/oracle/.ssh/id_rsa):

Enter passphrase (empty for no passphrase):

Enter same passphrase again:

Your identification has been saved in /home/oracle/.ssh/id_rsa.

Your public key has been saved in /home/oracle/.ssh/id_rsa.pub.

The key fingerprint is:

e9:2b:1a:2b:ac:5f:91:be:0f:84:17:d7:bd:b7:15:d2 oracle@linuxrac1

[oracle@linuxrac1 ~]$ ssh-keygen -t dsa

Generating public/private dsa key pair.

Enter file in which to save the key (/home/oracle/.ssh/id_dsa):

Enter passphrase (empty for no passphrase):

Enter same passphrase again:

Your identification has been saved in /home/oracle/.ssh/id_dsa.

Your public key has been saved in /home/oracle/.ssh/id_dsa.pub.

The key fingerprint is:

f5:0f:f5:0c:55:37:6a:08:ef:06:07:37:65:25:4a:15 oracle@linuxrac1

 

linuxrac2

[oracle@linuxrac2 ~]$ mkdir ~/.ssh

[oracle@linuxrac2 ~]$ chmod 755 ~/.ssh

[oracle@linuxrac2 ~]$ ssh-keygen -t rsa

Generating public/private rsa key pair.

Enter file in which to save the key (/home/oracle/.ssh/id_rsa):

Enter passphrase (empty for no passphrase):

Enter same passphrase again:

Your identification has been saved in /home/oracle/.ssh/id_rsa.

Your public key has been saved in /home/oracle/.ssh/id_rsa.pub.

The key fingerprint is:

56:47:a0:94:67:44:d9:31:12:57:44:08:9d:84:25:a1 oracle@linuxrac2

 

[oracle@linuxrac2 ~]$ ssh-keygen -t dsa

Generating public/private dsa key pair.

Enter file in which to save the key (/home/oracle/.ssh/id_dsa):

Enter passphrase (empty for no passphrase):

Enter same passphrase again:

Your identification has been saved in /home/oracle/.ssh/id_dsa.

Your public key has been saved in /home/oracle/.ssh/id_dsa.pub.

The key fingerprint is:

ae:f0:06:77:62:33:86:dc:f4:0d:d9:c6:38:5e:cb:61 oracle@linuxrac2

 

All of the above used the defaults; just press Enter at every prompt.

linuxrac1

cat ~/.ssh/*.pub >> ~/.ssh/authorized_keys

ssh oracle@linuxrac2 cat ~/.ssh/*.pub >> ~/.ssh/authorized_keys

ssh oracle@linuxrac2 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

ssh oracle@linuxrac2 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys

[oracle@linuxrac1 ~]$ cd .ssh

[oracle@linuxrac1 .ssh]$ ll

total 48

-rw-r--r-- 1 oracle oinstall 2008 Sep 25 02:20 authorized_keys

-rw------- 1 oracle oinstall  668 Sep 25 02:09 id_dsa

-rw-r--r-- 1 oracle oinstall  606 Sep 25 02:09 id_dsa.pub

-rw------- 1 oracle oinstall 1675 Sep 25 02:09 id_rsa

-rw-r--r-- 1 oracle oinstall  398 Sep 25 02:09 id_rsa.pub

-rw-r--r-- 1 oracle oinstall  404 Sep 25 02:20 known_hosts

linuxrac2

cat ~/.ssh/*.pub >> ~/.ssh/authorized_keys

ssh oracle@linuxrac1 cat ~/.ssh/*.pub >> ~/.ssh/authorized_keys

ssh oracle@linuxrac1 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

ssh oracle@linuxrac1 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys

 

Establish equivalence; run on both nodes, rac1 and rac2:

[oracle@linuxrac1 ~]$ exec ssh-agent $SHELL

[oracle@linuxrac1 ~]$ ssh-add

Identity added: /home/oracle/.ssh/id_rsa (/home/oracle/.ssh/id_rsa)

Identity added: /home/oracle/.ssh/id_dsa (/home/oracle/.ssh/id_dsa)

[oracle@linuxrac1 ~]$ ssh linuxrac1 date

[oracle@linuxrac1 ~]$ ssh linuxrac1-priv date

[oracle@linuxrac1 ~]$ ssh linuxrac2 date

[oracle@linuxrac1 ~]$ ssh linuxrac2-priv date

 

[oracle@linuxrac2 ~]$ exec ssh-agent $SHELL

[oracle@linuxrac2 ~]$ ssh-add

Identity added: /home/oracle/.ssh/id_rsa (/home/oracle/.ssh/id_rsa)

Identity added: /home/oracle/.ssh/id_dsa (/home/oracle/.ssh/id_dsa)

 

  • The authenticity of host '<host>' can't be established.  

 

  Workaround: on the machine initiating the connection, run ssh -o StrictHostKeyChecking=no xxxx (the target host name).
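The `ssh ... date` checks above repeat the same command for every host alias. A minimal sketch that prints the whole check list for one node, including the StrictHostKeyChecking workaround just mentioned; it only emits the commands, since running them needs the real cluster:

```shell
#!/bin/sh
# Emit the equivalence-check commands for every cluster host alias.
gen_ssh_checks() {
  for host in linuxrac1 linuxrac1-priv linuxrac2 linuxrac2-priv; do
    echo "ssh -o StrictHostKeyChecking=no $host date"
  done
}
gen_ssh_checks
```

Run the printed commands as both grid and oracle on both nodes; each must return a date with no password prompt.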

2.10. Configure the NTP Service

2.10.1. Configure Node RAC1

1) Comment out the existing OPTIONS line:

[root@linuxrac1 sysconfig]#sed -i 's/OPTIONS/#OPTIONS/g' /etc/sysconfig/ntpd

2) Append the new OPTIONS line:

[root@linuxrac1 sysconfig]#cat >> /etc/sysconfig/ntpd << EOF

> OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"

EOF

3) Move the original ntp.conf aside:

[root@linuxrac1 sysconfig]#mv /etc/ntp.conf /etc/ntp.confbak

4) Write a new ntp.conf:

[root@linuxrac1 sysconfig]# cat > /etc/ntp.conf << EOF

> restrict 0.0.0.0 mask 0.0.0.0 nomodify

> server 127.127.1.0

> fudge 127.127.1.0 stratum 10

> driftfile /var/lib/ntp/drift

> broadcastdelay 0.008

> authenticate no

> keys /etc/ntp/keys

> EOF

2.10.2. Configure Node RAC2

1) Comment out the existing OPTIONS line:

[root@linuxrac2 sysconfig]# sed -i 's/OPTIONS/#OPTIONS/g' /etc/sysconfig/ntpd

2) Append the new OPTIONS line:

[root@linuxrac2 sysconfig]# cat >> /etc/sysconfig/ntpd << EOF

> OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"

EOF

3) Move the original ntp.conf aside:

[root@linuxrac2 sysconfig]# mv /etc/ntp.conf /etc/ntp.confbak

4) Write a new ntp.conf:

[root@linuxrac2 sysconfig]# cat >> /etc/ntp.conf << XL

> restrict default kod nomodify notrap nopeer noquery

> restrict 10.10.97.0 mask 255.255.255.0 nomodify notrap

> restrict 127.0.0.1

> server 10.10.97.168

> server 127.127.1.0 # local clock

> fudge 127.127.1.0 stratum 10

> driftfile /var/lib/ntp/drift

> broadcastdelay 0.008

> keys /etc/ntp/keys

XL

2.10.3. Start the Service (Both Nodes)

[root@linuxrac1 etc]# service ntpd restart

2.10.4. Start Automatically at Boot

[root@linuxrac1 etc]# chkconfig ntpd on
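Oracle's cluster prerequisite checks expect ntpd to run with the -x (slew) option, which is exactly what the OPTIONS line added above provides. A minimal sketch of a sanity check for that flag, run here against a scratch copy of /etc/sysconfig/ntpd:

```shell
#!/bin/sh
# Check that the ntpd OPTIONS line enables slewing (-x), as set above.
NTPD="$(mktemp)"
echo 'OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"' > "$NTPD"
check_slew() {
  if grep -q '^OPTIONS=.*-x' "$1"; then
    echo "ntpd slewing enabled"
  else
    echo "missing -x option"
  fi
}
check_slew "$NTPD"
```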

3.1. Install and Configure the ASM Driver

3.1.1. Check the Kernel Version

[root@linuxrac2 etc]# uname -r

2.6.18-164.el5

  Download the following rpms (note: the rpm versions must match the Linux kernel version):

  Oracle ASMLib download: http://www.oracle.com/technetwork/server-storage/linux/downloads/rhel5-084877.html

oracleasm-2.6.18-164.el5-2.0.5-1.el5.x86_64.rpm

oracleasmlib-2.0.4-1.el5.x86_64.rpm

oracleasm-support-2.1.7-1.el5.x86_64.rpm

3.1.2. Install the oracleasm Packages (On All Nodes)

[root@linuxrac1]# su root

[root@linuxrac1]# ll

total 128

-rw-r--r--. 1 root root 33840 Aug  5  2014 oracleasm-2.6.18-164.el5-2.0.5-1.el5.x86_64.rpm

-rw-r--r--. 1 root root 13300 Aug  5  2014 oracleasmlib-2.0.4-1.el5.x86_64.rpm

-rw-r--r--. 1 root root 74984 Aug  5  2014 oracleasm-support-2.1.7-1.el5.ppc64.rpm

[root@linuxrac1 ~]#  rpm -ivh oracleasm-support-2.1.8-1.el5.x86_64.rpm

warning: oracleasm-support-2.1.8-1.el5.x86_64.rpm: Header V3 DSA signature: NOKEY, key ID 1e5e0159

Preparing...                ########################################### [100%]

   1:oracleasm-support      ########################################### [100%]

[root@linuxrac1 ~]# rpm -ivh oracleasm-2.6.18-164.el5-2.0.5-1.el5.x86_64.rpm

warning: oracleasm-2.6.18-164.el5-2.0.5-1.el5.x86_64.rpm: Header V3 DSA signature: NOKEY, key ID 1e5e0159

Preparing...                ########################################### [100%]

   1:oracleasm-2.6.18-164.el########################################### [100%]

[root@linuxrac1 ~]# rpm -ivh oracleasmlib-2.0.4-1.el5.x86_64.rpm

warning: oracleasmlib-2.0.4-1.el5.x86_64.rpm: Header V3 DSA signature: NOKEY, key ID 1e5e0159

Preparing...                ########################################### [100%]

   1:oracleasmlib           ########################################### [100%]

[oracle@linuxrac2 home]$ scp linuxrac1@192.168.2.106:/home/linuxrac1/Downloads/2.zip linuxrac2@192.168.2.206:/home/linuxrac2/Downloads

linuxrac1@192.168.2.106's password:

linuxrac2@192.168.2.206's password:

2.zip                                                                                                               100%  110KB 110.5KB/s   00:00   

Connection to 192.168.2.106 closed.

[oracle@linuxrac2 home]$ su root

[root@linuxrac2 linuxrac2]# cd Downloads

[root@linuxrac2 Downloads]# ll

total 7736

-rw-r--r--. 1 linuxrac2 linuxrac2  113141 Aug  7 19:45 2.zip

drwxrwxr-x. 2 linuxrac2 linuxrac2    4096 Jul 25 15:05 rpms

-rw-rw-r--. 1 linuxrac2 linuxrac2 7801347 Jul 26 01:27 rpms.zip

[root@linuxrac2 Downloads]# unzip 2.zip

[root@linuxrac2 ]# ll

total 128

-rw-r--r--. 1 root root 33840 Aug  5 16:28 kmod-oracleasm-2.0.6.rh1-2.el6.x86_64.rpm

-rw-r--r--. 1 root root 13300 Aug  5 15:59 oracleasmlib-2.0.4-1.el6.x86_64.rpm

-rw-r--r--. 1 root root 74984 Aug  5 15:52 oracleasm-support-2.1.8-1.el6.x86_64.rpm

Last login: Tue Sep 23 00:39:40 2014

[root@linuxrac2 ~]# rpm -ivh oracleasm-support-2.1.8-1.el5.x86_64.rpm

warning: oracleasm-support-2.1.8-1.el5.x86_64.rpm: Header V3 DSA signature: NOKEY, key ID 1e5e0159

Preparing...                ########################################### [100%]

   1:oracleasm-support      ########################################### [100%]

[root@linuxrac2 ~]# rpm -ivh oracleasm-2.6.18-164.el5-2.0.5-1.el5.x86_64.rpm

warning: oracleasm-2.6.18-164.el5-2.0.5-1.el5.x86_64.rpm: Header V3 DSA signature: NOKEY, key ID 1e5e0159

Preparing...                ########################################### [100%]

   1:oracleasm-2.6.18-164.el########################################### [100%]

[root@linuxrac2 ~]# rpm -ivh oracleasmlib-2.0.4-1.el5.x86_64.rpm

warning: oracleasmlib-2.0.4-1.el5.x86_64.rpm: Header V3 DSA signature: NOKEY, key ID 1e5e0159

Preparing...                ########################################### [100%]

   1:oracleasmlib           ########################################### [100%]

[root@linuxrac1 etc]# rpm -qa | grep oracleasm

oracleasmlib-2.0.4-1.el5

oracleasm-support-2.1.8-1.el5

oracleasm-2.6.18-164.el5-2.0.5-1.el5
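Since the same three packages must end up on every node, the presence check can be scripted. The sketch below only parses `rpm -qa`-style output, so it is safe to dry-run anywhere; the package-name prefixes are taken from this setup.

```shell
# Sketch: given `rpm -qa` output, report which of the three ASMLib
# packages are missing. Returns 0 when all three are present.
check_asmlib_rpms() {
  local installed="$1" missing=0 pkg
  for pkg in oracleasm-support oracleasmlib oracleasm-2.6; do
    case "$installed" in
      *"$pkg"*) ;;
      *) echo "missing: $pkg"; missing=1 ;;
    esac
  done
  return $missing
}

# Example with the listing shown above (on a real node, feed in the
# result of `rpm -qa | grep oracleasm`):
sample="oracleasmlib-2.0.4-1.el5
oracleasm-support-2.1.8-1.el5
oracleasm-2.6.18-164.el5-2.0.5-1.el5"
check_asmlib_rpms "$sample" && echo "all ASMLib packages installed"
```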

3.1.3. Initialize ASMLib (run on all nodes)

Start Oracle ASMLib:

[root@linuxrac1 etc]# /etc/init.d/oracleasm start

Initializing the Oracle ASMLib driver: [  OK  ]

Scanning the system for Oracle ASMLib disks: [  OK  ]

Enable ASMLib:

[root@linuxrac1 etc]# /etc/init.d/oracleasm enable

Writing Oracle ASM library driver configuration: done

Initializing the Oracle ASMLib driver: [  OK  ]

Scanning the system for Oracle ASMLib disks: [  OK  ]

Configure node linuxrac1 as the root user:

[root@linuxrac1 /]# /etc/init.d/oracleasm configure -i

Configuring the Oracle ASM library driver.

 

This will configure the on-boot properties of the Oracle ASM library

driver.  The following questions will determine whether the driver is

loaded on boot and what permissions it will have.  The current values

will be shown in brackets ('[]').  Hitting <ENTER> without typing an

answer will keep that current value.  Ctrl-C will abort.

 

Default user to own the driver interface []: grid

Default group to own the driver interface []: oinstall

Start Oracle ASM library driver on boot (y/n) [n]: y

Scan for Oracle ASM disks on boot (y/n) [y]: y

Writing Oracle ASM library driver configuration: done

Initializing the Oracle ASMLib driver:                     [  OK  ]

Scanning the system for Oracle ASMLib disks:              [  OK  ]

[root@linuxrac1 /]# /usr/sbin/oracleasm init

Creating /dev/oracleasm mount point: /dev/oracleasm

Loading module "oracleasm": oracleasm

Mounting ASMlib driver filesystem: /dev/oracleasm

Configure node linuxrac2 as the root user:

[root@linuxrac2 /]# oracleasm configure -i

Configuring the Oracle ASM library driver.

 

This will configure the on-boot properties of the Oracle ASM library

driver.  The following questions will determine whether the driver is

loaded on boot and what permissions it will have.  The current values

will be shown in brackets ('[]').  Hitting <ENTER> without typing an

answer will keep that current value.  Ctrl-C will abort.

 

Default user to own the driver interface []: grid

Default group to own the driver interface []: oinstall

Start Oracle ASM library driver on boot (y/n) [n]: y

Scan for Oracle ASM disks on boot (y/n) [y]: y

Writing Oracle ASM library driver configuration: done

Initializing the Oracle ASMLib driver:                     [  OK  ]

Scanning the system for Oracle ASMLib disks:               [  OK  ]

[root@linuxrac2 /]# oracleasm init

Creating /dev/oracleasm mount point: /dev/oracleasm

Loading module "oracleasm": oracleasm

Mounting ASMlib driver filesystem: /dev/oracleasm
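The answers given to `oracleasm configure -i` are persisted in /etc/sysconfig/oracleasm, so a quick check that both nodes were configured identically can be scripted. The key names below (ORACLEASM_ENABLED, ORACLEASM_UID, ORACLEASM_GID) are the ones this ASMLib version writes; treat them as assumptions and adjust if your version differs.

```shell
# Sketch: verify the on-boot settings chosen above actually landed in
# the oracleasm config file contents passed in as $1.
asm_conf_ok() {
  local conf="$1"
  echo "$conf" | grep -q '^ORACLEASM_ENABLED=true' &&
  echo "$conf" | grep -q '^ORACLEASM_UID=grid' &&
  echo "$conf" | grep -q '^ORACLEASM_GID=oinstall'
}

# Example (on a real node: asm_conf_ok "$(cat /etc/sysconfig/oracleasm)"):
sample='ORACLEASM_ENABLED=true
ORACLEASM_UID=grid
ORACLEASM_GID=oinstall
ORACLEASM_SCANBOOT=true'
asm_conf_ok "$sample" && echo "oracleasm on-boot configuration looks right"
```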

3.1.4. Partition the Disks (run on node 1)

1. List the disks:

[root@linuxrac1 dev]# fdisk -l

 

Disk /dev/sda: 32.2 GB, 32212254720 bytes

255 heads, 63 sectors/track, 3916 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk identifier: 0x0005464c

 

Device Boot      Start         End      Blocks   Id  System

/dev/sda1   *           1          39      307200   83  Linux

Partition 1 does not end on cylinder boundary.

/dev/sda2              39         545     4063232   82  Linux swap / Solaris

Partition 2 does not end on cylinder boundary.

/dev/sda3             545        3917    27085824   83  Linux

 

Disk /dev/sdb: 4294 MB, 4294967296 bytes

255 heads, 63 sectors/track, 522 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk identifier: 0x00000000

 

Disk /dev/sde: 5368 MB, 5368709120 bytes

255 heads, 63 sectors/track, 652 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk identifier: 0x00000000

 

Disk /dev/sdc: 21.5 GB, 21474836480 bytes

255 heads, 63 sectors/track, 2610 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk identifier: 0x00000000

 

Disk /dev/sdd: 21.5 GB, 21474836480 bytes

255 heads, 63 sectors/track, 2610 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk identifier: 0x00000000

2. Partition the disks. Take care with the Last cylinder value; accepting the default is fine.

[root@linuxrac1 /]# fdisk /dev/sdb

Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel

Building a new DOS disklabel with disk identifier 0x52362e93.

Changes will remain in memory only, until you decide to write them.

After that, of course, the previous content won't be recoverable.

 

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

 

WARNING: DOS-compatible mode is deprecated. It's strongly recommended to

         switch off the mode (command 'c') and change display units to

         sectors (command 'u').

 

Command (m for help): n

Command action

   e   extended

   p   primary partition (1-4)

p

Partition number (1-4): 1

First cylinder (1-522, default 1): 1

Last cylinder, +cylinders or +size{K,M,G} (1-522, default 522): 522

 

Command (m for help): w

The partition table has been altered!

 

Calling ioctl() to re-read partition table.

Syncing disks.

Repeat the same steps for /dev/sdc, /dev/sdd, and /dev/sde.
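The interactive fdisk session above always answers n, p, 1, two defaults, then w. A small helper can emit that keystroke sequence so the remaining disks get the same single full-size partition non-interactively. The device list is an assumption from this setup, and piping into fdisk is destructive, so it should only be run as root on node 1.

```shell
# Emit the fdisk answers used above: new primary partition 1 spanning
# the whole disk (the two blank lines accept the default cylinders),
# then write the partition table.
fdisk_answers() {
  printf 'n\np\n1\n\n\nw\n'
}

# Example (destructive; run as root on node 1 only):
# for dev in /dev/sdc /dev/sdd /dev/sde; do
#   fdisk_answers | fdisk "$dev"
# done
fdisk_answers
```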

3.1.5. Create the ASM Disks (run on node 1)

1. Label the ASM disks with oracleasm createdisk; this only needs to be run on one node:

[root@linuxrac1 /]# /etc/init.d/oracleasm createdisk OCR_VOTE /dev/sdb1

Marking disk "OCR_VOTE" as an ASM disk: [  OK  ]

[root@linuxrac1 /]# oracleasm createdisk OCR_VOTE /dev/sdb1

Device "/dev/sdb1" is already labeled for ASM disk "OCR_VOTE"

[root@linuxrac1 /]# oracleasm createdisk DATA /dev/sdc1

Writing disk header: done

Instantiating disk: done

[root@linuxrac1 /]# oracleasm createdisk DATA2 /dev/sdd1

Writing disk header: done

Instantiating disk: done

[root@linuxrac1 /]# oracleasm createdisk FRA /dev/sde1

Writing disk header: done

Instantiating disk: done

[root@linuxrac1 /]# oracleasm scandisks

Reloading disk partitions: done

Cleaning any stale ASM disks...

Scanning system for ASM disks...

2. List the newly created ASM disks:

[root@linuxrac1 /]# oracleasm listdisks

DATA

DATA2

FRA

OCR_VOTE

3. Check that the disks are mounted in the oracleasm filesystem:

[root@linuxrac1 /]# ls -l /dev/oracleasm/disks

total 0

brw-rw---- 1 grid oinstall 8, 33 Sep 24 18:29 DATA

brw-rw---- 1 grid oinstall 8, 49 Sep 24 18:29 DATA2

brw-rw---- 1 grid oinstall 8, 65 Sep 24 18:29 FRA

brw-rw---- 1 grid oinstall 8, 17 Sep 24 18:29 OCR_VOTE

 

3.1.6. Scan for the ASM Disks on Node 2

Scan for the ASM disks on node 2:

[root@linuxrac2 etc]# oracleasm scandisks

Reloading disk partitions: done

Cleaning any stale ASM disks...

Scanning system for ASM disks...

Instantiating disk "OCR_VOTE"

Instantiating disk "DATA"

Instantiating disk "DATA2"

Instantiating disk "FRA"
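After the scan, both nodes should report exactly the same four labels. A minimal sketch of that comparison, fed the `oracleasm listdisks` output captured on each node (the sample values are the labels created above):

```shell
# Sketch: return 0 when both nodes report the same set of ASM disk
# labels, regardless of ordering.
same_asm_disks() {
  [ "$(printf '%s\n' "$1" | sort)" = "$(printf '%s\n' "$2" | sort)" ]
}

node1_disks='DATA
DATA2
FRA
OCR_VOTE'
node2_disks='OCR_VOTE
FRA
DATA2
DATA'
same_asm_disks "$node1_disks" "$node2_disks" && echo "disk labels match"
```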

3.2. Install the cvuqdisk Package

3.2.1. Prepare the Oracle Grid Installation Packages

Upload the Grid and Oracle installation files:

sftp> put E:\Software\linux.x64_11gR2_grid.zip /root/

Uploading linux.x64_11gR2_grid.zip to /root/linux.x64_11gR2_grid.zip

  100% 1028220KB   7616KB/s 00:02:15

sftp> put E:\Software\linux.x64_11gR2_database_2of2.zip /root/

Uploading linux.x64_11gR2_database_2of2.zip to /root/linux.x64_11gR2_database_2of2.zip

  100% 1085367KB   7333KB/s 00:02:28

sftp> put E:\Software\linux.x64_11gR2_database_1of2.zip /root/

Uploading linux.x64_11gR2_database_1of2.zip to /root/linux.x64_11gR2_database_1of2.zip

  100% 1210223KB   7708KB/s 00:02:37

3.2.2. Install cvuqdisk (run on all nodes)

1. Unzip linux.x64_11gR2_grid.zip:

[root@linuxrac1 /]# unzip linux.x64_11gR2_grid.zip -d /home/grid/

[root@linuxrac1 grid]# cd grid

[root@linuxrac1 grid]# ll

total 76

drwxr-xr-x  9 root root 4096 Aug 16  2009 doc

drwxr-xr-x  4 root root 4096 Aug 15  2009 install

drwxrwxr-x  2 root root 4096 Aug 15  2009 response

drwxrwxr-x  2 root root 4096 Aug 15  2009 rpm

-rwxrwxr-x  1 root root 3795 Jan 28  2009 runcluvfy.sh

-rwxr-xr-x  1 root root 3227 Aug 15  2009 runInstaller

drwxrwxr-x  2 root root 4096 Aug 15  2009 sshsetup

drwxr-xr-x 14 root root 4096 Aug 15  2009 stage

-rw-r--r--  1 root root 4228 Aug 17  2009 welcome.html

2. Install cvuqdisk on both nodes:

[root@linuxrac1 grid]# cd rpm

[root@linuxrac1 rpm]# ll

total 12

-rw-rw-r-- 1 root root 8173 Jul 14  2009 cvuqdisk-1.0.7-1.rpm

[root@linuxrac1 rpm]# rpm -ivh cvuqdisk-1.0.7-1.rpm

Preparing...                ########################################### [100%]

Using default group oinstall to install package

   1:cvuqdisk               ########################################### [100%]

[root@linuxrac2 grid]# cd grid

[root@linuxrac2 grid]# ls

doc  install  response  rpm  runcluvfy.sh  runInstaller  sshsetup  stage  welcome.html

[root@linuxrac2 grid]# cd rpm

[root@linuxrac2 rpm]# ll

total 12

-rw-rw-r-- 1 root root 8173 Jul 14  2009 cvuqdisk-1.0.7-1.rpm

[root@linuxrac2 rpm]# rpm -ivh cvuqdisk-1.0.7-1.rpm

Preparing...                ########################################### [100%]

Using default group oinstall to install package

   1:cvuqdisk               ########################################### [100%]
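With SSH user equivalence already in place, the per-node check can be looped instead of repeated by hand. To keep the sketch runnable anywhere, it only builds and prints the command lines; on a real cluster, pipe them to `sh` or run them directly. The node names are the ones used throughout this setup.

```shell
# Sketch: build the remote check commands confirming cvuqdisk is
# installed on each named node.
cvuqdisk_check_cmds() {
  local node
  for node in "$@"; do
    echo "ssh $node rpm -q cvuqdisk"
  done
}

cvuqdisk_check_cmds linuxrac1 linuxrac2
```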

3.3. Pre-installation Checks

1. Check node connectivity:

[grid@linuxrac1 grid]$ ./runcluvfy.sh stage -post hwos -n linuxrac1,linuxrac2 -verbose

Performing post-checks for hardware and operating system setup

Checking node reachability...

Check: Node reachability from node "linuxrac1"

  Destination Node                      Reachable?             

  ------------------------------------  ------------------------

  linuxrac1                             yes                    

  linuxrac2                             yes                    

Result: Node reachability check passed from node "linuxrac1"

Checking user equivalence...

Check: User equivalence for user "grid"

  Node Name                             Comment                

  ------------------------------------  ------------------------

  linuxrac2                             passed                 

  linuxrac1                             passed                 

Result: User equivalence check passed for user "grid"

Checking node connectivity...

Checking hosts config file...

  Node Name     Status                    Comment                

  ------------  ------------------------  ------------------------

  linuxrac2     passed                                           

  linuxrac1     passed                                           

 

Verification of the hosts config file successful

Interface information for node "linuxrac2"

 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU  

 ------ --------------- --------------- --------------- --------------- ----------------- ------

 eth0   10.10.97.167    10.10.97.0      0.0.0.0         10.10.97.232    00:0C:29:E8:8D:F9 1500 

 eth1   192.168.2.216   192.168.2.0     0.0.0.0         10.10.97.232    00:0C:29:E8:8D:03 1500 

 

Interface information for node "linuxrac1"

 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU  

 ------ --------------- --------------- --------------- --------------- ----------------- ------

 eth0   10.10.97.161    10.10.97.0      0.0.0.0         10.10.97.232    00:0C:29:89:82:48 1500 

 eth1   192.168.2.116   192.168.2.0     0.0.0.0         10.10.97.232    00:50:56:23:6A:3E 1500 

 

Check: Node connectivity of subnet "10.10.97.0"

  Source                          Destination                     Connected?     

  ------------------------------  ------------------------------  ----------------

  linuxrac2:eth0                  linuxrac1:eth0                  yes            

Result: Node connectivity passed for subnet "10.10.97.0" with node(s) linuxrac2,linuxrac1

 

Check: TCP connectivity of subnet "10.10.97.0"

  Source                          Destination                     Connected?     

  ------------------------------  ------------------------------  ----------------

  linuxrac1:10.10.97.161          linuxrac2:10.10.97.167          passed         

Result: TCP connectivity check passed for subnet "10.10.97.0"

 

Check: Node connectivity of subnet "192.168.2.0"

  Source                          Destination                     Connected?     

  ------------------------------  ------------------------------  ----------------

  linuxrac2:eth1                  linuxrac1:eth1                  yes            

Result: Node connectivity passed for subnet "192.168.2.0" with node(s) linuxrac2,linuxrac1

 

Check: TCP connectivity of subnet "192.168.2.0"

  Source                          Destination                     Connected?     

  ------------------------------  ------------------------------  ----------------

  linuxrac1:192.168.2.116         linuxrac2:192.168.2.216         passed         

Result: TCP connectivity check passed for subnet "192.168.2.0"

 

Interfaces found on subnet "10.10.97.0" that are likely candidates for VIP are:

linuxrac2 eth0:10.10.97.167

linuxrac1 eth0:10.10.97.161

 

Interfaces found on subnet "192.168.2.0" that are likely candidates for a private interconnect are:

linuxrac2 eth1:192.168.2.216

linuxrac1 eth1:192.168.2.116

 

Result: Node connectivity check passed

 

Checking for multiple users with UID value 0

Result: Check for multiple users with UID value 0 passed

 

Post-check for hardware and operating system setup was successful.

 

3.4. Install Grid Infrastructure

3.4.1. Install Grid

1. Run the grid installer, runInstaller:

[grid@linuxrac1 grid]$ ./runInstaller

Starting Oracle Universal Installer...

 

Checking Temp space: must be greater than 120 MB.   Actual 14708 MB    Passed

Checking swap space: must be greater than 150 MB.   Actual 5945 MB    Passed

Checking monitor: must be configured to display at least 256 colors.    Actual 16777216    Passed

Preparing to launch Oracle Universal Installer from /tmp/OraInstall2014-09-26_12

 

(installation screenshots omitted)

 

 

As the root user, execute the following in order:

1.[root@rac1 ~]# /u01/app/oraInventory/orainstRoot.sh

2.[root@rac2 ~]# /u01/app/oraInventory/orainstRoot.sh

3.[root@rac1 ~]# /u01/app/11.2.0/grid/root.sh

4.[root@rac2 ~]# /u01/app/11.2.0/grid/root.sh

After the scripts finish, click OK to complete the installation. If the rac-scan name cannot be resolved, an error will be reported.

 

[root@linuxrac1 ~]# /u01/app/oraInventory/orainstRoot.sh

Changing permissions of /u01/app/oraInventory.

Adding read,write permissions for group.

Removing read,write,execute permissions for world.

 

Changing groupname of /u01/app/oraInventory to oinstall.

The execution of the script is complete.

 

[root@linuxrac2 ~]# /u01/app/oraInventory/orainstRoot.sh

Changing permissions of /u01/app/oraInventory.

Adding read,write permissions for group.

Removing read,write,execute permissions for world.

 

Changing groupname of /u01/app/oraInventory to oinstall.

The execution of the script is complete.

 

[root@linuxrac1 ~]# /u01/app/11.2.0/grid/root.sh

Running Oracle 11g root.sh script...

 

The following environment variables are set as:

    ORACLE_OWNER= grid

    ORACLE_HOME=  /u01/app/11.2.0/grid

 

Enter the full pathname of the local bin directory: [/usr/local/bin]: /usr/local/bin

The file "dbhome" already exists in /usr/local/bin.  Overwrite it? (y/n)

[n]: y

   Copying dbhome to /usr/local/bin ...

The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)

[n]: y

   Copying oraenv to /usr/local/bin ...

The file "coraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)

[n]: y

   Copying coraenv to /usr/local/bin ...

 

Entries will be added to the /etc/oratab file as needed by

Database Configuration Assistant when a database is created

Finished running generic part of root.sh script.

Now product-specific root actions will be performed.

2014-10-17 00:24:16: Parsing the host name

2014-10-17 00:24:16: Checking for super user privileges

2014-10-17 00:24:16: User has super user privileges

Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params

Creating trace directory

LOCAL ADD MODE

Creating OCR keys for user 'root', privgrp 'root'..

Operation successful.

  root wallet

  root wallet cert

  root cert export

  peer wallet

  profile reader wallet

  pa wallet

  peer wallet keys

  pa wallet keys

  peer cert request

  pa cert request

  peer cert

  pa cert

  peer root cert TP

  profile reader root cert TP

  pa root cert TP

  peer pa cert TP

pa peer cert TP

  profile reader pa cert TP

  profile reader peer cert TP

  peer user cert

  pa user cert

Adding daemon to inittab

CRS-4123: Oracle High Availability Services has been started.

ohasd is starting

ADVM/ACFS is not supported on centos-release-5-4.el5.centos.1

CRS-2672: Attempting to start 'ora.gipcd' on 'linuxrac1'

CRS-2672: Attempting to start 'ora.mdnsd' on 'linuxrac1'

CRS-2676: Start of 'ora.gipcd' on 'linuxrac1' succeeded

CRS-2676: Start of 'ora.mdnsd' on 'linuxrac1' succeeded

CRS-2672: Attempting to start 'ora.gpnpd' on 'linuxrac1'

CRS-2676: Start of 'ora.gpnpd' on 'linuxrac1' succeeded

CRS-2672: Attempting to start 'ora.cssdmonitor' on 'linuxrac1'

CRS-2676: Start of 'ora.cssdmonitor' on 'linuxrac1' succeeded

CRS-2672: Attempting to start 'ora.cssd' on 'linuxrac1'

CRS-2672: Attempting to start 'ora.diskmon' on 'linuxrac1'

CRS-2676: Start of 'ora.diskmon' on 'linuxrac1' succeeded

CRS-2676: Start of 'ora.cssd' on 'linuxrac1' succeeded

CRS-2672: Attempting to start 'ora.ctssd' on 'linuxrac1'

CRS-2676: Start of 'ora.ctssd' on 'linuxrac1' succeeded

 

ASM created and started successfully.

 

DiskGroup OCR_VOTE created successfully.

 

clscfg: -install mode specified

Successfully accumulated necessary OCR keys.

Creating OCR keys for user 'root', privgrp 'root'..

Operation successful.

CRS-2672: Attempting to start 'ora.crsd' on 'linuxrac1'

CRS-2676: Start of 'ora.crsd' on 'linuxrac1' succeeded

CRS-4256: Updating the profile

Successful addition of voting disk 4657b29f8f874f22bfd2d3d9ace93e9f.

Successfully replaced voting disk group with +OCR_VOTE.

CRS-4256: Updating the profile

CRS-4266: Voting file(s) successfully replaced

##  STATE    File Universal Id                File Name Disk group

--  -----    -----------------                --------- ---------

 1. ONLINE   4657b29f8f874f22bfd2d3d9ace93e9f (ORCL:OCR_VOTE) [OCR_VOTE]

Located 1 voting disk(s).

CRS-2673: Attempting to stop 'ora.crsd' on 'linuxrac1'

CRS-2677: Stop of 'ora.crsd' on 'linuxrac1' succeeded

CRS-2673: Attempting to stop 'ora.asm' on 'linuxrac1'

CRS-2677: Stop of 'ora.asm' on 'linuxrac1' succeeded

CRS-2673: Attempting to stop 'ora.ctssd' on 'linuxrac1'

CRS-2677: Stop of 'ora.ctssd' on 'linuxrac1' succeeded

CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'linuxrac1'

CRS-2677: Stop of 'ora.cssdmonitor' on 'linuxrac1' succeeded

CRS-2673: Attempting to stop 'ora.cssd' on 'linuxrac1'

CRS-2677: Stop of 'ora.cssd' on 'linuxrac1' succeeded

CRS-2673: Attempting to stop 'ora.gpnpd' on 'linuxrac1'

CRS-2677: Stop of 'ora.gpnpd' on 'linuxrac1' succeeded

CRS-2673: Attempting to stop 'ora.gipcd' on 'linuxrac1'

CRS-2677: Stop of 'ora.gipcd' on 'linuxrac1' succeeded

CRS-2673: Attempting to stop 'ora.mdnsd' on 'linuxrac1'

CRS-2677: Stop of 'ora.mdnsd' on 'linuxrac1' succeeded

CRS-2672: Attempting to start 'ora.mdnsd' on 'linuxrac1'

CRS-2676: Start of 'ora.mdnsd' on 'linuxrac1' succeeded

CRS-2672: Attempting to start 'ora.gipcd' on 'linuxrac1'

CRS-2676: Start of 'ora.gipcd' on 'linuxrac1' succeeded

CRS-2672: Attempting to start 'ora.gpnpd' on 'linuxrac1'

CRS-2676: Start of 'ora.gpnpd' on 'linuxrac1' succeeded

CRS-2672: Attempting to start 'ora.cssdmonitor' on 'linuxrac1'

CRS-2676: Start of 'ora.cssdmonitor' on 'linuxrac1' succeeded

CRS-2672: Attempting to start 'ora.cssd' on 'linuxrac1'

CRS-2672: Attempting to start 'ora.diskmon' on 'linuxrac1'

CRS-2676: Start of 'ora.diskmon' on 'linuxrac1' succeeded

CRS-2676: Start of 'ora.cssd' on 'linuxrac1' succeeded

CRS-2672: Attempting to start 'ora.ctssd' on 'linuxrac1'

CRS-2676: Start of 'ora.ctssd' on 'linuxrac1' succeeded

CRS-2672: Attempting to start 'ora.asm' on 'linuxrac1'

CRS-2676: Start of 'ora.asm' on 'linuxrac1' succeeded

CRS-2672: Attempting to start 'ora.crsd' on 'linuxrac1'

CRS-2676: Start of 'ora.crsd' on 'linuxrac1' succeeded

CRS-2672: Attempting to start 'ora.evmd' on 'linuxrac1'

CRS-2676: Start of 'ora.evmd' on 'linuxrac1' succeeded

CRS-2672: Attempting to start 'ora.asm' on 'linuxrac1'

CRS-2676: Start of 'ora.asm' on 'linuxrac1' succeeded

CRS-2672: Attempting to start 'ora.OCR_VOTE.dg' on 'linuxrac1'

CRS-2676: Start of 'ora.OCR_VOTE.dg' on 'linuxrac1' succeeded

 

linuxrac1     2014/10/17 00:28:25     /u01/app/11.2.0/grid/cdata/linuxrac1/backup_20141017_002825.olr

Preparing packages for installation...

cvuqdisk-1.0.7-1

Configure Oracle Grid Infrastructure for a Cluster ... succeeded

Updating inventory properties for clusterware

Starting Oracle Universal Installer...

 

Checking swap space: must be greater than 500 MB.   Actual 5945 MB    Passed

The inventory pointer is located at /etc/oraInst.loc

The inventory is located at /u01/app/oraInventory

'UpdateNodeList' was successful.

[root@linuxrac2 ~]# /u01/app/11.2.0/grid/root.sh

Running Oracle 11g root.sh script...

 

The following environment variables are set as:

    ORACLE_OWNER= grid

    ORACLE_HOME=  /u01/app/11.2.0/grid

 

Enter the full pathname of the local bin directory: [/usr/local/bin]: /usr/local/bin

The file "dbhome" already exists in /usr/local/bin.  Overwrite it? (y/n)

[n]: y

   Copying dbhome to /usr/local/bin ...

The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)

[n]: y

   Copying oraenv to /usr/local/bin ...

The file "coraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)

[n]: y

   Copying coraenv to /usr/local/bin ...

 

Entries will be added to the /etc/oratab file as needed by

Database Configuration Assistant when a database is created

Finished running generic part of root.sh script.

Now product-specific root actions will be performed.

2014-10-17 00:30:02: Parsing the host name

2014-10-17 00:30:02: Checking for super user privileges

2014-10-17 00:30:02: User has super user privileges

Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params

Creating trace directory

LOCAL ADD MODE

Creating OCR keys for user 'root', privgrp 'root'..

Operation successful.

Adding daemon to inittab

CRS-4123: Oracle High Availability Services has been started.

ohasd is starting

ADVM/ACFS is not supported on centos-release-5-4.el5.centos.1

CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node linuxrac1, number 1, and is terminating

An active cluster was found during exclusive startup, restarting to join the cluster

CRS-2672: Attempting to start 'ora.mdnsd' on 'linuxrac2'

CRS-2676: Start of 'ora.mdnsd' on 'linuxrac2' succeeded

CRS-2672: Attempting to start 'ora.gipcd' on 'linuxrac2'

CRS-2676: Start of 'ora.gipcd' on 'linuxrac2' succeeded

CRS-2672: Attempting to start 'ora.gpnpd' on 'linuxrac2'

CRS-2676: Start of 'ora.gpnpd' on 'linuxrac2' succeeded

CRS-2672: Attempting to start 'ora.cssdmonitor' on 'linuxrac2'

CRS-2676: Start of 'ora.cssdmonitor' on 'linuxrac2' succeeded

CRS-2672: Attempting to start 'ora.cssd' on 'linuxrac2'

CRS-2672: Attempting to start 'ora.diskmon' on 'linuxrac2'

CRS-2676: Start of 'ora.diskmon' on 'linuxrac2' succeeded

CRS-2676: Start of 'ora.cssd' on 'linuxrac2' succeeded

CRS-2672: Attempting to start 'ora.ctssd' on 'linuxrac2'

CRS-2676: Start of 'ora.ctssd' on 'linuxrac2' succeeded

CRS-2672: Attempting to start 'ora.asm' on 'linuxrac2'

CRS-2676: Start of 'ora.asm' on 'linuxrac2' succeeded

CRS-2672: Attempting to start 'ora.crsd' on 'linuxrac2'

CRS-2676: Start of 'ora.crsd' on 'linuxrac2' succeeded

CRS-2672: Attempting to start 'ora.evmd' on 'linuxrac2'

CRS-2676: Start of 'ora.evmd' on 'linuxrac2' succeeded

linuxrac2     2014/10/17 00:32:27     /u01/app/11.2.0/grid/cdata/linuxrac2/backup_20141017_003227.olr

Preparing packages for installation...

cvuqdisk-1.0.7-1

Configure Oracle Grid Infrastructure for a Cluster ... succeeded

Updating inventory properties for clusterware

Starting Oracle Universal Installer...

 

Checking swap space: must be greater than 500 MB.   Actual 5945 MB    Passed

The inventory pointer is located at /etc/oraInst.loc

The inventory is located at /u01/app/oraInventory

'UpdateNodeList' was successful.

 

3.4.2. Confirm the Clusterware Installed Successfully

Check the clusterware installation result:

[root@linuxrac1 /]# crs_stat -t

Name           Type           Target    State     Host       

------------------------------------------------------------

ora....ER.lsnr ora....er.type ONLINE    ONLINE    linuxrac1  

ora....N1.lsnr ora....er.type ONLINE    ONLINE    linuxrac1  

ora....VOTE.dg ora....up.type ONLINE    ONLINE    linuxrac1  

ora.asm        ora.asm.type   ONLINE    ONLINE    linuxrac1  

ora.eons       ora.eons.type  ONLINE    ONLINE    linuxrac1  

ora.gsd        ora.gsd.type   OFFLINE   OFFLINE              

ora....SM1.asm application    ONLINE    ONLINE    linuxrac1  

ora....C1.lsnr application    ONLINE    ONLINE    linuxrac1  

ora....ac1.gsd application    OFFLINE   OFFLINE              

ora....ac1.ons application    ONLINE    ONLINE    linuxrac1  

ora....ac1.vip ora....t1.type ONLINE    ONLINE    linuxrac1  

ora....SM2.asm application    ONLINE    ONLINE    linuxrac2  

ora....C2.lsnr application    ONLINE    ONLINE    linuxrac2  

ora....ac2.gsd application    OFFLINE   OFFLINE              

ora....ac2.ons application    ONLINE    ONLINE    linuxrac2  

ora....ac2.vip ora....t1.type ONLINE    ONLINE    linuxrac2  

ora....network ora....rk.type ONLINE    ONLINE    linuxrac1  

ora.oc4j       ora.oc4j.type  OFFLINE   OFFLINE              

ora.ons        ora.ons.type   ONLINE    ONLINE    linuxrac1  

ora.scan1.vip  ora....ip.type ONLINE    ONLINE    linuxrac1

 

[root@linuxrac1 /]# crs_stat -t -v

Name           Type           R/RA   F/FT   Target    State     Host       

----------------------------------------------------------------------

ora....ER.lsnr ora....er.type 0/5    0/     ONLINE    ONLINE    linuxrac1  

ora....N1.lsnr ora....er.type 0/5    0/0    ONLINE    ONLINE    linuxrac1  

ora....VOTE.dg ora....up.type 0/5    0/     ONLINE    ONLINE    linuxrac1  

ora.asm        ora.asm.type   0/5    0/     ONLINE    ONLINE    linuxrac1  

ora.eons       ora.eons.type  0/3    0/     ONLINE    ONLINE    linuxrac1  

ora.gsd        ora.gsd.type   0/5    0/     OFFLINE   OFFLINE              

ora....SM1.asm application    0/5    0/0    ONLINE    ONLINE    linuxrac1  

ora....C1.lsnr application    0/5    0/0    ONLINE    ONLINE    linuxrac1  

ora....ac1.gsd application    0/5    0/0    OFFLINE   OFFLINE              

ora....ac1.ons application    0/3    0/0    ONLINE    ONLINE    linuxrac1  

ora....ac1.vip ora....t1.type 0/0    0/0    ONLINE    ONLINE    linuxrac1  

ora....SM2.asm application    0/5    0/0    ONLINE    ONLINE    linuxrac2  

ora....C2.lsnr application    0/5    0/0    ONLINE    ONLINE    linuxrac2  

ora....ac2.gsd application    0/5    0/0    OFFLINE   OFFLINE               

ora....ac2.ons application    0/3    0/0    ONLINE    ONLINE    linuxrac2  

ora....ac2.vip ora....t1.type 0/0    0/0    ONLINE    ONLINE    linuxrac2  

ora....network ora....rk.type 0/5    0/     ONLINE    ONLINE    linuxrac1  

ora.oc4j       ora.oc4j.type  0/5    0/0    OFFLINE   OFFLINE              

ora.ons        ora.ons.type   0/3    0/     ONLINE    ONLINE    linuxrac1  

ora.scan1.vip  ora....ip.type 0/0    0/0    ONLINE    ONLINE    linuxrac1

 

3.4.3. Create the ASM Disk Groups

This step creates the remaining ASM disk groups, DATA and FRA: DATA will hold the database files and FRA the fast recovery (flashback) files. (The OCR_VOTE disk group was already created during the grid installation.)

As the grid user, run asmca to launch the ASM disk group creation wizard:

[grid@linuxrac1 ~]$ pwd

/home/grid

[grid@linuxrac1 ~]$ asmca

3.5. Install the Oracle 11gR2 Database Software and Create the Database

3.5.1. Install Oracle 11gR2 Database

Log in to node 1 as the oracle user, change to the software installation directory, and run the installer.

Install as the oracle user; first unzip the two Oracle installation files into a single directory:

[root@linuxrac1 ~]# su oracle

[oracle@linuxrac1 root]$ cd /home/oracle

[oracle@linuxrac1 ~]$ ll

total 4

drwxr-xr-x 8 root root 4096 Aug 20  2009 database

[oracle@linuxrac1 ~]$ cd database

[oracle@linuxrac1 database]$ ./runInstaller

Starting Oracle Universal Installer...

 

Checking Temp space: must be greater than 120 MB.   Actual 5388 MB    Passed

Checking swap space: must be greater than 150 MB.   Actual 5860 MB    Passed

Checking monitor: must be configured to display at least 256 colors.    Actual 16777216    Passed

Preparing to launch Oracle Universal Installer from /tmp/OraInstall2014-10-17_06-01-14AM. Please wait ...

 

 

 

3.5.2. Create the Database

On node 1, run dbca as the oracle user, select the RAC database option, and click Next.

Switch to the oracle user and check the environment variables:

[root@linuxrac1 ~]# su - oracle

[oracle@linuxrac1 ~]$ echo $PATH

/u01/app/oracle/product/11.2.0/db_1/bin:/usr/sbin:/usr/kerberos/bin:

/usr/local/sbin:/usr/sbin:/sbin:/usr/local/bin:/bin:/usr/bin:/usr/java/jdk1.8.0_11/bin:

/usr/java/jdk1.8.0_11/jre/bin:/u01/app/11.2.0/grid/bin:/home/oracle/bin

[oracle@linuxrac1 ~]$ dbca

3.6. Cluster Management Commands

3.6.1. Starting and Stopping RAC

Oracle RAC starts automatically at boot by default; for maintenance, use the following commands:

Stop:

crsctl stop cluster          (stops cluster services on the local node)

crsctl stop cluster -all     (stops cluster services on all nodes)

Start:

crsctl start cluster         (starts cluster services on the local node)

crsctl start cluster -all    (starts cluster services on all nodes)

Note: the commands above must be run as the root user.
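A thin wrapper over these maintenance commands avoids retyping them. Note the flag is a plain ASCII "-all"; an en dash pasted from a document will not be accepted by crsctl. To keep the sketch runnable anywhere, it prints the command line rather than executing it; on a real node, pipe the output to `sh` as root.

```shell
# Minimal wrapper for the crsctl maintenance commands above.
cluster_ctl() {              # usage: cluster_ctl start|stop [all]
  local cmd="crsctl $1 cluster"
  [ "$2" = "all" ] && cmd="$cmd -all"
  echo "$cmd"                # print the command; pipe to sh to execute
}

cluster_ctl stop all
cluster_ctl start
```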

 

3.6.2. Check RAC Health

Run as the grid user:

[grid@linuxrac1 ~]$ crsctl check cluster

CRS-4537: Cluster Ready Services is online

CRS-4529: Cluster Synchronization Services is online

CRS-4533: Event Manager is online

 

 

3.6.3. Disable Database Startup and Stop the Database

Run as the grid user; prod is the global database name defined during installation:

[grid@linuxrac1 ~]$ srvctl disable database -d prod

[grid@linuxrac1 ~]$ srvctl stop database -d prod

3.6.4. Disable and Stop the LISTENER on All Nodes

Run as the grid user:

[grid@linuxrac1 ~]$ srvctl disable listener

[grid@linuxrac1 ~]$ srvctl stop listener

 

3.6.5. Check the Database Instance Status

Run as the grid user:

[grid@linuxrac1 ~]$ srvctl status database -d prod

Instance prod1 is running on node linuxrac1

Instance prod2 is running on node linuxrac2

 

3.6.6. Check Node Application Status and Configuration

Run as the oracle user:

[oracle@linuxrac1 ~]$ srvctl status nodeapps

VIP linuxrac1-vip is enabled

VIP linuxrac1-vip is running on node: linuxrac1

VIP linuxrac2-vip is enabled

VIP linuxrac2-vip is running on node: linuxrac2

Network is enabled

Network is running on node: linuxrac1

Network is running on node: linuxrac2

GSD is disabled

GSD is not running on node: linuxrac1

GSD is not running on node: linuxrac2

ONS is enabled

ONS daemon is running on node: linuxrac1

ONS daemon is running on node: linuxrac2

eONS is enabled

eONS daemon is running on node: linuxrac1

eONS daemon is running on node: linuxrac2

[oracle@linuxrac1 ~]$ srvctl config nodeapps -a -g -s -l

-l option has been deprecated and will be ignored.

VIP exists.:linuxrac1

VIP exists.: /linuxrac1-vip/10.10.97.181/255.255.255.0/eth0

VIP exists.:linuxrac2

VIP exists.: /linuxrac2-vip/10.10.97.183/255.255.255.0/eth0

GSD exists.

ONS daemon exists. Local port 6100, remote port 6200

Name: LISTENER

Network: 1, Owner: grid

Home: <CRS home>

  /u01/app/11.2.0/grid on node(s) linuxrac2,linuxrac1

End points: TCP:1521

 

 

3.6.7. View the Database Configuration

Run as the oracle user:

[oracle@linuxrac1 ~]$ srvctl config database -d prod -a

Database unique name: prod

Database name: prod

Oracle home: /u01/app/oracle/product/11.2.0/db_1

Oracle user: oracle

Spfile: +DATA/prod/spfileprod.ora

Domain:

Start options: open

Stop options: immediate

Database role: PRIMARY

Management policy: AUTOMATIC

Server pools: prod

Database instances: prod1,prod2

Disk Groups: DATA,FRA

Services:

Database is enabled

Database is administrator managed

3.6.8. Check ASM Status and Configuration

Run as the oracle user:

[oracle@linuxrac1 ~]$ srvctl status asm

ASM is running on linuxrac1,linuxrac2

[oracle@linuxrac1 ~]$ srvctl config asm -a

ASM home: /u01/app/11.2.0/grid

ASM listener: LISTENER

ASM is enabled.

 

3.6.9. Check Listener (TNS) Status and Configuration

Run as the oracle user:

[oracle@linuxrac1 ~]$ srvctl status listener

Listener LISTENER is enabled

Listener LISTENER is running on node(s): linuxrac1,linuxrac2

[oracle@linuxrac1 ~]$ srvctl config listener -a

Name: LISTENER

Network: 1, Owner: grid

Home: <CRS home>

  /u01/app/11.2.0/grid on node(s) linuxrac2,linuxrac1

End points: TCP:1521

 

3.6.10. Checking SCAN status and configuration

Run as the oracle user:

[oracle@linuxrac1 ~]$ srvctl status scan

SCAN VIP scan1 is enabled

SCAN VIP scan1 is running on node linuxrac1

[oracle@linuxrac1 ~]$ srvctl config scan

SCAN name: linuxrac-scan, Network: 1/10.10.97.0/255.255.255.0/eth0

SCAN VIP name: scan1, IP: /linuxrac-scan/10.10.97.193

 

3.6.11. Checking VIP status and configuration

Run as the oracle user:

[oracle@linuxrac1 ~]$ srvctl status vip -n linuxrac1

VIP linuxrac1-vip is enabled

VIP linuxrac1-vip is running on node: linuxrac1

[oracle@linuxrac1 ~]$ srvctl status vip -n linuxrac2

VIP linuxrac2-vip is enabled

VIP linuxrac2-vip is running on node: linuxrac2

[oracle@linuxrac1 ~]$ srvctl config vip -n linuxrac1

VIP exists.:linuxrac1

VIP exists.: /linuxrac1-vip/10.10.97.181/255.255.255.0/eth0

[oracle@linuxrac1 ~]$ srvctl config vip -n linuxrac2

VIP exists.:linuxrac2

VIP exists.: /linuxrac2-vip/10.10.97.183/255.255.255.0/eth0
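The per-resource checks in sections 3.6.7–3.6.11 above can be collected into a single scripted pass. A minimal sketch, shown as a dry run: SRVCTL defaults to `echo srvctl` here so the commands are only printed; on a live cluster, point SRVCTL at the real binary (e.g. /u01/app/11.2.0/grid/bin/srvctl) and run as the oracle user. The resource names (prod, linuxrac1, linuxrac2) are the ones used throughout this document.

```shell
#!/bin/sh
# Dry run: SRVCTL only echoes each command. On a real cluster, set
# SRVCTL=/u01/app/11.2.0/grid/bin/srvctl and run as the oracle user.
SRVCTL="${SRVCTL:-echo srvctl}"

check_cluster() {
  for target in "database -d prod" asm listener scan \
                "vip -n linuxrac1" "vip -n linuxrac2"; do
    $SRVCTL status $target   # word-splitting of $target is intentional
  done
}

CHECK_OUTPUT=$(check_cluster)
echo "$CHECK_OUTPUT"
```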

 

 

4. Resolving errors encountered during installation

4.1 Oracle RAC Installation FAQ: GNOME desktop errors at login

1. Error: after logging in, several error dialogs pop up and the desktop has no menus or buttons:

GConf error: Failed to contact configuration server; some possible causes are that you need to enable TCP/IP networking for ORBit, or you have stale NFS locks due to a system crash. See http://www.gnome.org/projects/gconf/ for information. (Details -  1: IOR file '/tmp/gconfd-root/lock/ior' not opened successfully, no gconfd located: No such file or directory 2: IOR file '/tmp/gconfd-root/lock/ior' not opened successfully, no gconfd located: No such file or directory)

Failed to contact configuration server; some possible causes are that you need to enable TCP/IP networking for ORBit, or you have stale NFS locks due to a system crash. See http://www.gnome.org/projects/gconf/ for information. (Details -  1: IOR file '/tmp/gconfd-root/lock/ior' not opened successfully, no gconfd located: No such file or directory 2: IOR file '/tmp/gconfd-root/lock/ior' not opened successfully, no gconfd located: No such file or directory)

2. Fix:

1. Delete the current user's stale files under /tmp. For user root, run (in /tmp): # rm -R *root*

2. Delete the .gnome and .gnome2 directories in the user's home directory. For root: # rm -R /root/.gnome and # rm -R /root/.gnome2

3. Restart the GNOME session; the problem is resolved.
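The cleanup steps can also be scripted. The sketch below is hedged to run against scratch directories by default (TMP_ROOT and HOME_ROOT are created with mktemp and pre-populated with the stale files so the dry run has something to delete); on a real system you would set TMP_ROOT=/tmp and HOME_ROOT=/root and skip the setup step.

```shell
#!/bin/sh
# Dry run against scratch directories. On a real host: TMP_ROOT=/tmp,
# HOME_ROOT=/root (or the affected user's home), and no setup step.
USER_NAME="${USER_NAME:-root}"
TMP_ROOT="${TMP_ROOT:-$(mktemp -d)}"
HOME_ROOT="${HOME_ROOT:-$TMP_ROOT/home}"

# Setup for the dry run only: recreate the stale GConf state.
mkdir -p "$TMP_ROOT/gconfd-$USER_NAME/lock" "$HOME_ROOT/.gnome" "$HOME_ROOT/.gnome2"

# Step 1: remove the user's stale files under /tmp.
rm -rf "$TMP_ROOT"/*"$USER_NAME"*
# Step 2: remove .gnome and .gnome2 in the home directory.
rm -rf "$HOME_ROOT/.gnome" "$HOME_ROOT/.gnome2"
# Step 3 on a real system: log out and back in to restart the GNOME session.
```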

 

4.2 Oracle RAC Installation FAQ: oracleasm createdisk fails with "Instantiating disk: failed"

1. Error: Instantiating disk: failed

[root@linuxrac1 /]# /usr/sbin/oracleasm createdisk OCR_VOTE /dev/sdb1

/etc/init.d/oracleasm createdisk OCR_VOTE /dev/sdb1

Writing disk header: done

Instantiating disk: failed

Clearing disk header: done

2. Fix:

1. First check that the oracleasm (ASMLib) package version matches the Linux kernel version; a version mismatch is the usual cause of this error.

[root@linuxrac2 etc]# uname -r

2.6.18-164.el5

If they do not match, download matching packages from the Oracle ASMLib download page:

http://www.oracle.com/technetwork/server-storage/linux/downloads/rhel5-084877.html

2. SELinux may be blocking access to the disk header. Disable it in /etc/selinux/config:

[root@linuxrac1 etc]# cd selinux

[root@linuxrac1 selinux]# ll

total 20

-rw-r--r--. 1 root root  458 Jul 22 09:27 config

-rw-r--r--. 1 root root  113 Feb 21  2013 restorecond.conf

-rw-r--r--. 1 root root   76 Feb 21  2013 restorecond_user.conf

-rw-r--r--. 1 root root 2271 Feb 21  2013 semanage.conf

drwxr-xr-x. 6 root root 4096 Jul 22 09:33 targeted

[root@linuxrac1 selinux]# cp -a /etc/selinux/config  /etc/selinux/config_bak

[root@linuxrac1 selinux]# vi config

# This file controls the state of SELinux on the system.

# SELINUX= can take one of these three values:

#     enforcing - SELinux security policy is enforced.

#     permissive - SELinux prints warnings instead of enforcing.

#     disabled - No SELinux policy is loaded.

#SELINUX=enforcing

SELINUX=disabled

# SELINUXTYPE= can take one of these two values:

#     targeted - Targeted processes are protected,

#     mls - Multi Level Security protection.

SELINUXTYPE=targeted

[root@linuxrac1 etc]# reboot
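The vi edit above can be done non-interactively with sed. A sketch: CONFIG defaults to a scratch file seeded with the stock contents, so it is safe to run anywhere; on a real host, set CONFIG=/etc/selinux/config (after taking the backup shown above) and reboot afterwards.

```shell
#!/bin/sh
# If CONFIG is unset, work on a scratch copy with the stock contents.
# On a real host: CONFIG=/etc/selinux/config (back it up first).
if [ -z "$CONFIG" ]; then
  CONFIG=$(mktemp)
  printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$CONFIG"
fi

# Force SELINUX=disabled; the SELINUXTYPE line is left untouched.
sed -i 's/^SELINUX=.*/SELINUX=disabled/' "$CONFIG"
grep '^SELINUX' "$CONFIG"
```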

 

4.3 Oracle RAC Installation FAQ: cluster node connectivity check fails

1. Error from the node connectivity check:

[grid@linuxrac1 grid]$ ./runcluvfy.sh stage -post hwos -n linuxrac1,linuxrac2 -verbose

 

Performing post-checks for hardware and operating system setup

 

Checking node reachability...

 

Check: Node reachability from node "linuxrac1"

  Destination Node                      Reachable?             

  ------------------------------------  ------------------------

  linuxrac1                             yes                   

  linuxrac2                             yes                   

Result: Node reachability check passed from node "linuxrac1"

 

Checking user equivalence...

 

Check: User equivalence for user "grid"

  Node Name                             Comment          

  ------------------------------------  ------------------------

  linuxrac2                             failed                

  linuxrac1                             failed                

Result: PRVF-4007 : User equivalence check failed for user "grid"

 

ERROR:

User equivalence unavailable on all the specified nodes

Verification cannot proceed

 

Post-check for hardware and operating system setup was unsuccessful on all the nodes.

 

[root@linuxrac1 Downloads]# scp root@10.10.97.161:/home/linuxrac1/Downloads/test.txt root@10.10.97.167:/home/linuxrac2/Downloads

root@10.10.97.161's password:

root@10.10.97.167's password:

test.txt                                                                                          100%    0     0.0KB/s   00:00 

Connection to 10.10.97.161 closed.

Fix: reconfigure user equivalence.

1. Verify the configuration by running ssh linuxrac1 date; ssh linuxrac2 date. With user equivalence correctly configured, ssh linuxrac1 date prints the remote node's time immediately, without prompting for a password:

[grid@linuxrac1 ~]$ ssh linuxrac1 date

Thu Sep 25 20:09:12 PDT 2014

[grid@linuxrac1 ~]$ ssh linuxrac1-priv date

Thu Sep 25 20:09:23 PDT 2014

[grid@linuxrac1 ~]$ ssh linuxrac2 date

Thu Sep 25 20:09:31 PDT 2014

[grid@linuxrac1 ~]$  ssh linuxrac2-priv date

Thu Sep 25 20:09:40 PDT 2014
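The four checks above generalize to a loop. BatchMode=yes makes ssh fail instead of prompting, so a missing key shows up as a FAILED line rather than a hang. Sketched as a dry run: SSH defaults to `echo ssh`, which always "succeeds"; on the cluster, set SSH=ssh and run as the grid (and again as the oracle) user.

```shell
#!/bin/sh
# Dry run: SSH="echo ssh" never contacts a host. On the cluster, SSH=ssh.
SSH="${SSH:-echo ssh}"
NODES="${NODES:-linuxrac1 linuxrac1-priv linuxrac2 linuxrac2-priv}"

check_node() {
  # BatchMode=yes: fail instead of prompting for a password.
  if $SSH -o BatchMode=yes "$1" date >/dev/null 2>&1; then
    echo "equivalence OK: $1"
  else
    echo "equivalence FAILED: $1"
  fi
}

EQUIV_REPORT=$(for node in $NODES; do check_node "$node"; done)
echo "$EQUIV_REPORT"
```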

 

4.4 Oracle RAC Installation FAQ: cannot run the graphical Grid Infrastructure installer

1. The graphical installer fails to start:

[grid@linuxrac1 grid]$ ./runInstaller

Starting Oracle Universal Installer...

 

Checking Temp space: must be greater than 120 MB.   Actual 15122 MB    Passed

Checking swap space: must be greater than 150 MB.   Actual 5945 MB    Passed

Checking monitor: must be configured to display at least 256 colors

    >>> Could not execute auto check for display colors using command /usr/bin/xdpyinfo. Check if the DISPLAY variable is set.    Failed <<<<

 

Some requirement checks failed. You must fulfill these requirements before

 

continuing with the installation,

 

Continue? (y/n) [n] y

 

 

>>> Ignoring required pre-requisite failures. Continuing...

Preparing to launch Oracle Universal Installer from /tmp/OraInstall2014-09-25_11-49-23PM. Please wait ...

DISPLAY not set. Please set the DISPLAY and try again.

Depending on the Unix Shell, you can use one of the following commands as examples to set the DISPLAY environment variable:

- For csh:                      % setenv DISPLAY 192.168.1.128:0.0

- For sh, ksh and bash:         $ DISPLAY=192.168.1.128:0.0; export DISPLAY

Use the following command to see what shell is being used:

        echo $SHELL

Use the following command to view the current DISPLAY environment variable setting:

        echo $DISPLAY

- Make sure that client users are authorized to connect to the X Server.

To enable client users to access the X Server, open an xterm, dtterm or xconsole as the user that started the session and type the following command:

% xhost +

To test that the DISPLAY environment variable is set correctly, run a X11 based program that comes with the native operating system such as 'xclock':

        % <full path to xclock.. see below>

If you are not able to run xclock successfully, please refer to your PC-X Server or OS vendor for further

assistance.

Typical path for xclock: /usr/X11R6/bin/xclock

2. Run the graphical installer through Xmanager.

1) Option one: Xmanager Passive:

a) Install Xmanager on the local machine (Xmanager 4 here). Installation is like any other software: enter the serial number (101210-450789-147200) when prompted, then click Next through the remaining steps.

 

b) After installation, launch Xmanager Passive.

 

c) Log in to the remote host that will run the graphical installer and execute:

 

[grid@linuxrac1 ~]$ export DISPLAY=10.10.97.168:0.0    # set DISPLAY to the IP of the machine running Xmanager

[grid@linuxrac1 ~]$ xhost +

access control disabled, clients can connect from any host

Note: after xhost + runs, it confirms that access control is disabled and clients may connect from any host.

2) Option two: Xbrowser:

Xbrowser connects to the Linux graphical desktop remotely (over XDMCP, per the configuration below); Xftp transfers files to the Linux server over SSH; Xshell manages the server from a character-mode SSH terminal; Xstart launches graphical programs through an Xstart session.

Xbrowser service configuration:

a) Edit the configuration file: vi /etc/gdm/custom.conf, changing it as follows:

[root@ linuxrac1 ~]$ vi /etc/gdm/custom.conf

[security]

AllowRemoteRoot=true  # allow remote root login

[xdmcp]

Enable=true  # enable the XDMCP service

Port=177  # service port

 

b) Set the default runlevel: edit /etc/inittab and change the default runlevel to 5, so the system boots into the graphical interface; no change is needed if it is already 5.

[root@linuxrac1 ~]# vi /etc/inittab

id:5:initdefault:

 

c) Then reboot the server:

[root@linuxrac1 ~]# reboot

 

d) After the reboot, verify that the XDMCP service is listening:

[root@linuxrac1 ~]# lsof -i:177

 

e) Log in with Xbrowser.

 

4.5 Oracle RAC Installation FAQ: insufficient space when creating the ASM disk group during Grid installation

Because "1" was chosen as the Last cylinder value when the disk was partitioned earlier, there was not enough space to create the disk group. The fix is to delete the partition, recreate it, delete the ASM disk, and then recreate the ASM disk.

1. Delete and recreate the partition:

1) Inspect the undersized partition:

[root@linuxrac1 ~]# fdisk /dev/sdb

 

Command (m for help): p

 

Disk /dev/sdb: 4294 MB, 4294967296 bytes

255 heads, 63 sectors/track, 522 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

 

   Device Boot      Start         End      Blocks   Id  System

/dev/sdb1               1           1        8001   83  Linux

2) Delete the partition:

[root@linuxrac1 ~]# fdisk /dev/sdb

Command (m for help): d

Selected partition 1

 

Command (m for help): p

 

Disk /dev/sdb: 4294 MB, 4294967296 bytes

255 heads, 63 sectors/track, 522 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

 

   Device Boot      Start         End      Blocks   Id  System

Command (m for help): w

The partition table has been altered!

 

Calling ioctl() to re-read partition table.

Syncing disks.

3) Recreate the partition, this time using the full cylinder range:

[root@linuxrac1 ~]# fdisk /dev/sdb

Command (m for help): n

Command action

   e   extended

   p   primary partition (1-4)

p

Partition number (1-4): 1

First cylinder (1-522, default 1): 1

Last cylinder or +size or +sizeM or +sizeK (1-522, default 522): 522

 

Command (m for help): w

The partition table has been altered!

 

Calling ioctl() to re-read partition table.

Syncing disks.

4) Delete the ASM disk:

[root@linuxrac1 ~]# oracleasm deletedisk OCR_VOTE

Clearing disk header: done

Dropping disk: done

5) Recreate the ASM disk:

[root@linuxrac1 ~]# /etc/init.d/oracleasm createdisk OCR_VOTE /dev/sdb1

Marking disk "OCR_VOTE" as an ASM disk: [  OK  ]

6) List the ASM disks:

[root@linuxrac1 ~]# oracleasm listdisks

DATA

DATA2

FRA

OCR_VOTE

7) Scan for the ASM disks and verify them on both nodes:

[root@linuxrac1 ~]# oracleasm scandisks

Reloading disk partitions: done

Cleaning any stale ASM disks...

Scanning system for ASM disks...

[root@linuxrac1 ~]#  ls -l /dev/oracleasm/disks

total 0

brw-rw---- 1 grid oinstall 8, 33 Sep 26 00:52 DATA

brw-rw---- 1 grid oinstall 8, 49 Sep 26 00:52 DATA2

brw-rw---- 1 grid oinstall 8, 65 Sep 26 00:52 FRA

brw-rw---- 1 grid oinstall 8, 17 Sep 26 02:42 OCR_VOTE

[root@linuxrac2 ~]#  ls -l /dev/oracleasm/disks

total 0

brw-rw---- 1 grid oinstall 8, 33 Sep 25 18:15 DATA

brw-rw---- 1 grid oinstall 8, 49 Sep 25 18:15 DATA2

brw-rw---- 1 grid oinstall 8, 65 Sep 25 18:15 FRA

brw-rw---- 1 grid oinstall 8, 17 Sep 25 18:15 OCR_VOTE
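The fdisk geometry in step 1 explains the shortage: with 255 heads and 63 sectors/track, one cylinder is 16065 sectors of 512 bytes, about 8 MB. A partition ending at cylinder 1 therefore holds only 8001 1-KB blocks (one cylinder minus the 63-sector offset before the first partition), while the full range 1–522 yields roughly the 4294 MB fdisk reports. The arithmetic:

```shell
#!/bin/sh
# Geometry from the fdisk output above: 255 heads, 63 sectors/track,
# 522 cylinders, 512-byte sectors.
SECTORS_PER_CYL=$((255 * 63))            # 16065 sectors per cylinder
CYL_BYTES=$((SECTORS_PER_CYL * 512))     # 8225280 bytes per cylinder

# Partition 1-1: one cylinder minus the 63-sector offset, in 1-KB blocks.
SMALL_BLOCKS=$(( (SECTORS_PER_CYL - 63) * 512 / 1024 ))
# Cylinders 1-522 (slightly under the raw 4294 MB of the 4 GB disk,
# because the last partial cylinder is unused).
DISK_MB=$(( 522 * CYL_BYTES / 1000000 ))

echo "partition ending at cylinder 1: $SMALL_BLOCKS blocks"
echo "partition spanning cylinders 1-522: ${DISK_MB} MB"
```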

4.6 Oracle RAC Installation FAQ: deconfiguring and uninstalling 11gR2 Grid Infrastructure

The root scripts must be run in the following order:

1.[root@linuxrac1 ~]# /u01/app/oraInventory/orainstRoot.sh

2.[root@linuxrac2 ~]# /u01/app/oraInventory/orainstRoot.sh

3.[root@linuxrac1 ~]# /u01/app/11.2.0/grid/root.sh

4.[root@linuxrac2 ~]# /u01/app/11.2.0/grid/root.sh

When installing the clusterware, the scripts were not run on both nodes in the order above; instead the following incorrect order was used:

1. [root@linuxrac1 ~]# /u01/app/oraInventory/orainstRoot.sh

2. [root@linuxrac1 ~]# /u01/app/11.2.0/grid/root.sh

3. [root@linuxrac2 ~]# /u01/app/oraInventory/orainstRoot.sh

4. [root@linuxrac2 ~]# /u01/app/11.2.0/grid/root.sh

which caused the cluster installation to fail.
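The correct ordering amounts to two passes over the node list: orainstRoot.sh on every node first, then root.sh on every node. A dry-run sketch (RUN=echo only prints the plan; on a real cluster each script is executed as root on the named node):

```shell
#!/bin/sh
# Dry run: RUN=echo prints the plan instead of executing anything.
RUN="${RUN:-echo}"
NODES="linuxrac1 linuxrac2"

ORDER=$(
  # Pass 1: orainstRoot.sh on every node.
  for node in $NODES; do $RUN "$node /u01/app/oraInventory/orainstRoot.sh"; done
  # Pass 2: root.sh on every node, only after pass 1 is complete.
  for node in $NODES; do $RUN "$node /u01/app/11.2.0/grid/root.sh"; done
)
echo "$ORDER"
```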

1. First restore the configuration. Deconfiguring Grid Infrastructure does not remove the copied binaries; it only returns the system to the state before CRS was configured.

a) As root, run the following command on every node except the last:

#perl /u01/app/11.2.0/grid/crs/install/rootcrs.pl -verbose -deconfig -force

[root@linuxrac1 ~]#perl /u01/app/11.2.0/grid/crs/install/rootcrs.pl -verbose -deconfig -force

2014-10-16 00:20:37: Parsing the host name

2014-10-16 00:20:37: Checking for super user privileges

2014-10-16 00:20:37: User has super user privileges

Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params

PRCR-1035 : Failed to look up CRS resource ora.cluster_vip.type for 1

PRCR-1068 : Failed to query resources

Cannot communicate with crsd

PRCR-1070 : Failed to check if resource ora.gsd is registered

Cannot communicate with crsd

PRCR-1070 : Failed to check if resource ora.ons is registered

Cannot communicate with crsd

PRCR-1070 : Failed to check if resource ora.eons is registered

Cannot communicate with crsd

 

ADVM/ACFS is not supported on centos-release-5-4.el5.centos.1

 

ACFS-9201: Not Supported

CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'linuxrac1'

CRS-2673: Attempting to stop 'ora.crsd' on 'linuxrac1'

CRS-4548: Unable to connect to CRSD

CRS-2675: Stop of 'ora.crsd' on 'linuxrac1' failed

CRS-2679: Attempting to clean 'ora.crsd' on 'linuxrac1'

CRS-4548: Unable to connect to CRSD

CRS-2678: 'ora.crsd' on 'linuxrac1' has experienced an unrecoverable failure

CRS-0267: Human intervention required to resume its availability.

CRS-2795: Shutdown of Oracle High Availability Services-managed resources on 'linuxrac1' has failed

CRS-4687: Shutdown command has completed with error(s).

CRS-4000: Command Stop failed, or completed with errors.

You must kill crs processes or reboot the system to properly

cleanup the processes started by Oracle clusterware

Successfully deconfigured Oracle clusterware stack on this node

b) Again as root, run the same command on the last node. This clears the OCR configuration and the voting disk:

#perl /u01/app/11.2.0/grid/crs/install/rootcrs.pl -verbose -deconfig -force

[root@linuxrac2 ~]#perl /u01/app/11.2.0/grid/crs/install/rootcrs.pl -verbose -deconfig -force

2014-10-16 00:25:37: Parsing the host name

2014-10-16 00:25:37: Checking for super user privileges

2014-10-16 00:25:37: User has super user privileges

Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params

VIP exists.:linuxrac1

VIP exists.: /linuxrac1-vip/10.10.97.181/255.255.255.0/eth0

GSD exists.

ONS daemon exists. Local port 6100, remote port 6200

eONS daemon exists. Multicast port 18049, multicast IP address 234.241.229.252, listening port 2016

PRKO-2439 : VIP does not exist.

 

PRKO-2313 : VIP linuxrac2 does not exist.

ADVM/ACFS is not supported on centos-release-5-4.el5.centos.1

 

ACFS-9201: Not Supported

CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'linuxrac2'

CRS-2673: Attempting to stop 'ora.crsd' on 'linuxrac2'

CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'linuxrac2'

CRS-2673: Attempting to stop 'ora.OCR_VOTE.dg' on 'linuxrac2'

CRS-2677: Stop of 'ora.OCR_VOTE.dg' on 'linuxrac2' succeeded

CRS-2673: Attempting to stop 'ora.asm' on 'linuxrac2'

CRS-2677: Stop of 'ora.asm' on 'linuxrac2' succeeded

CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'linuxrac2' has completed

CRS-2677: Stop of 'ora.crsd' on 'linuxrac2' succeeded

CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'linuxrac2'

CRS-2673: Attempting to stop 'ora.ctssd' on 'linuxrac2'

CRS-2673: Attempting to stop 'ora.evmd' on 'linuxrac2'

CRS-2673: Attempting to stop 'ora.asm' on 'linuxrac2'

CRS-2673: Attempting to stop 'ora.mdnsd' on 'linuxrac2'

CRS-2677: Stop of 'ora.cssdmonitor' on 'linuxrac2' succeeded

CRS-2677: Stop of 'ora.evmd' on 'linuxrac2' succeeded

CRS-2677: Stop of 'ora.mdnsd' on 'linuxrac2' succeeded

CRS-2677: Stop of 'ora.asm' on 'linuxrac2' succeeded

CRS-2677: Stop of 'ora.ctssd' on 'linuxrac2' succeeded

CRS-2673: Attempting to stop 'ora.cssd' on 'linuxrac2'

CRS-2677: Stop of 'ora.cssd' on 'linuxrac2' succeeded

CRS-2673: Attempting to stop 'ora.gpnpd' on 'linuxrac2'

CRS-2673: Attempting to stop 'ora.diskmon' on 'linuxrac2'

CRS-2677: Stop of 'ora.gpnpd' on 'linuxrac2' succeeded

CRS-2673: Attempting to stop 'ora.gipcd' on 'linuxrac2'

CRS-2677: Stop of 'ora.gipcd' on 'linuxrac2' succeeded

CRS-2677: Stop of 'ora.diskmon' on 'linuxrac2' succeeded

CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'linuxrac2' has completed

CRS-4133: Oracle High Availability Services has been stopped.

Successfully deconfigured Oracle clusterware stack on this node

c) If ASM disks were used, continue with the following steps to make the disks ASM candidates again (this wipes all ASM disk groups):

[root@linuxrac1 ~]#dd if=/dev/zero of=/dev/sdb1 bs=1024 count=10000

10000+0 records in

10000+0 records out

10240000 bytes (10M) copied, 0.002998 seconds, 34.2 MB/s

[root@linuxrac2 ~]#dd if=/dev/zero of=/dev/sdb1 bs=1024 count=10000

10000+0 records in

10000+0 records out

10240000 bytes (10M) copied, 0.00289 seconds, 35.4 MB/s

 

[root@linuxrac1 /]# /etc/init.d/oracleasm deletedisk OCR_VOTE /dev/sdb1

Removing ASM disk "OCR_VOTE":                              [  OK  ]

[root@linuxrac2 /]# /etc/init.d/oracleasm deletedisk OCR_VOTE /dev/sdb1

Removing ASM disk "OCR_VOTE":                              [  OK  ]

[root@linuxrac1 /]# /etc/init.d/oracleasm createdisk OCR_VOTE /dev/sdb1

[root@linuxrac2 /]# oracleasm scandisks

Reloading disk partitions: done

Cleaning any stale ASM disks...

Scanning system for ASM disks...

Instantiating disk "OCR_VOTE"

[root@linuxrac2 /]# oracleasm listdisks

DATA

DATA2

FRA

OCR_VOTE

 

2. Completely removing Grid Infrastructure

11gR2 Grid Infrastructure also ships a full uninstall tool, deinstall, which replaces the OUI-based method of removing the clusterware and ASM and returns the environment to its pre-Grid state.

The command stops the cluster and removes the binaries and all related configuration.

Location: $GRID_HOME/deinstall

A concrete run of the command follows; it prompts for some interactive input, and parts of the cleanup must be run as root in a new session.

[root@ linuxrac1/ ]# cd /u01/app/11.2.0/grid/

[root@ linuxrac1/ ]# cd bin

[root@ linuxrac1 bin]# ./crsctl check crs

CRS-4047: No Oracle Clusterware components configured.

CRS-4000: Command Check failed, or completed with errors.

[root@ linuxrac1 bin]# cd ../deinstall/

[root@ linuxrac1 deinstall]# pwd

[root@ linuxrac1 deinstall]# su grid

[grid@ linuxrac1 deinstall]# ./deinstall

Checking for required files and bootstrapping ...

Please wait ...

Location of logs /tmp/deinstall2014-10-16_06-18-10-PM/logs/

 

############ ORACLE DEINSTALL & DECONFIG TOOL START ############

 

 

######################## CHECK OPERATION START ########################

Install check configuration START

 

 

Checking for existence of the Oracle home location /u01/app/11.2.0/grid

Oracle Home type selected for de-install is: CRS

Oracle Base selected for de-install is: /u01/app/grid

Checking for existence of central inventory location /u01/app/oraInventory

Checking for existence of the Oracle Grid Infrastructure home

The following nodes are part of this cluster: linuxrac1,linuxrac2

 

Install check configuration END

 

Traces log file: /tmp/deinstall2014-10-16_06-18-10-PM/logs//crsdc.log

Enter an address or the name of the virtual IP used on node "linuxrac1"[linuxrac1-vip]

 >

The following information can be collected by running ifconfig -a on node "linuxrac1"

Enter the IP netmask of Virtual IP "10.10.97.181" on node "linuxrac1"[255.255.255.0]

 >

Enter the network interface name on which the virtual IP address "10.10.97.181" is active

 >

Enter an address or the name of the virtual IP used on node "linuxrac2"[linuxrac2-vip]

 >

 

The following information can be collected by running ifconfig -a on node "linuxrac2"

Enter the IP netmask of Virtual IP "10.10.97.183" on node "linuxrac2"[255.255.255.0]

 >

 

Enter the network interface name on which the virtual IP address "10.10.97.183" is active

 >

 

Enter an address or the name of the virtual IP[]

 >

 

 

Network Configuration check config START

 

Network de-configuration trace file location: /tmp/deinstall2014-10-16_06-18-10-PM/logs/netdc_check4793051808580150519.log

 

Specify all RAC listeners that are to be de-configured [LISTENER,LISTENER_SCAN1]:

 

Network Configuration check config END

 

Asm Check Configuration START

 

ASM de-configuration trace file location: /tmp/deinstall2014-10-16_06-18-10-PM/logs/asmcadc_check1638223369054710711.log

 

ASM configuration was not detected in this Oracle home. Was ASM configured in this Oracle home (y|n) [n]: y

 

Enter the OCR/Voting Disk diskgroup name []:    

Specify the ASM Diagnostic Destination [ ]:

Specify the diskgroups that are managed by this ASM instance []:

 

 

######################### CHECK OPERATION END #########################

 

 

####################### CHECK OPERATION SUMMARY #######################

Oracle Grid Infrastructure Home is:

The cluster node(s) on which the Oracle home exists are: (Please input nodes seperated by ",", eg: node1,node2,...)linuxrac1,linuxrac2

Oracle Home selected for de-install is: /u01/app/11.2.0/grid

Inventory Location where the Oracle home registered is: /u01/app/oraInventory

Following RAC listener(s) will be de-configured: LISTENER,LISTENER_SCAN1

ASM instance will be de-configured from this Oracle home

Do you want to continue (y - yes, n - no)? [n]: y

A log of this session will be written to: '/tmp/deinstall2014-10-16_06-18-10-PM/logs/deinstall_deconfig2014-10-16_06-18-44-PM.out'

Any error messages from this session will be written to: '/tmp/deinstall2014-10-16_06-18-10-PM/logs/deinstall_deconfig2014-10-16_06-18-44-PM.err'

 

######################## CLEAN OPERATION START ########################

ASM de-configuration trace file location: /tmp/deinstall2014-10-16_06-18-10-PM/logs/asmcadc_clean779346077107850558.log

ASM Clean Configuration START

ASM Clean Configuration END

 

Network Configuration clean config START

 

Network de-configuration trace file location: /tmp/deinstall2014-10-16_06-18-10-PM/logs/netdc_clean3314924901124092411.log

 

De-configuring RAC listener(s): LISTENER,LISTENER_SCAN1

 

De-configuring listener: LISTENER

    Stopping listener: LISTENER

    Warning: Failed to stop listener. Listener may not be running.

Listener de-configured successfully.

 

De-configuring listener: LISTENER_SCAN1

    Stopping listener: LISTENER_SCAN1

    Warning: Failed to stop listener. Listener may not be running.

Listener de-configured successfully.

 

De-configuring Naming Methods configuration file on all nodes...

Naming Methods configuration file de-configured successfully.

 

De-configuring Local Net Service Names configuration file on all nodes...

Local Net Service Names configuration file de-configured successfully.

 

De-configuring Directory Usage configuration file on all nodes...

Directory Usage configuration file de-configured successfully.

 

De-configuring backup files on all nodes...

Backup files de-configured successfully.

 

The network configuration has been cleaned up successfully.

 

Network Configuration clean config END

 

 

---------------------------------------->

Oracle Universal Installer clean START

 

Detach Oracle home '/u01/app/11.2.0/grid' from the central inventory on the local node : Done

 

Delete directory '/u01/app/11.2.0/grid' on the local node : Done

 

Delete directory '/u01/app/oraInventory' on the local node : Done

 

Delete directory '/u01/app/grid' on the local node : Done

 

Detach Oracle home '/u01/app/11.2.0/grid' from the central inventory on the remote nodes 'linuxrac2' : Done

 

Delete directory '/u01/app/11.2.0/grid' on the remote nodes 'linuxrac2' : Done

 

Delete directory '/u01/app/oraInventory' on the remote nodes 'linuxrac2' : Done

 

Delete directory '/u01/app/grid' on the remote nodes 'linuxrac2' : Done

 

Oracle Universal Installer cleanup was successful.

 

Oracle Universal Installer clean END

 

 

Oracle install clean START

 

Clean install operation removing temporary directory '/tmp/install' on node 'linuxrac1'

Clean install operation removing temporary directory '/tmp/install' on node 'linuxrac2'

 

Oracle install clean END

 

 

######################### CLEAN OPERATION END #########################

 

 

####################### CLEAN OPERATION SUMMARY #######################

ASM instance was de-configured successfully from the Oracle home

Following RAC listener(s) were de-configured successfully: LISTENER,LISTENER_SCAN1

Oracle Clusterware was already stopped and de-configured on node "linuxrac2"

Oracle Clusterware was already stopped and de-configured on node "linuxrac1"

Oracle Clusterware is stopped and de-configured successfully.

Successfully detached Oracle home '/u01/app/11.2.0/grid' from the central inventory on the local node.

Successfully deleted directory '/u01/app/11.2.0/grid' on the local node.

Successfully deleted directory '/u01/app/oraInventory' on the local node.

Successfully deleted directory '/u01/app/grid' on the local node.

Successfully detached Oracle home '/u01/app/11.2.0/grid' from the central inventory on the remote nodes 'linuxrac2'.

Successfully deleted directory '/u01/app/11.2.0/grid' on the remote nodes 'linuxrac2'.

Successfully deleted directory '/u01/app/oraInventory' on the remote nodes 'linuxrac2'.

Successfully deleted directory '/u01/app/grid' on the remote nodes 'linuxrac2'.

Oracle Universal Installer cleanup was successful.

 

 

Run 'rm -rf /etc/oraInst.loc' as root on node(s) 'linuxrac1,linuxrac2' at the end of the session.

 

Oracle install successfully cleaned up the temporary directories.

#######################################################################

 

 

############# ORACLE DEINSTALL & DECONFIG TOOL END #############

4.7 Oracle RAC Installation FAQ: changing the public network IP in Oracle 11gR2 RAC

Problem: the public IP of node linuxrac2 was taken by another host, making cluster node 2 unreachable.

1. Disable and stop the affected CRS resources (VIP, listener, SCAN, SCAN listener, database).

1.1 Disable and stop the database on all nodes:

[grid@linuxrac1 ~]$ srvctl disable database -d prod

[grid@linuxrac1 ~]$ srvctl stop database -d prod

1.2 Disable and stop the listener on all nodes:

[grid@linuxrac1 ~]$ srvctl disable listener

[grid@linuxrac1 ~]$ srvctl stop listener

1.3 Disable and stop the VIP on all nodes (note: a. refer to the VIP by the name configured in /etc/hosts; b. only root can disable the VIP resource):

[root@linuxrac1 ~]$ /u01/app/11.2.0/grid/bin/srvctl disable vip -i "linuxrac1-vip"

[root@linuxrac1 ~]$ /u01/app/11.2.0/grid/bin/srvctl disable vip -i "linuxrac2-vip"

[grid@linuxrac1 ~]$ srvctl stop vip -n linuxrac1

[grid@linuxrac1 ~]$ srvctl stop vip -n linuxrac2

1.4 Disable and stop the SCAN listener on all nodes:

[grid@linuxrac1 ~]$ srvctl disable scan_listener

[grid@linuxrac1 ~]$ srvctl stop scan_listener

1.5 Disable and stop the SCAN on all nodes:

[root@linuxrac1 ~]$ /u01/app/11.2.0/grid/bin/srvctl disable scan

[grid@linuxrac1 ~]$ srvctl stop scan

2. Update the network infrastructure (switches, routers, DNS); the relevant DNS entries must reflect the new IP.

3.操做系統網絡配置修改(/etc/hosts,ifcfg-eth0,/etc/resolve.conf)

3.1修改/etc/hosts

[root@linuxrac1 ~]# vi /etc/hosts

[root@linuxrac2 ~]# vi /etc/hosts

[root@linuxrac2 ~]# cd /etc/sysconfig/network-scripts

[root@linuxrac2 network-scripts]# vi ifcfg-eth0

[root@linuxrac2 etc]# cat resolv.conf

; generated by /sbin/dhclient-script

search comtop.local

nameserver 10.10.5.12

nameserver 10.10.5.11

4. Reconfigure and start the CRS network resources (the cluster's public network and interface settings, VIP configuration and listener startup, SCAN reconfiguration and SCAN listener startup).

Proceed only after step 3 is complete and the new configuration is in effect. The public IP and VIP need no extra configuration: once the matching /etc/hosts entries are updated, the cluster uses the new IPs automatically, since Oracle configures these addresses primarily by host name.

4.1 Cluster public network and interface settings:

[root@linuxrac2 ~]$ /u01/app/11.2.0/grid/bin/oifcfg  getif

eth0  10.10.97.0  global  public

eth1  192.168.2.0  global  cluster_interconnect

[root@linuxrac2 ~]$ /u01/app/11.2.0/grid/bin/oifcfg  delif -global eth0

 

[root@linuxrac2 ~]$ /u01/app/11.2.0/grid/bin/oifcfg  setif -global eth0/10.10.97.163:public

After the change, run the following on every node to verify it took effect:

[root@linuxrac2 ~]$ /u01/app/11.2.0/grid/bin/oifcfg  getif

eth1  192.168.2.0  global  cluster_interconnect

eth0  10.10.97.163  global  public

4.2 Re-enable and start the VIPs and the listener:

[root@linuxrac2 ~]$ /u01/app/11.2.0/grid/bin/srvctl enable vip -i "linuxrac1-vip"

[root@linuxrac2 ~]$ /u01/app/11.2.0/grid/bin/srvctl enable vip -i "linuxrac2-vip"

 

[root@linuxrac2 ~]$ /u01/app/11.2.0/grid/bin/srvctl start vip -n linuxrac1

[root@linuxrac2 ~]$ /u01/app/11.2.0/grid/bin/srvctl start vip -n linuxrac2

 

[root@linuxrac2 ~]$ /u01/app/11.2.0/grid/bin/srvctl enable listener

[root@linuxrac2 ~]$ /u01/app/11.2.0/grid/bin/srvctl start listener

4.3 Reconfigure the SCAN and start the SCAN listener:

Testing shows that the SCAN's subnet depends on the USR_ORA_SUBNET attribute of the ora.net1.network resource, so change that attribute to the new network number before modifying the SCAN:

[root@linuxrac2 ~]$ /u01/app/11.2.0/grid/bin/crsctl modify res "ora.net1.network" -attr "USR_ORA_SUBNET=10.10.97.163"

Then update linuxrac-scan. srvctl offers only an option to modify the SCAN configuration by domain name; presumably Oracle resolves the corresponding IP through DNS:

[root@linuxrac2 ~]$ /u01/app/11.2.0/grid/bin/srvctl modify scan -n linuxrac-scan.comtop.local

[root@linuxrac2 ~]$ /u01/app/11.2.0/grid/bin/srvctl enable scan

[root@linuxrac2 ~]$ /u01/app/11.2.0/grid/bin/srvctl start scan

[root@linuxrac2 ~]$ /u01/app/11.2.0/grid/bin/srvctl enable scan_listener

[root@linuxrac2 ~]$ /u01/app/11.2.0/grid/bin/srvctl start scan_listener

4.4 Enable and start the database to complete the public network switchover:

[root@linuxrac2 ~]$ /u01/app/11.2.0/grid/bin/srvctl enable database -d prod

[root@linuxrac2 ~]$ /u01/app/11.2.0/grid/bin/srvctl start database -d prod
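The whole switchover can be summarized as one ordered script: stop everything that holds the public address top-down, change the network configuration outside the cluster, then start everything bottom-up. The sketch below is a dry run (SRVCTL defaults to `echo srvctl`); it omits the disable/enable calls and the root-only distinctions shown above, so treat it as an ordering aid, not a drop-in script.

```shell
#!/bin/sh
# Dry run: SRVCTL only prints each command. On a real cluster use
# /u01/app/11.2.0/grid/bin/srvctl and run the VIP/SCAN steps as root.
SRVCTL="${SRVCTL:-echo srvctl}"

STEPS=$(
  # 1. Stop top-down: database, listeners, VIPs, SCAN.
  $SRVCTL stop database -d prod
  $SRVCTL stop listener
  $SRVCTL stop vip -n linuxrac1
  $SRVCTL stop vip -n linuxrac2
  $SRVCTL stop scan_listener
  $SRVCTL stop scan
  # 2-3. (Outside srvctl) update DNS, /etc/hosts, ifcfg-eth0, oifcfg.
  # 4. Start bottom-up: VIPs, listeners, SCAN, database.
  $SRVCTL start vip -n linuxrac1
  $SRVCTL start vip -n linuxrac2
  $SRVCTL start listener
  $SRVCTL start scan
  $SRVCTL start scan_listener
  $SRVCTL start database -d prod
)
echo "$STEPS"
```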
