Oracle RAC 11g R2 (11.2.0.4) Deployment Guide

Environment Preparation

Hosts: node1, node2
OS: Oracle Linux 6.7
Database: Oracle 11g RAC 11.2.0.4

Network plan:
  pub (eth0):   192.168.*.240 (node1), 192.168.*.239 (node2)
  vip:          192.168.*.238 (node1-vip), 192.168.*.237 (node2-vip)
  priv (eth1):  10.10.10.11 (node1-priv), 10.10.10.12 (node2-priv)
  scan:         rac-scan, 192.168.*.236

System disk layout:
  vda1: /boot     500MB  ext4
  vda2: swap      20GB
        /dev/shm  48GB   ext4
        /         90GB   ext4
        /home     60GB   ext4
  vda3: /u01      100GB  ext4
  sdj1: /backup   1TB    ext4 (node1 only)

ASM disk groups:
  ASM_DATA  3 x 300G
  ASM_FRA   2 x 300G
  OCR_VOTE  4 x 1G

Hostnames: node1; node2
Accounts: root (****), oracle (****), grid (****), sys (****)
Directories:
  ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1
  GRID_HOME=/u01/11.2.0/grid
Login method: ssh

Shared Disk Partition List

Purpose          Partition    Size
OCR+VOTE         /dev/sda1    1G
                 /dev/sdb1    1G
                 /dev/sdc1    1G
                 /dev/sdd1    1G
DATABASE         /dev/sde1    300G
                 /dev/sdf1    300G
                 /dev/sdg1    300G
RECOVERY AREA    /dev/sdh1    300G
                 /dev/sdi1    300G

Hardware Environment Checks

Check item    Check method
Memory        grep -i memtotal /proc/meminfo
Swap space    /sbin/swapon -s
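The MemTotal check reports kilobytes. A small sketch of converting that value to MB; the sample line below is hypothetical — on a real node feed the output of grep -i memtotal /proc/meminfo through the same awk:

```shell
# Hypothetical sample of the MemTotal line; on a node use:
#   grep -i memtotal /proc/meminfo
sample='MemTotal:        1599488 kB'
kb=$(echo "$sample" | awk '{print $2}')   # second field is the kB value
mb=$(( kb / 1024 ))
echo "${mb} MB"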

Required Software Packages

Install with yum:

yum install -y binutils*
yum install -y compat-libstdc*
yum install -y elfutils-libelf*
yum install -y gcc*
yum install -y gcc-c*
yum install -y glibc*
yum install -y libaio*
yum install -y libgcc*
yum install -y libstdc*
yum install -y compat-libcap1*
yum install -y make*
yum install -y sysstat*
yum install -y unixODBC*
yum install -y ksh*
yum install -y vnc*

Install with rpm (download these packages first):

cvuqdisk-1.0.10-1
oracleasmlib-2.0.12-1.el6.x86_64
oracleasm-support-2.1.8-1.el6.x86_64
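The yum installs above all follow one pattern, so they can be driven from a single loop. A sketch — the stems mirror the package list, and echo is left in so the loop only prints the commands it would run:

```shell
# Package-name stems from the list above; the trailing '*' keeps the wildcard install.
pkgs="binutils compat-libstdc elfutils-libelf gcc gcc-c glibc libaio libgcc libstdc compat-libcap1 make sysstat unixODBC ksh vnc"
n=0
for p in $pkgs; do
  echo "yum install -y ${p}*"   # drop the echo to actually install on the nodes
  n=$((n + 1))
done
echo "$n packages"
```

Run the same loop (without the echo) as root on both node1 and node2.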

RAC Installation Steps

Network and Hostname Configuration

1. Edit /etc/sysconfig/network on node1

-- No gateway needs to be set here

NETWORKING=yes

NETWORKING_IPV6=no

HOSTNAME=node1

2. Edit /etc/sysconfig/network-scripts/ifcfg-eth0 on node1

-- The MAC address does not need to be configured in this file

DEVICE=eth0

BOOTPROTO=static

IPADDR=192.168.*.240

NETMASK=255.255.255.0

GATEWAY=192.168.*.1

ONBOOT=yes

3. Edit /etc/sysconfig/network-scripts/ifcfg-eth1 on node1

-- Private interconnect IP; no gateway needs to be set
-- The MAC address does not need to be configured in this file

DEVICE=eth1

BOOTPROTO=static

IPADDR=10.10.10.11

NETMASK=255.255.255.0

ONBOOT=yes

4. Restart the network service on node1 with service network restart. You can also reboot the system so that the new hostname takes effect at the same time.

5. Edit /etc/sysconfig/network on node2

-- No gateway needs to be set here

NETWORKING=yes

NETWORKING_IPV6=no

HOSTNAME=node2

6. Edit /etc/sysconfig/network-scripts/ifcfg-eth0 on node2

-- The MAC address does not need to be configured in this file

DEVICE=eth0

BOOTPROTO=static

IPADDR=192.168.*.239

NETMASK=255.255.255.0

GATEWAY=192.168.*.254

ONBOOT=yes

7. Edit /etc/sysconfig/network-scripts/ifcfg-eth1 on node2

-- Private interconnect IP; no gateway needs to be set
-- The MAC address does not need to be configured in this file

DEVICE=eth1

BOOTPROTO=static

IPADDR=10.10.10.12

NETMASK=255.255.255.0

ONBOOT=yes

8. Restart the network service on node2 with service network restart. You can also reboot the system so that the new hostname takes effect at the same time.

 

Partition the local disks (omitted)

Create Users and Groups

1. Create the users on node1 and node2

-- The user and group IDs must be identical on both nodes

groupadd  -g 200 oinstall

groupadd  -g 201 dba

groupadd  -g 202 oper

groupadd  -g 203 asmadmin

groupadd  -g 204 asmoper

groupadd  -g 205 asmdba

useradd -u 200 -g oinstall -G dba,asmdba,oper oracle

useradd -u 201 -g oinstall -G asmadmin,asmdba,asmoper,oper,dba grid

-- Set the user passwords

[root@node1 ~]# passwd oracle

Changing password for user oracle.

New UNIX password:

BAD PASSWORD: it is too simplistic/systematic

Retype new UNIX password:

passwd: all authentication tokens updated successfully.

[root@node1 ~]# passwd grid

Changing password for user grid.

New UNIX password:

BAD PASSWORD: it is too simplistic/systematic

Retype new UNIX password:

passwd: all authentication tokens updated successfully.

2. Create the required directories under /u01 on both node1 and node2

-- After creating the directories, check their owner and group

mkdir -p /u01/app/oraInventory

chown -R grid:oinstall /u01/app/oraInventory/

chmod -R 775 /u01/app/oraInventory/

mkdir -p /u01/11.2.0/grid

chown -R grid:oinstall /u01/11.2.0/grid/

chmod -R 775 /u01/11.2.0/grid/

mkdir -p /u01/app/oracle

mkdir -p /u01/app/oracle/cfgtoollogs

mkdir -p /u01/app/oracle/product/11.2.0/db_1

chown -R oracle:oinstall /u01/app/oracle

chmod -R 775 /u01/app/oracle

3. Set the oracle user's environment variables on node1

-- Note the ORACLE_SID setting

[root@node1 ~]# su - oracle

[oracle@node1 ~]$ vi .bash_profile

 

PATH=$PATH:$HOME/bin

 

# .bash_profile

 

# Get the aliases and functions

if [ -f ~/.bashrc ]; then

        . ~/.bashrc

fi

 

# User specific environment and startup programs

export EDITOR=vi

export ORACLE_SID=prod1

export ORACLE_BASE=/u01/app/oracle

export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/db_1

export LD_LIBRARY_PATH=$ORACLE_HOME/lib

export PATH=$ORACLE_HOME/bin:/bin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/X11R6/bin

umask 022

4. Set the oracle user's environment variables on node2

-- Note the ORACLE_SID setting

[root@node2 ~]# su - oracle

[oracle@node2 ~]$ vi .bash_profile

 

# .bash_profile

PATH=$PATH:$HOME/bin

 

# Get the aliases and functions

if [ -f ~/.bashrc ]; then

        . ~/.bashrc

fi

 

# User specific environment and startup programs

export EDITOR=vi

export ORACLE_SID=prod2

export ORACLE_BASE=/u01/app/oracle

export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/db_1

export LD_LIBRARY_PATH=$ORACLE_HOME/lib

export PATH=$ORACLE_HOME/bin:/bin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/X11R6/bin

umask 022

5. Set the grid user's environment variables on node1

-- Note the ORACLE_SID setting
-- For the grid user, setting either GRID_HOME or ORACLE_HOME is sufficient; GRID_HOME is recommended. In this document both variables are set to the same value.

[oracle@node1 ~]$ su - grid

Password:

[grid@node1 ~]$ vi .bash_profile

 

# .bash_profile

 

# Get the aliases and functions

if [ -f ~/.bashrc ]; then

        . ~/.bashrc

fi

 

# User specific environment and startup programs

export EDITOR=vi

export ORACLE_SID=+ASM1

export ORACLE_BASE=/u01/app/oracle

export ORACLE_HOME=/u01/11.2.0/grid

export GRID_HOME=/u01/11.2.0/grid

export LD_LIBRARY_PATH=$ORACLE_HOME/lib

export THREADS_FLAG=native

export PATH=$ORACLE_HOME/bin:/bin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/X11R6/bin

umask 022

6. Set the grid user's environment variables on node2

-- Note the ORACLE_SID setting

[oracle@node2 ~]$ su - grid

Password:

[grid@node2 ~]$ vi .bash_profile

 

 

PATH=$PATH:$HOME/bin

 

# .bash_profile

 

# Get the aliases and functions

if [ -f ~/.bashrc ]; then

        . ~/.bashrc

fi

 

# User specific environment and startup programs

export EDITOR=vi

export ORACLE_SID=+ASM2

export ORACLE_BASE=/u01/app/oracle

export ORACLE_HOME=/u01/11.2.0/grid

export GRID_HOME=/u01/11.2.0/grid

export LD_LIBRARY_PATH=$ORACLE_HOME/lib

export THREADS_FLAG=native

export PATH=$ORACLE_HOME/bin:/bin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/X11R6/bin

umask 022

Edit the hosts File

1. Configure the hosts file on node1

-- The VIP addresses only become reachable after CRS is installed and the cluster services are started

[root@node1 ~]# su - root

[root@node1 ~]# vi /etc/hosts

 

# Do not remove the following line, or various programs

# that require network functionality will fail.

127.0.0.1               localhost

192.168.*.240          node1

192.168.*.238           node1-vip

10.10.10.11             node1-priv

192.168.*.239         node2

192.168.*.237         node2-vip

10.10.10.12             node2-priv

192.168.*.236           rac-scan

2. Configure the hosts file on node2

-- Copy /etc/hosts from node1 to node2 with scp

[root@node1 ~]# scp /etc/hosts node2:/etc

The authenticity of host 'node2   (192.168.8.215)' can't be established.

RSA key fingerprint is 16:28:88:50:27:30:92:cb:49:be:55:61:f6:c2:a1:3f.

Are you sure you want to continue connecting (yes/no)? yes

Warning: Permanently added 'node2,192.168.8.215' (RSA) to the list of known hosts.

root@node2's password:

Permission denied, please try again.

root@node2's password:

hosts                                                                                              100%  380       0.4KB/s   00:00

-- On node2, verify that /etc/hosts was copied correctly

[oracle@node2 ~]$ cat /etc/hosts

# Do not remove the following line, or various programs

# that require network functionality will fail.

127.0.0.1               localhost 

192.168.*.240          node1

192.168.*.238           node1-vip

10.10.10.11             node1-priv 

192.168.*.239         node2

192.168.*.237         node2-vip

10.10.10.12             node2-priv 

192.168.*.236           rac-scan

 

Edit the Kernel Parameters, Resource Limits, login File, and profile File; Disable NTP

1. Configure the kernel parameters on node1

[root@node1 ~]# vi /etc/sysctl.conf

-- Append the following kernel parameters at the end of the file. If a parameter already exists, keep the larger of the two values.

fs.aio-max-nr = 1048576

fs.file-max = 6815744

kernel.shmall = 2097152

kernel.shmmax = 4294967295

kernel.shmmni = 4096

kernel.sem = 250 32000 100 128

net.ipv4.ip_local_port_range = 9000 65500

net.core.rmem_default = 262144

net.core.rmem_max = 4194304

net.core.wmem_default = 262144

net.core.wmem_max = 1048576

-- Apply the kernel parameters

[root@node1 ~]# sysctl -p
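A sanity check on two of the values above: kernel.shmmax is in bytes while kernel.shmall is in pages (4096 bytes on x86), so shmall must be at least shmmax divided by the page size. A quick arithmetic sketch:

```shell
# shmmax is in bytes, shmall is in 4 KB pages; shmall must cover shmmax.
shmmax=4294967295
page_size=4096
min_shmall=$(( shmmax / page_size ))
echo "$min_shmall"
```

This prints 1048575; the configured shmall of 2097152 is comfortably above that minimum.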

2. Configure the resource limits on node1

[root@node1 ~]# vi /etc/security/limits.conf

-- Append the following at the end of the file

oracle soft nproc 2047

oracle hard nproc 16384

oracle soft nofile 1024

oracle hard nofile 65536

oracle soft stack 10240

grid soft nproc 2047

grid hard nproc 16384

grid soft nofile 1024

grid hard nofile 65536

grid soft stack 10240


3. Configure the login file on node1

[root@node1 ~]# vi /etc/pam.d/login

-- Append the following at the end of the file so that the resource limits take effect at login (the bare module name lets PAM load the correct 32-bit or 64-bit library)

session required pam_limits.so

4. Edit the profile file on node1

[root@node1 ~]# vi /etc/profile

-- Append the following at the end of the file to apply the resource limits

if [ "$USER" = "oracle" ] || [ "$USER" = "grid" ]; then
    if [ "$SHELL" = "/bin/ksh" ]; then
        ulimit -p 16384
        ulimit -n 65536
    else
        ulimit -u 16384 -n 65536
    fi
fi

5. Configure node2's kernel parameters, resource limits, login file, and profile file (copy all four files from node1)

-- Send node1's sysctl.conf, limits.conf, login, and profile files to node2

[root@node1 ~]# scp /etc/sysctl.conf node2:/etc

root@node2's password:

sysctl.conf                                   100%   1303     1.3KB/s   00:00     

[root@node1 ~]# scp /etc/security/limits.conf node2:/etc/security

root@node2's password:

limits.conf                                     100% 2034     2.0KB/s   00:00   

[root@node1 ~]# scp /etc/pam.d/login node2:/etc/pam.d/

root@node2's password:

login                                           100%  688     0.7KB/s     00:00   

[root@node1 ~]# scp /etc/profile node2:/etc

root@node2's password:

profile                                       100% 1181     1.2KB/s   00:00 

-- Run the following on node2 to apply the kernel parameters

[root@node2 etc]# sysctl -p

6. Disable the ntp and sendmail services on both node1 and node2

[root@node1 ~]# chkconfig ntpd off

[root@node1 ~]# mv /etc/ntp.conf /etc/ntp.conf.bak

[root@node1 ~]# chkconfig sendmail off

[root@node2 ~]# chkconfig ntpd off

[root@node2 ~]# mv /etc/ntp.conf /etc/ntp.conf.bak

[root@node2 ~]# chkconfig sendmail off

Partition the Shared Disks

1. Partition the shared disks from node1

[root@node1 ~]# fdisk

(omitted)

2. View the shared disk partitions from node2

[root@node2 ~]# fdisk -l

Check that the partition information matches what was created on node1.

Install the ASM Packages

1. Install the ASM packages on node1

-- Check the kernel version; it must match the kernel version in the oracleasm-2.6.18-194.el5-2.0.5-1.el5.i686.rpm package name

[root@node1 asm]# uname -a

Linux node1 2.6.18-194.el5 #1 SMP Tue Mar 16 21:52:43 EDT 2010 i686 i686 i386 GNU/Linux

[root@node1 soft]# ls

asm  linux_11gR2_database_1of2.zip  linux_11gR2_database_2of2.zip  linux_11gR2_grid.zip

[root@node1 soft]# cd asm

[root@node1 asm]# ls

oracleasm-2.6.18-194.el5-2.0.5-1.el5.i686.rpm  oracleasmlib-2.0.4-1.el5.i386.rpm  oracleasm-support-2.1.3-1.el5.i386.rpm

[root@node1 asm]# rpm -ivh *

warning: oracleasm-2.6.18-194.el5-2.0.5-1.el5.i686.rpm: Header V3 DSA signature: NOKEY, key ID 1e5e0159

Preparing...                  ########################################### [100%]

   1:oracleasm-support      ###########################################   [ 33%]

   2:oracleasm-2.6.18-194.el###########################################   [ 67%]

   3:oracleasmlib             ########################################### [100%]

[root@node1 asm]# rpm -qa|grep oracleasm

oracleasmlib-2.0.4-1.el5

oracleasm-support-2.1.3-1.el5

oracleasm-2.6.18-194.el5-2.0.5-1.el5

2. Install the ASM packages on node2

-- Copy the ASM packages to node2 with scp

[root@node1 soft]# scp -r asm node2:/home/oracle

root@node2's password:

oracleasm-2.6.18-194.el5-2.0.5-1.el5.i686.rpm                                                      100%  127KB 127.0KB/s   00:00     

oracleasmlib-2.0.4-1.el5.i386.rpm                                                                  100%   14KB  13.6KB/s     00:00   

oracleasm-support-2.1.3-1.el5.i386.rpm                                                             100%   83KB  83.4KB/s     00:00 

-- Install the ASM packages on node2

[root@node2 ~]# cd /home/oracle

[root@node2 oracle]# ls

asm

[root@node2 oracle]# cd asm

[root@node2 asm]# ls

oracleasm-2.6.18-194.el5-2.0.5-1.el5.i686.rpm  oracleasmlib-2.0.4-1.el5.i386.rpm  oracleasm-support-2.1.3-1.el5.i386.rpm

[root@node2 asm]# rpm -ivh *

warning: oracleasm-2.6.18-194.el5-2.0.5-1.el5.i686.rpm: Header V3 DSA   signature: NOKEY, key ID 1e5e0159

Preparing...                  ########################################### [100%]

   1:oracleasm-support      ###########################################   [ 33%]

     2:oracleasm-2.6.18-194.el########################################### [   67%]

   3:oracleasmlib             ########################################### [100%]

3. Configure ASM on node1

[root@node1 ~]# service oracleasm configure

Configuring the Oracle ASM library driver.

 

This will configure the on-boot properties of the Oracle ASM   library

driver.  The following questions will   determine whether the driver is

loaded on boot and what permissions it will have.  The current values

will be shown in brackets ('[]').  Hitting <ENTER> without typing an

answer will keep that current value.  Ctrl-C will abort.

 

Default user to own the driver interface []: grid

Default group to own the driver interface []: asmadmin

Start Oracle ASM library driver on boot (y/n) [n]: y

Scan for Oracle ASM disks on boot (y/n) [y]:   

Writing Oracle ASM library driver configuration: done

Initializing the Oracle ASMLib driver: [    OK  ]

Scanning the system for Oracle ASMLib disks: [    OK  ]

4. Configure ASM on node2

[root@node2 ~]# service oracleasm configure

Configuring the Oracle ASM library driver.

 

This will configure the on-boot properties of the Oracle ASM   library

driver.  The following questions will   determine whether the driver is

loaded on boot and what permissions it will have.  The current values

will be shown in brackets ('[]').  Hitting <ENTER> without typing an

answer will keep that current value.  Ctrl-C will abort.

 

Default user to own the driver interface []: grid

Default group to own the driver interface []: asmadmin

Start Oracle ASM library driver on boot (y/n) [n]: y

Scan for Oracle ASM disks on boot (y/n) [y]:

Writing Oracle ASM library driver configuration: done

Initializing the Oracle ASMLib driver: [    OK  ]

Scanning the system for Oracle ASMLib disks: [    OK  ]

5. Create the ASM disks on node1

[root@node1 ~]# service oracleasm createdisk OCR_VOTE1 /dev/sda1

Marking disk "OCR_VOTE1" as an ASM disk: [    OK  ]

[root@node1 ~]# service oracleasm createdisk OCR_VOTE2 /dev/sdb1

Marking disk "OCR_VOTE2" as an ASM disk: [    OK  ]

[root@node1 ~]# service oracleasm createdisk OCR_VOTE3 /dev/sdc1

Marking disk "OCR_VOTE3" as an ASM disk: [    OK  ]

[root@node1 ~]# service oracleasm createdisk OCR_VOTE4 /dev/sdd1

Marking disk "OCR_VOTE4" as an ASM disk: [  OK  ]

[root@node1 ~]# service oracleasm createdisk ASM_DATA1 /dev/sde1

Marking disk "ASM_DATA1" as an ASM disk: [    OK  ]

[root@node1 ~]# service oracleasm createdisk ASM_DATA2 /dev/sdf1

Marking disk "ASM_DATA2" as an ASM disk: [  OK  ]

[root@node1 ~]# service oracleasm createdisk ASM_DATA3 /dev/sdg1

Marking disk "ASM_DATA3" as an ASM disk: [  OK  ]

[root@node1 ~]# service oracleasm createdisk ASM_FRA1 /dev/sdh1

Marking disk "ASM_FRA1" as an ASM disk: [  OK  ]

[root@node1 ~]# service oracleasm createdisk ASM_FRA2 /dev/sdi1

Marking disk "ASM_FRA2" as an ASM disk: [  OK  ]

[root@node1 ~]# service oracleasm listdisks

ASM_DATA1

ASM_DATA2

ASM_DATA3

ASM_FRA1

ASM_FRA2

OCR_VOTE1

OCR_VOTE2

OCR_VOTE3

OCR_VOTE4
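The nine createdisk calls follow one pattern, so they can be driven from a label/device table. A sketch using the names from the environment summary — the devices are environment-specific, and echo is left in so the loop only prints the commands:

```shell
# Label-to-device table from the shared disk partition list;
# adjust the device names to your storage.
n=0
while read -r label dev; do
  echo "service oracleasm createdisk $label $dev"   # drop the echo to run as root
  n=$((n + 1))
done <<'EOF'
OCR_VOTE1 /dev/sda1
OCR_VOTE2 /dev/sdb1
OCR_VOTE3 /dev/sdc1
OCR_VOTE4 /dev/sdd1
ASM_DATA1 /dev/sde1
ASM_DATA2 /dev/sdf1
ASM_DATA3 /dev/sdg1
ASM_FRA1 /dev/sdh1
ASM_FRA2 /dev/sdi1
EOF
echo "$n disks"
```

Keeping the mapping in one table makes it easy to verify against service oracleasm listdisks afterwards.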

6. View the ASM disks on node2

-- Scan the disks on node2

[root@node2 ~]# service oracleasm scandisks

Scanning the system for Oracle ASMLib disks: [    OK  ]

-- List the disks

[root@node2 ~]# service oracleasm listdisks

ASM_DATA1

ASM_DATA2

ASM_DATA3

ASM_FRA1

ASM_FRA2

OCR_VOTE1

OCR_VOTE2

OCR_VOTE3

OCR_VOTE4


Set Up GRID User SSH Equivalence

(The OUI can also set up SSH equivalence during the oracle installation, but without establishing trust manually beforehand, the pre-installation checks cannot run and pre-installation problems cannot be discovered.)

Generate keys on each node:

[root@node1 ~]# su - grid
[grid@node1 ~]$ mkdir ~/.ssh
[grid@node1 ~]$ chmod 700 ~/.ssh
[grid@node1 ~]$ ssh-keygen -t rsa
[grid@node1 ~]$ ssh-keygen -t dsa
[root@node2 ~]# su - grid
[grid@node2 ~]$ mkdir ~/.ssh
[grid@node2 ~]$ chmod 700 ~/.ssh
[grid@node2 ~]$ ssh-keygen -t rsa
[grid@node2 ~]$ ssh-keygen -t dsa

Configure the trust relationship on node1:

[grid@node1 ~]$ touch ~/.ssh/authorized_keys
[grid@node1 ~]$ cd ~/.ssh
[grid@node1 .ssh]$ ssh node1 cat ~/.ssh/id_rsa.pub >> authorized_keys
[grid@node1 .ssh]$ ssh node2 cat ~/.ssh/id_rsa.pub >> authorized_keys
[grid@node1 .ssh]$ ssh node1 cat ~/.ssh/id_dsa.pub >> authorized_keys
[grid@node1 .ssh]$ ssh node2 cat ~/.ssh/id_dsa.pub >> authorized_keys

From node1, send the file holding the public keys to node2:

[grid@node1 .ssh]$ pwd
/home/grid/.ssh
[grid@node1 .ssh]$ scp authorized_keys node2:`pwd`
grid@node2's password:
authorized_keys 100% 1644 1.6KB/s 00:00

Set the permissions of the key file. On every node run:

$ chmod 600 ~/.ssh/authorized_keys

Enable user equivalence. As the grid user on the node where you will run the OUI (node1 here):

[grid@node1 .ssh]$ exec /usr/bin/ssh-agent $SHELL
[grid@node1 .ssh]$ ssh-add
Identity added: /home/grid/.ssh/id_rsa (/home/grid/.ssh/id_rsa)
Identity added: /home/grid/.ssh/id_dsa (/home/grid/.ssh/id_dsa)

Verify the ssh configuration. As the grid user, run on every node:

ssh node1 date
ssh node2 date
ssh node1-priv date
ssh node2-priv date

If the date is printed without a password prompt, ssh equivalence is configured correctly. These commands must be run on both nodes, and each command requires answering "yes" the first time it is executed. If they are not run, the clusterware installation will fail even with ssh equivalence in place, with the error:

The specified nodes are not clusterable

This is because, even after ssh is configured, "yes" must be answered on the first access before the other servers can truly be reached without interaction.

Remember: the goal of SSH equivalence is that every node can reach every other node over SSH without a password.
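The four date checks can be wrapped in a loop run on each node as the grid user. A sketch — echo is left in so the loop only prints the commands to run; the hostnames come from the /etc/hosts entries defined earlier:

```shell
# Verify passwordless ssh to every cluster address.
n=0
for h in node1 node2 node1-priv node2-priv; do
  echo "ssh $h date"   # drop the echo to run; answer 'yes' once per host on first use
  n=$((n + 1))
done
```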

 

Disable the Firewall

1) Takes effect after reboot:
   Enable:  chkconfig iptables on
   Disable: chkconfig iptables off
2) Takes effect immediately, lost after reboot:
   Start: service iptables start
   Stop:  service iptables stop

 

Install GRID

1. Pre-installation environment check

-- Unzip the grid installation package on node1

[grid@node1 ~]$ cd /soft/grid/

[grid@node1 grid]$ ls

doc  install  response    rpm  runcluvfy.sh  runInstaller  sshsetup    stage  welcome.html

[grid@node1 grid]$ ./runcluvfy.sh stage -pre crsinst -n node1,node2 -fixup -verbose

2. Install the required packages on node1 and node2

Install the packages listed in the software-preparation section earlier in this document.

3. Add swap space on node1

[root@node1 yum.repos.d]# free -m

             total       used       free     shared      buffers     cached

Mem:          1562         1381        181            0         33         1216

-/+ buffers/cache:          131       1430

Swap:         2047            0       2047

[root@node1 yum.repos.d]# dd if=/dev/zero of=/u01/swpf1 bs=1024k count=2048

2048+0 records in

2048+0 records out

2147483648 bytes (2.1 GB) copied, 12.2324 seconds, 176 MB/s

[root@node1 yum.repos.d]# mkswap -c /u01/swpf1

Setting up swapspace version 1, size = 2147479 kB

[root@node1 yum.repos.d]# swapon /u01/swpf1

[root@node1 yum.repos.d]# free -m

             total       used       free     shared      buffers     cached

Mem:          1562         1523         39            0          7         1384

-/+ buffers/cache:          130       1431

Swap:         4095            0       4095

-- Add the following line to /etc/fstab

/u01/swpf1              swap                    swap     defaults       0 0
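As a check on the dd transcript above: bs=1024k and count=2048 means 1024 × 1024 × 2048 bytes, i.e. exactly the 2147483648 bytes (2 GiB) reported:

```shell
# dd writes bs * count bytes: 1024k blocks, 2048 of them.
bs=$(( 1024 * 1024 ))
count=2048
bytes=$(( bs * count ))
echo "$bytes"   # 2147483648
```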

4. Add swap space on node2

[root@node2 yum.repos.d]# dd if=/dev/zero of=/u01/swpf1 bs=1024k count=2048

2048+0 records in

2048+0 records out

2147483648 bytes (2.1 GB) copied, 12.6712 seconds, 169 MB/s

[root@node2 yum.repos.d]# mkswap -c /u01/swpf1

Setting up swapspace version 1, size = 2147479 kB

[root@node2 yum.repos.d]# swapon /u01/swpf1

-- Add the following line to /etc/fstab

/u01/swpf1              swap                    swap     defaults       0 0


5. Install GRID

-- VNC is recommended for the installation; node1 is used as the example
-- Run vncserver on node1 and set the VNC connection password
-- On node1's local console, open a terminal and run xhost + as root
-- Then switch to the grid user: su - grid
-- Run vncviewer node1:5901; as root in the VNC session, run xhost +
-- In the VNC session, switch to the grid user: su - grid
-- $ export ...   (set the locale so that Chinese text is not displayed garbled)
-- Then run the grid installer as the grid user:
-- cd /soft/grid
-- ./runInstaller

Select the first installation option

Select "Advanced Installation"

Accept the default language selection

Set the SCAN Name to rac-scan; do not select "Configure GNS"

Click Add; enter node2 for HOSTNAME and node2-vip for Virtual IP Name

"Network Interface Usage": keep the defaults and click Next

"Storage Option": select ASM

Set "Disk Group Name" to "OCR_VOTE", set "Redundancy" to "Normal", select the corresponding disks, and click Next

Set the sys password: ****

IPMI screen: keep the defaults

Groups screen: confirm the groups are correct and click Next

Confirm that the ORACLE BASE and SOFTWARE LOCATION paths are correct and click Next

Confirm that the Inventory path is correct and click Next

Review the summary; if everything is correct, click Finish

As root, run the two scripts shown on node1 and then on node2. Do not run them on both nodes at the same time; finish one node before starting the next.

 

Edit /etc/profile on node1 and node2 and add the following:

export PATH=$PATH:/u01/11.2.0/grid/bin

Then run source /etc/profile
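After sourcing /etc/profile, the grid bin directory should be resolvable from PATH. A small sketch of that check, simulating the PATH addition locally:

```shell
# Simulate the /etc/profile addition, then verify the directory is on PATH.
PATH="$PATH:/u01/11.2.0/grid/bin"
case ":$PATH:" in
  *:/u01/11.2.0/grid/bin:*) ok=yes ;;
  *) ok=no ;;
esac
echo "grid bin on PATH: $ok"
```

On the real nodes, which crsctl is an equivalent quick test once the profile is sourced.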

 

Check that the cluster services are online on node1 and node2

      

[root@node1 ~]# crsctl check crs

CRS-4638: Oracle High Availability Services is online

CRS-4537: Cluster Ready Services is online

CRS-4529: Cluster Synchronization Services is online

CRS-4533: Event Manager is online

 

[root@node2 yum.repos.d]# crsctl check crs

CRS-4638: Oracle High Availability Services is online

CRS-4537: Cluster Ready Services is online

CRS-4529: Cluster Synchronization Services is online

CRS-4533: Event Manager is online

 

Check from node2 that the resources are online

[root@node2 yum.repos.d]# crs_stat -t

Name           Type           Target    State     Host       

------------------------------------------------------------

ora....N1.lsnr ora....er.type ONLINE    ONLINE    node1     

ora....VOTE.dg ora....up.type ONLINE    ONLINE    node1     

ora.asm        ora.asm.type   ONLINE    ONLINE    node1     

ora....SM1.asm application    ONLINE    ONLINE    node1     

ora....de1.gsd application    OFFLINE   OFFLINE              

ora....de1.ons application    ONLINE    ONLINE    node1     

ora....de1.vip ora....t1.type ONLINE    ONLINE    node1     

ora....SM2.asm application    ONLINE    ONLINE    node2     

ora....de2.gsd application    OFFLINE   OFFLINE              

ora....de2.ons application    ONLINE    ONLINE    node2     

ora....de2.vip ora....t1.type ONLINE    ONLINE    node2     

ora.eons       ora.eons.type  ONLINE    ONLINE    node1     

ora.gsd        ora.gsd.type   OFFLINE   OFFLINE              

ora....network ora....rk.type ONLINE    ONLINE    node1     

ora.oc4j       ora.oc4j.type  OFFLINE   OFFLINE              

ora.ons        ora.ons.type   ONLINE    ONLINE    node1     

ora....ry.acfs ora....fs.type ONLINE    ONLINE    node1     

ora.scan1.vip  ora....ip.type ONLINE    ONLINE    node1

      

Click OK to finish the installation.

      

 

The IPMI-related error can be ignored.

      

 

 

      

Install the ORACLE Database Software

1. Unzip the ORACLE DATABASE software

[root@node1 soft]# unzip linux_11gR2_database_1of2.zip && unzip linux_11gR2_database_2of2.zip

2. Install the ORACLE DATABASE software

-- VNC is recommended for the installation; node1 is used as the example
-- Run vncserver on node1 and set the VNC connection password (skip if it has already been set)
-- On node1's local console, open a terminal and run xhost + as root
-- Then switch to the oracle user: su - oracle
-- Run vncviewer node1:5901; as root in the VNC session, run xhost +
-- In the VNC session, switch to the oracle user: su - oracle
-- Then run the oracle installer as the oracle user:
-- cd /soft/database
-- ./runInstaller

 

Select "Install database software only" and click Next

Keep the default "Real Application Clusters database installation" and click Next

Accept the default language selection and click Next

Select "Enterprise Edition" and click Next

Confirm that the user groups are correct and click Next

If the following issue appears, use crsctl check crs and crs_stat -t to check whether the services and resources are online

Click "Finish" to start the installation

As root on node1 and node2, run the following script:

/u01/app/oracle/product/11.2.0/db_1/root.sh

Click OK to complete the installation

Create Disk Groups with ASMCA

1. Run ASMCA through VNC

-- VNC is recommended; node1 is used as the example
-- Run vncserver on node1 and set the VNC connection password (skip if it has already been set)
-- On node1's local console, open a terminal and run xhost + as root
-- Then switch to the grid user: su - grid
-- Run vncviewer node1:5901; as root in the VNC session, run xhost +
-- In the VNC session, switch to the grid user: su - grid
-- Then run the asmca command as the grid user:
-- asmca

2. Create the disk groups in ASMCA
