RedHat 7.3 Oracle 12.2.0.1 RAC Installation Guide (Repost)

1   Preparation

1.1   Grid Infrastructure Changes in 12.2

1.1.1  Simplified Image-Based Oracle Grid Infrastructure Installation

Starting with Oracle Grid Infrastructure 12c Release 2 (12.2), the Oracle Grid Infrastructure software is available as an image file for download and installation.

This feature greatly simplifies the Oracle Grid Infrastructure installation process.

Note: you must extract the GRID software into the directory where you want the Grid home to be located, and then run the gridSetup.sh script to start the Oracle Grid Infrastructure installation.

1.1.2  Support for Oracle Domain Services Clusters and Oracle Member Clusters

Starting with Oracle Grid Infrastructure 12c Release 2 (12.2), the Oracle Grid Infrastructure installer supports the option of deploying Oracle Domain Services Clusters and Oracle Member Clusters.

For more details, see the official documentation:

http://docs.oracle.com/database/122/CWLIN/understanding-cluster-configuration-options.htm#GUID-4D6C2B52-9845-48E2-AD68-F0586AA20F48

1.1.3  Support for Oracle Extended Clusters

Starting with Oracle Grid Infrastructure 12c Release 2 (12.2), the Oracle Grid Infrastructure installer supports the option of configuring cluster nodes in different locations as an Oracle Extended Cluster. An Oracle Extended Cluster consists of nodes that are located in multiple locations called sites.

1.1.4  Global Grid Infrastructure Management Repository (GIMR)

Oracle Grid Infrastructure deployments now support a global Grid Infrastructure Management Repository (GIMR). This repository is a multitenant database with a pluggable database (PDB) holding the GIMR data of each cluster. The global GIMR runs in an Oracle Domain Services Cluster. A global GIMR frees local clusters from having to dedicate storage in their disk groups for this data, and allows long-term historical data to be kept for diagnostics and performance analysis.

Later, during the GRID installation, the installer will ask whether to create a separate disk group to hold the GIMR data.

1.2   Minimum Hardware Requirements

No.   Component                                   Memory
1     Oracle Grid Infrastructure installations    At least 4 GB
2     Oracle Database installations               At least 1 GB; 2 GB or more recommended


1.3   RAC Plan

Server hostname            rac1                rac2
Public IP (eth0)           192.168.56.121      192.168.56.123
Virtual IP (eth0)          192.168.56.122      192.168.56.124
Private IP (eth1)          192.168.57.121      192.168.57.123
Oracle RAC SID             cndba1              cndba2
Cluster database name      cndba
SCAN IP                    192.168.56.125
Operating system           Red Hat 7.3
Oracle version             12.2.0.1

1.4   Disk Layout

12c R2 has larger disk group space requirements: for the OCR disk group, EXTERNAL redundancy needs at least 40 GB and NORMAL redundancy at least 80 GB.

Disk group    Disk           Size   Redundancy
DATAFILE      data01         40G    NORMAL
              data02         40G
OCR           OCRVOTING01    30G    NORMAL
              OCRVOTING02    30G
              OCRVOTING03    30G


1.5   Operating System Installation

The detailed installation process is not covered here.

Note the hostname and IP address configuration steps on Red Hat 7.3.

The related steps can be found in:

Changing the Hostname on Linux 7.2

http://www.cndba.cn/dave/article/1795

Linux 7 Firewall Configuration and Management

http://www.cndba.cn/dave/article/153

1.6   Configure /etc/hosts

Modify on all nodes:

[root@rac1 ~]# cat /etc/hosts

127.0.0.1   localhost



192.168.56.121 rac1

192.168.57.121 rac1-priv

192.168.56.122 rac1-vip

 

192.168.56.123 rac2

192.168.57.123 rac2-priv

192.168.56.124 rac2-vip

 

192.168.56.125 rac-scan

1.7   Add Users and Groups

/usr/sbin/groupadd -g 54321 oinstall

/usr/sbin/groupadd -g 54322 dba

/usr/sbin/groupadd -g 54323 oper

/usr/sbin/groupadd -g 54324 backupdba

/usr/sbin/groupadd -g 54325 dgdba

/usr/sbin/groupadd -g 54326 kmdba

/usr/sbin/groupadd -g 54327 asmdba

/usr/sbin/groupadd -g 54328 asmoper

/usr/sbin/groupadd -g 54329 asmadmin

/usr/sbin/groupadd -g 54330 racdba

/usr/sbin/useradd -u 54321 -g oinstall -G dba,asmdba,oper oracle

/usr/sbin/useradd -u 54322 -g oinstall -G dba,oper,backupdba,dgdba,kmdba,asmdba,asmoper,asmadmin,racdba grid

Set the user passwords:

[root@rac1 ~]# passwd grid

[root@rac1 ~]# passwd oracle

Confirm the user information:

[root@rac1 ~]# id oracle

uid=54321(oracle) gid=54321(oinstall) groups=54321(oinstall),54322(dba),54323(oper),54327(asmdba)

[root@rac1 ~]# id grid

uid=54322(grid) gid=54321(oinstall) groups=54321(oinstall),54322(dba),54323(oper),54324(backupdba),54325(dgdba),54326(kmdba),54327(asmdba),54328(asmoper),54329(asmadmin),54330(racdba)

[root@rac1 ~]#

1.8   Disable the Firewall and SELinux

Firewall:

[root@rac1 ~]# systemctl stop firewalld.service

[root@rac1 ~]# systemctl disable firewalld.service

rm '/etc/systemd/system/basic.target.wants/firewalld.service'

rm '/etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service'

SELinux:

[root@rac1 ~]# cat /etc/selinux/config

# This file controls the state of SELinux on the system.

# SELINUX= can take one of these three values:

#     enforcing - SELinux security policy is enforced.

#     permissive - SELinux prints warnings instead of enforcing.

#     disabled - No SELinux policy is loaded.

SELINUX=disabled

# SELINUXTYPE= can take one of these two values:

#     targeted - Targeted processes are protected,

#     mls - Multi Level Security protection.

SELINUXTYPE=targeted
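If SELinux was previously enabled, the same change can be scripted. A minimal sketch (setenforce 0 only switches the running system to permissive mode; the disabled setting takes full effect after a reboot):

[root@rac1 ~]# sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config

[root@rac1 ~]# setenforce 0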

1.9   Configure Time Synchronization

Disable ntpd and chronyd:

[root@rac1 ~]# systemctl stop ntpd.service

[root@rac1 ~]# systemctl disable ntpd.service
[root@rac1 etc]# systemctl stop chronyd.service

[root@rac1 etc]# systemctl disable chronyd.service

Removed symlink /etc/systemd/system/multi-user.target.wants/chronyd.service.
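With ntpd and chronyd disabled, Oracle's Cluster Time Synchronization Service (CTSS) is expected to take over time synchronization in active mode once the clusterware is up (CTSS checks for NTP/chrony configuration files, so renaming /etc/ntp.conf and /etc/chrony.conf may also be needed). A hedged check to run after the GRID installation completes:

[grid@rac1 ~]$ crsctl check ctss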

1.10   Create Directories

mkdir -p /u01/app/12.2.0/grid

mkdir -p /u01/app/grid

mkdir -p /u01/app/oracle/product/12.2.0/dbhome_1

chown -R grid:oinstall /u01

chown -R oracle:oinstall /u01/app/oracle

chmod -R 775 /u01/

1.11   Configure User Environment Variables

1.11.1  oracle User

[root@rac1 ~]# cat /home/oracle/.bash_profile

# .bash_profile

 

# Get the aliases and functions

if [ -f ~/.bashrc ]; then

. ~/.bashrc

fi

 

# User specific environment and startup programs

 

ORACLE_SID=cndba1;export ORACLE_SID  

#ORACLE_SID=cndba2;export ORACLE_SID  

ORACLE_UNQNAME=cndba;export ORACLE_UNQNAME

JAVA_HOME=/usr/local/java; export JAVA_HOME

ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE

ORACLE_HOME=$ORACLE_BASE/product/12.2.0/dbhome_1; export ORACLE_HOME

ORACLE_TERM=xterm; export ORACLE_TERM

NLS_DATE_FORMAT="YYYY:MM:DD HH24:MI:SS"; export NLS_DATE_FORMAT

NLS_LANG=american_america.ZHS16GBK; export NLS_LANG

TNS_ADMIN=$ORACLE_HOME/network/admin; export TNS_ADMIN

ORA_NLS11=$ORACLE_HOME/nls/data; export ORA_NLS11

PATH=.:${JAVA_HOME}/bin:${PATH}:$HOME/bin:$ORACLE_HOME/bin:$ORA_CRS_HOME/bin

PATH=${PATH}:/usr/bin:/bin:/usr/bin/X11:/usr/local/bin

export PATH

LD_LIBRARY_PATH=$ORACLE_HOME/lib

LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:$ORACLE_HOME/oracm/lib

LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/lib:/usr/lib:/usr/local/lib

export LD_LIBRARY_PATH

CLASSPATH=$ORACLE_HOME/JRE

CLASSPATH=${CLASSPATH}:$ORACLE_HOME/jlib

CLASSPATH=${CLASSPATH}:$ORACLE_HOME/rdbms/jlib

CLASSPATH=${CLASSPATH}:$ORACLE_HOME/network/jlib

export CLASSPATH

THREADS_FLAG=native; export THREADS_FLAG

export TEMP=/tmp

export TMPDIR=/tmp

umask 022

1.11.2  grid User

[root@rac1 ~]# cat /home/grid/.bash_profile

# .bash_profile

 

# Get the aliases and functions

if [ -f ~/.bashrc ]; then

. ~/.bashrc

fi

 

# User specific environment and startup programs

 

PATH=$PATH:$HOME/bin

 

export ORACLE_SID=+ASM1  

#export ORACLE_SID=+ASM2  

export ORACLE_BASE=/u01/app/grid

export ORACLE_HOME=/u01/app/12.2.0/grid

export PATH=$ORACLE_HOME/bin:$PATH:/usr/local/bin/:.

export TEMP=/tmp

export TMP=/tmp

export TMPDIR=/tmp

umask 022

export PATH

1.12   Modify Resource Limits

1.12.1  Modify /etc/security/limits.conf

[root@rac1 ~]# cat >> /etc/security/limits.conf <
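The heredoc body is not shown above; a minimal sketch of this step using the limit values commonly documented for 12.2 (the memlock lines are an assumption and should be sized to roughly 90% of physical RAM, in KB):

[root@rac1 ~]# cat >> /etc/security/limits.conf <<EOF
grid   soft  nofile   1024
grid   hard  nofile   65536
grid   soft  nproc    2047
grid   hard  nproc    16384
grid   soft  stack    10240
grid   hard  stack    32768
oracle soft  nofile   1024
oracle hard  nofile   65536
oracle soft  nproc    2047
oracle hard  nproc    16384
oracle soft  stack    10240
oracle hard  stack    32768
oracle soft  memlock  3145728
oracle hard  memlock  3145728
EOF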

1.13   Configure NOZEROCONF

Edit the /etc/sysconfig/network file and add the following content:

[root@rac1 ~]# cat >> /etc/sysconfig/network <
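The heredoc body is not shown above; the entry this section adds is NOZEROCONF=yes, so a sketch of the full command would be:

[root@rac1 ~]# cat >> /etc/sysconfig/network <<EOF
NOZEROCONF=yes
EOF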

1.14   Modify Kernel Parameters

[root@rac1 ~]# vim /etc/sysctl.conf 
fs.file-max = 6815744
kernel.sem = 250 32000 100 128
kernel.shmmni = 4096
kernel.shmall = 1073741824
kernel.shmmax = 4398046511104
kernel.panic_on_oops = 1
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
net.ipv4.conf.all.rp_filter = 2
net.ipv4.conf.default.rp_filter = 2
fs.aio-max-nr = 1048576
net.ipv4.ip_local_port_range = 9000 65500
[root@rac1 ~]# sysctl -p

1.15   Install Required Packages

For yum configuration, refer to the following article:

YUM Repository Configuration Guide for Linux

 http://www.cndba.cn/dave/article/154

yum install -y binutils compat-libstdc++-33 compat-libcap1 gcc gcc-c++ glibc glibc.i686 glibc-devel ksh libgcc.i686 libstdc++-devel libaio libaio.i686 libaio-devel libaio-devel.i686 libXext libXext.i686 libXtst libXtst.i686 libX11 libX11.i686 libXau libXau.i686 libxcb libxcb.i686 libXi libXi.i686 make sysstat unixODBC unixODBC-devel zlib-devel zlib-devel.i686

1.16   Install cvuqdisk

The cvuqdisk package ships with the Oracle installation media (it is also present under cv/rpm in the grid home); after extracting the database installation media you can find it in the rpm directory:

export CVUQDISK_GRP=asmadmin

[root@rac1 rpm]# pwd

/software/database/rpm

[root@rac1 rpm]# ll

total 12

-rwxr-xr-x 1 root root 8860 Jan  5 17:36 cvuqdisk-1.0.10-1.rpm

[root@rac1 rpm]# rpm -ivh cvuqdisk-1.0.10-1.rpm

Preparing...                          ################################# [100%]

Using default group oinstall to install package

Updating / installing...

   1:cvuqdisk-1.0.10-1                ################################# [100%]

[root@rac1 rpm]#

Copy the package to the other node and install it there as well, as in the sketch below.
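One way to do that, assuming root SSH access from rac1 to rac2 (paths as used above):

[root@rac1 rpm]# scp cvuqdisk-1.0.10-1.rpm rac2:/tmp/

[root@rac1 rpm]# ssh rac2 "CVUQDISK_GRP=asmadmin rpm -ivh /tmp/cvuqdisk-1.0.10-1.rpm"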

1.17   Configure Shared Disks

Run the following script:

[root@rac1 ~]#

for i in b c d e f ; 

do

echo "KERNEL==/"sd*/",ENV{DEVTYPE}==/"disk/",SUBSYSTEM==/"block/",PROGRAM==/"/usr/lib/udev/scsi_id -g -u -d /$devnode/",RESULT==/"`/usr/lib/udev/scsi_id -g -u /dev/sd$i`/", RUN+=/"/bin/sh -c 'mknod /dev/asmdisk$i b  /$major /$minor; chown grid:asmadmin /dev/asmdisk$i; chmod 0660 /dev/asmdisk$i'/""

done

The output:

KERNEL=="sd*",ENV{DEVTYPE}=="disk",SUBSYSTEM=="block",PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode",RESULT=="1ATA_VBOX_HARDDISK_VB90ea2842-3d5cfe18", RUN+="/bin/sh -c 'mknod /dev/asmdiskb b  $major $minor; chown grid:asmadmin /dev/asmdiskb; chmod 0660 /dev/asmdiskb'"

KERNEL=="sd*",ENV{DEVTYPE}=="disk",SUBSYSTEM=="block",PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode",RESULT=="1ATA_VBOX_HARDDISK_VB0c31ed82-ca3c7a2f", RUN+="/bin/sh -c 'mknod /dev/asmdiskc b  $major $minor; chown grid:asmadmin /dev/asmdiskc; chmod 0660 /dev/asmdiskc'"

KERNEL=="sd*",ENV{DEVTYPE}=="disk",SUBSYSTEM=="block",PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode",RESULT=="1ATA_VBOX_HARDDISK_VBd2eba70f-9707444e", RUN+="/bin/sh -c 'mknod /dev/asmdiskd b  $major $minor; chown grid:asmadmin /dev/asmdiskd; chmod 0660 /dev/asmdiskd'"

KERNEL=="sd*",ENV{DEVTYPE}=="disk",SUBSYSTEM=="block",PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode",RESULT=="1ATA_VBOX_HARDDISK_VB15946091-75f9c0f4", RUN+="/bin/sh -c 'mknod /dev/asmdiske b  $major $minor; chown grid:asmadmin /dev/asmdiske; chmod 0660 /dev/asmdiske'"

KERNEL=="sd*",ENV{DEVTYPE}=="disk",SUBSYSTEM=="block",PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode",RESULT=="1ATA_VBOX_HARDDISK_VBac950c6b-de84431c", RUN+="/bin/sh -c 'mknod /dev/asmdiskf b  $major $minor; chown grid:asmadmin /dev/asmdiskf; chmod 0660 /dev/asmdiskf'"

 

Create the rules file /etc/udev/rules.d/99-oracle-asmdevices.rules and add the output above to it.

[root@rac1 rules.d]# cat 99-oracle-asmdevices.rules

KERNEL=="sd*",ENV{DEVTYPE}=="disk",SUBSYSTEM=="block",PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode",RESULT=="1ATA_VBOX_HARDDISK_VB90ea2842-3d5cfe18", RUN+="/bin/sh -c 'mknod /dev/asmdiskb b  $major $minor; chown grid:asmadmin /dev/asmdiskb; chmod 0660 /dev/asmdiskb'"

KERNEL=="sd*",ENV{DEVTYPE}=="disk",SUBSYSTEM=="block",PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode",RESULT=="1ATA_VBOX_HARDDISK_VB0c31ed82-ca3c7a2f", RUN+="/bin/sh -c 'mknod /dev/asmdiskc b  $major $minor; chown grid:asmadmin /dev/asmdiskc; chmod 0660 /dev/asmdiskc'"

KERNEL=="sd*",ENV{DEVTYPE}=="disk",SUBSYSTEM=="block",PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode",RESULT=="1ATA_VBOX_HARDDISK_VBd2eba70f-9707444e", RUN+="/bin/sh -c 'mknod /dev/asmdiskd b  $major $minor; chown grid:asmadmin /dev/asmdiskd; chmod 0660 /dev/asmdiskd'"

KERNEL=="sd*",ENV{DEVTYPE}=="disk",SUBSYSTEM=="block",PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode",RESULT=="1ATA_VBOX_HARDDISK_VB15946091-75f9c0f4", RUN+="/bin/sh -c 'mknod /dev/asmdiske b  $major $minor; chown grid:asmadmin /dev/asmdiske; chmod 0660 /dev/asmdiske'"

KERNEL=="sd*",ENV{DEVTYPE}=="disk",SUBSYSTEM=="block",PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode",RESULT=="1ATA_VBOX_HARDDISK_VBac950c6b-de84431c", RUN+="/bin/sh -c 'mknod /dev/asmdiskf b  $major $minor; chown grid:asmadmin /dev/asmdiskf; chmod 0660 /dev/asmdiskf'"

Apply the rules:

[root@rac1 ~]# /sbin/udevadm trigger --type=devices --action=change

If the ownership and permissions do not change, try a reboot (or reload the rules as in the sketch below).
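A hedged alternative to a full reboot is to reload the rules and re-trigger the events:

[root@rac1 ~]# /sbin/udevadm control --reload-rules

[root@rac1 ~]# /sbin/udevadm trigger --type=devices --action=change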

[root@rac1 rules.d]# ll /dev/asm*

brw-rw---- 1 grid asmadmin 8, 16 Mar 21 22:01 /dev/asmdiskb

brw-rw---- 1 grid asmadmin 8, 32 Mar 21 22:01 /dev/asmdiskc

brw-rw---- 1 grid asmadmin 8, 48 Mar 21 22:01 /dev/asmdiskd

brw-rw---- 1 grid asmadmin 8, 64 Mar 21 22:01 /dev/asmdiske

brw-rw---- 1 grid asmadmin 8, 80 Mar 21 22:01 /dev/asmdiskf

1.17.1  Set the Disk I/O Scheduler

(1) Set the scheduler to deadline:

echo deadline >/sys/block/sdb/queue/scheduler

echo deadline > /sys/block/sdc/queue/scheduler

echo deadline >/sys/block/sdd/queue/scheduler

echo deadline > /sys/block/sde/queue/scheduler

echo deadline >/sys/block/sdf/queue/scheduler

(2) Verify the change:

For example:

[root@rac1 dev]#  more /sys/block/sdb/queue/scheduler

noop anticipatory [deadline] cfq

[root@rac1 dev]#  more /sys/block/sdc/queue/scheduler

noop anticipatory [deadline] cfq
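The echo commands above do not survive a reboot. One common way to make the scheduler setting persistent is an extra udev rule; a sketch (the file name and the sd[b-f] device range are illustrative):

[root@rac1 ~]# cat /etc/udev/rules.d/60-oracle-schedulers.rules

ACTION=="add|change", KERNEL=="sd[b-f]", ATTR{queue/scheduler}="deadline"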

2   Installing GRID

Download location:

http://www.oracle.com/technetwork/database/enterprise-edition/downloads/oracle12c-linux-12201-3608234.html

2.1   Upload and Extract the Media

Note: unlike earlier releases, the 12cR2 GRID installation uses a direct-extraction model. Copy the installation media into the GRID HOME first and then extract it; the archive must be extracted inside the GRID HOME.

About Image-Based Oracle Grid Infrastructure Installation

Starting with Oracle Grid Infrastructure 12c Release 2 (12.2), installation and configuration of Oracle Grid Infrastructure software is simplified with image-based installation.

[grid@rac1 ~]$ echo $ORACLE_HOME

/u01/app/12.2.0/grid

[grid@rac1 ~]$ cd $ORACLE_HOME

[grid@rac1 grid]$ ll linuxx64_12201_grid_home.zip

-rw-r--r-- 1 grid oinstall 2994687209 Mar 21 22:10 linuxx64_12201_grid_home.zip

[grid@rac1 grid]$

[grid@rac1 grid]$ unzip linuxx64_12201_grid_home.zip

After extraction the Grid home is already fully populated; all that remains is to run the setup script. There is no longer a separate software-copy phase during installation.

[grid@rac1 grid]$ ll

total 2924572

drwxr-xr-x  2 grid oinstall        102 Jan 27 00:12 addnode

drwxr-xr-x 11 grid oinstall        118 Jan 27 00:10 assistants

drwxr-xr-x  2 grid oinstall       8192 Jan 27 00:12 bin

drwxr-xr-x  3 grid oinstall         23 Jan 27 00:12 cdata

drwxr-xr-x  3 grid oinstall         19 Jan 27 00:10 cha

drwxr-xr-x  4 grid oinstall         87 Jan 27 00:12 clone

drwxr-xr-x 16 grid oinstall        191 Jan 27 00:12 crs

drwxr-xr-x  6 grid oinstall         53 Jan 27 00:12 css

drwxr-xr-x  7 grid oinstall         71 Jan 27 00:10 cv

drwxr-xr-x  3 grid oinstall         19 Jan 27 00:10 dbjava

drwxr-xr-x  2 grid oinstall         22 Jan 27 00:11 dbs

drwxr-xr-x  2 grid oinstall         32 Jan 27 00:12 dc_ocm

drwxr-xr-x  5 grid oinstall        191 Jan 27 00:12 deinstall

drwxr-xr-x  3 grid oinstall         20 Jan 27 00:10 demo

drwxr-xr-x  3 grid oinstall         20 Jan 27 00:10 diagnostics

drwxr-xr-x  8 grid oinstall        179 Jan 27 00:11 dmu

-rw-r--r--  1 grid oinstall        852 Aug 19  2015 env.ora

drwxr-xr-x  7 grid oinstall         65 Jan 27 00:12 evm

drwxr-xr-x  5 grid oinstall         49 Jan 27 00:10 gpnp

2.2   Run the Installer

Run the setup script on node 1. It needs a graphical display, which can be provided through Xshell (X11 forwarding) or VNC.

Linux VNC Installation and Configuration

http://www.cndba.cn/dave/article/1814

[grid@rac1 grid]$ pwd

/u01/app/12.2.0/grid

[grid@rac1 grid]$ ll *.sh

-rwxr-x--- 1 grid oinstall 5395 Jul 21  2016 gridSetup.sh

-rwx------ 1 grid oinstall  603 Jan 27 00:12 root.sh

-rwx------ 1 grid oinstall  612 Jan 27 00:12 rootupgrade.sh

-rwxr-x--- 1 grid oinstall  628 Sep  5  2015 runcluvfy.sh
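Optionally, the bundled cluster verification utility can be run from the same directory before launching the wizard; a hedged example using the node names from the plan above:

[grid@rac1 grid]$ ./runcluvfy.sh stage -pre crsinst -n rac1,rac2 -verbose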
[grid@rac1 grid]$ ./gridSetup.sh

Launching Oracle Grid Infrastructure Setup Wizard...


In the wizard, add the cluster nodes and configure SSH equivalence.

Note: a new redundancy type, FLEX, has been added, and the disk group space requirements are higher than before.

The official documentation explains:

A FLEX redundancy disk group allows a database to specify its own redundancy after the disk group is created. The redundancy of a file can also be changed after it is created. This type of disk group supports Oracle ASM file groups and quota groups. A flex disk group requires at least three failure groups. If a flex disk group has fewer than five failure groups, it can tolerate the loss of one failure group; otherwise, it can tolerate the loss of two. To create a flex disk group, the COMPATIBLE.ASM and COMPATIBLE.RDBMS disk group attributes must be set to 12.2 or higher.
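A sketch of creating such a disk group manually in SQL*Plus after the GRID installation (the disk group name and disk paths are placeholders, not part of the disk plan above; the installer/ASMCA offers the same option graphically):

[grid@rac1 ~]$ sqlplus / as sysasm

SQL> CREATE DISKGROUP FLEXDATA FLEX REDUNDANCY
     DISK '/dev/asmdiskg', '/dev/asmdiskh', '/dev/asmdiski'
     ATTRIBUTE 'compatible.asm' = '12.2', 'compatible.rdbms' = '12.2';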

If the prerequisite checks raise warnings about NTP, memory, or the avahi-daemon, they can be ignored.

Start the installation.

Run the root scripts:


[root@rac1 etc]# /u01/app/12.2.0/grid/root.sh

Performing root user operation.

 

The following environment variables are set as:

    ORACLE_OWNER= grid

    ORACLE_HOME=  /u01/app/12.2.0/grid

 

Enter the full pathname of the local bin directory: [/usr/local/bin]:

   Copying dbhome to /usr/local/bin ...

   Copying oraenv to /usr/local/bin ...

   Copying coraenv to /usr/local/bin ...


Creating /etc/oratab file...

Entries will be added to the /etc/oratab file as needed by

Database Configuration Assistant when a database is created

Finished running generic part of root script.

Now product-specific root actions will be performed.

Relinking oracle with rac_on option

Using configuration parameter file: /u01/app/12.2.0/grid/crs/install/crsconfig_params

The log of current session can be found at:

  /u01/app/grid/crsdata/rac1/crsconfig/rootcrs_rac1_2017-03-21_11-50-15PM.log

2017/03/21 23:50:20 CLSRSC-594: Executing installation step 1 of 19: 'SetupTFA'.

2017/03/21 23:50:20 CLSRSC-4001: Installing Oracle Trace File Analyzer (TFA) Collector.

2017/03/21 23:50:53 CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector.

2017/03/21 23:50:53 CLSRSC-594: Executing installation step 2 of 19: 'ValidateEnv'.

2017/03/21 23:50:57 CLSRSC-363: User ignored prerequisites during installation

2017/03/21 23:50:58 CLSRSC-594: Executing installation step 3 of 19: 'CheckFirstNode'.

2017/03/21 23:51:00 CLSRSC-594: Executing installation step 4 of 19: 'GenSiteGUIDs'.

2017/03/21 23:51:01 CLSRSC-594: Executing installation step 5 of 19: 'SaveParamFile'.

2017/03/21 23:51:12 CLSRSC-594: Executing installation step 6 of 19: 'SetupOSD'.

2017/03/21 23:51:13 CLSRSC-594: Executing installation step 7 of 19: 'CheckCRSConfig'.

2017/03/21 23:51:13 CLSRSC-594: Executing installation step 8 of 19: 'SetupLocalGPNP'.

2017/03/21 23:51:49 CLSRSC-594: Executing installation step 9 of 19: 'ConfigOLR'.

2017/03/21 23:51:58 CLSRSC-594: Executing installation step 10 of 19: 'ConfigCHMOS'.

2017/03/21 23:51:58 CLSRSC-594: Executing installation step 11 of 19: 'CreateOHASD'.

2017/03/21 23:52:04 CLSRSC-594: Executing installation step 12 of 19: 'ConfigOHASD'.

2017/03/21 23:52:19 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.service'

2017/03/21 23:52:42 CLSRSC-594: Executing installation step 13 of 19: 'InstallAFD'.

2017/03/21 23:52:48 CLSRSC-594: Executing installation step 14 of 19: 'InstallACFS'.

CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac1'

CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac1' has completed

CRS-4133: Oracle High Availability Services has been stopped.

CRS-4123: Oracle High Availability Services has been started.

2017/03/21 23:53:17 CLSRSC-400: A system reboot is required to continue installing. The command '/u01/app/12.2.0/grid/perl/bin/perl -I/u01/app/12.2.0/grid/perl/lib -I/u01/app/12.2.0/grid/crs/install /u01/app/12.2.0/grid/crs/install/rootcrs.pl ' execution failed

[root@rac1 etc]#

 

While running the root.sh script, the following error appeared:

2017/03/21 23:53:17 CLSRSC-400: A system reboot is required to continue installing. The command '/u01/app/12.2.0/grid/perl/bin/perl -I/u01/app/12.2.0/grid/perl/lib -I/u01/app/12.2.0/grid/crs/install /u01/app/12.2.0/grid/crs/install/rootcrs.pl ' execution failed

According to the official documentation, the server must be rebooted and then the two scripts run again. This takes quite a while...


If you hit an error such as CLSRSC-1102: failed to start resource 'qosmserver', it may be that the memory you allocated is too small, leaving insufficient resources to start that service. Increase the memory and re-run the root.sh script.

At the end of the root.sh output:

CRS-6016: Resource auto-start has completed for server rac1

CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources

CRS-4123: Oracle High Availability Services has been started.

2017/03/21 14:12:39 CLSRSC-343: Successfully started Oracle Clusterware stack

2017/03/21 14:12:39 CLSRSC-594: Executing installation step 18 of 19: 'ConfigNode'.

2017/03/21 14:16:10 CLSRSC-594: Executing installation step 19 of 19: 'PostConfig'.

2017/03/21 14:17:55 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded

 

This indicates success.

The full log is fairly long; it follows for anyone interested:

[root@rac1 ~]# /u01/app/12.2.0/grid/root.sh

Performing root user operation.


The following environment variables are set as:

    ORACLE_OWNER= grid

    ORACLE_HOME=  /u01/app/12.2.0/grid

 

Enter the full pathname of the local bin directory: [/usr/local/bin]:

The contents of "dbhome" have not changed. No need to overwrite.

The contents of "oraenv" have not changed. No need to overwrite.

The contents of "coraenv" have not changed. No need to overwrite.

 

Entries will be added to the /etc/oratab file as needed by

Database Configuration Assistant when a database is created

Finished running generic part of root script.

Now product-specific root actions will be performed.

Relinking oracle with rac_on option

Using configuration parameter file: /u01/app/12.2.0/grid/crs/install/crsconfig_params

The log of current session can be found at:

  /u01/app/grid/crsdata/rac1/crsconfig/rootcrs_rac1_2017-03-22_00-00-32AM.log

2017/03/22 00:00:37 CLSRSC-594: Executing installation step 1 of 19: 'SetupTFA'.

2017/03/22 00:00:37 CLSRSC-4001: Installing Oracle Trace File Analyzer (TFA) Collector.

2017/03/22 00:00:37 CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector.

2017/03/22 00:00:37 CLSRSC-594: Executing installation step 2 of 19: 'ValidateEnv'.

2017/03/22 00:00:40 CLSRSC-363: User ignored prerequisites during installation

2017/03/22 00:00:40 CLSRSC-594: Executing installation step 3 of 19: 'CheckFirstNode'.

2017/03/22 00:00:42 CLSRSC-594: Executing installation step 4 of 19: 'GenSiteGUIDs'.

2017/03/22 00:00:43 CLSRSC-594: Executing installation step 5 of 19: 'SaveParamFile'.

2017/03/22 00:00:45 CLSRSC-594: Executing installation step 6 of 19: 'SetupOSD'.

2017/03/22 00:00:47 CLSRSC-594: Executing installation step 7 of 19: 'CheckCRSConfig'.

2017/03/22 00:00:47 CLSRSC-594: Executing installation step 8 of 19: 'SetupLocalGPNP'.

2017/03/22 00:00:49 CLSRSC-594: Executing installation step 9 of 19: 'ConfigOLR'.

2017/03/22 00:00:51 CLSRSC-594: Executing installation step 10 of 19: 'ConfigCHMOS'.

2017/03/22 00:01:37 CLSRSC-594: Executing installation step 11 of 19: 'CreateOHASD'.

2017/03/22 00:01:38 CLSRSC-594: Executing installation step 12 of 19: 'ConfigOHASD'.

2017/03/22 00:01:53 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.service'

2017/03/22 00:02:16 CLSRSC-594: Executing installation step 13 of 19: 'InstallAFD'.

2017/03/22 00:02:20 CLSRSC-594: Executing installation step 14 of 19: 'InstallACFS'.

CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac1'

CRS-2673: Attempting to stop 'ora.evmd' on 'rac1'

CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac1'

CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac1'

CRS-2673: Attempting to stop 'ora.gipcd' on 'rac1'

CRS-2677: Stop of 'ora.mdnsd' on 'rac1' succeeded

CRS-2677: Stop of 'ora.gipcd' on 'rac1' succeeded

CRS-2677: Stop of 'ora.evmd' on 'rac1' succeeded

CRS-2677: Stop of 'ora.gpnpd' on 'rac1' succeeded

CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac1' has completed

CRS-4133: Oracle High Availability Services has been stopped.

CRS-4123: Oracle High Availability Services has been started.

2017/03/22 00:02:52 CLSRSC-594: Executing installation step 15 of 19: 'InstallKA'.

2017/03/22 00:02:57 CLSRSC-594: Executing installation step 16 of 19: 'InitConfig'.

CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac1'

CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac1' has completed

CRS-4133: Oracle High Availability Services has been stopped.

CRS-4123: Oracle High Availability Services has been started.

CRS-2672: Attempting to start 'ora.evmd' on 'rac1'

CRS-2672: Attempting to start 'ora.mdnsd' on 'rac1'

CRS-2676: Start of 'ora.mdnsd' on 'rac1' succeeded

CRS-2676: Start of 'ora.evmd' on 'rac1' succeeded

CRS-2672: Attempting to start 'ora.gpnpd' on 'rac1'

CRS-2676: Start of 'ora.gpnpd' on 'rac1' succeeded

CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac1'

CRS-2672: Attempting to start 'ora.gipcd' on 'rac1'

CRS-2676: Start of 'ora.cssdmonitor' on 'rac1' succeeded

CRS-2676: Start of 'ora.gipcd' on 'rac1' succeeded

CRS-2672: Attempting to start 'ora.cssd' on 'rac1'

CRS-2672: Attempting to start 'ora.diskmon' on 'rac1'

CRS-2676: Start of 'ora.diskmon' on 'rac1' succeeded

CRS-2676: Start of 'ora.cssd' on 'rac1' succeeded

 

Disk groups created successfully. Check /u01/app/grid/cfgtoollogs/asmca/asmca-170322AM120336.log for details.

 

2017/03/22 00:04:39 CLSRSC-482: Running command: '/u01/app/12.2.0/grid/bin/ocrconfig -upgrade grid oinstall'

CRS-2672: Attempting to start 'ora.crf' on 'rac1'

CRS-2672: Attempting to start 'ora.storage' on 'rac1'

CRS-2676: Start of 'ora.storage' on 'rac1' succeeded

CRS-2676: Start of 'ora.crf' on 'rac1' succeeded

CRS-2672: Attempting to start 'ora.crsd' on 'rac1'

CRS-2676: Start of 'ora.crsd' on 'rac1' succeeded

CRS-4256: Updating the profile

Successful addition of voting disk 07f57bf9f7634f5abfb849735e86d3aa.

Successful addition of voting disk 3c930c3a19f34f25bfddc3a5a41bbb4e.

Successful addition of voting disk 4fab95ab67ed4f07bf4e9aa67e3e095e.

Successfully replaced voting disk group with +OCR.

CRS-4256: Updating the profile

CRS-4266: Voting file(s) successfully replaced

##  STATE    File Universal Id                File Name Disk group

--  -----    -----------------                --------- ---------

 1. ONLINE   07f57bf9f7634f5abfb849735e86d3aa (/dev/asmdiskb) [OCR]

 2. ONLINE   3c930c3a19f34f25bfddc3a5a41bbb4e (/dev/asmdiskd) [OCR]

 3. ONLINE   4fab95ab67ed4f07bf4e9aa67e3e095e (/dev/asmdiskc) [OCR]

Located 3 voting disk(s).

CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac1'

CRS-2673: Attempting to stop 'ora.crsd' on 'rac1'

CRS-2677: Stop of 'ora.crsd' on 'rac1' succeeded

CRS-2673: Attempting to stop 'ora.storage' on 'rac1'

CRS-2673: Attempting to stop 'ora.crf' on 'rac1'

CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac1'

CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac1'

CRS-2677: Stop of 'ora.crf' on 'rac1' succeeded

CRS-2677: Stop of 'ora.gpnpd' on 'rac1' succeeded

CRS-2677: Stop of 'ora.storage' on 'rac1' succeeded

CRS-2673: Attempting to stop 'ora.asm' on 'rac1'

CRS-2677: Stop of 'ora.mdnsd' on 'rac1' succeeded

CRS-2677: Stop of 'ora.asm' on 'rac1' succeeded

CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rac1'

CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rac1' succeeded

CRS-2673: Attempting to stop 'ora.ctssd' on 'rac1'

CRS-2673: Attempting to stop 'ora.evmd' on 'rac1'

CRS-2677: Stop of 'ora.evmd' on 'rac1' succeeded

CRS-2677: Stop of 'ora.ctssd' on 'rac1' succeeded

CRS-2673: Attempting to stop 'ora.cssd' on 'rac1'

CRS-2677: Stop of 'ora.cssd' on 'rac1' succeeded

CRS-2673: Attempting to stop 'ora.gipcd' on 'rac1'

CRS-2677: Stop of 'ora.gipcd' on 'rac1' succeeded

CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac1' has completed

CRS-4133: Oracle High Availability Services has been stopped.

2017/03/22 00:06:15 CLSRSC-594: Executing installation step 17 of 19: 'StartCluster'.

CRS-4123: Starting Oracle High Availability Services-managed resources

CRS-2672: Attempting to start 'ora.evmd' on 'rac1'

CRS-2672: Attempting to start 'ora.mdnsd' on 'rac1'

CRS-2676: Start of 'ora.mdnsd' on 'rac1' succeeded

CRS-2676: Start of 'ora.evmd' on 'rac1' succeeded

CRS-2672: Attempting to start 'ora.gpnpd' on 'rac1'

CRS-2676: Start of 'ora.gpnpd' on 'rac1' succeeded

CRS-2672: Attempting to start 'ora.gipcd' on 'rac1'

CRS-2676: Start of 'ora.gipcd' on 'rac1' succeeded

CRS-2672: Attempting to start 'ora.drivers.acfs' on 'rac1'

CRS-2674: Start of 'ora.drivers.acfs' on 'rac1' failed

CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac1'

CRS-2676: Start of 'ora.cssdmonitor' on 'rac1' succeeded

CRS-2672: Attempting to start 'ora.cssd' on 'rac1'

CRS-2672: Attempting to start 'ora.diskmon' on 'rac1'

CRS-2676: Start of 'ora.diskmon' on 'rac1' succeeded

CRS-2676: Start of 'ora.cssd' on 'rac1' succeeded

CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'rac1'

CRS-2672: Attempting to start 'ora.ctssd' on 'rac1'

CRS-2676: Start of 'ora.ctssd' on 'rac1' succeeded

CRS-2672: Attempting to start 'ora.drivers.acfs' on 'rac1'

CRS-2674: Start of 'ora.drivers.acfs' on 'rac1' failed

CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'rac1' succeeded

CRS-2672: Attempting to start 'ora.asm' on 'rac1'

CRS-2676: Start of 'ora.asm' on 'rac1' succeeded

CRS-2672: Attempting to start 'ora.storage' on 'rac1'

CRS-2676: Start of 'ora.storage' on 'rac1' succeeded

CRS-2672: Attempting to start 'ora.crf' on 'rac1'

CRS-2676: Start of 'ora.crf' on 'rac1' succeeded

CRS-2672: Attempting to start 'ora.crsd' on 'rac1'

CRS-2676: Start of 'ora.crsd' on 'rac1' succeeded

CRS-6023: Starting Oracle Cluster Ready Services-managed resources

CRS-6017: Processing resource auto-start for servers: rac1

CRS-6016: Resource auto-start has completed for server rac1

CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources

CRS-4123: Oracle High Availability Services has been started.

2017/03/22 00:09:08 CLSRSC-343: Successfully started Oracle Clusterware stack

2017/03/22 00:09:08 CLSRSC-594: Executing installation step 18 of 19: 'ConfigNode'.

 

CRS-2672: Attempting to start 'ora.ASMNET1LSNR_ASM.lsnr' on 'rac1'

CRS-2676: Start of 'ora.ASMNET1LSNR_ASM.lsnr' on 'rac1' succeeded

CRS-2672: Attempting to start 'ora.asm' on 'rac1'

CRS-2676: Start of 'ora.asm' on 'rac1' succeeded

CRS-2672: Attempting to start 'ora.OCR.dg' on 'rac1'

CRS-2676: Start of 'ora.OCR.dg' on 'rac1' succeeded

2017/03/22 00:14:18 CLSRSC-594: Executing installation step 19 of 19: 'PostConfig'.

2017/03/22 00:17:02 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded

[root@rac1 ~]#

 

2.3   Verify That the Cluster Is Healthy

[grid@rac1 ~]$ crsctl stat res -t

--------------------------------------------------------------------------------

Name           Target  State        Server                   State details       

--------------------------------------------------------------------------------

Local Resources

--------------------------------------------------------------------------------

ora.ASMNET1LSNR_ASM.lsnr

               ONLINE  ONLINE       rac1                     STABLE

               ONLINE  ONLINE       rac2                     STABLE

ora.LISTENER.lsnr

               ONLINE  ONLINE       rac1                     STABLE

               ONLINE  ONLINE       rac2                     STABLE

ora.OCR_VOTE.dg

               ONLINE  ONLINE       rac1                     STABLE

               ONLINE  ONLINE       rac2                     STABLE

ora.net1.network

               ONLINE  ONLINE       rac1                     STABLE

               ONLINE  ONLINE       rac2                     STABLE

ora.ons

               ONLINE  ONLINE       rac1                     STABLE

               ONLINE  ONLINE       rac2                     STABLE

--------------------------------------------------------------------------------

Cluster Resources

--------------------------------------------------------------------------------

ora.LISTENER_SCAN1.lsnr

      1        ONLINE  ONLINE       rac1                     STABLE

ora.MGMTLSNR

      1        OFFLINE OFFLINE                               STABLE

ora.asm

      1        ONLINE  ONLINE       rac1                     Started,STABLE

      2        ONLINE  ONLINE       rac2                     Started,STABLE

      3        OFFLINE OFFLINE                               STABLE

ora.cvu

      1        ONLINE  ONLINE       rac1                     STABLE

ora.qosmserver

      1        ONLINE  ONLINE       rac1                     STABLE

ora.rac1.vip

      1        ONLINE  ONLINE       rac1                     STABLE

ora.rac2.vip

      1        ONLINE  ONLINE       rac2                     STABLE

ora.scan1.vip

      1        ONLINE  ONLINE       rac1                     STABLE

--------------------------------------------------------------------------------

[grid@rac1 ~]$
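A few additional hedged checks that can be run at this point (output omitted):

[grid@rac1 ~]$ crsctl check cluster -all

[grid@rac1 ~]$ olsnodes -n -s

[grid@rac1 ~]$ srvctl status asm

[grid@rac1 ~]$ srvctl config scan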

3   Create Disk Groups with ASMCA

The interface is much cleaner now.


4   Install the Database Software

The installation procedure is the same as in previous releases.

./runInstaller

The installation screens are omitted here; essentially you configure SSH equivalence, select the disk groups, and so on.


The OS groups are more fine-grained now, with a clearer separation of duties.



5   Create the Database with DBCA

....

6   Verification

6.1   Check the Container Database

SQL> select name,cdb from v$database;

NAME      CDB
--------- ---
CNDBA     YES
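A hedged additional check that both instances are running, assuming the database name cndba from the plan above:

[oracle@rac1 ~]$ srvctl status database -d cndba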

6.2   Check the Pluggable Databases

SQL> col pdb_name for a30

SQL> select pdb_id,pdb_name,dbid,status,creation_scn from dba_pdbs;

 

    PDB_ID PDB_NAME                             DBID STATUS     CREATION_SCN
---------- ------------------------------ ---------- ---------- ------------
         3 lei                            3459708341 NORMAL          1456419
         2 PDB$SEED                       3422473700 NORMAL          1408778

Source: http://www.cndba.cn/Expect-le/article/1819