Item | Database 1 | Database 2 |
---|---|---|
Hostname | rac1 | rac2 |
OS | Red Hat Enterprise Linux Server release 7.2 (Maipo) | Red Hat Enterprise Linux Server release 7.2 (Maipo) |
Kernel | 3.10.0-327.el7.x86_64 | 3.10.0-327.el7.x86_64 |
Public NIC | enp0s9 | enp0s9 |
Public IP | 192.168.56.101 | 192.168.56.102 |
Private NIC | enp0s8 | enp0s8 |
Private IP | 10.0.0.7 | 10.0.0.8 |
Virtual IP | 192.168.56.103 | 192.168.56.104 |
SCAN name | definescan | |
SCAN IP | 192.168.56.105 | |
Unless otherwise noted, perform the following configuration on all nodes.
rac1 (root)

```
# hostnamectl set-hostname rac1
# su
# hostname
rac1
```

rac2 (root)

```
# hostnamectl set-hostname rac2
# su
# hostname
rac2
```
rac1 (root), rac2 (root): append the following entries to /etc/hosts on both nodes.
```
# public IP
192.168.56.101 rac1
192.168.56.102 rac2
# virtual IP
192.168.56.103 rac1-vip
192.168.56.104 rac2-vip
# rac scan IP
192.168.56.105 definescan
# private IP
10.0.0.7 rac1-priv
10.0.0.8 rac2-priv
```
!> All NICs must be configured with static IPs.
List all NICs with `nmcli con show`:

```
[root@rac1 /]# nmcli con show
NAME        UUID                                  TYPE            DEVICE
enp0s8      e849869a-de19-4824-bafd-4491e66e8ca4  802-3-ethernet  enp0s8
enp0s3      86db33b5-ea89-47aa-a038-98f6029fa608  802-3-ethernet  enp0s3
enp0s9      706ffc32-e82c-4a01-8b8f-eefbf92950ff  802-3-ethernet  --
virbr0-nic  1ac00d88-3f52-4dad-8da7-006b9073469f  802-3-ethernet  virbr0-nic
virbr0      00facfd9-5460-4846-8e94-1a12de673348  bridge          virbr0
```
Then go to /etc/sysconfig/network-scripts and locate each NIC's configuration file by device name; the files are normally named ifcfg-NAME.
rac1 (root)

ifcfg-enp0s9 (public IP):

```
TYPE=Ethernet
BOOTPROTO=none
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPADDR=192.168.56.101
IPV4_FAILURE_FATAL=no
NAME=enp0s9
UUID=706ffc32-e82c-4a01-8b8f-eefbf92950ff
DEVICE=enp0s9
ONBOOT=yes
```

ifcfg-enp0s8 (private IP):

```
TYPE=Ethernet
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPADDR=10.0.0.7
NAME=enp0s8
UUID=e849869a-de19-4824-bafd-4491e66e8ca4
DEVICE=enp0s8
ONBOOT=yes
PEERDNS=yes
PEERROUTES=yes
```
rac2 (root)

ifcfg-enp0s9 (public IP):

```
HWADDR=08:00:27:26:72:E5
TYPE=Ethernet
BOOTPROTO=none
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPADDR=192.168.56.102
IPV4_FAILURE_FATAL=no
NAME=enp0s9
UUID=bc89e1c6-2457-41ce-a366-5a505c5d1cd3
ONBOOT=yes
```

ifcfg-enp0s8 (private IP):

```
HWADDR=08:00:27:F9:1B:62
TYPE=Ethernet
IPADDR=10.0.0.8
BOOTPROTO=none
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=no
NAME=enp0s8
UUID=0d68de3e-74ab-4e0d-af99-12a68f5f7525
ONBOOT=yes
```
On both nodes, verify connectivity with ping:

```
ping rac1
ping rac2
ping rac1-priv
ping rac2-priv
```
If a NIC's configuration file is missing from the network-scripts directory, create one yourself; the NIC's UUID can be found with `nmcli con show`.
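A minimal sketch of creating one by hand (the device name and IP here are placeholders; `uuidgen` is one way to mint a UUID for a brand-new connection, otherwise reuse the one shown by `nmcli con show`):

```bash
# Hypothetical example: write ifcfg-enp0s9 from scratch; adjust device and IP.
UUID=$(uuidgen)   # fresh UUID for a connection that does not exist yet
cat > /etc/sysconfig/network-scripts/ifcfg-enp0s9 <<EOF
TYPE=Ethernet
BOOTPROTO=none
NAME=enp0s9
DEVICE=enp0s9
UUID=${UUID}
IPADDR=192.168.56.101
ONBOOT=yes
EOF
systemctl restart network   # pick up the new configuration
```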
Disable the firewall:

```
# systemctl status firewalld.service
# systemctl stop firewalld.service
# systemctl disable firewalld.service
```
Edit /etc/selinux/config and set:

```
SELINUX=disabled
```

Then turn SELinux off for the current session:

```
# setenforce 0
# getenforce
```
The RAC installation depends on a large number of packages, all of which ship on the OS image. Mount the image and configure a local yum repository to install them. Each virtualization product mounts images a little differently, but the steps are simple; VirtualBox is used as the example here.
The image appears as /dev/sr0 and can be mounted under /mnt with the `mount` command:

```
# mount /dev/sr0 /mnt
# cd /mnt
# ll
total 872
dr-xr-xr-x.  4 root root   2048 Oct 30  2015 addons
dr-xr-xr-x.  3 root root   2048 Oct 30  2015 EFI
-r--r--r--.  1 root root   8266 Apr  4  2014 EULA
-r--r--r--.  1 root root  18092 Mar  6  2012 GPL
dr-xr-xr-x.  3 root root   2048 Oct 30  2015 images
dr-xr-xr-x.  2 root root   2048 Oct 30  2015 isolinux
dr-xr-xr-x.  2 root root   2048 Oct 30  2015 LiveOS
-r--r--r--.  1 root root    114 Oct 30  2015 media.repo
dr-xr-xr-x.  2 root root 835584 Oct 30  2015 Packages
dr-xr-xr-x. 24 root root   6144 Oct 30  2015 release-notes
dr-xr-xr-x.  2 root root   4096 Oct 30  2015 repodata
-r--r--r--.  1 root root   3375 Oct 23  2015 RPM-GPG-KEY-redhat-beta
-r--r--r--.  1 root root   3211 Oct 23  2015 RPM-GPG-KEY-redhat-release
-r--r--r--.  1 root root   1568 Oct 30  2015 TRANS.TBL
```
```
# cd /etc/yum.repos.d
# cat <<EOF > redhat7.2iso.repo
[rhel7]
name = Red Hat Enterprise Linux 7.2
baseurl=file:///mnt/
gpgcheck=0
enabled=1
EOF
# yum clean all
# yum grouplist
# yum makecache
```

If these commands produce normal output, the repository is configured correctly.
Red Hat's default yum repository requires a registered subscription. On an unregistered system, yum reports the following error:

```
Loaded plugins: langpacks, product-id, search-disabled-repos, subscription-manager
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
```
The fix is to drop the built-in repository: just delete the file /etc/yum.repos.d/redhat.repo.
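For example:

```bash
rm -f /etc/yum.repos.d/redhat.repo
```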
This RAC installation is performed through the GUI installer, so VNC must be installed first; the database is then installed from within a VNC session.

Before installing VNC, make sure the image-mounting and local yum repository steps above are complete.
```
# yum install tigervnc-server
```
Edit /lib/systemd/system/vncserver@.service, replacing <USER> with the login user; here root is used directly.

```
[Unit]
Description=Remote desktop service (VNC)
After=syslog.target network.target

[Service]
Type=forking
# Clean any existing files in /tmp/.X11-unix environment
ExecStartPre=/bin/sh -c '/usr/bin/vncserver -kill %i > /dev/null 2>&1 || :'
ExecStart=/usr/sbin/runuser -l root -c "/usr/bin/vncserver %i"
PIDFile=/root/.vnc/%H%i.pid
ExecStop=/bin/sh -c '/usr/bin/vncserver -kill %i > /dev/null 2>&1 || :'

[Install]
WantedBy=multi-user.target
```
After editing, reload systemd:

```
# systemctl daemon-reload
```

Start the server:

```
# vncserver
```

The first start prompts for a password. The default port is 5901, which can also be confirmed with:

```
# netstat -npl|grep vnc
tcp        0      0 0.0.0.0:5901    0.0.0.0:*    LISTEN    7048/Xvnc
tcp6       0      0 :::5901         :::*         LISTEN    7048/Xvnc
```
All subsequent GUI operations are performed through a VNC client.
Create the required groups and users:

```
groupadd -g 1204 oinstall
groupadd -g 1200 dba
groupadd -g 1203 asmadmin
groupadd -g 1201 asmdba
groupadd -g 1202 asmoper
```

```
useradd -u 1100 -g oinstall -G dba,asmdba,asmadmin -d /home/oracle oracle
useradd -u 1200 -g oinstall -G asmadmin,asmdba,asmoper -d /home/grid grid
passwd oracle
passwd grid
```
```
# id nobody
```

Check that the nobody user exists; if it does not, create it manually, keeping the ID identical on both nodes (a sketch follows).
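A minimal creation sketch, assuming the conventional RHEL 7 UID/GID of 99 (any free ID works as long as both nodes agree):

```bash
# Hypothetical IDs: 99 is the stock RHEL 7 value for nobody.
groupadd -g 99 nobody
useradd -u 99 -g 99 -M -s /sbin/nologin nobody
id nobody   # verify, and compare the output across nodes
```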
```
cat 1>> /etc/sysctl.conf <<EOF
fs.file-max = 6815744
kernel.sem = 250 32000 100 128
kernel.shmmni = 4096
kernel.shmall = 1073741824
kernel.shmmax = 858993459200
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
fs.aio-max-nr = 1048576
net.ipv4.ip_local_port_range = 9000 65500
EOF
```

Apply the settings:

```
sysctl -p
```
limits.conf

```
cat 1>>/etc/security/limits.conf <<EOF
grid soft nofile 1024
grid hard nofile 65536
grid soft nproc 4096
grid hard nproc 16384
grid soft stack 10240
grid hard stack 32768
oracle soft nofile 1024
oracle hard nofile 65536
oracle soft nproc 4096
oracle hard nproc 16384
oracle soft stack 10240
oracle hard stack 32768
grid soft memlock -1
grid hard memlock -1
oracle soft memlock -1
oracle hard memlock -1
EOF
```
/etc/pam.d/login

```
cat 1>>/etc/pam.d/login <<EOF
session required pam_limits.so
EOF
```
/etc/profile

```
cat 1>>/etc/profile <<EOF
if [ \$USER = "oracle" ] || [ \$USER = "grid" ]; then
    if [ \$SHELL = "/bin/ksh" ]; then
        ulimit -p 16384
        ulimit -n 65536
    else
        ulimit -u 16384 -n 65536
    fi
    umask 022
fi
EOF
```
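A quick sanity check, as an optional extra, that the limits apply to a fresh login shell (run as root):

```bash
su - grid -c 'ulimit -n'     # expect 65536
su - oracle -c 'ulimit -u'   # expect 16384
```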
Environment variables for the grid user

Add the following to .bash_profile. RAC node 1 is shown as the example; on node 2 use ORACLE_SID=+ASM2, on node 3 ORACLE_SID=+ASM3, and so on.
rac1

```
# su - grid
# vi ~/.bash_profile
umask 022
export ORACLE_SID=+ASM1
export ORACLE_HOME=/u01/app/11.2.0/grid
export ORACLE_BASE=/u01/app/oracle
export PATH=/u01/app/11.2.0/grid/bin:$PATH
# source ~/.bash_profile
```

rac2

```
# su - grid
# vi ~/.bash_profile
umask 022
export ORACLE_SID=+ASM2
export ORACLE_HOME=/u01/app/11.2.0/grid
export ORACLE_BASE=/u01/app/oracle
export PATH=/u01/app/11.2.0/grid/bin:$PATH
# source ~/.bash_profile
```
Environment variables for the oracle user

Add the following to .bash_profile. RAC node 1 is shown as the example; on node 2 use ORACLE_SID=db2, on node 3 ORACLE_SID=db3.
rac1

```
# su - oracle
# vi ~/.bash_profile
umask 022
export ORACLE_SID=db1
export ORACLE_BASE=/u01/app/oracledb
export ORACLE_HOME=/u01/app/oracledb/product/11.2.0/db_1
export PATH=$ORACLE_HOME/bin:$PATH
# source ~/.bash_profile
```

rac2

```
# su - oracle
# vi ~/.bash_profile
umask 022
export ORACLE_SID=db2
export ORACLE_BASE=/u01/app/oracledb
export ORACLE_HOME=/u01/app/oracledb/product/11.2.0/db_1
export PATH=$ORACLE_HOME/bin:$PATH
# source ~/.bash_profile
```
As root, run the following on all nodes to create the directories:

```
su - root
mkdir -p /u01/app/11.2.0/grid
mkdir -p /u01/app/oracle/product/11.2.0/db_1
mkdir -p /u01/soft
mkdir -p /u01/app/oracledb
mkdir -p /u01/app/oracledb/product/11.2.0/db_1
chown -R grid:oinstall /u01/app/oracle
chown -R grid:oinstall /u01
chown -R oracle:oinstall /u01/app/oracle/product/11.2.0/db_1
chown -R oracle:oinstall /u01/app/oracledb
chown -R oracle:oinstall /u01/app/oracledb/product/11.2.0/db_1
chown -R grid:oinstall /u01/app/11.2.0/grid
chmod -R 775 /u01/
```
Disable transparent huge pages, both at runtime via rc.local and permanently via the kernel command line (note the quoted heredoc so the $(sed ...) expression is written literally into the grub file):

```
# chmod +x /etc/rc.d/rc.local
# cat >>/etc/rc.local <<EOF
if test -f /sys/kernel/mm/transparent_hugepage/enabled; then
    echo never > /sys/kernel/mm/transparent_hugepage/enabled
fi
EOF
# cat >/etc/default/grub <<'EOF'
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_DEFAULT=saved
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL_OUTPUT="console"
GRUB_CMDLINE_LINUX="rd.lvm.lv=rhel/root rd.lvm.lv=rhel/swap rhgb quiet transparent_hugepage=never"
GRUB_DISABLE_RECOVERY="true"
EOF
# grub2-mkconfig -o /boot/grub2/grub.cfg
```
Extend swap if it is too small:

```
# grep SwapTotal /proc/meminfo
SwapTotal:       2723836 kB
# mkdir -p /usr/swap
# dd if=/dev/zero of=/usr/swap/swapfile bs=1G count=2
# mkswap /usr/swap/swapfile
# swapon /usr/swap/swapfile
# grep SwapTotal /proc/meminfo
SwapTotal:       4820984 kB
```
To activate it at boot, edit /etc/fstab and append this line:

```
/usr/swap/swapfile swap swap defaults 0 0
```
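A quick check, as an optional extra, after the next reboot:

```bash
swapon -s   # the swapfile should be listed
free -m     # total swap should include the extra 2 GB
```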
Edit /etc/systemd/logind.conf and set RemoveIPC to no:

```
RemoveIPC=no
```

Reload and restart logind:

```
systemctl daemon-reload
systemctl restart systemd-logind
```
Edit /etc/sysconfig/network and add:

```
NOZEROCONF=yes
```
Run the following to install the dependency packages; ignore any errors:

```
yum clean all
yum install -y binutils*
yum install -y compat-libcap1*
yum install -y compat-libstdc++*
yum install -y compat-libstdc++*686*
yum install -y e2fsprogs*
yum install -y e2fsprogs-libs*
yum install -y glibc*
yum install -y glibc*686*
yum install -y glibc-devel*
yum install -y glibc-devel*686*
yum install -y ksh*
yum install -y libgcc*
yum install -y libgcc*686*
yum install -y libstdc++*
yum install -y libstdc++*686*
yum install -y libstdc++-devel*
yum install -y libaio*
yum install -y libaio*686*
yum install -y libaio-devel*
yum install -y libaio-devel*686*
yum install -y libXtst*
yum install -y libXtst*686*
yum install -y libX11*
yum install -y libX11*686*
yum install -y libXau*
yum install -y libXau*686*
yum install -y libxcb*
yum install -y libxcb*686*
yum install -y libXi*
yum install -y libXi*686*
yum install -y make*
yum install -y net-tools*
yum install -y nfs-utils*
yum install -y sysstat*
yum install -y smartmontools*
yum install -y unixODBC*
yum install -y unixODBC*686*
yum install -y unixODBC-devel*
yum install -y unixODBC-devel*686*
yum install -y gcc-*
yum install -y gcc-c++*
yum install -y elfutils-libelf-devel
```
Special note: RHEL 7.2's certification for Oracle 11.2.0.4 came after the fact (11.2.0.4 was released before Red Hat 7.2). The compat-libstdc++-33 package is required by the 11.2.0.4 installation but is not shipped with Red Hat 7.2, so it must be obtained from another release and installed manually.

The two packages can be obtained from the addresses below.

After downloading them, install with:

```
rpm -ivh compat-libstdc++-33-3.2.3-72.el7.x86_64.rpm
rpm -ivh compat-libstdc++-33-3.2.3-72.el7.i686.rpm
```
Run the following to check that the dependencies are installed:

```
rpm -q binutils compat-libcap1 compat-libstdc++-33 e2fsprogs e2fsprogs-libs \
    glibc glibc-devel ksh libgcc libstdc++ libstdc++-devel \
    libaio libaio-devel libXtst libX11 libXau libxcb libXi \
    make net-tools nfs-utils sysstat smartmontools \
    unixODBC unixODBC-devel gcc gcc-c++ elfutils-libelf-devel
```

If any package reports "is not installed", install it before moving on.
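To surface only the missing packages, the query can be filtered (a small convenience; pass the full package list from above, only a subset is shown here):

```bash
rpm -q binutils compat-libcap1 compat-libstdc++-33 ksh libaio-devel \
    | grep "is not installed"
```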
Five shared storage disks are attached to the system, used as follows:

Path | Size | Purpose |
---|---|---|
/dev/sdb | 2G | vote (voting disk) |
/dev/sdc | 2G | vote (voting disk) |
/dev/sdd | 2G | vote (voting disk) |
/dev/sde | 20G | arch (archive logs) |
/dev/sdf | 40G | data (datafiles) |
Use `fdisk -l` to inspect the details.

Because the storage is shared, partitioning only needs to be done on one node.
Partition each disk. Taking /dev/sde as an example, run `fdisk /dev/sde` and enter n -> p -> (accept every default) -> w:

```
# fdisk /dev/sde
Command (m for help): n
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p): p
Partition number (1-4, default 1):
First sector (2048-43548671, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-43548671, default 43548671):
Using default value 43548671
Partition 1 of type Linux and of size 20.8 GiB is set

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
```
Read each disk's WWID with `/usr/lib/udev/scsi_id -g -u /dev/sdX`. Because the storage is shared, every node sees the same WWIDs. A udev rules file then sets the device ownership and permissions so that the grid user can access the disks.

```
# /usr/lib/udev/scsi_id -g -u /dev/sdb
1ATA_VBOX_HARDDISK_VB54ce865f-e65a7d00
# /usr/lib/udev/scsi_id -g -u /dev/sdc
1ATA_VBOX_HARDDISK_VB8f9429ee-32f50530
# /usr/lib/udev/scsi_id -g -u /dev/sdd
1ATA_VBOX_HARDDISK_VBc92cde00-a564f90e
# /usr/lib/udev/scsi_id -g -u /dev/sde
1ATA_VBOX_HARDDISK_VBcc226ad4-aee5f903
# /usr/lib/udev/scsi_id -g -u /dev/sdf
1ATA_VBOX_HARDDISK_VB3fd31e1a-a035187e
```
Using the WWIDs found above, create the rules file /etc/udev/rules.d/99-asmdevices.rules with the following content. RESULT is the WWID queried above; create one record per disk.

```
ENV{DEVTYPE}=="disk", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode", RESULT=="1ATA_VBOX_HARDDISK_VB54ce865f-e65a7d00", SYMLINK+="asmdisk001", OWNER="grid", GROUP="asmadmin", MODE="0660"
ENV{DEVTYPE}=="disk", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode", RESULT=="1ATA_VBOX_HARDDISK_VB8f9429ee-32f50530", SYMLINK+="asmdisk002", OWNER="grid", GROUP="asmadmin", MODE="0660"
ENV{DEVTYPE}=="disk", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode", RESULT=="1ATA_VBOX_HARDDISK_VBc92cde00-a564f90e", SYMLINK+="asmdisk003", OWNER="grid", GROUP="asmadmin", MODE="0660"
ENV{DEVTYPE}=="disk", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode", RESULT=="1ATA_VBOX_HARDDISK_VBcc226ad4-aee5f903", SYMLINK+="asmdisk004", OWNER="grid", GROUP="asmadmin", MODE="0660"
ENV{DEVTYPE}=="disk", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode", RESULT=="1ATA_VBOX_HARDDISK_VB3fd31e1a-a035187e", SYMLINK+="asmdisk005", OWNER="grid", GROUP="asmadmin", MODE="0660"
```

Apply the rules:

```
partprobe
udevadm control --reload-rules
udevadm trigger --type=devices --action=change
```
!> If you bind by device name, be sure to refresh the disk information with the `partprobe` command before applying the rules file.
If asmdisk* symlinks appear under /dev/, the rules were applied successfully:

```
# cd /dev
# ll asmdisk*
lrwxrwxrwx 1 root root 3 Sep 26 00:55 asmdisk001 -> sdb
lrwxrwxrwx 1 root root 3 Sep 26 00:55 asmdisk002 -> sdc
lrwxrwxrwx 1 root root 3 Sep 26 00:55 asmdisk003 -> sdd
lrwxrwxrwx 1 root root 3 Sep 26 00:55 asmdisk004 -> sde
lrwxrwxrwx 1 root root 3 Sep 26 00:55 asmdisk005 -> sdf
```
Now check the device permissions: if everything worked, the mode is 660 (rw-rw----) and the owner is grid:asmadmin.

```
# ls -l /dev/sd*
brw-rw----. 1 grid asmadmin 8, 16 Sep 26 10:38 /dev/sdb
brw-rw----. 1 grid asmadmin 8, 32 Sep 26 10:38 /dev/sdc
brw-rw----. 1 grid asmadmin 8, 48 Sep 26 10:38 /dev/sdd
brw-rw----. 1 grid asmadmin 8, 64 Sep 26 10:38 /dev/sde
brw-rw----. 1 grid asmadmin 8, 80 Sep 26 10:38 /dev/sdf
```
The installation media are placed under /u01/soft:

```
# ll /u01/soft/
total 9797256
-rw-r--r--@ 1 grid oinstall 1395582860  9 26 14:13 p13390677_112040_Linux-x86-64_1of7.zip
-rw-r--r--@ 1 grid oinstall 1151304589  9 26 13:52 p13390677_112040_Linux-x86-64_2of7.zip
-rw-r--r--@ 1 grid oinstall 1205251894  9 20 01:57 p13390677_112040_Linux-x86-64_3of7.zip
-rw-r--r--@ 1 grid oinstall 1133472011  9 26 09:59 p29255947_112040_Linux-x86-64.zip
-rw-r--r--@ 1 grid oinstall  113112960  9 17 13:02 p6880880_112000_Linux-x86-64.zip
```

Unzip the archives as grid (a loop avoids the shell-globbing pitfall where `unzip *.zip` treats the later archives as member names):

```
# su - grid
$ cd /u01/soft
$ for f in *.zip; do unzip "$f"; done
```

Install the cvuqdisk package as root:

```
# su - root
# cd /u01/soft/grid/rpm
# rpm -ivh cvuqdisk-1.0.9-1.rpm
```
Copy /u01/soft/grid/rpm/cvuqdisk-1.0.9-1.rpm to the tmp directory on the other nodes (scp works fine), then on each of those nodes run:

```
# su - root
# cd /tmp
# scp grid@rac1:/u01/soft/grid/rpm/cvuqdisk-1.0.9-1.rpm .
# rpm -ivh cvuqdisk-1.0.9-1.rpm
```
Once the cvuqdisk package is installed everywhere, log in as the grid user on node 1 and launch the cluster software installer:

```
# su - grid
$ export DISPLAY=:1.0
$ xhost +
$ cd /u01/soft/grid
$ ./runInstaller -jreLoc /etc/alternatives/jre_1.8.0
```
`export DISPLAY=:1.0` tells the GUI which display to draw on; :1.0 is the VNC display. The installer has some rendering bugs (truncated dialogs, buttons that cannot be clicked); launching it as `./runInstaller -jreLoc /etc/alternatives/jre_1.8.0` avoids them.
Select Simplified Chinese as an additional language.

The SCAN port defaults to 1521 and can be changed; do not enable GNS.

Click Add to add the remaining cluster nodes.
At this point, if node trust has not been configured yet, clicking Next raises an [INS-30132] error.

Click SSH Connectivity on this screen to configure mutual trust.

Enter the grid user's password for rac2 and click Setup. Once it reports success, click Test to confirm the trust is in place; clicking Test before Setup produces an error.
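To double-check from a shell (an optional extra, outside the installer flow), passwordless SSH should now work in both directions:

```bash
# As grid on rac1: should print the date with no password prompt.
ssh rac2 date
# And the reverse, as grid on rac2:
ssh rac1 date
```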
On the network interface usage screen, mark the interfaces not used for the cluster as Do Not Use.

Choose Oracle ASM as the storage option.
For Disk Group Name, enter CRS. Click Change Discovery Path and enter /dev/asmdisk*; this is the device name pattern we defined earlier in the udev rules.
Following the earlier plan, select the three 2G disks asmdisk001/002/003.
The missing pdksh warning can be ignored because the system already provides ksh; the ASM device warnings can also be ignored because the raw disks are confirmed to be shared.
If errors appear here, fix them according to the hints and click Check Again to re-verify. For errors you have confirmed are ignorable, tick Ignore All and continue to the next step.
Running the first script, /u01/app/oraInventory/orainstRoot.sh, normally completes without problems.

Running the second script, /u01/app/11.2.0/grid/root.sh, fails with the following error:

```
Adding Clusterware entries to inittab
ohasd failed to start
Failed to start the Clusterware. Last 20 lines of the alert log follow:
2019-09-27 12:54:19.483:
```
This is a compatibility problem between RHEL 7.x and 11.2.0.4.0: RHEL 7 runs and restarts processes with systemd rather than initd, while root.sh starts the ohasd process the traditional initd way. The fix is to define ohasd as a systemd service on RHEL 7 and have it running before root.sh executes.

Stop the root.sh script and run the following as root:

```
# touch /usr/lib/systemd/system/ohas.service
# chmod 777 /usr/lib/systemd/system/ohas.service
# cat >>/usr/lib/systemd/system/ohas.service <<EOF
[Unit]
Description=Oracle High Availability Services
After=syslog.target

[Service]
ExecStart=/etc/init.d/init.ohasd run >/dev/null 2>&1 Type=simple
Restart=always

[Install]
WantedBy=multi-user.target
EOF
# systemctl daemon-reload
# systemctl enable ohas.service
# systemctl start ohas.service
# systemctl status ohas.service
● ohas.service - Oracle High Availability Services
   Loaded: loaded (/usr/lib/systemd/system/ohas.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2019-09-27 00:36:06 CST; 4s ago
 Main PID: 5730 (init.ohasd)
   CGroup: /system.slice/ohas.service
           └─5730 /bin/sh /etc/init.d/init.ohasd run >/dev/null 2>&1 Type=simple

Sep 27 00:36:06 rac2 systemd[1]: Started Oracle High Availability Services.
Sep 27 00:36:06 rac2 systemd[1]: Starting Oracle High Availability Services...
```
Re-run root.sh. If it still fails, the likely cause is that ohas.service did not start immediately after root.sh created init.ohasd. The workaround: while root.sh runs, keep checking /etc/init.d until the init.ohasd file appears, then immediately start the service by hand:

```
systemctl start ohas.service
```
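One way to script that polling (a small sketch of my own), run in a second terminal while root.sh executes:

```bash
# Wait for root.sh to create init.ohasd, then start the service right away.
while [ ! -f /etc/init.d/init.ohasd ]; do
    sleep 1
done
systemctl start ohas.service
```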
When both nodes print the following, the script has run successfully:

```
CRS-2672: Attempting to start 'ora.asm' on 'rac1'
CRS-2676: Start of 'ora.asm' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.CRS.dg' on 'rac1'
CRS-2676: Start of 'ora.CRS.dg' on 'rac1' succeeded
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
```
After all nodes have finished, check the resource status from any node:

```
# /u01/app/11.2.0/grid/bin/crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.CRS.dg
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
ora.asm
               ONLINE  ONLINE       rac1                     Started
               ONLINE  ONLINE       rac2                     Started
ora.gsd
               OFFLINE OFFLINE      rac1
               OFFLINE OFFLINE      rac2
ora.net1.network
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
ora.ons
               ONLINE  ONLINE       rac1
               ONLINE  ONLINE       rac2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac1
ora.cvu
      1        ONLINE  ONLINE       rac1
ora.oc4j
      1        ONLINE  ONLINE       rac1
ora.rac1.vip
      1        ONLINE  ONLINE       rac1
ora.rac2.vip
      1        ONLINE  ONLINE       rac2
ora.scan1.vip
      1        ONLINE  ONLINE       rac1
```
The cluster verification step complains about the SCAN configuration. This is because we rely on /etc/hosts for name resolution; as long as every node resolves definescan correctly it is harmless.

Click Next and choose to ignore the error.
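A quick way to confirm hosts-based resolution on every node (a small check of my own, using the SCAN name definescan from the plan above):

```bash
getent hosts definescan   # should print the SCAN IP from /etc/hosts
ping -c 1 definescan      # should reach that address
```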
If the grid installation fails and has to be redone, clean up before retrying. Deconfigure first, while the grid binaries still exist, then recreate the directories:

```
# deconfigure the previous installation before removing anything
/u01/app/11.2.0/grid/perl/bin/perl /u01/app/11.2.0/grid/crs/install/roothas.pl -deconfig -force
cd /etc/
rm -rf ora*   # remove the previous configuration
cd /u01/app
rm -rf *
mkdir -p /u01/app/11.2.0/grid
mkdir -p /u01/app/oracle/product/11.2.0/db_1
mkdir -p /u01/soft
mkdir -p /u01/app/oracledb
mkdir -p /u01/app/oracledb/product/11.2.0/db_1
chown -R grid:oinstall /u01/app/oracle
chown -R grid:oinstall /u01
chown -R oracle:oinstall /u01/app/oracle/product/11.2.0/db_1
chown -R oracle:oinstall /u01/app/oracledb
chown -R oracle:oinstall /u01/app/oracledb/product/11.2.0/db_1
chown -R grid:oinstall /u01/app/11.2.0/grid
chmod -R 775 /u01/
```
Log in as the oracle user on node 1 and launch the database software installer:

```
# su - oracle
$ export DISPLAY=:1.0
$ xhost +
$ cd /u01/soft/database
$ ./runInstaller -jreLoc /etc/alternatives/jre_1.8.0
```
As with the cluster installation, mutual trust must be configured here: enter the oracle user's password for rac2, click Setup, and when it finishes click Test; once the test passes, continue to the next step.
For the privileged operating system groups, choose dba.
During the link phase the installer reports:

```
Error in invoking target 'agent nhms' of makefile....
```

This is another RHEL 7.x / 11.2.0.4 compatibility bug. The fix (run on the installing node only):

```
su - oracle
cd $ORACLE_HOME/sysman/lib
vi ins_emagent.mk
```

Search for the key MK_EMAGENT_NMECTL and append -lnnz11, as follows:

```
#===========================
#  emdctl
#===========================

$(SYSMANBIN)emdctl:
	$(MK_EMAGENT_NMECTL) -lnnz11
```
After making the change, return to the installer and click Retry.
Run asmca as the grid user and click Create to create the disk groups:

```
su -
export DISPLAY=:1.0
xhost +
su - grid
export DISPLAY=:1.0
xhost +
asmca
```
Choose External redundancy. If the disks are not visible here, the window may simply be too small; drag its bottom-right corner to enlarge it. Choose External redundancy for the other disk group as well.

The final ASM disk group status looks like this:
Click Mount All, then click Exit to quit.
Run dbca as the oracle user to create the database:

```
su -
export DISPLAY=:1.0
xhost +
su - oracle
export DISPLAY=:1.0
xhost +
dbca
```
Place the archive logs in +ARCH.

Raise the process count to 1000.

Choose the UTF-8 character set.

Keep the default connection mode.
The following steps must be completed on all nodes.

```
# ll /u01/soft
-rw-r--r--@ 1 grid oinstall 1133472011  9 26 09:59 p29255947_112040_Linux-x86-64.zip
-rw-r--r--@ 1 grid oinstall  113112960  9 17 13:02 p6880880_112000_Linux-x86-64.zip
```
Replace OPatch in both the grid home and the database home with the new version:

```
export GRID_HOME=/u01/app/11.2.0/grid
export ORACLE_HOME=/u01/app/oracledb/product/11.2.0/db_1
mv $GRID_HOME/OPatch $GRID_HOME/OPatch_bak
mv $ORACLE_HOME/OPatch $ORACLE_HOME/OPatch_bak
unzip p6880880_112000_Linux-x86-64.zip -d $GRID_HOME
unzip p6880880_112000_Linux-x86-64.zip -d $ORACLE_HOME
chown -R grid:oinstall $GRID_HOME/OPatch
chown -R oracle:oinstall $ORACLE_HOME/OPatch
```
Unpack the patch and generate the OCM response file as grid:

```
# su - grid
$ cd /u01/soft
$ unzip p29255947_112040_Linux-x86-64.zip
$ /u01/app/11.2.0/grid/OPatch/ocm/bin/emocmrsp -no_banner -output /tmp/ocm.rsp
Provide your email address to be informed of security issues, install and
initiate Oracle Configuration Manager. Easier for you if you use your My Oracle Support Email address/User Name.
Visit http://www.oracle.com/support/policies.html for details.
Email address/User Name:
You have not provided an email address for notification of security issues.
Do you wish to remain uninformed of security issues ([Y]es, [N]o) [N]: y
The OCM configuration response file (/tmp/ocm.rsp) was successfully created.
$ ll /tmp/ocm.rsp
-rw-r--r-- 1 grid oinstall 621 Sep 28 16:27 /tmp/ocm.rsp
```
```
# su -
# export PATH=/u01/app/11.2.0/grid/OPatch:$PATH
# opatch auto ./29255947/ -ocmrf /tmp/ocm.rsp
Executing /u01/app/11.2.0/grid/perl/bin/perl /u01/app/11.2.0/grid/OPatch/crs/patch11203.pl -patchdir . -patchn 29255947 -ocmrf /tmp/ocm.rsp -paramfile /u01/app/11.2.0/grid/crs/install/crsconfig_params
This is the main log file: /u01/app/11.2.0/grid/cfgtoollogs/opatchauto2019-09-28_16-32-59.log
This file will show your detected configuration and all the steps that opatchauto attempted to do on your system:
/u01/app/11.2.0/grid/cfgtoollogs/opatchauto2019-09-28_16-32-59.report.log
2019-09-28 16:32:59: Starting Clusterware Patch Setup
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
Stopping RAC /u01/app/oracle_base/product/11.2.0/db_1 ...
Stopped RAC /u01/app/oracle_base/product/11.2.0/db_1 successfully
patch ./29255947/29141201/custom/server/29141201  apply successful for home  /u01/app/oracle_base/product/11.2.0/db_1
patch ./29255947/29141056  apply successful for home  /u01/app/oracle_base/product/11.2.0/db_1
Stopping CRS...
Stopped CRS successfully
patch ./29255947/29141201  apply successful for home  /u01/app/11.2.0/grid
patch ./29255947/29141056  apply successful for home  /u01/app/11.2.0/grid
patch ./29255947/28729245  apply successful for home  /u01/app/11.2.0/grid
Starting CRS...
Installing Trace File Analyzer
CRS-4123: Oracle High Availability Services has been started.
Starting RAC /u01/app/oracle_base/product/11.2.0/db_1 ...
Started RAC /u01/app/oracle_base/product/11.2.0/db_1 successfully
opatch auto succeeded.
```
When `opatch auto succeeded` appears, the patch has been applied successfully. If patching fails, find the log file named in the console output; in the run above it is /u01/app/11.2.0/grid/cfgtoollogs/opatchauto2019-09-28_16-32-59.log.
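To list what is now registered in each home, OPatch's inventory command can be used (paths follow the environment set up above):

```bash
# As grid: patches applied to the grid home.
/u01/app/11.2.0/grid/OPatch/opatch lsinventory
# As oracle: patches applied to the database home.
/u01/app/oracledb/product/11.2.0/db_1/OPatch/opatch lsinventory
```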
At this point, the RAC database installation is complete.