1. Preparation before implementation
2. Preparation before installation
Oracle 18c RAC installation guide for the Linux platform:
Part 1: Oracle 18c RAC installation on Linux, Part 1: Preparation
Part 2: Oracle 18c RAC installation on Linux, Part 2: GI configuration
Part 3: Oracle 18c RAC installation on Linux, Part 3: DB configuration
Installation environment for this article: <font color="red">OEL 7.5 + Oracle 18.3 GI & RAC</font>
<h1 id="1">1、實施前期準備工做</h1>網絡
<h2 id="1.1">1.1 服務器安裝操做系統</h2> 配置徹底相同的兩臺服務器,安裝相同版本的Linux操做系統。留存系統光盤或者鏡像文件。 我這裏是OEL7.5,系統目錄大小均一致。對應OEL7.5的系統鏡像文件放在服務器上,供後面配置本地yum使用。 <h2 id="1.2">1.2 Oracle安裝介質</h2>Oracle 18.3 版本2個zip包(總大小9G+,注意空間): LINUX.X64_180000_grid_home.zip MD5: CD42D137FD2A2EEB4E911E8029CC82A9 LINUX.X64_180000_db_home.zip MD5: 99A7C4A088A8A502C261E741A8339AE8 這個本身去Oracle官網下載,而後只須要上傳到節點1便可。 <h2 id="1.3">1.3 共享存儲規劃</h2>從存儲中劃分出兩臺主機能夠同時看到的共享LUN,3個1G的盤用做OCR和Voting Disk,1個40G的盤作GIMR,其他規劃作數據盤和FRA。 根據實際須要選擇multipath或者udev綁定設備。這裏選用multipath綁定。oracle
multipath -ll
multipath -F
multipath -v2
multipath -ll
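For reference, a minimal /etc/multipath.conf sketch that binds each shared LUN to a friendly alias. This is only a sketch: the WWIDs are placeholders to be replaced with the values reported by multipath -ll on your own system, and the alias names (ocrvote1, gimr, ...) are purely illustrative:

# /etc/multipath.conf -- minimal sketch; WWIDs and aliases below are placeholders
defaults {
    user_friendly_names yes
    find_multipaths     yes
}
multipaths {
    multipath {
        # WWID of the first 1G OCR/Voting LUN, taken from multipath -ll
        wwid  36001405aaaaaaaaaaaaaaaaaaaaaaaa1
        alias ocrvote1
    }
    multipath {
        # WWID of the 40G GIMR LUN
        wwid  36001405aaaaaaaaaaaaaaaaaaaaaaaa4
        alias gimr
    }
    # ... one multipath {} stanza per remaining LUN (ocrvote2, ocrvote3, data, fra)
}

After editing the file, reload the configuration with systemctl reload multipathd (or multipath -r) and confirm the aliases with multipath -ll.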
In my lab environment the storage LUNs are simulated by an iSCSI server. The main configuration on the target (server) side is shown below; a sketch of the client-side discovery commands follows the listing:
o- / .......... [...]
  o- backstores .......... [...]
  | o- block .......... [Storage Objects: 8]
  | | o- disk1 .......... [/dev/mapper/vg_storage-lv_lun1 (1.0GiB) write-thru activated]
  | | | o- alua .......... [ALUA Groups: 1]
  | | |   o- default_tg_pt_gp .......... [ALUA state: Active/optimized]
  | | o- disk2 .......... [/dev/mapper/vg_storage-lv_lun2 (1.0GiB) write-thru activated]
  | | | o- alua .......... [ALUA Groups: 1]
  | | |   o- default_tg_pt_gp .......... [ALUA state: Active/optimized]
  | | o- disk3 .......... [/dev/mapper/vg_storage-lv_lun3 (1.0GiB) write-thru activated]
  | | | o- alua .......... [ALUA Groups: 1]
  | | |   o- default_tg_pt_gp .......... [ALUA state: Active/optimized]
  | | o- disk4 .......... [/dev/mapper/vg_storage-lv_lun4 (40.0GiB) write-thru activated]
  | | | o- alua .......... [ALUA Groups: 1]
  | | |   o- default_tg_pt_gp .......... [ALUA state: Active/optimized]
  | | o- disk5 .......... [/dev/mapper/vg_storage-lv_lun5 (10.0GiB) write-thru activated]
  | | | o- alua .......... [ALUA Groups: 1]
  | | |   o- default_tg_pt_gp .......... [ALUA state: Active/optimized]
  | | o- disk6 .......... [/dev/mapper/vg_storage-lv_lun6 (10.0GiB) write-thru activated]
  | | | o- alua .......... [ALUA Groups: 1]
  | | |   o- default_tg_pt_gp .......... [ALUA state: Active/optimized]
  | | o- disk7 .......... [/dev/mapper/vg_storage-lv_lun7 (10.0GiB) write-thru activated]
  | | | o- alua .......... [ALUA Groups: 1]
  | | |   o- default_tg_pt_gp .......... [ALUA state: Active/optimized]
  | | o- disk8 .......... [/dev/mapper/vg_storage-lv_lun8 (16.0GiB) write-thru activated]
  | |   o- alua .......... [ALUA Groups: 1]
  | |     o- default_tg_pt_gp .......... [ALUA state: Active/optimized]
  | o- fileio .......... [Storage Objects: 0]
  | o- pscsi .......... [Storage Objects: 0]
  | o- ramdisk .......... [Storage Objects: 0]
  o- iscsi .......... [Targets: 1]
  | o- iqn.2003-01.org.linux-iscsi.storage-c.x8664:sn.bc3a6511567c .......... [TPGs: 1]
  |   o- tpg1 .......... [no-gen-acls, no-auth]
  |     o- acls .......... [ACLs: 1]
  |     | o- iqn.2003-01.org.linux-iscsi.storage-c.x8664:sn.bc3a6511567c:client .......... [Mapped LUNs: 8]
  |     |   o- mapped_lun0 .......... [lun0 block/disk1 (rw)]
  |     |   o- mapped_lun1 .......... [lun1 block/disk2 (rw)]
  |     |   o- mapped_lun2 .......... [lun2 block/disk3 (rw)]
  |     |   o- mapped_lun3 .......... [lun3 block/disk4 (rw)]
  |     |   o- mapped_lun4 .......... [lun4 block/disk5 (rw)]
  |     |   o- mapped_lun5 .......... [lun5 block/disk6 (rw)]
  |     |   o- mapped_lun6 .......... [lun6 block/disk7 (rw)]
  |     |   o- mapped_lun7 .......... [lun7 block/disk8 (rw)]
  |     o- luns .......... [LUNs: 8]
  |     | o- lun0 .......... [block/disk1 (/dev/mapper/vg_storage-lv_lun1) (default_tg_pt_gp)]
  |     | o- lun1 .......... [block/disk2 (/dev/mapper/vg_storage-lv_lun2) (default_tg_pt_gp)]
  |     | o- lun2 .......... [block/disk3 (/dev/mapper/vg_storage-lv_lun3) (default_tg_pt_gp)]
  |     | o- lun3 .......... [block/disk4 (/dev/mapper/vg_storage-lv_lun4) (default_tg_pt_gp)]
  |     | o- lun4 .......... [block/disk5 (/dev/mapper/vg_storage-lv_lun5) (default_tg_pt_gp)]
  |     | o- lun5 .......... [block/disk6 (/dev/mapper/vg_storage-lv_lun6) (default_tg_pt_gp)]
  |     | o- lun6 .......... [block/disk7 (/dev/mapper/vg_storage-lv_lun7) (default_tg_pt_gp)]
  |     | o- lun7 .......... [block/disk8 (/dev/mapper/vg_storage-lv_lun8) (default_tg_pt_gp)]
  |     o- portals .......... [Portals: 1]
  |       o- 0.0.0.0:3260 .......... [OK]
  o- loopback .......... [Targets: 0]
/>
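On each database node the LUNs exported above are then discovered and logged in to with the standard iSCSI initiator tools. A minimal sketch, assuming 10.10.2.20 is the (hypothetical) portal address of the iSCSI storage server; replace it with your own portal IP:

# install the initiator utilities if they are not present
yum install -y iscsi-initiator-utils
# discover the targets exported by the storage server (portal IP is an assumption)
iscsiadm -m discovery -t sendtargets -p 10.10.2.20
# log in to all discovered targets
iscsiadm -m node -L all
systemctl enable iscsid
# the new /dev/sd* devices should now be visible on the node
lsblk

Note that the ACL in the listing only maps the LUNs to the initiator IQN ending in :client, so the InitiatorName in /etc/iscsi/initiatorname.iscsi on the nodes must match an IQN allowed by the target.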
For more background on these topics, refer to my earlier articles.
Minimal udev + multipath configuration (this can be done later, after the users have been created); a verification sketch follows the rule:
# vi /etc/udev/rules.d/12-dm-permissions.rules
ENV{DM_UUID}=="mpath-?*", OWNER:="grid", GROUP:="asmadmin", MODE:="660"

# udevadm control --reload
# udevadm trigger
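A quick way to verify that the rule has been applied (a sketch; the mpatha device name is an example and will differ on your system):

# the device-mapper nodes behind the multipath aliases should now be owned by grid:asmadmin
ls -l /dev/dm-*
# confirm that the DM_UUID of a multipath device really starts with "mpath-"
udevadm info --query=property --name=/dev/mapper/mpatha | grep DM_UUID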
<h2 id="1.4">1.4 網絡規範分配</h2> 公有網絡 以及 私有網絡。 公有網絡:這裏實驗環境是enp0s3是public IP,enp0s8是private IP,enp0s9和enp0s10是用於模擬IPSAN的兩條網絡。實際生產需根據實際狀況調整規劃。lua
<h1 id="2">2、安裝前期準備工做</h1>
<h2 id="2.1">2.1 各節點系統時間校對</h2> 各節點系統時間校對:
# verify that the date and time zone are correct
date

# disable the chronyd service and move its configuration file out of the way (CTSS will be used later)
systemctl list-unit-files | grep chronyd
systemctl status chronyd
systemctl disable chronyd
systemctl stop chronyd
mv /etc/chrony.conf /etc/chrony.conf_bak
In this lab environment I choose not to use NTP or chrony; Oracle Clusterware will then automatically use its own CTSS service for time synchronization.
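Once Grid Infrastructure is installed (Part 2), you can verify that CTSS has taken over time synchronization in active mode; a sketch of the checks, run from the GI environment:

# CTSS should report that it is running in active mode when no NTP/chrony is configured
crsctl check ctss
# cluster-wide clock synchronization check
cluvfy comp clocksync -n all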
<h2 id="2.2">2.2 各節點關閉防火牆和SELinux</h2>各節點關閉防火牆:
systemctl list-unit-files | grep firewalld
systemctl status firewalld
systemctl disable firewalld
systemctl stop firewalld
Disable SELinux on each node:
getenforce
cat /etc/selinux/config

Manually edit /etc/selinux/config and set SELINUX=disabled, or use the following commands:

sed -i '/^SELINUX=.*/ s//SELINUX=disabled/' /etc/selinux/config
setenforce 0
Finally, verify that SELinux is disabled on every node.
<h2 id="2.3">2.3 各節點檢查系統依賴包安裝狀況</h2>
yum install -y oracle-database-server-12cR2-preinstall.x86_64
In OEL 7.5 the preinstall package is still named 12cR2-preinstall; there is no 18c-specific package, but in practice the dependency requirements are essentially the same. If you use another Linux distribution, such as the commonly used RHEL, install the dependency packages required by the official documentation with yum; see the sketch below.
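For RHEL, a representative yum command covering the packages typically required by the 12.2/18c installation guides; treat this as a sketch and confirm the exact list against the official documentation for your OS release:

yum install -y bc binutils compat-libcap1 compat-libstdc++-33 \
    elfutils-libelf elfutils-libelf-devel fontconfig-devel glibc glibc-devel \
    ksh libaio libaio-devel libgcc libstdc++ libstdc++-devel libxcb \
    libX11 libXau libXi libXtst libXrender libXrender-devel \
    make net-tools nfs-utils smartmontools sysstat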
<h2 id="2.4">2.4 各節點配置/etc/hosts</h2>編輯/etc/hosts文件:
#public ip
192.168.1.40   db40
192.168.1.42   db42
#virtual ip
192.168.1.41   db40-vip
192.168.1.43   db42-vip
#scan ip
192.168.1.44   db18c-scan
#private ip
10.10.1.40     db40-priv
10.10.1.42     db42-priv
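A quick connectivity sanity check once both nodes are up (a sketch, run from node 1; the host names follow the /etc/hosts entries above, and the VIP and SCAN addresses will only answer after Grid Infrastructure is running):

# the public and private addresses of the peer node should already be reachable
ping -c 2 db42
ping -c 2 db42-priv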
<h2 id="2.5">2.5 各節點建立須要的用戶和組</h2> 建立group & user,給oracle、grid設置密碼:
groupadd -g 54321 oinstall
groupadd -g 54322 dba
groupadd -g 54323 oper
groupadd -g 54324 backupdba
groupadd -g 54325 dgdba
groupadd -g 54326 kmdba
groupadd -g 54327 asmdba
groupadd -g 54328 asmoper
groupadd -g 54329 asmadmin
groupadd -g 54330 racdba
useradd -u 54321 -g oinstall -G dba,asmdba,backupdba,dgdba,kmdba,racdba,oper oracle
useradd -u 54322 -g oinstall -G asmadmin,asmdba,asmoper,dba grid
echo oracle | passwd --stdin oracle
echo oracle | passwd --stdin grid
In this test environment both passwords are simply set to oracle; in production, use complex passwords that comply with your security policy.
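Verify on both nodes that the users were created consistently (a quick sketch):

# uid, gid and supplementary groups must be identical on every node
id oracle
id grid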
<h2 id="2.6">2.6 各節點建立安裝目錄</h2>各節點建立安裝目錄(root用戶):
mkdir -p /u01/app/18.3.0/grid
mkdir -p /u01/app/grid
mkdir -p /u01/app/oracle
chown -R grid:oinstall /u01
chown oracle:oinstall /u01/app/oracle
chmod -R 775 /u01/
<h2 id="2.7">2.7 各節點系統配置文件修改</h2> 內核參數修改:vi /etc/sysctl.conf 實際上OEL在安裝依賴包的時候也同時修改了這些值,如下參數主要是覈對或是對RHEL版本做爲參考:
# vi /etc/sysctl.conf, add the following:
vm.swappiness = 1
vm.dirty_background_ratio = 3
vm.dirty_ratio = 80
vm.dirty_expire_centisecs = 500
vm.dirty_writeback_centisecs = 100
kernel.shmmni = 4096
kernel.shmall = 1073741824
kernel.shmmax = 4398046511104
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.panic_on_oops = 1
net.ipv4.conf.enp0s8.rp_filter = 2
net.ipv4.conf.enp0s9.rp_filter = 2
net.ipv4.conf.enp0s10.rp_filter = 2
Apply the changes:
#sysctl -p /etc/sysctl.conf
Note: enp0s9 and enp0s10 are the NICs dedicated to IPSAN; like the private-network NIC, they are set to loose-mode reverse path filtering (rp_filter = 2).
#sysctl -p /etc/sysctl.d/98-oracle.conf
net.ipv4.conf.enp0s8.rp_filter = 2
net.ipv4.conf.enp0s9.rp_filter = 2
net.ipv4.conf.enp0s10.rp_filter = 2
Shell resource limits for the users: vi /etc/security/limits.d/99-grid-oracle-limits.conf
oracle soft nproc 16384
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
oracle soft stack 10240
oracle hard stack 32768
grid soft nproc 16384
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
grid soft stack 10240
grid hard stack 32768
Note that the /etc/security/limits.d/oracle-database-server-12cR2-preinstall.conf file that OEL configures automatically does not contain entries for the grid user, so add them manually.
vi /etc/profile.d/oracle-grid.sh
#Setting the appropriate ulimits for oracle and grid user
if [ $USER = "oracle" ]; then
    if [ $SHELL = "/bin/ksh" ]; then
        ulimit -u 16384
        ulimit -n 65536
    else
        ulimit -u 16384 -n 65536
    fi
fi
if [ $USER = "grid" ]; then
    if [ $SHELL = "/bin/ksh" ]; then
        ulimit -u 16384
        ulimit -n 65536
    else
        ulimit -u 16384 -n 65536
    fi
fi
OEL does not create this file automatically either, so it must be configured manually. A quick check that the limits take effect is shown below.
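A sketch of how to confirm that the limits are picked up by a fresh login session for each user:

# open a new login shell for each user and print the process and open-file limits
su - grid -c "ulimit -u -n"
su - oracle -c "ulimit -u -n"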
<h2 id="2.8">2.8 各節點設置用戶的環境變量</h2>
grid user on node 1:
export ORACLE_SID=+ASM1;
export ORACLE_BASE=/u01/app/grid;
export ORACLE_HOME=/u01/app/18.3.0/grid;
export PATH=$ORACLE_HOME/bin:$PATH;
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib;
grid user on node 2:
export ORACLE_SID=+ASM2;
export ORACLE_BASE=/u01/app/grid;
export ORACLE_HOME=/u01/app/18.3.0/grid;
export PATH=$ORACLE_HOME/bin:$PATH;
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib;
oracle user on node 1:
export ORACLE_SID=cdb1;
export ORACLE_BASE=/u01/app/oracle;
export ORACLE_HOME=/u01/app/oracle/product/18.3.0/db_1;
export PATH=$ORACLE_HOME/bin:$PATH;
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib;
oracle user on node 2:
export ORACLE_SID=cdb2;
export ORACLE_BASE=/u01/app/oracle;
export ORACLE_HOME=/u01/app/oracle/product/18.3.0/db_1;
export PATH=$ORACLE_HOME/bin:$PATH;
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib;
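To make these settings persistent across logins, append the block for the matching node and user to that user's ~/.bash_profile and reload it. A sketch for the grid user on node 1 (repeat the same idea for the other user/node combinations):

# run as the grid user on node 1
cat >> ~/.bash_profile <<'EOF'
export ORACLE_SID=+ASM1
export ORACLE_BASE=/u01/app/grid
export ORACLE_HOME=/u01/app/18.3.0/grid
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
EOF
source ~/.bash_profile
# confirm the variables are set
env | grep ORACLE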