I previously tested a single-instance deployment of OEMCC 13.2; see my earlier post for details.
The environment then was two hosts running RHEL 6.5, one for the OMS and one for the OMR:
OMS, the OEMCC server: IP 192.168.1.88, memory 12G+, disk 100G+
OMR, the underlying OEM repository database: IP 192.168.1.89, memory 8G+, disk 100G+
In other words, both OMS and OMR were single-instance. Some customers have demanding availability requirements for their monitoring system, which calls for a cluster to provide high availability.
For the OMR, a RAC of the matching version removes the single point of failure. But how do you build a highly available cluster for the OMS?
A customer recently came up with exactly this requirement, so this post records the complete installation of an OEMCC cluster.
4. OMS cluster installation
5. SLB configuration
The customer requires an OEMCC 13.2 cluster covering both the OMR and the OMS. The OMR cluster is simply an Oracle 12.1.0.2 RAC; the OMS cluster must run in Active-Active mode, with an SLB providing load balancing.
The deployment uses two virtual machines, configured as follows:
The following installation media need to be downloaded in advance:

--OEMCC 13.2 installation media:
em13200p1_linux64.bin
em13200p1_linux64-2.zip
em13200p1_linux64-3.zip
em13200p1_linux64-4.zip
em13200p1_linux64-5.zip
em13200p1_linux64-6.zip
em13200p1_linux64-7.zip
--Oracle 12.1.0.2 RAC installation media:
p21419221_121020_Linux-x86-64_1of10.zip
p21419221_121020_Linux-x86-64_2of10.zip
p21419221_121020_Linux-x86-64_5of10.zip
p21419221_121020_Linux-x86-64_6of10.zip
--DBCA database template for OEMCC 13.2:
12.1.0.2.0_Database_Template_for_EM13_2_0_0_0_Linux_x64.zip
The OMR cluster is implemented with Oracle RAC: the template shipped with OEMCC 13.2 requires the repository (OMR) database to be version 12.1.0.2.
1) Install the required packages on each node:

yum install binutils compat-libcap1 compat-libstdc++-33 \
  e2fsprogs e2fsprogs-libs glibc glibc-devel ksh libaio-devel libaio libgcc libstdc++ libstdc++-devel \
  libxcb libX11 libXau libXi libXtst make \
  net-tools nfs-utils smartmontools sysstat
2) Disable the firewall and SELinux on each node:

--Disable the firewall on each node:
service iptables stop
chkconfig iptables off
--Disable SELinux on each node:
getenforce
Edit /etc/selinux/config and set SELINUX=disabled
--Disable SELinux temporarily:
setenforce 0
3) Configure the /etc/hosts file:
#public ip
10.1.43.211 oemapp1
10.1.43.212 oemapp2
#virtual ip
10.1.43.208 oemapp1-vip
10.1.43.209 oemapp2-vip
#scan ip
10.1.43.210 oemapp-scan
#private ip
172.16.43.211 oemapp1-priv
172.16.43.212 oemapp2-priv
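As a quick sanity check, the address plan above can be validated with a short script (illustrative only; the names and IPs are exactly those from the hosts file above):

```python
# Validate the RAC address plan: every IP must be unique, and each node
# needs a public, VIP, and private entry plus one shared SCAN address.
hosts = {
    "oemapp1": "10.1.43.211", "oemapp2": "10.1.43.212",
    "oemapp1-vip": "10.1.43.208", "oemapp2-vip": "10.1.43.209",
    "oemapp-scan": "10.1.43.210",
    "oemapp1-priv": "172.16.43.211", "oemapp2-priv": "172.16.43.212",
}

assert len(set(hosts.values())) == len(hosts), "duplicate IP in plan"
for node in ("oemapp1", "oemapp2"):
    for suffix in ("", "-vip", "-priv"):
        assert node + suffix in hosts, f"missing entry for {node}{suffix}"
print("address plan OK")
```

Duplicated or missing entries here are a common cause of GI installer prerequisite failures, so it is worth checking before running runInstaller.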
4) Create the users and groups:
--Create groups and users:
groupadd -g 54321 oinstall
groupadd -g 54322 dba
groupadd -g 54323 oper
groupadd -g 54324 backupdba
groupadd -g 54325 dgdba
groupadd -g 54326 kmdba
groupadd -g 54327 asmdba
groupadd -g 54328 asmoper
groupadd -g 54329 asmadmin
groupadd -g 54330 racdba
useradd -u 54321 -g oinstall -G dba,asmdba,backupdba,dgdba,kmdba,racdba,oper oracle
useradd -u 54322 -g oinstall -G asmadmin,asmdba,asmoper,dba grid
--Then set passwords for oracle and grid:
passwd oracle
passwd grid
5) Create the installation directories on each node (as root):
mkdir -p /app/12.1.0.2/grid
mkdir -p /app/grid
mkdir -p /app/oracle
chown -R grid:oinstall /app
chown oracle:oinstall /app/oracle
chmod -R 775 /app
6) Configure udev rules for the shared LUNs:
vi /etc/udev/rules.d/99-oracle-asmdevices.rules
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="36000c29ad39372db383c7903d31788d0", NAME="asm-data1", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="36000c298c085f4e57c1f9fcd7b3d1dbf", NAME="asm-data2", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="36000c290b495ab0b6c1b57536f4b3cf8", NAME="asm-ocr1", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="36000c29e7743dca47419aca041b88221", NAME="asm-ocr2", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="36000c29608a9ddb8b3168936d01a4f7b", NAME="asm-ocr3", OWNER="grid", GROUP="asmadmin", MODE="0660"
Reload the rules, then confirm the shared LUN names, ownership, and permissions:
[root@oemapp1 media]# udevadm control --reload-rules
[root@oemapp1 media]# udevadm trigger
[root@oemapp1 media]# ls -l /dev/asm*
brw-rw----. 1 grid asmadmin 8, 16 Oct 9 12:27 /dev/asm-data1
brw-rw----. 1 grid asmadmin 8, 32 Oct 9 12:27 /dev/asm-data2
brw-rw----. 1 grid asmadmin 8, 48 Oct 9 12:27 /dev/asm-ocr1
brw-rw----. 1 grid asmadmin 8, 64 Oct 9 12:27 /dev/asm-ocr2
brw-rw----. 1 grid asmadmin 8, 80 Oct 9 12:27 /dev/asm-ocr3
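The RESULT values in the rules are the SCSI WWIDs reported by `/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/sdX`. When adding more LUNs later, a small helper (hypothetical, not part of any Oracle tooling) can generate rule lines in the same format and avoid copy-paste mistakes:

```python
def asm_udev_rule(wwid, name, owner="grid", group="asmadmin", mode="0660"):
    """Build one udev rule line in the same format as the rules file above.
    wwid: the value printed by scsi_id for the device."""
    return ('KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id '
            '--whitelisted --replace-whitespace --device=/dev/$name", '
            f'RESULT=="{wwid}", NAME="{name}", OWNER="{owner}", '
            f'GROUP="{group}", MODE="{mode}"')

# Reproduce the first rule from the file above as a check.
rule = asm_udev_rule("36000c29ad39372db383c7903d31788d0", "asm-data1")
print(rule)
```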
7) Adjust kernel parameters:
vi /etc/sysctl.conf
# Append the following to /etc/sysctl.conf:
fs.file-max = 6815744
kernel.sem = 250 32000 100 128
kernel.shmmni = 4096
kernel.shmall = 1073741824
kernel.shmmax = 6597069766656
kernel.panic_on_oops = 1
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
net.ipv4.conf.eth1.rp_filter = 2
net.ipv4.conf.eth0.rp_filter = 1
fs.aio-max-nr = 1048576
net.ipv4.ip_local_port_range = 9000 65500
Apply the changes:

# /sbin/sysctl -p
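A forgotten kernel parameter is one of the most frequent CVU (cluster verification) failures, so it can pay to check the file before running the installer. An illustrative sketch (the required-key list reflects the parameters set above; feed it the real file contents in practice):

```python
# Parse a sysctl.conf fragment and confirm the Oracle-required keys are set.
required = {
    "fs.file-max", "fs.aio-max-nr", "kernel.sem", "kernel.shmmni",
    "kernel.shmall", "kernel.shmmax", "net.ipv4.ip_local_port_range",
    "net.core.rmem_default", "net.core.rmem_max",
    "net.core.wmem_default", "net.core.wmem_max",
}

conf = """
fs.file-max = 6815744
kernel.sem = 250 32000 100 128
kernel.shmmni = 4096
kernel.shmall = 1073741824
kernel.shmmax = 6597069766656
fs.aio-max-nr = 1048576
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
"""

present = {line.split("=")[0].strip()
           for line in conf.splitlines() if "=" in line}
missing = required - present
print("missing:", sorted(missing))
```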
8) Set shell limits for the users:
vi /etc/security/limits.conf
# Append the following to /etc/security/limits.conf:
grid soft nproc 2047
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
grid soft stack 10240
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
oracle soft stack 10240
9) Configure the PAM module:

vi /etc/pam.d/login
--Load the pam_limits.so module
As root, edit /etc/pam.d/login and add the following line:
session required pam_limits.so
Note: limits.conf is in fact the configuration file for pam_limits.so in Linux PAM (Pluggable Authentication Modules), and it only applies per session.
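Because the limits apply per session, they only take effect after a fresh login as grid or oracle; `ulimit -n` and `ulimit -u` show them from the shell. From Python, the standard resource module reads the same values (an illustrative check; the limits observed depend on the session it runs in):

```python
import resource

# Soft/hard limits of the current session; after the limits.conf change,
# a fresh grid/oracle login should show a nofile hard limit of 65536.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"nofile: soft={soft} hard={hard}")

nsoft, nhard = resource.getrlimit(resource.RLIMIT_NPROC)
print(f"nproc:  soft={nsoft} hard={nhard}")
```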
10) Set the environment variables for the users on each node:
--grid user on node 1:
export ORACLE_SID=+ASM1;
export ORACLE_BASE=/app/grid
export ORACLE_HOME=/app/12.1.0.2/grid;
export PATH=$ORACLE_HOME/bin:$PATH;
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib;
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
--grid user on node 2:
export ORACLE_SID=+ASM2;
export ORACLE_BASE=/app/grid
export ORACLE_HOME=/app/12.1.0.2/grid;
export PATH=$ORACLE_HOME/bin:$PATH;
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib;
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib;
--oracle user on node 1:
export ORACLE_SID=omr1;
export ORACLE_BASE=/app/oracle;
export ORACLE_HOME=/app/oracle/product/12.1.0.2/db_1;
export ORACLE_HOSTNAME=;
export PATH=$ORACLE_HOME/bin:$PATH;
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib;
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib;
--oracle user on node 2:
export ORACLE_SID=omr2;
export ORACLE_BASE=/app/oracle;
export ORACLE_HOME=/app/oracle/product/12.1.0.2/db_1;
export ORACLE_HOSTNAME=;
export PATH=$ORACLE_HOME/bin:$PATH;
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib;
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib;
Unzip the installation media:
unzip p21419221_121020_Linux-x86-64_5of10.zip
unzip p21419221_121020_Linux-x86-64_6of10.zip
Set the DISPLAY variable and launch the graphical installer for GI:
[grid@oemapp1 grid]$ export DISPLAY=10.1.52.76:0.0
[grid@oemapp1 grid]$ ./runInstaller
Set the DISPLAY variable, then use the GUI to create the ASM disk groups and the ACFS cluster file system:
[grid@oemapp1 grid]$ export DISPLAY=10.1.52.76:0.0
[grid@oemapp1 grid]$ asmca
Create the ASM disk groups:
Create the ACFS cluster file system:
Unzip the installation media:
unzip p21419221_121020_Linux-x86-64_1of10.zip
unzip p21419221_121020_Linux-x86-64_2of10.zip
Set the DISPLAY variable and launch the graphical installer for the DB software:
[oracle@oemapp1 database]$ export DISPLAY=10.1.52.76:0.0
[oracle@oemapp1 database]$ ./runInstaller
Install the DB software:
Unzip the template file into the templates directory, so that DBCA can then pick it from the available templates:
[oracle@oemapp1 media]$ unzip 12.1.0.2.0_Database_Template_for_EM13_2_0_0_0_Linux_x64.zip -d /app/oracle/product/12.1.0.2/db_1/assistants/dbca/templates
DBCA database creation steps:
Note: AL32UTF8 is strongly recommended as the database character set; the OMS configuration later prompts for this accordingly.
This OMS cluster must run in Active-Active mode, with an SLB providing load balancing.
#OMS
export OMS_HOME=$ORACLE_BASE/oms_local/middleware
export AGENT_HOME=$ORACLE_BASE/oms_local/agent/agent_13.2.0.0.0
Create the directories:
su - oracle
mkdir -p /app/oracle/oms_local/agent
mkdir -p /app/oracle/oms_local/middleware
Revise /etc/hosts to meet OEMCC's hostname requirements (optional):
#public ip
10.1.43.211 oemapp1 oemapp1.oracle.com
10.1.43.212 oemapp2 oemapp2.oracle.com
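OEMCC expects each host to resolve both a short name and a fully qualified name. A small illustrative check of the entries above (the data structure is mine, not an OEMCC requirement):

```python
# Each public IP should carry both a short hostname and a matching FQDN.
entries = {
    "10.1.43.211": ["oemapp1", "oemapp1.oracle.com"],
    "10.1.43.212": ["oemapp2", "oemapp2.oracle.com"],
}

for ip, names in entries.items():
    short = [n for n in names if "." not in n]
    fqdn = [n for n in names if "." in n]
    assert short and fqdn, f"{ip} needs both a short name and an FQDN"
    assert fqdn[0].startswith(short[0] + "."), f"{ip}: names do not match"
print("hosts entries OK")
```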
Start the installation:
su - oracle
export DISPLAY=10.1.52.76:0.0
./em13200p1_linux64.bin
Installation steps:
This section uses OEMCC itself to add the second OMS node: first add an agent, then add the OMS node:
Notes:
1. /app/oracle/OMS is a shared file system;
2. /app/oracle/oms_local is each node's local file system;
3. The OMR database's processes parameter needs to be raised from the default 300 to 600.
1) Add the agent
2) Add the OMS node
選擇Enterprise menu -> Provisioning and Patching -> Procedure Library.
Find Add Oracle Management Service and click Launch.
Note: keep the OMS ports identical across OMS nodes wherever possible, to avoid complicating later configuration and maintenance.
The OEMCC web console can be reached normally through either node's IP address:
And if either node is shut down, access through the surviving node is unaffected.
Appendix: commands to start/stop/check the OMS:
--Check OMS status
$OMS_HOME/bin/emctl status oms
$OMS_HOME/bin/emctl status oms -details
--Stop OMS
$OMS_HOME/bin/emctl stop oms
$OMS_HOME/bin/emctl stop oms -all
--Start OMS
$OMS_HOME/bin/emctl start oms
The load balancer used here is a Radware product; that part is configured by the load-balancing engineer. The following configuration requirements were drawn up from the Oracle documentation combined with this project's needs, for reference:
Other specific items, such as Monitors, Pools, and the required Virtual Servers, can be planned and designed along the same lines together with the official documentation, so they are not repeated here.
Add a name resolution entry for the load balancer in /etc/hosts:
10.1.44.207 myslb.oracle.com
After the SLB is configured, the OMS must be reconfigured to match.

Configure the OMS:
$OMS_HOME/bin/emctl secure oms -host myslb.oracle.com -secure_port 4903 -slb_port 4903 -slb_console_port 443 -slb_bip_https_port 5443 -slb_jvmd_https_port 7301 -lock_console -lock_upload
Configure the agent:
$AGENT_HOME/bin/emctl secure agent -emdWalletSrcUrl https://myslb.oracle.com:4903/em
Check the OMS status:
[oracle@oemapp1 backup]$ $OMS_HOME/bin/emctl status oms -details
Oracle Enterprise Manager Cloud Control 13c Release 2
Copyright (c) 1996, 2016 Oracle Corporation. All rights reserved.
Enter Enterprise Manager Root (SYSMAN) Password :
Console Server Host : oemapp1.oracle.com
HTTP Console Port : 7788
HTTPS Console Port : 7802
HTTP Upload Port : 4889
HTTPS Upload Port : 4903
EM Instance Home : /app/oracle/oms_local/gc_inst/em/EMGC_OMS1
OMS Log Directory Location : /app/oracle/oms_local/gc_inst/em/EMGC_OMS1/sysman/log
SLB or virtual hostname: myslb.oracle.com
HTTPS SLB Upload Port : 4903
HTTPS SLB Console Port : 443
HTTPS SLB JVMD Port : 7301
Agent Upload is locked.
OMS Console is locked.
Active CA ID: 1
Console URL: https://myslb.oracle.com:443/em
Upload URL: https://myslb.oracle.com:4903/empbs/upload
WLS Domain Information
Domain Name : GCDomain
Admin Server Host : oemapp1.oracle.com
Admin Server HTTPS Port: 7102
Admin Server is RUNNING
Oracle Management Server Information
Managed Server Instance Name: EMGC_OMS1
Oracle Management Server Instance Host: oemapp1.oracle.com
WebTier is Up
Oracle Management Server is Up
JVMD Engine is Up
BI Publisher Server Information
BI Publisher Managed Server Name: BIP
BI Publisher Server is Up
BI Publisher HTTP Managed Server Port : 9701
BI Publisher HTTPS Managed Server Port : 9803
BI Publisher HTTP OHS Port : 9788
BI Publisher HTTPS OHS Port : 9851
BI Publisher HTTPS SLB Port : 5443
BI Publisher is locked.
BI Publisher Server named 'BIP' running at URL: https://myslb.oracle.com:5443/xmlpserver
BI Publisher Server Logs: /app/oracle/oms_local/gc_inst/user_projects/domains/GCDomain/servers/BIP/logs/
BI Publisher Log : /app/oracle/oms_local/gc_inst/user_projects/domains/GCDomain/servers/BIP/logs/bipublisher/bipublisher.log
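When scripting health checks across several OMS nodes, the `Key : Value` lines in this output are straightforward to scrape. A minimal illustrative sketch (the parser and the sample fragment are mine, not part of emctl):

```python
def parse_emctl_status(text):
    """Parse 'Key : Value' lines from emctl status output into a dict."""
    result = {}
    for line in text.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            # Keep only lines that actually look like key/value pairs.
            if value.strip():
                result[key.strip()] = value.strip()
    return result

# A fragment of the output shown above.
sample = """\
HTTPS Console Port : 7802
HTTPS Upload Port : 4903
SLB or virtual hostname: myslb.oracle.com
HTTPS SLB Upload Port : 4903
"""

status = parse_emctl_status(sample)
print(status["SLB or virtual hostname"])  # → myslb.oracle.com
```

A check script could then compare, say, the SLB hostname and upload port reported by every node and flag any mismatch.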
Check the agent status:
[oracle@oemapp1 backup]$ $AGENT_HOME/bin/emctl status agent
Oracle Enterprise Manager Cloud Control 13c Release 2
Copyright (c) 1996, 2016 Oracle Corporation. All rights reserved.
---------------------------------------------------------------
Agent Version : 13.2.0.0.0
OMS Version : 13.2.0.0.0
Protocol Version : 12.1.0.1.0
Agent Home : /app/oracle/oms_local/agent/agent_inst
Agent Log Directory : /app/oracle/oms_local/agent/agent_inst/sysman/log
Agent Binaries : /app/oracle/oms_local/agent/agent_13.2.0.0.0
Core JAR Location : /app/oracle/oms_local/agent/agent_13.2.0.0.0/jlib
Agent Process ID : 17263
Parent Process ID : 17060
Agent URL : https://oemapp1.oracle.com:3872/emd/main/
Local Agent URL in NAT : https://oemapp1.oracle.com:3872/emd/main/
Repository URL : https://myslb.oracle.com:4903/empbs/upload
Started at : 2018-10-12 15:49:58
Started by user : oracle
Operating System : Linux version 2.6.32-696.el6.x86_64 (amd64)
Number of Targets : 34
Last Reload : (none)
Last successful upload : 2018-10-12 15:50:53
Last attempted upload : 2018-10-12 15:50:53
Total Megabytes of XML files uploaded so far : 0.17
Number of XML files pending upload : 19
Size of XML files pending upload(MB) : 0.07
Available disk space on upload filesystem : 63.80%
Collection Status : Collections enabled
Heartbeat Status : Ok
Last attempted heartbeat to OMS : 2018-10-12 15:50:33
Last successful heartbeat to OMS : 2018-10-12 15:50:33
Next scheduled heartbeat to OMS : 2018-10-12 15:51:35
---------------------------------------------------------------
Agent is Running and Ready
Final test: OEMCC can be accessed and operated normally straight through the load balancer address 10.1.44.207:
With that, the OEMCC 13.2 cluster installation is complete.