This document describes in detail the pre-installation checks and installation steps for an Oracle 11gR2 RAC database on HP DL380G servers. In this document, a # prompt indicates commands run as root, and a $ prompt indicates commands run as the grid or oracle user.
Item | Node 1 | Node 2
Hardware model | HP DL380p Gen8 | HP DL380p Gen8
Operating system | OEL 6.4 | OEL 6.4
Clusterware | Oracle Grid Infrastructure | Oracle Grid Infrastructure
Server hostname | rbdb81 | rbdb82
IP address | 192.168.1.108 | 192.168.1.109
Locale | Chinese/English | Chinese/English
Time zone | China | China
Local disks | RAID1 300G; RAID5 1.8T | RAID1 300G; none
Filesystem / | 260G | 260G
Filesystem /boot | 100M | 100M
Filesystem swap | 64G | 64G
Filesystem /rmanbak | 1.8T | 1.8T
OS users | root, grid, oracle | root, grid, oracle
OS groups | oinstall, dba, asmdba, asmadmin, asmoper | oinstall, dba, asmdba, asmadmin, asmoper
Item | Node 1 | Node 2
Storage model | HP P2000 |
Multipath software | HP DM multipath |
Disk layout | mpath0 2G, mpath1 2G, mpath2 2G, mpath3 1.5T, mpath4 0.9T, mpath5 1.2T |
Item | Node 1 | Node 2
Server hostname | rbdb81 | rbdb82
Storage model | HP P2000 |
Fibre channel switch | none |
Public IP | 192.168.1.108 | 192.168.1.109
VIP | 192.168.1.208 | 192.168.1.209
Private IP | 192.168.2.108 | 192.168.2.109
SCAN IP | 192.168.1.210 |
DATABASE NAME | rbdbon8 |
ORACLE RAC SID | rbdbon81 | rbdbon82
Cluster name | rbdb8 |
OCR/voting disk group | +OCRVOTE | +OCRVOTE
OCR mirror disk group | +OCRVOTEMO1 | +OCRVOTEMO1
Data files | +DATA01, +DATA02 | +DATA01, +DATA02
Archive logs | +ARCH | +ARCH
RMAN backups | /rmanbak/rbdbon8 | none
Database version | Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production |
GRID BASE directory | /u01/app |
GRID HOME directory | /u01/app/grid/product/11.2.0/ |
Database BASE directory | /u01/app/oracle |
Database HOME directory | /u01/app/oracle/product/11.2.0/db_1 |
Database listener port | 1521 |
Database character set | UTF8 |
Database system accounts / initial passwords | sys/recbok, system/recbok |
Database storage type | ASM |
Database block size | 8192 bytes |
ASM disk groups | +OCRVOTE (mpath0), +OCRVOTEMO1 (mpath1), +DATA01 (mpath3), +DATA02 (mpath4), +ARCH (mpath5) | See the storage planning and risk assessment report
Check the system information
[root@rbdb82 ~]# dmidecode |grep -A16 "System Information$"
System Information
Manufacturer: HP
Product Name: ProLiant DL380p Gen8
Version: Not Specified
Serial Number: 6CU3260KP7
UUID: 32333536-3030-4336-5533-3236304B5037
Wake-up Type: Power Switch
SKU Number: 653200-001
Family: ProLiant
Handle 0x0300, DMI type 3, 21 bytes
Chassis Information
Manufacturer: HP
Type: Rack Mount Chassis
Lock: Not Present
Version: Not Specified
Serial Number: 6CU3260KP7
[root@rbdb82 ~]#
[root@rbdb82 ~]# cat /etc/issue | grep Linux
Red Hat Enterprise Linux Server release 6.4 (Santiago)
Upload the fibre channel HBA driver to the servers and install it (on both nodes):
[root@rbdb81 soft]# rpm -Uvh kmod-hpqlgc-qla2xxx-8.04.00.12.06.0_k2-1.rhel6u4.x86_64.rpm
Then run lspci to check all PCI devices.
[root@rbdb81 sw]# lspci
If the following fibre channel HBAs are listed, the HBA driver is installed correctly:
07:00.0 Fibre Channel: QLogic Corp. ISP2532-based 8Gb Fibre Channel to PCI Express HBA (rev 02)
0a:00.0 Fibre Channel: QLogic Corp. ISP2532-based 8Gb Fibre Channel to PCI Express HBA (rev 02)
First install device-mapper-multipath-libs-0.4.9-64.0.1.el6.x86_64.rpm:
[root@rbdb81 soft]# rpm -Uvh device-mapper-multipath-libs-0.4.9-64.0.1.el6.x86_64.rpm
Then install device-mapper-multipath-0.4.9-64.0.1.el6.x86_64.rpm:
[root@rbdb81 soft]# rpm -Uvh device-mapper-multipath-0.4.9-64.0.1.el6.x86_64.rpm
Notes on multipath
Use multipath to aggregate the FC paths and assign fixed names (aliases) to the aggregated devices.
1. Enable multipath:
(1) Start the multipathd service:
# service multipathd start    (or: # /etc/init.d/multipathd start)
(2) Create and edit the multipath configuration file /etc/multipath.conf:
# cp /usr/share/doc/device-mapper-multipath-0.4.9/multipath.conf /etc/multipath.conf
(The WWID-to-name bindings can be viewed in /etc/multipath/bindings.)
[oracle@rbdb81 ~]$ vi /etc/multipath.conf
defaults {
    user_friendly_names yes
    path_grouping_policy multibus
    find_multipaths yes
}
blacklist {
    wwid 3600508b1001cb4c7c0d68a3645bf5dde    # local backup disk on node 1
    devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
    devnode "^hd[a-z]"
}
multipaths {
    multipath {
        wwid 3600c0ff0001977f5d5a0835201000000
        alias mpath0
        path_grouping_policy multibus
        path_selector "round-robin 0"
        failback immediate
        rr_weight uniform
        no_path_retry 18
    }
    multipath {
        wwid 3600c0ff0001977f5eba0835201000000
        alias mpath1
        path_grouping_policy multibus
        path_selector "round-robin 0"
        failback immediate
        rr_weight uniform
        no_path_retry 18
    }
    multipath {
        wwid 3600c0ff0001977f5a0a0835201000000
        alias mpath2
        path_grouping_policy multibus
        path_selector "round-robin 0"
        failback immediate
        rr_weight uniform
        no_path_retry 18
    }
    multipath {
        wwid 3600c0ff0001977f5eda3845201000000
        alias mpath3
        path_grouping_policy multibus
        path_selector "round-robin 0"
        failback immediate
        rr_weight uniform
        no_path_retry 18
    }
    multipath {
        wwid 3600c0ff0001977f53da4845201000000
        alias mpath4
        path_grouping_policy multibus
        path_selector "round-robin 0"
        failback immediate
        rr_weight uniform
        no_path_retry 18
    }
    multipath {
        wwid 3600c0ff00019869012a1835201000000
        alias mpath5
        path_grouping_policy multibus
        path_selector "round-robin 0"
        failback immediate
        rr_weight uniform
        no_path_retry 18
    }
}
(3) Restart the multipath service (after any change to multipath.conf the multipath service should be restarted):
multipath -F
(4) Scan the disks:
#multipath -v2
After running the commands above, the aggregated dm devices appear in the system, and the corresponding device nodes are created under /dev/mapper/ and /dev/mpath/.
View the multipath topology:
#multipath -ll
[root@rbdb81 ~]# multipath -ll
mpath2 (3600c0ff0001977f5a0a0835201000000) dm-0 HP,P2000 G3 FC
size=1.9G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=70 status=active
  |- 0:0:0:1 sda 8:0   active ready running
  `- 1:0:0:1 sdi 8:128 active ready running
mpath1 (3600c0ff0001977f5eba0835201000000) dm-4 HP,P2000 G3 FC
size=1.9G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=70 status=active
  |- 0:0:0:3 sdc 8:32  active ready running
  `- 1:0:0:3 sdk 8:160 active ready running
mpath0 (3600c0ff0001977f5d5a0835201000000) dm-3 HP,P2000 G3 FC
size=1.9G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=70 status=active
  |- 0:0:0:2 sdb 8:16  active ready running
  `- 1:0:0:2 sdj 8:144 active ready running
mpath5 (3600c0ff00019869012a1835201000000) dm-2 HP,P2000 G3 FC
size=1.1T features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=70 status=active
  |- 0:0:0:5 sde 8:64  active ready running
  `- 1:0:0:5 sdm 8:192 active ready running
mpath4 (3600c0ff0001977f53da4845201000000) dm-5 HP,P2000 G3 FC
size=831G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=70 status=active
  |- 0:0:0:6 sdf 8:80  active ready running
  `- 1:0:0:6 sdn 8:208 active ready running
mpath3 (3600c0ff0001977f5eda3845201000000) dm-1 HP,P2000 G3 FC
size=1.4T features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=70 status=active
  |- 0:0:0:4 sdd 8:48  active ready running
  `- 1:0:0:4 sdl 8:176 active ready running
(5) Stop the multipathd service on the other node:
service multipathd stop
(6) Copy multipath.conf to the other node:
scp /etc/multipath.conf rbdb82:/etc/
(7) Start multipath on the other node:
multipath -F
multipath -v2
multipath -ll
(8) Enable the service at boot (on all nodes):
/etc/init.d/multipathd restart
chkconfig --list multipathd
chkconfig --level 3456 multipathd on
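To confirm that the aliases defined in multipath.conf are actually in effect, a quick check like the following can be run on each node (a sketch using the alias names defined above):
multipath -ll | grep -E '^mpath'     # each of mpath0..mpath5 should appear, with two active paths
ls -l /dev/mapper/ | grep mpath      # the corresponding device-mapper nodes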
Check memory, swap, /tmp space, disk space for the installation software, and the shared storage.
At least two network interfaces are required.
Ideally use five NICs: two bonded for the public network, two bonded for the private interconnect, and one dedicated to archive log traffic. Do not bond multiple ports of the same physical card together.
Make sure the public and private interface names are identical on all nodes. For example, if en0 is the public interface on node1, then en0 must also be the public interface on node2; the same applies to the private interface. The private interfaces of all cluster nodes must be able to reach each other.
(Run on both nodes.)
groupadd -g 501 oinstall
groupadd -g 502 dba
groupadd -g 503 asmadmin
groupadd -g 504 asmdba
groupadd -g 505 asmoper
useradd -u 501 -g oinstall -G asmadmin,asmdba,asmoper grid
useradd -u 502 -g oinstall -G dba,asmdba oracle
# passwd oracle
Changing password for user oracle.
New UNIX password:
BAD PASSWORD: it is based on a dictionary word
Retype new UNIX password:
passwd: all authentication tokens updated successfully.
# passwd grid
Changing password for user grid.
New UNIX password:
BAD PASSWORD: it is based on a dictionary word
Retype new UNIX password:
passwd: all authentication tokens updated successfully.
[root@node1 ~]#
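The group memberships can be verified with id; given the groupadd and useradd commands above, the output should look roughly like this:
# id grid
uid=501(grid) gid=501(oinstall) groups=501(oinstall),503(asmadmin),504(asmdba),505(asmoper)
# id oracle
uid=502(oracle) gid=501(oinstall) groups=501(oinstall),502(dba),504(asmdba)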
Note: underscores cannot be used in hostnames. Preferably use lowercase names without underscores or hyphens.
Note: NIC bonding is recommended, preferably in active-backup mode.
The cluster name must be globally unique within the domain, contain at least 1 and fewer than 15 characters, and use the same character set as hostnames.
Public host name: use each host's primary host name, i.e. the name reported by the hostname command.
Virtual hostname: the virtual host name is a public node name that is used to reroute client requests sent to the node if the node is down. The recommended naming convention is <public hostname>-vip, although it is better to avoid hyphens.
VIP: must be unused and in the same subnet as the public IP, and must be resolvable (via /etc/hosts or DNS).
Private hostname: does not need DNS resolution but must be configured in /etc/hosts. The recommended naming convention is <public hostname>pvt; again, avoid hyphens.
Private IP: must not be reachable by servers outside the cluster; the private network should be on a dedicated switch, should not be part of the wider network topology, and should run on gigabit or faster Ethernet.
SCAN IP: multiple SCAN IPs cannot be configured in /etc/hosts, only in DNS, otherwise only one of them takes effect. If the SCAN IPs are configured in DNS, the host name resolution search order must be adjusted on all nodes.
/etc/hosts configuration:
(Run on both nodes.)
#Public
192.168.1.108   rbdb81
192.168.1.109   rbdb82
#VIP
192.168.1.208   rbdb81vip
192.168.1.209   rbdb82vip
#Private
192.168.2.108   rbdb81priv
192.168.2.109   rbdb82priv
#scan
192.168.1.210   rbdb8scan
192.168.1.100   rbdb1
192.168.1.101   rbdb2
192.168.1.102   rbdb3
192.168.1.104   rbdb4
192.168.1.105   rbdb05
192.168.1.106   rbdb06
192.168.1.107   rbdb07
192.168.1.200   vip-rbdb1
192.168.1.201   vip-rbdb2
192.168.1.205   rbdb05-vip
192.168.1.206   rbdb06-vip
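Once /etc/hosts is in place on both nodes, resolution of every cluster address can be checked in one pass (a small sketch using the hostnames defined above):
for h in rbdb81 rbdb82 rbdb81vip rbdb82vip rbdb81priv rbdb82priv rbdb8scan; do
    getent hosts $h || echo "$h does not resolve"
done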
If NTP is used for time synchronization, additional configuration is required; if Oracle Cluster Time Synchronization Service (ctssd) is used instead, no NTP configuration is needed. To use ctssd, do the following:
#Network Time Protocol Setting
Stop the NTP service:
/sbin/service ntpd stop
chkconfig ntpd off
Delete or rename the ntp.conf file:
rm /etc/ntp.conf
or
mv /etc/ntp.conf /etc/ntp.conf.org
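After the Grid Infrastructure installation completes, the cluster time service mode can be confirmed as the grid user; with NTP removed as above, ctssd should be running in active mode (a hedged check, not part of the original procedure):
crsctl check ctss                         # reports whether CTSS is in active or observer mode
cluvfy comp clocksync -n all -verbose     # optional clock synchronization verification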
(Run on both nodes.)
Edit as root (all of the kernel parameter changes below are made as root):
/etc/sysctl.conf
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
kernel.shmall = 8388608
fs.file-max = 6815744
fs.aio-max-nr = 1048576
# /sbin/sysctl -p    (apply the changes immediately)
Note: the default kernel.shmall of 2097152 is the minimum setting. Since these machines have 64 GB of RAM, it should be set to the following value or larger:
kernel.shmall = 8388608
For example, if the sum of all the SGAs on the system is 16 GB and the result of '$ getconf PAGE_SIZE' is 4096 (4 KB), then set shmall to at least 4194304 pages (16 GB / 4 KB).
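As a concrete instance of that rule, the value can be derived on the host itself (a sketch; the 32 GB total shared memory figure is an assumption for these 64 GB machines):
SGA_SUM_BYTES=$((32 * 1024 * 1024 * 1024))   # assumed total shared memory to allow, 32 GB
PAGE_SIZE=$(getconf PAGE_SIZE)               # 4096 on this platform
echo "kernel.shmall should be at least $((SGA_SUM_BYTES / PAGE_SIZE)) pages"
With a 4 KB page size this yields 8388608 pages, which matches the kernel.shmall value used above.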
/etc/security/limits.conf
grid   soft nproc  2047
grid   hard nproc  16384
grid   soft nofile 1024
grid   hard nofile 65536
oracle soft nproc  2047
oracle hard nproc  16384
oracle soft nofile 1024
oracle hard nofile 65536
In the /etc/pam.d/login file, add the following line if it is not already present:
session    required     pam_limits.so
Add the following to /etc/profile:
if [ $USER = "oracle" ] || [ $USER = "grid" ]; then
    if [ $SHELL = "/bin/ksh" ]; then
        ulimit -p 16384
        ulimit -n 65536
    else
        ulimit -u 16384 -n 65536
    fi
    umask 022
fi
For the C shell (csh or tcsh), add the following lines to /etc/csh.login:
if ( $USER = "oracle" || $USER = "grid" ) then
limit maxproc 16384
limit descriptors 65536
endif
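A quick way to confirm the limits take effect after a fresh login (run as root on each node; the expected values come from limits.conf and /etc/profile above):
su - grid   -c 'ulimit -u; ulimit -n'    # expect roughly 16384 and 65536
su - oracle -c 'ulimit -u; ulimit -n'    # expect roughly 16384 and 65536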
Repeat the steps above on the other node.
(Run as root on both nodes.)
Oracle Inventory directory
Strictly speaking this directory does not have to be created manually; as long as the permissions on the parent directory are sufficient, the installer creates it automatically.
mkdir -p /u01/app/oraInventory
chown -R grid:oinstall /u01/app/oraInventory
chmod -R 775 /u01/app/oraInventory
Create the ORACLE_HOME directory for grid:
mkdir -p /u01/app/grid/product/11.2.0
chown -R grid:oinstall /u01/app/grid/product/11.2.0
chmod -R 775 /u01/app/grid/product/11.2.0
Create the ORACLE_BASE directory for oracle:
mkdir -p /u01/app/oracle
mkdir /u01/app/oracle/cfgtoollogs
chown -R oracle:oinstall /u01/app/oracle
chmod -R 775 /u01/app/oracle
Create the ORACLE_HOME directory for the RDBMS:
mkdir -p /u01/app/oracle/product/11.2.0/db_1
chown -R oracle:oinstall /u01/app/oracle/product/11.2.0/db_1
chmod -R 775 /u01/app/oracle/product/11.2.0/db_1
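Ownership and permissions of the directories just created can be double-checked before starting the installers:
ls -ld /u01/app/oraInventory /u01/app/grid/product/11.2.0 \
       /u01/app/oracle /u01/app/oracle/product/11.2.0/db_1
# expect grid:oinstall on the inventory and grid home, oracle:oinstall on the oracle directories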
The following packages are required on 64-bit Oracle Enterprise Linux 5:
binutils-2.15.92.0.2
compat-libstdc++-33-3.2.3
compat-libstdc++-33-3.2.3 (32 bit)
elfutils-libelf-0.97
elfutils-libelf-devel-0.97
expat-1.95.7
gcc-3.4.6
gcc-c++-3.4.6
glibc-2.3.4-2.41
glibc-2.3.4-2.41 (32 bit)
glibc-common-2.3.4
glibc-devel-2.3.4
glibc-headers-2.3.4
libaio-0.3.105
libaio-0.3.105 (32 bit)
libaio-devel-0.3.105
libaio-devel-0.3.105 (32 bit)
libgcc-3.4.6
libgcc-3.4.6 (32-bit)
libstdc++-3.4.6
libstdc++-3.4.6 (32 bit)
libstdc++-devel 3.4.6
make-3.80
pdksh-5.2.14
sysstat-5.0.5
unixODBC-2.2.11
unixODBC-2.2.11 (32 bit)
unixODBC-devel-2.2.11
unixODBC-devel-2.2.11 (32 bit)
rpm -q --qf '%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n' binutils \
compat-libstdc++-33 \
elfutils-libelf \
elfutils-libelf-devel \
gcc \
gcc-c++ \
glibc \
glibc-common \
glibc-devel \
glibc-headers \
ksh \
libaio \
libaio-devel \
libgcc \
libstdc++ \
libstdc++-devel \
make \
sysstat \
unixODBC \
unixODBC-devel
If any package is missing, install it from the Server (or Packages) directory of the installation DVD.
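For example, with the OEL 6.4 DVD mounted under the path shown later in this document, the missing packages can be installed directly with rpm (a sketch; the mount point and the package names stand in for whatever the check above reports as missing):
cd "/media/OL6.4 x86_64 Disc 1 20130225/Packages"
rpm -ivh compat-libstdc++-33-*.x86_64.rpm libaio-devel-*.x86_64.rpm    # install whichever packages were reported missing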
Disable SELinux:
vi /etc/selinux/config
SELINUX=disabled
Disable the firewall:
[root@rbdb81 Packages]# chkconfig --level 2345 ip6tables off
[root@rbdb81 Packages]# chkconfig --level 2345 iptables off
Disable unneeded services:
chkconfig --level 35 autofs off
chkconfig --level 35 acpid off
chkconfig --level 35 sendmail off
chkconfig --level 35 cups-config-daemon off
chkconfig --level 35 cups off
chkconfig --level 35 xfs off
chkconfig --level 35 lm_sensors off
chkconfig --level 35 gpm off
chkconfig --level 35 openibd off
chkconfig --level 35 iiim off
chkconfig --level 35 pcmcia off
chkconfig --level 35 cpuspeed off
chkconfig --level 35 nfslock off
chkconfig --level 35 ip6tables off
chkconfig --level 35 rpcidmapd off
chkconfig --level 35 apmd off
chkconfig --level 35 sendmail off
chkconfig --level 35 arptables_jf off
chkconfig --level 35 microcode_ctl off
chkconfig --level 35 rpcgssd off
There are two different ways to configure ASM on Linux:
ASM with ASMLib I/O: this method uses ASMLib calls to create all Oracle database files on block devices managed by ASM. Because ASMLib works with block devices, no raw devices are needed.
ASM with standard Linux I/O: this method uses standard Linux I/O system calls to create all Oracle database files on raw character devices managed by ASM, so raw devices must be created for every disk partition that ASM will use.
This document uses the "ASM with ASMLib I/O" approach.
Download ASMLib
First, obtain the kmod-oracleasm kernel driver package for the running kernel and download the ASMLib software from http://www.oracle.com/technology/global/cn/tech/linux/asmlib/install.html (for example, oracleasm-2.6.9-22.ELsmp-2.0.0-1.x86_64.rpm provides the ASM library kernel driver for multi-processor kernels).
You also need to download the following two support packages:
oracleasmlib-2.0.0-1.x86_64.rpm - provides the actual ASM library
oracleasm-support-2.0.0-1.x86_64.rpm - provides the utilities that get the ASM driver up and running
Install ASMLib
Install the ASMLib packages on both machines. Simply run the following commands as root on every node in the cluster:
[root@rbdb82 asm]# ls -l
total 128
-rwxr-xr-x 1 oracle oinstall 33840 May 31 11:48 kmod-oracleasm-2.0.6.rh1-2.el6.x86_64.rpm
-rwxr-xr-x 1 oracle oinstall 13300 Oct 12 17:30 oracleasmlib-2.0.4-1.el6.x86_64.rpm
-rwxr-xr-x 1 oracle oinstall 74984 Oct 12 17:30 oracleasm-support-2.1.8-1.el6.x86_64.rpm
[root@rbdb82 asm]#
rpm -ivh oracleasm-support-2.1.8-1.el6.x86_64.rpm
rpm -ivh kmod-oracleasm-2.0.6.rh1-2.el6.x86_64.rpm
rpm -ivh oracleasmlib-2.0.4-1.el6.x86_64.rpm
Now that the ASMLib software is installed, the system administrator must perform a few more steps to make the ASM driver usable: the driver has to be loaded and the driver filesystem mounted. The database runs as the 'grid' user in the 'oinstall' group. Run the following command as root on every node in the cluster:
/etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver.  The following questions will determine whether the driver is
loaded on boot and what permissions it will have.  The current values
will be shown in brackets ('[]').  Hitting <ENTER> without typing an
answer will keep that current value.  Ctrl-C will abort.

Default user to own the driver interface []: grid
Default group to own the driver interface []: asmadmin
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done
Initializing the Oracle ASMLib driver:                     [  OK  ]
Scanning the system for Oracle ASMLib disks:               [  OK  ]
This loads the oracleasm.o driver module and mounts the ASM driver filesystem. Because 'y' was chosen during configuration, the module is loaded and the filesystem mounted automatically at every boot.
Make the disks available to ASMLib
(Note: unlike the other tasks in this section, creating the ASMLib disks is performed on one node only. All commands in this subsection are run from the first node, rbdb81, only.)
The system administrator has one final task: each disk that ASMLib is to access must be made available by creating an ASM disk on it. The /etc/init.d/oracleasm script is used again for this:
/etc/init.d/oracleasm createdisk OCR_VOTE01 /dev/mapper/mpath0
/etc/init.d/oracleasm createdisk OCR_VOTE02 /dev/mapper/mpath1
/etc/init.d/oracleasm createdisk OCR_VOTE03 /dev/mapper/mpath2
/etc/init.d/oracleasm createdisk DATA01 /dev/mapper/mpath3
/etc/init.d/oracleasm createdisk DATA02 /dev/mapper/mpath4
/etc/init.d/oracleasm createdisk ARCH /dev/mapper/mpath5
To delete a disk, run one of the following:
/etc/init.d/oracleasm deletedisk DATA01
/etc/init.d/oracleasm deletedisk /dev/mapper/mpath3
When a disk is added to the RAC setup, the other nodes must be notified of its existence: run 'createdisk' on one node, then run 'scandisks' on every other node:
/etc/init.d/oracleasm scandisks
List and query the existing disks
The following command, run as root on all nodes, verifies that the ASM disks were created successfully:
/etc/init.d/oracleasm listdisks;
ARCH
DATA01
DATA02
OCR_VOTE01
OCR_VOTE02
OCR_VOTE03
Other useful commands
Automatic startup can be enabled or disabled with the 'enable' and 'disable' options of /etc/init.d/oracleasm:
/etc/init.d/oracleasm disable
Writing Oracle ASM library driver configuration [ OK ]
Unmounting ASMlib driver filesystem [ OK ]
Unloading module "oracleasm" [ OK ]
/etc/init.d/oracleasm enable
Writing Oracle ASM library driver configuration [ OK ]
Loading module "oracleasm" [ OK ]
Mounting ASMlib driver filesystem [ OK ]
Scanning system for ASM disks [ OK ]
Disk names consist of uppercase ASCII letters, digits, and underscores, and must start with a letter.
Disks that are no longer used by ASM can be unmarked:
/etc/init.d/oracleasm deletedisk VOL1
Deleting Oracle ASM disk "VOL1" [OK ]
能夠查詢任意的操做系統磁盤,以瞭解它是否被 ASM 使用:
/etc/init.d/oracleasm querydisk
/etc/init.d/oracleasm querydisk ASM1
Checking if device "/dev/sdg" is an Oracle ASM disk [ OK ]
ASMLib discovery strings on Linux
ASMLib uses discovery strings to determine which disks ASM is asking for. The generic Linux ASMLib uses glob strings, which must be prefixed with "ORCL:"; disks are specified by name. A disk created with the name "VOL1" can be discovered in ASM with the discovery string "ORCL:VOL1". Similarly, the discovery string "ORCL:VOL*" matches all disks whose names begin with "VOL".
Disks cannot be discovered by path name in a discovery string. If the prefix is missing, the generic Linux ASMLib ignores the discovery string entirely, assuming it is intended for a different ASMLib. The only exception is the empty string (""), which is treated as a match-all wildcard and is exactly equivalent to the discovery string "ORCL:*".
Note: once disks have been marked with Linux ASMLib, the OUI will not be able to discover them. It is recommended to perform a software-only installation and then use DBCA to create the database (or use a custom installation).
Check the ASM disk permissions:
[root@rbdb81 ~]# ll /dev/oracleasm/disks/
total 0
brw-rw---- 1 grid asmadmin 8, 64 Nov 15 15:36 ARCH
brw-rw---- 1 grid asmadmin 8, 48 Nov 15 15:36 DATA01
brw-rw---- 1 grid asmadmin 8, 80 Nov 15 15:36 DATA02
brw-rw---- 1 grid asmadmin 8, 16 Nov 15 15:36 OCR_VOTE01
brw-rw---- 1 grid asmadmin 8, 32 Nov 15 15:36 OCR_VOTE02
brw-rw---- 1 grid asmadmin 8, 0 Nov 15 15:36 OCR_VOTE03
Note the following two issues:
1. If the oracleasm service does not start automatically at boot, add the following two symlinks:
su - root
cd /etc/rc5.d/
ln -s ../init.d/oracleasm S99oracleasm
ln -s ../init.d/oracleasm K01oracleasm
2. Modify /etc/sysconfig/oracleasm on all nodes (this is important):
ORACLEASM_ENABLED=true
ORACLEASM_UID=grid
ORACLEASM_GID=asmadmin
ORACLEASM_SCANBOOT=true
ORACLEASM_SCANORDER="mpath sd"
ORACLEASM_SCANEXCLUDE=""
[root@rbdb82 ~]# ll /dev/oracleasm/disks/
total 0
brw-rw---- 1 grid asmadmin 8, 64 Nov 15 15:36 ARCH
brw-rw---- 1 grid asmadmin 8, 48 Nov 15 15:36 DATA01
brw-rw---- 1 grid asmadmin 8, 80 Nov 15 15:36 DATA02
brw-rw---- 1 grid asmadmin 8, 16 Nov 15 15:36 OCR_VOTE01
brw-rw---- 1 grid asmadmin 8, 32 Nov 15 15:36 OCR_VOTE02
brw-rw---- 1 grid asmadmin 8,  0 Nov 15 15:36 OCR_VOTE03
[root@rbdb81 ~]# ll /dev/oracleasm/disks/
total 0
brw-rw---- 1 grid asmadmin 8, 64 Nov 15 15:36 ARCH
brw-rw---- 1 grid asmadmin 8, 48 Nov 15 15:36 DATA01
brw-rw---- 1 grid asmadmin 8, 80 Nov 15 15:36 DATA02
brw-rw---- 1 grid asmadmin 8, 16 Nov 15 15:36 OCR_VOTE01
brw-rw---- 1 grid asmadmin 8, 32 Nov 15 15:36 OCR_VOTE02
brw-rw---- 1 grid asmadmin 8,  0 Nov 15 15:36 OCR_VOTE03
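After changing /etc/sysconfig/oracleasm, restart the driver and rescan the disks on each node so the new scan order takes effect (a short sketch using the same init script as above):
/etc/init.d/oracleasm restart     # reload the driver with the new ORACLEASM_SCANORDER
/etc/init.d/oracleasm scandisks
/etc/init.d/oracleasm listdisks   # all six disks should still be listed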
Unzip the Grid Infrastructure and database installation media and set the ownership of the extracted directories:
unzip p10404530_112030_Linux-x86-64_1of7.zip
unzip p10404530_112030_Linux-x86-64_2of7.zip
unzip p10404530_112030_Linux-x86-64_3of7.zip
chown -R grid:oinstall grid
chown -R oracle:oinstall database
su - grid
vi /home/grid/.bash_profile
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/grid/product/11.2.0
ORACLE_SID=+ASM1          # +ASM2 on node 2
LANG=C
PATH=$ORACLE_HOME/bin:$PATH:$HOME/bin
export PATH ORACLE_BASE ORACLE_HOME ORACLE_SID LANG
cd /soft/11grac/grid
./runInstaller
Before installing, check that the node clocks are roughly in sync; the automatic time synchronization service is not available yet and only exists after the installation completes.
After adding the nodes in the previous step, click Setup to establish SSH user equivalence.
It can then also be tested manually.
Run as grid on rbdb81:
[grid@rbdb81 ~]$ ssh rbdb82 date
2013年 11月 5日 星期四 17:27:58 CST
[grid@node1 ~]$
Run on rbdb82:
[grid@rbdb82 ~]$ ssh rbdb81 date
2013年 11月 5日 星期四 17:28:09 CST
[grid@ rbdb82 ~]$
If none of the above asks for a password, user equivalence is working.
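The same check can be scripted to cover every direction at once (run as grid on either node; a sketch using the hostnames of this setup):
for src in rbdb81 rbdb82; do
  for dst in rbdb81 rbdb82; do
    ssh $src "ssh -o BatchMode=yes $dst date" >/dev/null || echo "equivalence broken: $src -> $dst"
  done
done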
Check results: several kernel parameter checks fail; when correcting them, they must be changed on both nodes:
kernel.shmall = 2097152
fs.file-max = 6815744
fs.aio-max-nr = 1048576
For the cvuqdisk package (on rbdb82 you can scp the package over from rbdb81):
[root@node1 rpm]# rpm -ivh cvuqdisk-1.0.9-1.rpm
Preparing...########################################### [100%]
Using default group oinstall to install package
1:cvuqdisk########################################### [100%]
[root@node1 rpm]# pwd
/software/grid/rpm
[root@node1 rpm]#
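Optionally, the prerequisite checks can also be re-run from the command line with cluvfy before continuing (a sketch; the path is the grid staging directory unzipped earlier):
su - grid
cd /soft/11grac/grid
./runcluvfy.sh stage -pre crsinst -n rbdb81,rbdb82 -fixup -verbose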
After re-running the checks, only the swap and DNS errors remain. The swap error cannot be fixed right now (there is no spare virtual disk, and adding one would require a reboot), and DNS does not need to be configured, so simply proceed to the next step.
If the problem above is encountered, the ASM disk sharing issue must be resolved; see the ASM section for details.
Install the missing package from the installation DVD:
[root@rbdb81 Packages]# rpm -ivh compat-libcap1-1.10-1.x86_64.rpm
warning: compat-libcap1-1.10-1.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID ec551f03: NOKEY
Preparing... ########################################### [100%]
1:compat-libcap1 ########################################### [100%]
[root@rbdb81 Packages]# pwd
/media/OL6.4 x86_64 Disc 1 20130225/Packages
The root scripts must be run strictly in sequence: run the first script on node1 and, only after it succeeds, on node2; only after the first script has completed successfully on all nodes should the second script be run, in the same order. When running them, it is best to log in as grid and then su directly to root (as in [grid@node1 ~]$ su), so that the grid user's environment variables are preserved.
rbdb81, script 1: /u01/app/oraInventory/orainstRoot.sh
[grid@node1 ~]$ id
uid=501(grid) gid=501(oinstall) groups=501(oinstall),503(asmadmin),504(asmdba),505(asmoper)
[grid@node1 ~]$ su
Password:
[root@node1 grid]# /oracle/app/oraInventory/orainstRoot.sh
Changing permissions of /oracle/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /oracle/app/oraInventory to oinstall.
The execution of the script is complete.
[root@node1 grid]# id
uid=0(root) gid=0(root) groups=0(root),1(bin),2(daemon),3(sys),4(adm),6(disk),10(wheel)
[root@node1 grid]#
rbdb82, script 1: /u01/app/oraInventory/orainstRoot.sh
[grid@node2 ~]$ id
uid=501(grid) gid=501(oinstall) groups=501(oinstall),503(asmadmin),504(asmdba),505(asmoper)
[grid@node2 ~]$ su
Password:
[root@node2 grid]# id
uid=0(root) gid=0(root) groups=0(root),1(bin),2(daemon),3(sys),4(adm),6(disk),10(wheel)
[root@node2 grid]# /oracle/app/oraInventory/orainstRoot.sh
Changing permissions of /oracle/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /oracle/app/oraInventory to oinstall.
The execution of the script is complete.
[root@node2 grid]#
rbdb81, script 2: /oracle/11.2.0/grid/crs/root.sh
[root@node1 grid]# id
uid=0(root) gid=0(root) groups=0(root),1(bin),2(daemon),3(sys),4(adm),6(disk),10(wheel)
[root@node1 grid]# /oracle/11.2.0/grid/crs/root.sh
Performing root user operation for Oracle 11g
The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /oracle/11.2.0/grid/crs
Enter the full pathname of the local bin directory: [/usr/local/bin]:   -- press Enter
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /oracle/11.2.0/grid/crs/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
OLR initialization - successful
root wallet
root wallet cert
root cert export
peer wallet
profile reader wallet
pa wallet
peer wallet keys
pa wallet keys
peer cert request
pa cert request
peer cert
pa cert
peer root cert TP
profile reader root cert TP
pa root cert TP
peer pa cert TP
pa peer cert TP
profile reader pa cert TP
profile reader peer cert TP
peer user cert
pa user cert
Adding Clusterware entries to inittab
CRS-2672: Attempting to start 'ora.mdnsd' on 'node1'
CRS-2676: Start of 'ora.mdnsd' on 'node1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'node1'
CRS-2676: Start of 'ora.gpnpd' on 'node1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'node1'
CRS-2672: Attempting to start 'ora.gipcd' on 'node1'
CRS-2676: Start of 'ora.gipcd' on 'node1' succeeded
CRS-2676: Start of 'ora.cssdmonitor' on 'node1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'node1'
CRS-2672: Attempting to start 'ora.diskmon' on 'node1'
CRS-2676: Start of 'ora.diskmon' on 'node1' succeeded
CRS-2676: Start of 'ora.cssd' on 'node1' succeeded
ASM created and started successfully.
Disk Group OCR_VOTE created successfully.
clscfg: -install mode specified
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4256: Updating the profile
Successful addition of voting disk 16a36bc2d6d04fb2bf0f1da5fab701a9.
Successfully replaced voting disk group with +OCR_VOTE.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1. ONLINE 16a36bc2d6d04fb2bf0f1da5fab701a9 (/dev/raw/raw1) [OCR_VOTE]
Located 1 voting disk(s).
CRS-2672: Attempting to start 'ora.asm' on 'node1'
CRS-2676: Start of 'ora.asm' on 'node1' succeeded
CRS-2672: Attempting to start 'ora.OCR_VOTE.dg' on 'node1'
CRS-2676: Start of 'ora.OCR_VOTE.dg' on 'node1' succeeded
CRS-2672: Attempting to start 'ora.registry.acfs' on 'node1'
CRS-2676: Start of 'ora.registry.acfs' on 'node1' succeeded
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
[root@node1 grid]#
rbdb82, script 2: /oracle/11.2.0/grid/crs/root.sh
[root@node2 grid]# id
uid=0(root) gid=0(root) groups=0(root),1(bin),2(daemon),3(sys),4(adm),6(disk),10(wheel)
[root@node2 grid]# /oracle/11.2.0/grid/crs/root.sh
Performing root user operation for Oracle 11g
The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /oracle/11.2.0/grid/crs
Enter the full pathname of the local bin directory: [/usr/local/bin]:
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /oracle/11.2.0/grid/crs/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
OLR initialization - successful
Adding Clusterware entries to inittab
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node node1, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
[root@node2 grid]#
After the scripts have finished, click OK.
At 100% an error is reported; you can click OK and then Skip. The installation log shows a few errors:
INFO: Checking name resolution setup for "dbscan"...
INFO: ERROR:
INFO: PRVG-1101 : SCAN name "dbscan" failed to resolve
INFO: ERROR:
INFO: PRVF-4657 : Name resolution setup check for "dbscan" (IP address: 192.168.16.30) failed
INFO: ERROR:
INFO: PRVF-4664 : Found inconsistent name resolution entries for SCAN name "dbscan"
INFO: Verification of SCAN VIP and Listener setup failed
INFO: Checking OLR integrity...
INFO: Checking OLR config file...
INFO: OLR config file check successful
INFO: Checking OLR file attributes...
INFO: OLR file check successful
INFO: WARNING:
INFO: Checking name resolution setup for "dbscan"...
INFO: ERROR:
INFO: PRVG-1101 : SCAN name "dbscan" failed to resolve
INFO: ERROR:
INFO: PRVF-4657 : Name resolution setup check for "dbscan" (IP address: 192.168.16.30) failed
INFO: ERROR:
INFO: PRVF-4664 : Found inconsistent name resolution entries for SCAN name "dbscan"
INFO: Verification of SCAN VIP and Listener setup failed
INFO: Checking OLR integrity...
INFO: Checking OLR config file...
INFO: OLR config file check successful
INFO: Checking OLR file attributes...
INFO: OLR file check successful
This error means the SCAN name failed to resolve. Ping the SCAN IP and the SCAN name from the OS; if they respond, there is no real problem - click OK and then Skip.
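A minimal sketch of that check from the OS, plus the SCAN verification that becomes possible once the installation has finished (srvctl is available under the grid home):
ping -c 2 192.168.1.210        # the SCAN IP from the planning table
getent hosts rbdb8scan         # resolves via /etc/hosts
# after the installation completes, as the grid user:
srvctl config scan
srvctl status scan_listener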
After the installation completes, the ASM instances are started with 1 and 2 appended to the SID automatically:
[grid@node1 grid]$ ps -ef|grep asm
grid 17770 1 0 18:23 ? 00:00:00 asm_pmon_+ASM1
grid 17772 1 0 18:23 ? 00:00:00 asm_psp0_+ASM1
grid 17776 1 0 18:23 ? 00:00:00 asm_vktm_+ASM1
[root@node2 grid]# ps -ef | grep asm
grid 23639 1 0 18:30 ? 00:00:00 asm_pmon_+ASM2
grid 23641 1 0 18:30 ? 00:00:00 asm_psp0_+ASM2
grid 23643 1 0 18:30 ? 00:00:00 asm_vktm_+ASM2
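The overall cluster state can also be checked as the grid user; every ora.* resource should be ONLINE on both nodes (a quick sketch):
su - grid
crsctl check cluster -all     # CRS, CSS and EVM status on all nodes
crsctl stat res -t            # resource status, one column per node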
su - oracle
On rbdb81 and rbdb82 only the SID needs to be changed. (Setting the SID here may not actually take effect; in the end 1 and 2 are appended automatically, just as with the ASM SIDs.)
vi /home/oracle/.bash_profile
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=$ORACLE_BASE/product/11.2.0/db_1
ORACLE_SID=rbdb81
PATH=$ORACLE_HOME/bin:$ORACLE_HOME/OPatch:$PATH:$HOME/bin
LANG=C
export PATH ORACLE_BASE ORACLE_HOME ORACLE_SID LANG
./runInstaller
Configure SSH user equivalence for the oracle user, in the same way as for the GI installation:
[oracle@node1 ~]$ ssh node2 date
2012年 12月 20日 星期四 19:15:06 CST
[oracle@node1 ~]$
[oracle@node2 ~]$ ssh node1 date
2012年 12月 20日 星期四 19:15:13 CST
[oracle@node2 ~]$
The last two errors are DNS resolution issues and can be ignored. The first error (swap) cannot be fixed now because there is no spare virtual disk. Ignore them all and click Next.
Run the root script; after it completes successfully on node1, run it on node2.
node1
[root@node1 db_1]# id
uid=0(root) gid=0(root) groups=0(root),1(bin),2(daemon),3(sys),4(adm),6(disk),10(wheel)
[root@node1 db_1]# /oracle/app/oracle/product/11.2.0/db_1/root.sh
Performing root user operation for Oracle 11g
The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /oracle/app/oracle/product/11.2.0/db_1
Enter the full pathname of the local bin directory: [/usr/local/bin]:   -- press Enter
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Finished product-specific root actions.
[root@node1 db_1]#
node2:
[root@node2 db_1]# id
uid=0(root) gid=0(root) groups=0(root),1(bin),2(daemon),3(sys),4(adm),6(disk),10(wheel)
[root@node2 db_1]# /oracle/app/oracle/product/11.2.0/db_1/root.sh
Performing root user operation for Oracle 11g
The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /oracle/app/oracle/product/11.2.0/db_1
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Finished product-specific root actions.
[root@node2 db_1]#
After the scripts finish, click OK.
Create the disk groups with ASMCA:
[root@rbdb81 ~]# su - grid
[grid@rbdb81 ~]$ asmca
Click Create to add a new disk group.
Select the member disks, choose External redundancy, and click OK.
[grid@rbdb81 ~]$ su - oracle
Password:
[oracle@rbdb81 ~]$ dbca
Set the administrative passwords to recbok (as listed in the planning table).
This step is important: make sure the database character set is set correctly (UTF8, per the plan).
Enable archive logging to the +ARCH disk group:
SQL> show parameter log_archive_dest;
SQL> alter system set log_archive_dest_1='location=+ARCH' scope=spfile sid='*';
SQL> startup mount;
ORACLE instance started.
Total System Global Area 2.0243E+10 bytes
Fixed Size 2237088 bytes
Variable Size 2952793440 bytes
Database Buffers 1.7247E+10 bytes
Redo Buffers 41189376 bytes
Database mounted.
SQL> alter database archivelog;
Database altered.
SQL> alter database open;
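Still in the same sqlplus session, the archive log configuration can be confirmed; the destination should show +ARCH as set above:
SQL> archive log list
SQL> select log_mode from v$database;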
SQL> show parameter sga
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
lock_sga boolean FALSE
pre_page_sga boolean FALSE
sga_max_size big integer 19392M
sga_target big integer 19392M
SQL> alter system set sga_target=25000M scope=spfile sid='*';
System altered.
SQL> show parameter pga
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
pga_aggregate_target big integer 6458M
SQL> alter system set pga_aggregate_target=12000M scope=spfile sid='*';
System altered.
SQL> show parameter job
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
job_queue_processes integer 1000
SQL> alter system set job_queue_processes=100 scope=spfile sid='*';
System altered.
SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup nomount;
ORACLE instance started.
Add a second control file copy in +DATA02 using RMAN (control file multiplexing):
[oracle@rbdb81 ~]$ rman target /
RMAN> restore controlfile to '+DATA02/rbdbon8/controlfile/control01.ctl'
from '+DATA01/rbdbon8/controlfile/current.256.831579049';
Starting restore at 18-NOV-13
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=82 instance=rbdbon81 device type=DISK
channel ORA_DISK_1: copied control file copy
Finished restore at 18-NOV-13
RMAN>exit
su - grid
asmcmd
ASMCMD> ls
control01.ctl
current.256.831809041
ASMCMD> ls -s
Block_Size  Blocks     Bytes     Space  Name
                                        control01.ctl => +DATA02/RBDBON8/CONTROLFILE/current.256.831809041
     16384    1129  18497536  25165824  current.256.831809041
ASMCMD> pwd
+data02/rbdbon8/controlfile
Connect with sqlplus / as sysdba:
SQL> alter system set control_files='+DATA01/rbdbon8/controlfile/current.256.831579049',
'+DATA02/rbdbon8/controlfile/control01.ctl' scope=spfile sid='*';
System altered.
Verify:
SQL> select value from v$spparameter where name='control_files';
VALUE
--------------------------------------------------------------------------------
+DATA01/rbdbon8/controlfile/current.256.831579049
+DATA02/rbdbon8/controlfile/control01.ctl
Restart the database and verify again:
SQL> set linesize 250
SQL> col name for a50
SQL> select * from v$controlfile;
STATUS NAME IS_ BLOCK_SIZE FILE_SIZE_BLKS
------- -------------------------------------------------- --- ---------- --------------
+DATA01/rbdbon8/controlfile/current.256.831579049 NO 16384 1128
+DATA02/rbdbon8/controlfile/control01.ctl NO 16384 1128
SQL> select value from v$spparameter where name='control_files';
VALUE
-----------------------------------------------------
+DATA01/rbdbon8/controlfile/current.256.831579049
+DATA02/rbdbon8/controlfile/control01.ctl
First create a disk group named OCRVOTEMO1 in ASM, then add the OCR mirror as follows:
[root@rbdb81 rmanbak]# su - grid
[grid@rbdb81 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 3
Total space (kbytes) :262120
Used space (kbytes) : 2888
Available space (kbytes) : 259232
ID : 1532418355
Device/File Name : +OCRVOTE
Device/File integrity check succeeded
Device/File not configured
Device/File not configured
Device/File not configured
Device/File not configured
Cluster registry integrity check succeeded
Logical corruption check bypassed due to non-privileged user
[grid@rbdb81 ~]$ ocrconfig -add +OCRVOTEMO1
PROT-20: Insufficient permission to proceed. Require privileged user
[grid@rbdb81 ~]$ su root
Password:
[root@rbdb81 grid]# ocrconfig -add +OCRVOTEMO1
[root@rbdb81 grid]# cd /etc/oracle
[root@rbdb81 oracle]# ls
lastgasp  ocr.loc  ocr.loc.orig  olr.loc  olr.loc.orig  oprocd  scls_scr  setasmgid
[root@rbdb81 oracle]# cat ocr.loc
#Device/file getting replaced by device +OCRVOTEMO1
ocrconfig_loc=+OCRVOTE
ocrmirrorconfig_loc=+OCRVOTEMO1
local_only=false
[root@rbdb81 oracle]#
[root@rbdb81 oracle]# ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 3
Total space (kbytes) :262120
Used space (kbytes) :2904
Available space (kbytes) : 259216
ID : 1532418355
Device/File Name :+OCRVOTE
Device/File integrity check succeeded
Device/File Name : +OCRVOTEMO1
Device/File integrity check succeeded
Device/File not configured
Device/File not configured
Device/File not configured
Cluster registry integrity check succeeded
Logical corruption check succeeded
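The voting disks and the automatic OCR backups can be checked at the same time (run from the grid home as root or grid):
crsctl query css votedisk    # voting disk location(s)
ocrconfig -showbackup        # automatic OCR backup history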
While the database is up on both nodes, run the following in sqlplus / as sysdba.
1. Create four new redo log groups of 512M each:
alter database add logfile thread 1 group 5 '+DATA01' size 512M;
alter database add logfile thread 2 group 6 '+DATA01' size 512M;
alter database add logfile thread 1 group 7 '+DATA01' size 512M;
alter database add logfile thread 2 group 8 '+DATA01' size 512M;
2. Switch the logs until none of the original four groups is in CURRENT status:
alter system archive log current;
3. Drop the original log groups (they are only 52M, which is not enough):
alter database drop logfile group 1;
alter database drop logfile group 2;
alter database drop logfile group 3;
alter database drop logfile group 4;
4. Add four more new log groups:
alter database add logfile thread 1 group 1 '+DATA01' size 512M;
alter database add logfile thread 2 group 2 '+DATA01' size 512M;
alter database add logfile thread 1 group 3 '+DATA01' size 512M;
alter database add logfile thread 2 group 4 '+DATA01' size 512M;
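A quick check, in the same sqlplus session, that both threads now have four 512M groups:
select thread#, group#, bytes/1024/1024 as mb, status from v$log order by thread#, group#;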
Increase the processes parameter:
SQL> show parameter proces
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
aq_tm_processes integer 1
cell_offload_processing boolean TRUE
db_writer_processes integer 4
gcs_server_processes integer 3
global_txn_processes integer 1
job_queue_processes integer 100
log_archive_max_processes integer 4
processes integer 150
processor_group_name string
SQL> alter system set processes=5000 scope=spfile sid='*';
System altered.
1. Set the backup retention policy to redundancy 2:
CONFIGURE RETENTION POLICY TO REDUNDANCY 2;
2. Enable control file autobackup:
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/rmanbak/autocontrol/%F';
3. Full backup script:
See the RMAN backup plan manual.
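For reference, a minimal full-backup sketch in the spirit of the settings above (the real schedule and naming are defined in the RMAN backup plan manual; the format strings below are only illustrative):
rman target / <<EOF
run {
  backup database format '/rmanbak/rbdbon8/full_%d_%T_%s.bkp';
  backup archivelog all format '/rmanbak/rbdbon8/arch_%d_%T_%s.bkp';
  backup current controlfile;
}
EOF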
Run netca.
Set the listener port.
Run netmgr (on both nodes).
On the client side, add the following entries to the HOSTS file:
192.168.1.208 rbdb81
192.168.1.209 rbdb82
192.168.1.210 rbdb8scan
RBSCAN =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = rbdb8scan)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = rbdbon8)
    )
  )

RBVIP =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = rbdb81vip)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = rbdb82vip)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = rbdbon8)
    )
  )

RB55VIP =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (LOAD_BALANCE = ON)
      (ADDRESS = (PROTOCOL = TCP)(HOST = rbdb81vip)(PORT = 1555))
      (ADDRESS = (PROTOCOL = TCP)(HOST = rbdb82vip)(PORT = 1555))
    )
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = rbdbon8)
    )
  )
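The client-side entries can then be tested with tnsping and a quick connection (a sketch using the aliases defined above):
tnsping RBSCAN
tnsping RBVIP
sqlplus system@RBSCAN        # should connect to the rbdbon8 service through the SCAN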