1. Checking the OCR location
The user-specified location is recorded in /etc/oracle/ocr.loc (on Linux) or /var/opt/oracle/ocr.loc (on some other UNIX platforms).
[oracle@rac4 opt]$ cat /etc/oracle/ocr.loc
ocrconfig_loc=/dev/raw/raw1
local_only=FALSE
2. Checking the voting disk location
[oracle@rac4 opt]$ crsctl query css votedisk
 0.     0    /dev/raw/raw2
located 1 votedisk(s).
3. The three key Clusterware processes
If the EVMD or CRSD process fails, the system automatically restarts it; but if the CSSD process fails, the system immediately reboots the node (note the "fatal" flag on init.cssd below).
h1:35:respawn:/etc/init.d/init.evmd run >/dev/null 2>&1 </dev/null
h2:35:respawn:/etc/init.d/init.cssd fatal >/dev/null 2>&1 </dev/null
h3:35:respawn:/etc/init.d/init.crsd run >/dev/null 2>&1 </dev/null
4. The two CSS heartbeat timeout parameters
[oracle@rac4 opt]$ crsctl get css disktimeout
200
[oracle@rac4 opt]$ crsctl get css misscount
60
To change the heartbeat timeout: crsctl set css misscount 100 (use with caution).
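A minimal sketch of checking and changing the parameter, assuming a 10g $CRS_HOME and root access; the value 100 is illustrative only:

cd $CRS_HOME/bin                  # run as root
./crsctl get css misscount        # current network heartbeat timeout, in seconds
./crsctl set css misscount 100    # raising it delays node eviction decisions
./crsctl get css misscount        # confirm the change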
5. Listing cluster node information
[oracle@rac4 opt]$ olsnodes --help
Usage: olsnodes [-n] [-p] [-i] [<node> | -l] [-g] [-v]
        where
                -n print node number with the node name
                -p print private interconnect name with the node name
                -i print virtual IP name with the node name
                <node> print information for the specified node
                -l print information for the local node
                -g turn on logging
                -v run in verbose mode
[oracle@rac4 opt]$ olsnodes -n
rac3    1
rac4    2
[oracle@rac4 opt]$ olsnodes -n -p
rac3    1       rac3-priv
rac4    2       rac4-priv
[oracle@rac4 opt]$ olsnodes -n -p -i
rac3    1       rac3-priv       rac3-vip
rac4    2       rac4-priv       rac4-vip
Note: some of the examples in this article are taken from Zhang Xiaoming's 《大話Oracle RAC:集羣 高可用性 備份與恢復》.
6. Configuring whether the CRS stack starts automatically
[root@rac3 bin]# ./crsctl
crsctl enable crs - enables startup for all CRS daemons
crsctl disable crs - disables startup for all CRS daemons
In fact, crsctl enable crs modifies the file /etc/oracle/scls_scr/<node_name>/root/crsstart.
[root@rac3 root]# more crsstart
enable
You can also edit this file by hand and set it to enable or disable, because crsctl enable crs / crsctl disable crs do nothing more than rewrite this file.
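As a sketch, the two equivalent approaches side by side on node rac3 (run as root from $CRS_HOME/bin):

./crsctl disable crs                                    # official command
cat /etc/oracle/scls_scr/rac3/root/crsstart             # now reads: disable
echo enable > /etc/oracle/scls_scr/rac3/root/crsstart   # hand-editing the flag file has the same effect
cat /etc/oracle/scls_scr/rac3/root/crsstart             # now reads: enable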
7. Checking the status of RAC resources
[oracle@rac4 ~]$ srvctl status nodeapps -n rac3
VIP is running on node: rac3
GSD is running on node: rac3
Listener is running on node: rac3
ONS daemon is running on node: rac3
[oracle@rac4 ~]$ srvctl status asm -n rac3
ASM instance +ASM1 is running on node rac3.
[oracle@rac4 ~]$ srvctl status database -d racdb
Instance racdb2 is running on node rac4
Instance racdb1 is running on node rac3
[oracle@rac4 ~]$ srvctl status service -d racdb
Service racdbserver is running on instance(s) racdb2
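Combining olsnodes with srvctl gives a quick whole-cluster status sweep; a sketch using the resource names from this environment:

for n in $(olsnodes); do
    srvctl status nodeapps -n $n    # VIP, GSD, listener, ONS on each node
    srvctl status asm -n $n
done
srvctl status database -d racdb
srvctl status service -d racdb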
8. Checking clusterware health
[oracle@rac3 ~]$ crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy
[oracle@rac4 ~]$ crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy
[oracle@rac4 ~]$ crsctl check cssd
CSS appears healthy
[oracle@rac4 ~]$ crsctl check crsd
CRS appears healthy
[oracle@rac4 ~]$ crsctl check evmd
EVM appears healthy
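Since a healthy node prints one "appears healthy" line per daemon, a cron-friendly probe can simply count them; a minimal sketch keyed to the 10g output strings shown above:

#!/bin/sh
# expect three healthy daemons: CSS, CRS, EVM
healthy=$(crsctl check crs | grep -c "appears healthy")
if [ "$healthy" -ne 3 ]; then
    echo "CRS stack degraded on $(hostname)"
fi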
9. Using the oifcfg command
oifcfg has the following four subcommands, each taking different options; oifcfg -help prints the usage text (in the transcript below, the mistyped --hlep triggers PRIF-9 plus the same usage output):
[oracle@rac4 ~]$ oifcfg --hlep
PRIF-9: incorrect usage
Name:
        oifcfg - Oracle Interface Configuration Tool.
Usage:  oifcfg iflist [-p [-n]]
        oifcfg setif {-node <nodename> | -global} {<if_name>/<subnet>:<if_type>}...
        oifcfg getif [-node <nodename> | -global] [ -if <if_name>[/<subnet>] [-type <if_type>] ]
        oifcfg delif [-node <nodename> | -global] [<if_name>[/<subnet>]]
        oifcfg [-help]
        <nodename> - name of the host, as known to a communications network
        <if_name>  - name by which the interface is configured in the system
        <subnet>   - subnet address of the interface
        <if_type>  - type of the interface { cluster_interconnect | public | storage }
<1>. iflist: list the network interfaces
<2>. getif: show the configuration of a single interface
<3>. setif: configure a single interface
<4>. delif: delete an interface configuration
--List the network interfaces with iflist
[oracle@rac4 ~]$ oifcfg iflist
eth0  192.168.1.0
eth1  192.168.2.0
--Show each interface's attributes with the getif subcommand
[oracle@rac4 ~]$ oifcfg getif
eth0  192.168.1.0  global  public
eth1  192.168.2.0  global  cluster_interconnect
Note: interface configurations come in two types, global and node-specific. A global configuration means every node in the cluster shares the same, symmetric configuration; a node-specific configuration means that node's settings differ from the other nodes' and are therefore asymmetric.
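A node-specific entry is created by replacing -global with -node, following the setif usage shown above; a hypothetical sketch (eth2 and subnet 192.168.3.0 are assumptions, not part of this cluster):

oifcfg setif -node rac3 eth2/192.168.3.0:cluster_interconnect   # applies to rac3 only
oifcfg getif -node rac3                                         # now shows the asymmetric entry
oifcfg delif -node rac3 eth2                                    # remove it again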
--Query the global configuration for nodes rac4/rac3
[oracle@rac4 ~]$ oifcfg getif -global rac4
eth0  192.168.1.0  global  public
eth1  192.168.2.0  global  cluster_interconnect
[oracle@rac4 ~]$ oifcfg getif -global rac3
eth0  192.168.1.0  global  public
eth1  192.168.2.0  global  cluster_interconnect
--Query the node-specific configuration for nodes rac4/rac3
[oracle@rac4 ~]$ oifcfg getif -node rac3
[oracle@rac4 ~]$ oifcfg getif -node rac4
Neither node produces any output, which shows the cluster has no node-specific configuration.
--Query interface configuration by type (public/cluster_interconnect)
[oracle@rac4 ~]$ oifcfg getif -type public
eth0  192.168.1.0  global  public
[oracle@rac4 ~]$ oifcfg getif -type cluster_interconnect
eth1  192.168.2.0  global  cluster_interconnect
--Add a new interface with setif
[oracle@rac4 ~]$ oifcfg setif -global livan@net/10.0.0.0:public    --note: the command does not check that the interface actually exists
[oracle@rac4 ~]$ oifcfg getif -global
eth0  192.168.1.0  global  public
eth1  192.168.2.0  global  cluster_interconnect
livan@net  10.0.0.0  global  public
--Delete an interface configuration with delif
[oracle@rac4 ~]$ oifcfg getif -global
eth0  192.168.1.0  global  public
eth1  192.168.2.0  global  cluster_interconnect
livan@net  10.0.0.0  global  public
[oracle@rac4 ~]$ oifcfg delif -global livan@net
[oracle@rac4 ~]$ oifcfg getif -global
eth0  192.168.1.0  global  public
eth1  192.168.2.0  global  cluster_interconnect
[oracle@rac4 ~]$ oifcfg delif -global    --deletes the entire network configuration
[oracle@rac4 ~]$ oifcfg getif -global
[oracle@rac4 ~]$ oifcfg setif -global eth0/192.168.1.0:public
[oracle@rac4 ~]$ oifcfg setif -global eth1/192.168.2.0:cluster_interconnect
[oracle@rac4 ~]$ oifcfg getif -global
eth0  192.168.1.0  global  public
eth1  192.168.2.0  global  cluster_interconnect

[oracle@rac4 ~]$ oifcfg delif -global
[oracle@rac4 ~]$ oifcfg setif -global eth0/192.168.1.0:public eth1/192.168.2.0:cluster_interconnect    --setif also accepts several interface specs in one call
[oracle@rac4 ~]$ oifcfg getif -global
eth0  192.168.1.0  global  public
eth1  192.168.2.0  global  cluster_interconnect
10. Checking the voting disk location
[oracle@rac4 ~]$ crsctl query css votedisk
0. 0 /dev/raw/raw2
located 1 votedisk(s).
The output above shows that the voting disk is located at /dev/raw/raw2.
11. Listing the installed and active clusterware versions
[oracle@rac4 ~]$ crsctl query crs softwareversion rac3
CRS software version on node [rac3] is [10.2.0.1.0]
[oracle@rac4 ~]$ crsctl query crs softwareversion rac4
CRS software version on node [rac4] is [10.2.0.1.0]
[oracle@rac4 ~]$ crsctl query crs activeversion
CRS active version on the cluster is [10.2.0.1.0]
12. Listing the modules of each CRS service
CRS consists of three services: CRS, CSS, and EVM, and each service is made up of a series of modules. crsctl can enable tracing for each module individually and write the trace output to a log file.
--List the modules of the CRS service
[oracle@rac4 ~]$ crsctl lsmodules crs
The following are the CRS modules ::
CRSUI
CRSCOMM
CRSRTI
CRSMAIN
CRSPLACE
CRSAPP
CRSRES
CRSCOMM
CRSOCR
CRSTIMER
CRSEVT
CRSD
CLUCLS
CSSCLNT
COMMCRS
COMMNS
--List the modules of the CSS service
[oracle@rac4 ~]$ crsctl lsmodules css
The following are the CSS modules ::
CSSD
COMMCRS
COMMNS
--List the modules of the EVM service
[oracle@rac4 ~]$ crsctl lsmodules evm
The following are the EVM modules ::
EVMD
EVMDMAIN
EVMCOMM
EVMEVT
EVMAPP
EVMAGENT
CRSOCR
CLUCLS
CSSCLNT
COMMCRS
COMMNS
13. Tracing the CSSD module (must be run as root)
[root@rac4 bin]# ./crsctl debug log css "CSSD:1"
Configuration parameter trace is now set to 1.
Set CRSD Debug Module: CSSD  Level: 1
[root@rac4 10.2.0]# more ./crs_1/log/rac4/cssd/ocssd.log
......
[    CSSD]2015-01-26 09:02:12.891 [1084229984] >TRACE:   clssscSetDebugLevel: The logging level is set to 1 ,the cache level is set to 2
[    CSSD]2015-01-26 09:02:46.587 [1147169120] >TRACE:   clssgmClientConnectMsg: Connect from con(0x7c3bf0) proc(0x7c0850) pid() proto(10:2:1:1)
[    CSSD]2015-01-26 09:03:46.948 [1147169120] >TRACE:   clssgmClientConnectMsg: Connect from con(0x7a4bf0) proc(0x7c0900) pid() proto(10:2:1:1)
[    CSSD]2015-01-26 09:04:47.299 [1147169120] >TRACE:   clssgmClientConnectMsg: Connect from con(0x7c3bf0) proc(0x7c0850) pid() proto(10:2:1:1)
[    CSSD]2015-01-26 09:05:47.553 [1147169120] >TRACE:   clssgmClientConnectMsg: Connect from con(0x7a4bf0) proc(0x7c0900) pid() proto(10:2:1:1)
......
14. Adding or removing voting disks
Adding and removing voting disks is a risky operation: the database, ASM, and the CRS stack must all be stopped first, and the -force option must be used.
Note: even with CRS shut down, adding or removing a voting disk still requires -force, and -force is only safe to use while CRS is down. Because the number of voting disks should be odd, they are added and removed in pairs.
We added two raw devices to the RAC setup, 2 GB each.
--Before adding
[root@rac3 ~]# raw -qa
/dev/raw/raw1: bound to major 8, minor 17
/dev/raw/raw2: bound to major 8, minor 33
/dev/raw/raw3: bound to major 8, minor 49
/dev/raw/raw4: bound to major 8, minor 65
/dev/raw/raw5: bound to major 8, minor 81
[root@rac3 ~]#
Digression:
Adding a shared disk (raw device) in a VMware Workstation RAC environment:
1. On one node, create the virtual disk, preallocate its space, and set the disk's virtual device node (VM Settings --> select the disk --> Advanced options on the right).
2. On the other node, add a virtual disk to the VM, choose the existing disk created on the first node, and set its virtual device node to the same value as on the first node.
3. Run fdisk -l on both nodes to confirm the disk is visible, then partition it.
4. Edit /etc/sysconfig/rawdevices to map the raw devices to the new partitions (a sketch of this file follows the list).
5. Restart the raw device service: service rawdevices restart
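A sketch of the mapping added in step 4; the block devices sdg1/sdh1 are assumptions, so take the real names from your fdisk -l output:

# /etc/sysconfig/rawdevices
# format: <raw device> <block device>
/dev/raw/raw7 /dev/sdg1
/dev/raw/raw8 /dev/sdh1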
--After adding
[root@rac3 ~]# raw -qa
/dev/raw/raw1: bound to major 8, minor 17
/dev/raw/raw2: bound to major 8, minor 33
/dev/raw/raw3: bound to major 8, minor 49
/dev/raw/raw4: bound to major 8, minor 65
/dev/raw/raw5: bound to major 8, minor 81
/dev/raw/raw7: bound to major 8, minor 97    --newly added
/dev/raw/raw8: bound to major 8, minor 113   --newly added
[root@rac4 bin]# ./crsctl query css votedisk
 0.     0    /dev/raw/raw2
located 1 votedisk(s).
[root@rac4 bin]# ./crsctl add css votedisk /dev/raw/raw7          --the -force option is required
Cluster is not in a ready state for online disk addition
[root@rac4 bin]# ./crsctl add css votedisk /dev/raw/raw7 -force   --CRS has not been shut down, so the addition fails
Now formatting voting disk: /dev/raw/raw7
CLSFMT returned with error [4].
failed 9 to initailize votedisk /dev/raw/raw7.
[root@rac4 bin]#
[root@rac4 bin]# ./crsctl stop crs    --shut down CRS on all nodes
Stopping resources.
Successfully stopped CRS resources
Stopping CSSD.
Shutting down CSS daemon.
Shutdown request successfully issued.
[root@rac4 bin]# ./crsctl add css votedisk /dev/raw/raw7 -force   --adding again reports the disk is already configured [to be safe, delete it and re-add]
votedisk named /dev/raw/raw7 already configured as /dev/raw/raw7.
[root@rac4 bin]# ./crsctl delete css votedisk /dev/raw/raw7   --remove the disk we just added
successful deletion of votedisk /dev/raw/raw7.
[root@rac4 bin]# ./crsctl add css votedisk /dev/raw/raw7 -force   --still fails
Now formatting voting disk: /dev/raw/raw7
CLSFMT returned with error [4].
failed 9 to initailize votedisk /dev/raw/raw7.
--After a system reboot the addition succeeds [add or remove in pairs]. (When new raw devices have been added, rebooting the system is safer than merely restarting the raw device service.) The raw devices used for this test are fairly large, so formatting takes a while.
[root@rac4 bin]# ./crsctl query css votedisk
 0.     0    /dev/raw/raw2
located 1 votedisk(s).
[root@rac4 bin]# ./crsctl add css votedisk /dev/raw/raw7 -force
Now formatting voting disk: /dev/raw/raw7
successful addition of votedisk /dev/raw/raw7.
[root@rac4 bin]# ./crsctl query css votedisk
 0.     0    /dev/raw/raw2
 1.     0    /dev/raw/raw7
located 2 votedisk(s).
[root@rac4 bin]# ./crsctl add css votedisk /dev/raw/raw8 -force
Now formatting voting disk: /dev/raw/raw8
successful addition of votedisk /dev/raw/raw8.
[root@rac4 bin]# ./crsctl query css votedisk
 0.     0    /dev/raw/raw2
 1.     0    /dev/raw/raw7
 2.     0    /dev/raw/raw8
located 3 votedisk(s).
--Verify from the other node
[root@rac3 bin]# ./crsctl query css votedisk
 0.     0    /dev/raw/raw2
 1.     0    /dev/raw/raw7
 2.     0    /dev/raw/raw8
--Remove the voting disks
[root@rac4 bin]# ./crsctl delete css votedisk /dev/raw/raw7
Cluster is not in a ready state for online disk removal
[root@rac4 bin]# ./crsctl delete css votedisk /dev/raw/raw7 -force
successful deletion of votedisk /dev/raw/raw7.
[root@rac4 bin]# ./crsctl delete css votedisk /dev/raw/raw8 -force
successful deletion of votedisk /dev/raw/raw8.
[root@rac4 bin]# ./crsctl query css votedisk
 0.     0    /dev/raw/raw2
located 1 votedisk(s).
15. Backing up the voting disk
<1>. Liu Xianjun's 《Oracle RAC 11g實戰指南》, p. 94, line 9: "Starting with Oracle 11.2, voting files no longer need to be backed up manually; whenever the Clusterware structure is changed, the voting files are automatically backed up into the OCR file."
<2>. Liu Binglin's 《構建最高可用Oracle數據庫系統 Oracle11gR2 RAC管理、維護與性能優化》, p. 326, line 2: "In Clusterware 11gR2 the voting disks do not need to be backed up. Any change to a voting disk is automatically backed up into the OCR backup files, and the relevant information is automatically restored to any voting disk that is added."
[oracle@rac3 ~]$ dd if=/dev/raw/raw2 of=/home/oracle/votedisk.bak
208864+0 records in
208864+0 records out
The corresponding restore command is: dd if=/home/oracle/votedisk.bak of=/dev/raw/raw2
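Before 11.2, dd is the standard backup method; a sketch that backs up every configured voting disk in one pass (the output directory is an example):

for vd in $(crsctl query css votedisk | awk '/\/dev\//{print $3}'); do
    dd if=$vd of=/home/oracle/$(basename $vd).bak    # one backup file per voting disk
done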
16. Wiping raw devices (the two raw devices we added earlier must be wiped before CRS can be reinstalled)
[root@rac3 bin]# dd if=/dev/zero of=/dev/raw/raw7 bs=10M
dd: writing `/dev/raw/raw7': No space left on device
205+0 records in
204+0 records out
[root@rac3 bin]# dd if=/dev/zero of=/dev/raw/raw8 bs=10M
dd: writing `/dev/raw/raw8': No space left on device
205+0 records in
204+0 records out
17. Dumping OCR contents with ocrdump
The ocrdump command prints the OCR contents in ASCII form. Note, however, that it cannot be used to back up or restore the OCR: the file it produces is only for reading, not for recovery.
Run ocrdump -help for usage:
ocrdump [-stdout] [filename] [-keyname name] [-xml]
[-stdout]        print the output to the screen
[filename]       write the output to the named file
[-keyname name]  print only the given key and its subkeys
[-xml]           print the output in XML format
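For example, to dump the full OCR into a file for offline reading (the filename is arbitrary):

./ocrdump /tmp/ocr_full.dmp    # writes the whole OCR, in readable form, to the named file
more /tmp/ocr_full.dmp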
--Print the SYSTEM.css key and its subkeys to the screen in XML format
[root@rac3 bin]# ./ocrdump -stdout -keyname SYSTEM.css -xml|more
<OCRDUMP>
<TIMESTAMP>01/27/2015 10:37:04</TIMESTAMP>
<COMMAND>./ocrdump.bin -stdout -keyname SYSTEM.css -xml </COMMAND>

<KEY>
<NAME>SYSTEM.css</NAME>
<VALUE_TYPE>UNDEF</VALUE_TYPE>
<VALUE><![CDATA[]]></VALUE>
<USER_PERMISSION>PROCR_ALL_ACCESS</USER_PERMISSION>
<GROUP_PERMISSION>PROCR_READ</GROUP_PERMISSION>
<OTHER_PERMISSION>PROCR_READ</OTHER_PERMISSION>
<USER_NAME>root</USER_NAME>
<GROUP_NAME>root</GROUP_NAME>
......
While it runs, ocrdump writes a log file named ocrdump_<pid>.log under $CRS_HOME/log/<nodename>/client. If the command fails, check this log for the cause.
[root@rac3 client]# pwd
/opt/ora10g/product/10.2.0/crs_1/log/rac3/client
[root@rac3 client]# ll -ltr ocrdump_2*
-rw-r----- 1 root root 245 Jan 27 10:35 ocrdump_26850.log
-rw-r----- 1 root root 823 Jan 27 10:39 ocrdump_29423.log
18. Checking OCR consistency with ocrcheck
This command checks the consistency of the OCR contents. While it runs, it writes a log file to $CRS_HOME/log/<nodename>/client/ocrcheck_<pid>.log. The command takes no arguments.
[root@rac3 bin]# ./ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          2
         Total space (kbytes)     :     104344
         Used space (kbytes)      :       4340
         Available space (kbytes) :     100004
         ID                       :  777521936
         Device/File Name         : /dev/raw/raw1    --location of the OCR disk
                                    Device/File integrity check succeeded    --contents are consistent; on inconsistency this reads "Device/File needs to be synchronized with the other device"
                                    Device/File not configured    --no Mirror OCR is configured
         Cluster registry integrity check succeeded
--View the log generated by this command
[root@rac3 client]# pwd
/opt/ora10g/product/10.2.0/crs_1/log/rac3/client
[root@rac3 client]# ll -ltr ocrcheck_*
-rw-r----- 1 oracle oinstall 370 Apr 18  2014 ocrcheck_25577.log
-rw-r----- 1 root   root     370 Jan 27 10:44 ocrcheck_7947.log
19. Maintaining the OCR disks with ocrconfig
The ocrconfig command maintains the OCR disks. During Clusterware installation, if you choose External Redundancy you can enter only one OCR location, but Oracle allows two OCR disks to mirror each other and protect the OCR against a single point of failure. Unlike voting disks, there can be at most two OCR disks: a Primary OCR and a Mirror OCR.
[root@rac3 bin]# ./ocrconfig -help
Name:
        ocrconfig - Configuration tool for Oracle Cluster Registry.
Synopsis:
        ocrconfig [option]
        option:
                -export <filename> [-s online]       - Export cluster register contents to a file
                -import <filename>                   - Import cluster registry contents from a file
                -upgrade [<user> [<group>]]          - Upgrade cluster registry from previous version
                -downgrade [-version <version string>]
                                                     - Downgrade cluster registry to the specified version
                -backuploc <dirname>                 - Configure periodic backup location
                -showbackup                          - Show backup information
                -restore <filename>                  - Restore from physical backup
                -replace ocr|ocrmirror [<filename>]  - Add/replace/remove a OCR device/file
                -overwrite                           - Overwrite OCR configuration on disk
                -repair ocr|ocrmirror <filename>     - Repair local OCR configuration
                -help                                - Print out this help information
Note:
        A log file will be created in $ORACLE_HOME/log/<hostname>/client/ocrconfig_<pid>.log.
        Please ensure you have file creation privileges in the above directory before running this tool.
The ocrconfig command is very important; the experiments at the link below explore it in more depth:
http://www.cnblogs.com/myrunning/p/4253696.html
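As a brief sketch of the most common maintenance tasks, based on the help text above (run as root; /dev/raw/raw4 and the export path are assumptions for this raw-device setup):

./ocrconfig -showbackup                       # list the automatic physical backups
./ocrconfig -export /home/oracle/ocr.exp      # logical export of the OCR contents
./ocrconfig -replace ocrmirror /dev/raw/raw4  # add a Mirror OCR on a spare raw device
./ocrconfig -replace ocrmirror                # drop the mirror again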
## Some application-layer RAC commands
20. Managing CRS resources with crs_stat
The crs_stat command shows the running state of every resource maintained by CRS. Run without arguments, it prints summary information for each resource: name, type, target, state, and so on.
--View information for all resources
[oracle@rac3 ~]$ crs_stat
NAME=ora.rac3.ASM1.asm
TYPE=application
TARGET=ONLINE
STATE=ONLINE on rac3

NAME=ora.rac3.LISTENER_RAC3.lsnr
TYPE=application
TARGET=ONLINE
STATE=ONLINE on rac3

NAME=ora.rac3.gsd
TYPE=application
TARGET=ONLINE
STATE=ONLINE on rac3

NAME=ora.rac3.ons
TYPE=application
TARGET=ONLINE
STATE=ONLINE on rac3
............
--View the status of a specific resource
[oracle@rac3 ~]$ crs_stat ora.racdb.racdb1.inst
NAME=ora.racdb.racdb1.inst
TYPE=application
TARGET=ONLINE
STATE=ONLINE on rac3
--Use the -v option for more detail
[oracle@rac3 ~]$ crs_stat -v ora.racdb.racdb1.inst
NAME=ora.racdb.racdb1.inst
TYPE=application
RESTART_ATTEMPTS=5
RESTART_COUNT=0
FAILURE_THRESHOLD=0
FAILURE_COUNT=0
TARGET=ONLINE
STATE=ONLINE on rac3
--Use the -p option for the full resource profile
[oracle@rac3 ~]$ crs_stat -p ora.racdb.racdb1.inst
NAME=ora.racdb.racdb1.inst
TYPE=application
ACTION_SCRIPT=/opt/ora10g/product/10.2.0/db_1/bin/racgwrap
ACTIVE_PLACEMENT=0
AUTO_START=1
CHECK_INTERVAL=600
DESCRIPTION=CRS application for Instance
FAILOVER_DELAY=0
FAILURE_INTERVAL=0
FAILURE_THRESHOLD=0
HOSTING_MEMBERS=rac3
OPTIONAL_RESOURCES=
PLACEMENT=restricted
REQUIRED_RESOURCES=ora.rac3.vip ora.rac3.ASM1.asm
RESTART_ATTEMPTS=5
SCRIPT_TIMEOUT=600
START_TIMEOUT=0
STOP_TIMEOUT=0
UPTIME_THRESHOLD=7d
USR_ORA_ALERT_NAME=
USR_ORA_CHECK_TIMEOUT=0
USR_ORA_CONNECT_STR=/ as sysdba
USR_ORA_DEBUG=0
USR_ORA_DISCONNECT=false
USR_ORA_FLAGS=
USR_ORA_IF=
USR_ORA_INST_NOT_SHUTDOWN=
USR_ORA_LANG=
USR_ORA_NETMASK=
USR_ORA_OPEN_MODE=
USR_ORA_OPI=false
USR_ORA_PFILE=
USR_ORA_PRECONNECT=none
USR_ORA_SRV=
USR_ORA_START_TIMEOUT=0
USR_ORA_STOP_MODE=immediate
USR_ORA_STOP_TIMEOUT=0
USR_ORA_VIP=
--Use the -ls option to view each resource's permission definitions
[oracle@rac3 ~]$ crs_stat -ls
Name           Owner          Primary PrivGrp          Permission
-----------------------------------------------------------------
ora....SM1.asm oracle         oinstall                 rwxrwxr--
ora....C3.lsnr oracle         oinstall                 rwxrwxr--
ora.rac3.gsd   oracle         oinstall                 rwxr-xr--
ora.rac3.ons   oracle         oinstall                 rwxr-xr--
ora.rac3.vip   root           oinstall                 rwxr-xr--
ora....SM2.asm oracle         oinstall                 rwxrwxr--
ora....C4.lsnr oracle         oinstall                 rwxrwxr--
ora.rac4.gsd   oracle         oinstall                 rwxr-xr--
ora.rac4.ons   oracle         oinstall                 rwxr-xr--
ora.rac4.vip   root           oinstall                 rwxr-xr--
ora.racdb.db   oracle         oinstall                 rwxrwxr--
ora....b1.inst oracle         oinstall                 rwxrwxr--
ora....b2.inst oracle         oinstall                 rwxrwxr--
ora....rver.cs oracle         oinstall                 rwxrwxr--
ora....db2.srv oracle         oinstall                 rwxrwxr--
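Besides the options above, 10g's crs_stat also accepts -t (not shown in the original transcripts), which condenses the same information into one table row per resource:

[oracle@rac3 ~]$ crs_stat -t    # tabular summary: Name, Type, Target, State, Host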
21. Understanding the srvctl command
srvctl is the most frequently used and also the most complex command in RAC administration. It can operate on the Database, Instance, ASM, Service, Listener, and Node Application resources registered in CRS, where Node Application resources include GSD, ONS, and VIP.
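A sketch of everyday srvctl start/stop usage against the resources in this environment:

srvctl stop instance -d racdb -i racdb1     # stop a single instance
srvctl start instance -d racdb -i racdb1
srvctl stop database -d racdb               # stop every instance of the database
srvctl start database -d racdb
srvctl stop nodeapps -n rac3                # VIP, GSD, ONS, and listener on rac3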
Some of these resources also have their own dedicated management tools, for example:
ONS can be managed with the onsctl command: http://www.cnblogs.com/myrunning/p/4265522.html
The listener can also be managed with the lsnrctl command: http://www.cnblogs.com/myrunning/p/3977931.html
Notes on using the srvctl command: http://www.cnblogs.com/myrunning/p/4265539.html