Reference: http://www.javashuo.com/article/p-rgdtbnha-bm.html
Software environment: VMware, RHEL 6.6, Oracle 12c (linuxx64_12201_database.zip), 12c Grid (linuxx64_12201_grid_home.zip)
In the VM, configure just one node first; the second node is cloned from the first and then its parameters are adjusted (the SID in the environment variables, network settings, and so on).
(Operating system, installation packages, network, users, environment variables)
1.1.1. Install the operating system on the server
A minimal install is fine. Disk: 35 GB; memory: 4 GB (2 GB is probably the bare minimum); swap: 8 GB
Disable the firewall and SELinux (see the sketch after this list)
Disable ntpd (mv /etc/ntp.conf /etc/ntp.conf_bak)
Add four NICs: two for the public network (host-only mode, bonded together) and two for the private network (carve out a VLAN, e.g. vlan1, to simulate the private network; the same pair also serves as the dual paths to storage)
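A minimal sketch of the shutdown steps above, assuming iptables is the active firewall service on this RHEL 6 box:

service iptables stop && chkconfig iptables off
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config   # permanent; takes effect after reboot
setenforce 0                                                   # immediate, lasts until reboot
service ntpd stop && chkconfig ntpd off
mv /etc/ntp.conf /etc/ntp.conf_bak                             # with no ntp.conf, Grid uses CTSS for time sync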
1.1.2. Check and install the RPM packages required by Oracle 12c
Check:
rpm -q binutils compat-libcap1 compat-libstdc++-33 gcc gcc-c++ \
    e2fsprogs e2fsprogs-libs glibc glibc-devel ksh libaio-devel libaio libgcc libstdc++ libstdc++-devel \
    libxcb libX11 libXau libXi libXtst make \
    net-tools nfs-utils smartmontools sysstat
// These are basically all the packages needed; if the installer's prerequisite check later reports anything else missing, install it then.
Install whichever packages the query reports as missing (attach the installation ISO in VMware and configure a local yum repository):
[root@jydb1 ~]# mount /dev/cdrom /mnt
[root@jydb1 ~]# cat /etc/yum.repos.d/rhel-source.repo
[ISO]
name=iso
baseurl=file:///mnt
enabled=1
gpgcheck=0
Install with yum:
yum install binutils compat-libcap1 compat-libstdc++-33 gcc gcc-c++ \
    e2fsprogs e2fsprogs-libs glibc glibc-devel ksh libaio-devel libaio libgcc libstdc++ libstdc++-devel \
    libxcb libX11 libXau libXi libXtst make \
    net-tools nfs-utils smartmontools sysstat
Also install the cvuqdisk package (required by the RAC/Grid prerequisite check; it ships inside the grid installation media):
rpm -qi cvuqdisk
CVUQDISK_GRP=oinstall; export CVUQDISK_GRP   \\ the oinstall group must exist before installing; it is created later in this guide, so defer this step until then
rpm -iv cvuqdisk-1.0.10-1.rpm
1.1.3. Configure /etc/hosts on each node
[root@jydb1 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4 jydb1.rac
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6 jydb1.rac
#eth0 public
192.168.137.11 jydb1
192.168.137.12 jydb2
#eth0 vip
192.168.137.21 jydb1-vip
192.168.137.22 jydb2-vip
#eth1 private
10.0.0.1  jydb1-priv
10.0.0.2  jydb2-priv
10.0.0.11 jydb1-priv2
10.0.0.22 jydb2-priv2
#scan ip
192.168.137.137 jydb-cluster-scan
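A quick way to confirm these names resolve on every node (a sketch; the VIP and SCAN addresses will not answer until clusterware brings them up, but name resolution can already be checked):

for h in jydb1 jydb2 jydb1-priv jydb2-priv jydb-cluster-scan; do getent hosts $h; done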
1.1.4. Create the required users and groups on each node
Create the groups & users:
groupadd -g 54321 oinstall
groupadd -g 54322 dba
groupadd -g 54323 oper
groupadd -g 54324 backupdba
groupadd -g 54325 dgdba
groupadd -g 54326 kmdba
groupadd -g 54327 asmdba
groupadd -g 54328 asmoper
groupadd -g 54329 asmadmin
groupadd -g 54330 racdba
useradd -u 54321 -g oinstall -G dba,asmdba,backupdba,dgdba,kmdba,racdba,oper oracle
useradd -u 54322 -g oinstall -G asmadmin,asmdba,asmoper,dba grid
Set the oracle and grid passwords yourself.
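For example, interactively (you will be prompted for each password):

passwd oracle
passwd grid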
1.1.5. Create the installation directories on each node (as root)
mkdir -p /u01/app/12.2.0/grid
mkdir -p /u01/app/grid
mkdir -p /u01/app/oracle
chown -R grid:oinstall /u01
chown oracle:oinstall /u01/app/oracle
chmod -R 775 /u01/
1.1.6. Configuration file changes on each node
Kernel parameter changes: vi /etc/sysctl.conf
# vi /etc/sysctl.conf, appending the following:
fs.file-max = 6815744
kernel.sem = 250 32000 100 128
kernel.shmmni = 4096
kernel.shmall = 1073741824
kernel.shmmax = 6597069766656
kernel.panic_on_oops = 1
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
#net.ipv4.conf.eth3.rp_filter = 2
#net.ipv4.conf.eth2.rp_filter = 2
#net.ipv4.conf.eth0.rp_filter = 1
fs.aio-max-nr = 1048576
net.ipv4.ip_local_port_range = 9000 65500
Apply the changes: sysctl -p
Shell limits for the users: vi /etc/security/limits.conf
# Add the following to /etc/security/limits.conf:
grid   soft nproc  2047
grid   hard nproc  16384
grid   soft nofile 1024
grid   hard nofile 65536
grid   soft stack  10240
oracle soft nproc  2047
oracle hard nproc  16384
oracle soft nofile 1024
oracle hard nofile 65536
oracle soft stack  10240
Load the pam_limits.so pluggable authentication module by adding the following to /etc/pam.d/login:
session required pam_limits.so
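With the limits and pam_limits in place, a quick check from a fresh session (a sketch; -u shows the nproc soft limit, -n the nofile soft limit):

su - grid -c 'ulimit -u; ulimit -n'
su - oracle -c 'ulimit -u; ulimit -n'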
1.1.7. Configure the user environment variables on each node
[root@jydb1 ~]# cat /home/grid/.bash_profile
export ORACLE_SID=+ASM1
export ORACLE_HOME=/u01/app/12.2.0/grid
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
export DISPLAY=192.168.88.121:0.0
[root@jydb1 ~]# cat /home/oracle/.bash_profile
export ORACLE_SID=racdb1
export ORACLE_HOME=/u01/app/oracle/product/12.2.0/db_1
export ORACLE_HOSTNAME=jydb1
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib
export DISPLAY=192.168.88.121:0.0
With the steps above complete you can clone node 2; after cloning, adjust the environment variables on the second machine (see the sketch below).
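A sketch of the node-2 adjustments, assuming the profiles shown above (run on jydb2; the network settings are changed separately):

sed -i 's/+ASM1/+ASM2/' /home/grid/.bash_profile
sed -i -e 's/racdb1/racdb2/' -e 's/ORACLE_HOSTNAME=jydb1/ORACLE_HOSTNAME=jydb2/' /home/oracle/.bash_profile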
1.1.8. Configure SSH mutual trust between the nodes
Once the second machine has been cloned and its network changes verified:
Taking the grid user as the example; the oracle user must be set up for mutual trust the same way:
(1) Generate grid's key pair on node 1:
[grid@jydb1 ~]$ ssh-keygen -t rsa -P ''
Generating public/private rsa key pair.
Enter file in which to save the key (/home/grid/.ssh/id_rsa):
Your identification has been saved in /home/grid/.ssh/id_rsa.
Your public key has been saved in /home/grid/.ssh/id_rsa.pub.
The key fingerprint is:
b6:07:65:3f:a2:e8:75:14:33:26:c0:de:47:73:5b:95 grid@jydb1.rac
The key's randomart image is:
+--[ RSA 2048]----+
|           .. .o|
|      .. o . .E |
|   . ...Bo o    |
|     . .=.=.    |
|       S.o o    |
|      o = . .   |
|       . + o    |
|      . . o     |
|       .        |
+-----------------+

Then push it to node 2:
[grid@jydb1 ~]$ ssh-copy-id -i .ssh/id_rsa.pub grid@10.0.0.2
grid@10.0.0.2's password:
Now try logging into the machine, with "ssh 'grid@10.0.0.2'", and check in:

  .ssh/authorized_keys

to make sure we haven't added extra keys that you weren't expecting.

(2) On the second node, also generate a key pair and append its public key to authorized_keys:
[grid@jydb2 .ssh]$ ssh-keygen -t rsa -P ''
......
[grid@jydb2 .ssh]$ cat id_rsa.pub >> authorized_keys
[grid@jydb2 .ssh]$ scp authorized_keys grid@10.0.0.1:.ssh/
The authenticity of host '10.0.0.1 (10.0.0.1)' can't be established.
RSA key fingerprint is d1:21:03:35:9d:f2:a2:81:e7:e1:7b:d0:79:f4:d3:be.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '10.0.0.1' (RSA) to the list of known hosts.
grid@10.0.0.1's password:
authorized_keys                               100%  792     0.8KB/s   00:00

(3) Verify:
[grid@jydb1 .ssh]$ ssh jydb1 date
Fri Mar 30 08:01:20 CST 2018
[grid@jydb1 .ssh]$ ssh jydb2 date
Fri Mar 30 08:01:20 CST 2018
[grid@jydb1 .ssh]$ ssh jydb1-priv date
Fri Mar 30 08:01:20 CST 2018
[grid@jydb2 .ssh]$ ssh jydb2-priv date
Fri Mar 30 08:01:20 CST 2018
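To cover every host combination in one pass, a quick loop (run as grid on each node, then repeat as oracle):

for h in jydb1 jydb2 jydb1-priv jydb2-priv; do ssh $h date; done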
On jydb2, only the highlighted values in the profiles above need to change (ORACLE_SID=+ASM2 / racdb2 and ORACLE_HOSTNAME=jydb2).
Add one more server to simulate a storage array: give it two private addresses, connect the RAC clients to it over multipath, and carve up and configure the disks.
Goal: present shared LUNs from the storage that both hosts can see at the same time, six in total: three 1 GB disks for OCR and the Voting Disk, one 40 GB disk for the GIMR, and the rest planned for DATA and FRA.
Note: since this is a lab environment the point is to illustrate what each disk is for; in production, plan DATA much larger.
Add a 63 GB disk to the storage server.
// LV layout (carved out in 1.2.3 below)
asmdisk1  1G
asmdisk2  1G
asmdisk3  1G
asmdisk4 40G
asmdisk5 10G
asmdisk6 10G
1.2.1. Check the storage network
The RAC nodes act as the storage clients.
In VMware, create vlan1 and place the two private NICs of each RAC node and the two NICs of the storage server into it, so the hosts can reach the storage over multiple paths.
Storage (server side): 10.0.0.111, 10.0.0.222
rac-jydb1 (client): 10.0.0.1, 10.0.0.2
rac-jydb2 (client): 10.0.0.11, 10.0.0.22
Finally, verify that the network is fully reachable before moving on (a quick check follows).
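A reachability sketch from each RAC node, assuming the storage addresses listed above (the later transcripts discover the target at 10.0.1.99/10.0.2.99, so substitute your actual portal addresses):

ping -c 2 10.0.0.111
ping -c 2 10.0.0.222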
1.2.2. Install the iSCSI packages
-- Server side
Install scsi-target-utils with yum:
yum install scsi-target-utils
-- Client side
Install iscsi-initiator-utils with yum:
yum install iscsi-initiator-utils
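On the clients, the initiator services can be started and enabled right away (a sketch; discovery will also start iscsid on demand):

service iscsid start
chkconfig iscsi on
chkconfig iscsid on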
1.2.3. Simulate adding a disk to the storage
-- On the storage server
Add a 63 GB disk; this simulates adding a real physical disk to the storage array.
In my case the new disk shows up as /dev/sdb; I put it under LVM:
# pvcreate /dev/sdb
  Physical volume "/dev/sdb" successfully created
# vgcreate vg_storage /dev/sdb
  Volume group "vg_storage" successfully created
# lvcreate -L 10g -n lv_lun1 vg_storage    // size each LV according to the layout planned earlier
  Logical volume "lv_lun1" created
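For completeness, a sketch of creating all six LVs to match the layout above (same VG name assumed; the transcript shows lv_lun1 created at 10g, so size each one per your own plan):

lvcreate -L 1G  -n lv_lun1 vg_storage
lvcreate -L 1G  -n lv_lun2 vg_storage
lvcreate -L 1G  -n lv_lun3 vg_storage
lvcreate -L 40G -n lv_lun4 vg_storage
lvcreate -L 10G -n lv_lun5 vg_storage
lvcreate -L 10G -n lv_lun6 vg_storage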
1.2.4. Configure the iSCSI server (target)
The main iSCSI target configuration file is /etc/tgt/targets.conf.
I named the target following the IQN convention and added the following configuration:
<target iqn.2018-03.com.cnblogs.test:alfreddisk>
    backing-store /dev/vg_storage/lv_lun1    # Becomes LUN 1
    backing-store /dev/vg_storage/lv_lun2    # Becomes LUN 2
    backing-store /dev/vg_storage/lv_lun3    # Becomes LUN 3
    backing-store /dev/vg_storage/lv_lun4    # Becomes LUN 4
    backing-store /dev/vg_storage/lv_lun5    # Becomes LUN 5
    backing-store /dev/vg_storage/lv_lun6    # Becomes LUN 6
</target>
Once configured, start the service and enable it at boot:
[root@Storage ~]# service tgtd start
Starting SCSI target daemon:                               [  OK  ]
[root@Storage ~]# chkconfig tgtd on
[root@Storage ~]# chkconfig --list|grep tgtd
tgtd            0:off   1:off   2:on    3:on    4:on    5:on    6:off
[root@Storage ~]# service tgtd status
tgtd (pid 1763 1760) is running...
Then check the details, such as the listening port and the LUN information (Type: disk):
[root@Storage ~]# netstat -tlunp |grep tgt
tcp        0      0 0.0.0.0:3260        0.0.0.0:*           LISTEN      1760/tgtd
tcp        0      0 :::3260             :::*                LISTEN      1760/tgtd
[root@Storage ~]# tgt-admin --show
Target 1: iqn.2018-03.com.cnblogs.test:alfreddisk
    System information:
        Driver: iscsi
        State: ready
    I_T nexus information:
    LUN information:
        LUN: 0
            Type: controller
            SCSI ID: IET 00010000
            SCSI SN: beaf10
            Size: 0 MB, Block size: 1
            Online: Yes
            Removable media: No
            Prevent removal: No
            Readonly: No
            Backing store type: null
            Backing store path: None
            Backing store flags:
        LUN: 1
            Type: disk
            SCSI ID: IET 00010001
            SCSI SN: beaf11
            Size: 10737 MB, Block size: 512
            Online: Yes
            Removable media: No
            Prevent removal: No
            Readonly: No
            Backing store type: rdwr
            Backing store path: /dev/vg_storage/lv_lun1
            Backing store flags:
    Account information:
    ACL information:
        ALL
1.2.5. Configure the iSCSI clients (initiators)
Confirm the services are set to start at boot:
# chkconfig --list|grep scsi
iscsi           0:off   1:off   2:off   3:on    4:on    5:on    6:off
iscsid          0:off   1:off   2:off   3:on    4:on    5:on    6:off
Use iscsiadm to scan the server's LUNs (discover the iSCSI target):
iscsiadm -m discovery -t sendtargets -p 10.0.1.99
[root@jydb1 ~]# iscsiadm -m discovery -t sendtargets -p 10.0.1.99
10.0.1.99:3260,1 iqn.2018-03.com.cnblogs.test:alfreddisk
[root@jydb1 ~]# iscsiadm -m discovery -t sendtargets -p 10.0.2.99
10.0.2.99:3260,1 iqn.2018-03.com.cnblogs.test:alfreddisk
Check with iscsiadm -m node:
[root@jydb1 ~]# iscsiadm -m node
10.0.1.99:3260,1 iqn.2018-03.com.cnblogs.test:alfreddisk
10.0.2.99:3260,1 iqn.2018-03.com.cnblogs.test:alfreddisk
Look at the files under /var/lib/iscsi/nodes/:
[root@jydb1 ~]# ll -R /var/lib/iscsi/nodes/
/var/lib/iscsi/nodes/:
total 4
drw------- 4 root root 4096 Mar 29 00:59 iqn.2018-03.com.cnblogs.test:alfreddisk

/var/lib/iscsi/nodes/iqn.2018-03.com.cnblogs.test:alfreddisk:
total 8
drw------- 2 root root 4096 Mar 29 00:59 10.0.1.99,3260,1
drw------- 2 root root 4096 Mar 29 00:59 10.0.2.99,3260,1

/var/lib/iscsi/nodes/iqn.2018-03.com.cnblogs.test:alfreddisk/10.0.1.99,3260,1:
total 4
-rw------- 1 root root 2049 Mar 29 00:59 default

/var/lib/iscsi/nodes/iqn.2018-03.com.cnblogs.test:alfreddisk/10.0.2.99,3260,1:
total 4
-rw------- 1 root root 2049 Mar 29 00:59 default
Attach the iSCSI disks
Based on the discovery results above, run the following to log in to the shared disks:
iscsiadm -m node -T iqn.2018-03.com.cnblogs.test:alfreddisk --login
[root@jydb1 ~]# iscsiadm -m node -T iqn.2018-03.com.cnblogs.test:alfreddisk --login
Logging in to [iface: default, target: iqn.2018-03.com.cnblogs.test:alfreddisk, portal: 10.0.2.99,3260] (multiple)
Logging in to [iface: default, target: iqn.2018-03.com.cnblogs.test:alfreddisk, portal: 10.0.1.99,3260] (multiple)
Login to [iface: default, target: iqn.2018-03.com.cnblogs.test:alfreddisk, portal: 10.0.2.99,3260] successful.
Login to [iface: default, target: iqn.2018-03.com.cnblogs.test:alfreddisk, portal: 10.0.1.99,3260] successful.
The login succeeded.
View the attached iSCSI disks with fdisk -l or lsblk:
[root@jydb1 ~]# lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda      8:0    0   35G  0 disk
├─sda1   8:1    0  200M  0 part /boot
├─sda2   8:2    0  7.8G  0 part [SWAP]
└─sda3   8:3    0   27G  0 part /
sr0     11:0    1  3.5G  0 rom  /mnt
sdb      8:16   0    1G  0 disk
sdc      8:32   0    1G  0 disk
sdd      8:48   0    1G  0 disk
sde      8:64   0    1G  0 disk
sdf      8:80   0    1G  0 disk
sdg      8:96   0    1G  0 disk
sdi      8:128  0   40G  0 disk
sdk      8:160  0   10G  0 disk
sdm      8:192  0   10G  0 disk
sdj      8:144  0   10G  0 disk
sdh      8:112  0   40G  0 disk
sdl      8:176  0   10G  0 disk
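Each LUN shows up twice, once per portal; the two iSCSI sessions behind this can be confirmed with:

iscsiadm -m session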
1.2.6. Configure multipath
Check whether the multipath package is installed:
rpm -qa |grep device-mapper-multipath
If it is not installed, install it with yum:
#yum install -y device-mapper-multipath
Or download and install these two RPMs:
device-mapper-multipath-libs-0.4.9-72.el6.x86_64.rpm
device-mapper-multipath-0.4.9-72.el6.x86_64.rpm
Enable it at boot:
chkconfig multipathd on
Generate the multipath configuration file:
-- Generate the multipath configuration file
/sbin/mpathconf --enable
-- Show the multipath topology
multipath -ll
-- Rescan the paths
multipath -v2    (or -v3)
-- Flush all multipath maps
multipath -F
The following is actual output, for reference:
[root@jydb1 ~]# multipath -ll
Mar 29 03:40:10 | multipath.conf line 109, invalid keyword: multipaths
Mar 29 03:40:10 | multipath.conf line 115, invalid keyword: multipaths
Mar 29 03:40:10 | multipath.conf line 121, invalid keyword: multipaths
Mar 29 03:40:10 | multipath.conf line 127, invalid keyword: multipaths
Mar 29 03:40:10 | multipath.conf line 133, invalid keyword: multipaths
Mar 29 03:40:10 | multipath.conf line 139, invalid keyword: multipaths
asmdisk6 (1IET 00010006) dm-5 IET,VIRTUAL-DISK    // wwid
size=10.0G features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| `- 33:0:0:6 sdj 8:144 active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  `- 34:0:0:6 sdm 8:192 active ready running
asmdisk5 (1IET 00010005) dm-2 IET,VIRTUAL-DISK
size=10G features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| `- 33:0:0:5 sdh 8:112 active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  `- 34:0:0:5 sdl 8:176 active ready running
asmdisk4 (1IET 00010004) dm-4 IET,VIRTUAL-DISK
size=40G features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| `- 33:0:0:4 sdf 8:80 active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  `- 34:0:0:4 sdk 8:160 active ready running
asmdisk3 (1IET 00010003) dm-3 IET,VIRTUAL-DISK
size=1.0G features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| `- 33:0:0:3 sdd 8:48 active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  `- 34:0:0:3 sdi 8:128 active ready running
asmdisk2 (1IET 00010002) dm-1 IET,VIRTUAL-DISK
size=1.0G features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| `- 33:0:0:2 sdc 8:32 active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  `- 34:0:0:2 sdg 8:96 active ready running
asmdisk1 (1IET 00010001) dm-0 IET,VIRTUAL-DISK
size=1.0G features='0' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=1 status=active
| `- 33:0:0:1 sdb 8:16 active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  `- 34:0:0:1 sde 8:64 active ready running
Start the multipathd service:
#service multipathd start
Configure multipath:
First change: user_friendly_names is best set to no. With no, the WWID is used as the multipath alias. With yes, the system assigns mpathn-style names (recorded in /etc/multipath/bindings); such a name is unique on a given node but is not guaranteed to be consistent across the nodes that share the device. In other words, mpath1 on node 1 and mpath1 on node 2 may not be the same LUN, while the WWID of a given LUN is identical on every server. So set it to no and alias by WWID.

defaults {
    user_friendly_names no
    path_grouping_policy failover    // failover = active/standby; multibus = active/active
}

Second change: bind the WWIDs to aliases. The WWIDs are the ones reported by multipath -ll:

multipaths {
    multipath {
        wwid "1IET 00010001"
        alias asmdisk1
    }
    multipath {
        wwid "1IET 00010002"
        alias asmdisk2
    }
    multipath {
        wwid "1IET 00010003"
        alias asmdisk3
    }
    multipath {
        wwid "1IET 00010004"
        alias asmdisk4
    }
    multipath {
        wwid "1IET 00010005"
        alias asmdisk5
    }
    multipath {
        wwid "1IET 00010006"
        alias asmdisk6
    }
}
The configuration only takes effect after restarting multipathd.
After binding, check the multipath aliases:
[root@jydb1 ~]# cd /dev/mapper/
[root@jydb1 mapper]# ls
asmdisk1 asmdisk2 asmdisk3 asmdisk4 asmdisk5 asmdisk6 control
Bind raw devices with udev
First bind the ownership with udev; otherwise the permissions will be wrong and the installer will not be able to see the shared disks.
Before the change:
[root@jydb1 ~]# ls -lh /dev/dm*
brw-rw---- 1 root disk  253, 0 Apr  2 16:18 /dev/dm-0
brw-rw---- 1 root disk  253, 1 Apr  2 16:18 /dev/dm-1
brw-rw---- 1 root disk  253, 2 Apr  2 16:18 /dev/dm-2
brw-rw---- 1 root disk  253, 3 Apr  2 16:18 /dev/dm-3
brw-rw---- 1 root disk  253, 4 Apr  2 16:18 /dev/dm-4
brw-rw---- 1 root disk  253, 5 Apr  2 16:18 /dev/dm-5
crw-rw---- 1 root audio  14, 9 Apr  2 16:18 /dev/dmmidi
This system is RHEL 6.6; if you change the permissions of the multipath devices by hand, they revert to root within a few seconds, so udev has to be used to pin the permissions.
Find the template configuration file:
[root@jyrac1 ~]# find / -name 12-*
/usr/share/doc/device-mapper-1.02.79/12-dm-permissions.rules
Based on the template, create a 12-dm-permissions.rules file under /etc/udev/rules.d/:
vi /etc/udev/rules.d/12-dm-permissions.rules

# MULTIPATH DEVICES
#
# Set permissions for all multipath devices
ENV{DM_UUID}=="mpath-?*", OWNER:="grid", GROUP:="asmadmin", MODE:="660"    // change this line

# Set permissions for first two partitions created on a multipath device (and detected by kpartx)
# ENV{DM_UUID}=="part[1-2]-mpath-?*", OWNER:="root", GROUP:="root", MODE:="660"
When done, run start_udev; if the permissions are still correct 30 seconds later, you are good:
[root@jydb1 ~]# start_udev
Starting udev:                                             [  OK  ]
[root@jydb1 ~]# ls -lh /dev/dm*
brw-rw---- 1 grid asmadmin 253, 0 Apr  2 16:25 /dev/dm-0
brw-rw---- 1 grid asmadmin 253, 1 Apr  2 16:25 /dev/dm-1
brw-rw---- 1 grid asmadmin 253, 2 Apr  2 16:25 /dev/dm-2
brw-rw---- 1 grid asmadmin 253, 3 Apr  2 16:25 /dev/dm-3
brw-rw---- 1 grid asmadmin 253, 4 Apr  2 16:25 /dev/dm-4
brw-rw---- 1 grid asmadmin 253, 5 Apr  2 16:25 /dev/dm-5
crw-rw---- 1 root audio     14, 9 Apr  2 16:24 /dev/dmmidi
Bind the raw devices
Find the major and minor device numbers:
[root@jydb1 ~]# ls -lt /dev/dm-*
brw-rw---- 1 grid asmadmin 253, 5 Mar 29 04:00 /dev/dm-5
brw-rw---- 1 grid asmadmin 253, 3 Mar 29 04:00 /dev/dm-3
brw-rw---- 1 grid asmadmin 253, 2 Mar 29 04:00 /dev/dm-2
brw-rw---- 1 grid asmadmin 253, 4 Mar 29 04:00 /dev/dm-4
brw-rw---- 1 grid asmadmin 253, 1 Mar 29 04:00 /dev/dm-1
brw-rw---- 1 grid asmadmin 253, 0 Mar 29 04:00 /dev/dm-0
[root@jydb1 ~]# dmsetup ls|sort
asmdisk1        (253:0)
asmdisk2        (253:1)
asmdisk3        (253:3)
asmdisk4        (253:4)
asmdisk5        (253:2)
asmdisk6        (253:5)

Bind the raw devices according to this mapping:

vi /etc/udev/rules.d/60-raw.rules

# Enter raw device bindings here.
#
# An example would be:
# ACTION=="add", KERNEL=="sda", RUN+="/bin/raw /dev/raw/raw1 %N"
# to bind /dev/raw/raw1 to /dev/sda, or
# ACTION=="add", ENV{MAJOR}=="8", ENV{MINOR}=="1", RUN+="/bin/raw /dev/raw/raw2 %M %m"
# to bind /dev/raw/raw2 to the device with major 8, minor 1.
ACTION=="add", ENV{MAJOR}=="253", ENV{MINOR}=="0", RUN+="/bin/raw /dev/raw/raw1 %M %m"
ACTION=="add", ENV{MAJOR}=="253", ENV{MINOR}=="1", RUN+="/bin/raw /dev/raw/raw2 %M %m"
ACTION=="add", ENV{MAJOR}=="253", ENV{MINOR}=="2", RUN+="/bin/raw /dev/raw/raw3 %M %m"
ACTION=="add", ENV{MAJOR}=="253", ENV{MINOR}=="3", RUN+="/bin/raw /dev/raw/raw4 %M %m"
ACTION=="add", ENV{MAJOR}=="253", ENV{MINOR}=="4", RUN+="/bin/raw /dev/raw/raw5 %M %m"
ACTION=="add", ENV{MAJOR}=="253", ENV{MINOR}=="5", RUN+="/bin/raw /dev/raw/raw6 %M %m"
ACTION=="add", KERNEL=="raw1", OWNER="grid", GROUP="asmadmin", MODE="660"
ACTION=="add", KERNEL=="raw2", OWNER="grid", GROUP="asmadmin", MODE="660"
ACTION=="add", KERNEL=="raw3", OWNER="grid", GROUP="asmadmin", MODE="660"
ACTION=="add", KERNEL=="raw4", OWNER="grid", GROUP="asmadmin", MODE="660"
ACTION=="add", KERNEL=="raw5", OWNER="grid", GROUP="asmadmin", MODE="660"
ACTION=="add", KERNEL=="raw6", OWNER="grid", GROUP="asmadmin", MODE="660"
When done, check:
[root@jydb1 ~]# start_udev
Starting udev:                                             [  OK  ]
[root@jydb1 ~]# ll /dev/raw/raw*
crw-rw---- 1 grid asmadmin 162, 1 May 25 05:03 /dev/raw/raw1
crw-rw---- 1 grid asmadmin 162, 2 May 25 05:03 /dev/raw/raw2
crw-rw---- 1 grid asmadmin 162, 3 May 25 05:03 /dev/raw/raw3
crw-rw---- 1 grid asmadmin 162, 4 May 25 05:03 /dev/raw/raw4
crw-rw---- 1 grid asmadmin 162, 5 May 25 05:03 /dev/raw/raw5
crw-rw---- 1 grid asmadmin 162, 6 May 25 05:03 /dev/raw/raw6
crw-rw---- 1 root disk     162, 0 May 25 05:03 /dev/raw/rawctl
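As a final cross-check, the kernel's view of the raw bindings can also be listed directly (the major/minor pairs should match the dmsetup output above):

raw -qa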