Oracle 12c New Feature: ASMFD (ASM Filter Driver)
Starting with Oracle 12.1.0.2, ASMFD can replace udev rules for binding ASM disk devices, and it also filters out invalid I/O operations. ASMFD is a replacement for ASMLIB and UDEV — although, in fact, ASMFD itself still uses UDEV.
ASMFD can be configured while installing GRID, or after the Grid installation is complete.
To check whether the operating system version supports ASMFD, run:
acfsdriverstate -orahome $ORACLE_HOME supported
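A related check verifies the AFD driver itself; a sketch, assuming the afddriverstate utility that ships in the Grid home alongside acfsdriverstate (it shows up in a file listing later in this article):
afddriverstate -orahome $ORACLE_HOME supported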
ASMFD was introduced in 12.1. It removes the need to configure ASM disks by hand and, more importantly, it protects the disks from being overwritten by non-Oracle operations such as dd and echo.
The environment below is RHEL 7.4; the test migrates UDEV-bound storage to AFD disk paths.
Part 1. Adjusting the AFD configuration
1. Add the Grid environment variables as root
[root@rac1 ~]# export ORACLE_HOME=/u01/app/12.2.0/grid
[root@rac1 ~]# export ORACLE_BASE=/tmp
2. Get the current ASM disk group discovery path
[root@rac1 ~]# $ORACLE_HOME/bin/asmcmd dsget
parameter:/dev/asm*
profile:/dev/asm*
3. Add the AFD discovery path
[root@rac1 ~]# asmcmd dsset '/dev/asm*','AFD:*'
[root@rac1 ~]# $ORACLE_HOME/bin/asmcmd dsget
parameter:/dev/asm*, AFD:*
profile:/dev/asm*,AFD:*
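To cross-check the change from SQL, you can query the underlying instance parameter that dsset modifies; a minimal sketch:
[grid@rac1 ~]$ sqlplus / as sysasm
SQL> show parameter asm_diskstring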
4. Check the cluster nodes
[root@rac1 ~]# olsnodes -a
rac1 Hub
rac2 Hub
Steps 5 through 9 below must be run on every node; a sketch of driving them over SSH follows.
5. Stop CRS
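A minimal sketch, assuming passwordless root SSH between the nodes and the Grid home used in this article:
for node in rac1 rac2; do
  ssh root@${node} 'export ORACLE_HOME=/u01/app/12.2.0/grid ORACLE_BASE=/tmp
    $ORACLE_HOME/bin/crsctl stop crs          # step 5
    $ORACLE_HOME/bin/asmcmd afd_configure     # step 6
    $ORACLE_HOME/bin/asmcmd afd_state
    $ORACLE_HOME/bin/crsctl start crs'        # step 7
done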
[root@rac1 ~]# crsctl stop crs
6. Install Oracle AFD
On node 1 and node 2, load and configure AFD:
[root@rac1 yum.repos.d]# asmcmd afd_configure
Note: on RedHat or CentOS 7.4 and above, kmod must be upgraded before AFD can be enabled; this is covered in an earlier article (appended at the end of this post):
Enabling AFD (Oracle ASMFD) on RHEL/CentOS 7.4 and later
https://blog.csdn.net/kiral07/article/details/87629679
AFD loading output:
AFD-627: AFD distribution files found.
AFD-634: Removing previous AFD installation.
AFD-635: Previous AFD components successfully removed.
AFD-636: Installing requested AFD software.
AFD-637: Loading installed AFD drivers.
AFD-9321: Creating udev for AFD.
AFD-9323: Creating module dependencies - this may take some time.
AFD-9154: Loading 'oracleafd.ko' driver.
AFD-649: Verifying AFD devices.
AFD-9156: Detecting control device '/dev/oracleafd/admin'.
AFD-638: AFD installation correctness verified.
Modifying resource dependencies - this may take some time.
Query the AFD state:
[root@rac1 yum.repos.d]# asmcmd afd_state
ASMCMD-9526: The AFD state is 'LOADED' and filtering is 'ENABLED' on host 'rac1'
7. Start CRS
[root@rac1 yum.repos.d]# crsctl start crs
CRS-4123: Oracle High Availability Services has been started.
8. Check the current storage devices
[root@rac1 yum.repos.d]# ll /dev/mapper/mpath*
lrwxrwxrwx 1 root root 7 Feb 15 17:18 /dev/mapper/mpathc -> ../dm-1
lrwxrwxrwx 1 root root 7 Feb 15 17:18 /dev/mapper/mpathd -> ../dm-0
The multipath devices mpathc and mpathd are used here:
[root@rac2 ~]# multipath -ll
mpathd (14f504e46494c45526147693538302d577037452d39596459) dm-1 OPNFILER,VIRTUAL-DISK
size=30G features='0' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=1 status=active
| `- 34:0:0:1 sdc 8:32 active ready running
|-+- policy='service-time 0' prio=1 status=enabled
| `- 35:0:0:1 sde 8:64 active ready running
|-+- policy='service-time 0' prio=1 status=enabled
| `- 36:0:0:1 sdg 8:96 active ready running
`-+- policy='service-time 0' prio=1 status=enabled
`- 37:0:0:1 sdi 8:128 active ready running
mpathc (14f504e46494c45524f444c7844412d717a557a2d6b7a6752) dm-0 OPNFILER,VIRTUAL-DISK
size=40G features='0' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=1 status=active
| `- 34:0:0:0 sdb 8:16 active ready running
|-+- policy='service-time 0' prio=1 status=enabled
| `- 35:0:0:0 sdd 8:48 active ready running
|-+- policy='service-time 0' prio=1 status=enabled
| `- 36:0:0:0 sdf 8:80 active ready running
`-+- policy='service-time 0' prio=1 status=enabled
`- 37:0:0:0 sdh 8:112 active ready running
9. Set the AFD discovery path
Switch to the grid user:
[root@rac2 ~]# su - grid
Set the storage path with afd_dsset:
[grid@rac2:/home/grid]$asmcmd afd_dsset '/dev/mapper/mpath*'
[grid@rac2:/home/grid]$asmcmd afd_dsget
AFD discovery string: /dev/mapper/mpath*
No AFD labels have been added yet, so the list is empty:
[grid@rac2:/home/grid]$asmcmd afd_lsdsk
There are no labelled devices.
At this point, steps 5 through 9 have been run on all RAC nodes.
Part 2. Migrating UDEV devices to AFD paths
1. Check the disk group holding the OCR and voting files
[root@rac1 ~]# ocrcheck -config
Oracle Cluster Registry configuration is :
Device/File Name : +CRS
[root@rac1 ~]# crsctl query css votedisk
## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1. ONLINE 4e20265767f54f49bf12bd72f367217f (/dev/asm_crs) [CRS]
Located 1 voting disk(s).
2. Check the udev storage path behind the CRS disk group
[root@rac1 ~]# su - grid
[grid@rac1:/home/grid]$asmcmd lsdsk -G crs
Path
/dev/asm_crs
3. Stop the RAC cluster
[root@rac1 ~]# crsctl stop cluster -all
4. Migrate the udev devices to AFD
Label the disk behind the CRS disk group, moving its path from the udev rule to AFD:
[grid@rac1:/home/grid]$asmcmd afd_label asmcrs /dev/mapper/mpathc --migrate
The CRS disk group device is now labeled:
[grid@rac1:/home/grid]$asmcmd afd_lsdsk
--------------------------------------------------------------------------------
Label Filtering Path
================================================================================
ASMCRS ENABLED /dev/mapper/mpathc
Note: because the disk is already in use by ASM, the --migrate option is required for the transfer; the result can be verified as sketched below.
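To confirm the header was relabeled in place, it can be read back with kfed, which a later section of this article uses the same way; a sketch (provstr and hdrsts are the header fields of interest):
[grid@rac1:/home/grid]$kfed read AFD:ASMCRS | grep -E 'provstr|hdrsts'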
添加另一塊data磁盤組
[grid@rac1:/home/grid]$asmcmd afd_label asmdata /dev/mapper/mpathd --migrate
Check the AFD disks; both are now labeled:
[grid@rac1:/home/grid]$asmcmd afd_lsdsk
--------------------------------------------------------------------------------
Label Filtering Path
================================================================================
ASMCRS ENABLED /dev/mapper/mpathc
ASMDATA ENABLED /dev/mapper/mpathd
5. Scan for AFD devices on the remaining nodes
[grid@rac2:/home/grid]$asmcmd afd_scan
[grid@rac2:/home/grid]$asmcmd afd_lsdsk
--------------------------------------------------------------------------------
Label Filtering Path
================================================================================
ASMCRS ENABLED /dev/mapper/mpathc
ASMDATA ENABLED /dev/mapper/mpathd
6. Start the RAC cluster
[root@rac1 ~]# crsctl start cluster -all
7. Query the ASM disk information from the ASM instance
The old udev ASM paths are still visible:
[root@rac1 ~]# asmcmd lsdsk
Path
AFD:ASMCRS
AFD:ASMDATA
SQL> col name for a10
SQL> col label for a10
SQL> col path for a15
SQL> select NAME,LABEL,PATH from V$ASM_DISK;
NAME LABEL PATH
---------- ---------- ---------------
ASMDATA /dev/asm_data ----> former udev ASM path
ASMCRS /dev/asm_crs ----> former udev ASM path
CRS_0000 ASMCRS AFD:ASMCRS
DATA_0000 ASMDATA AFD:ASMDATA
8. Update the discovery path
[grid@rac1:/home/grid]$asmcmd dsget
parameter:/dev/asm*, AFD:*
profile:/dev/asm*,AFD:*
Keep only the AFD path:
[grid@rac1:/home/grid]$asmcmd dsset 'AFD:*'
[grid@rac1:/home/grid]$asmcmd dsget
parameter:AFD:*
profile:AFD:*
Querying again, the devices under the udev paths are gone:
SQL> select NAME,LABEL,PATH from V$ASM_DISK;
NAME LABEL PATH
---------- ---------- ---------------
CRS_0000 ASMCRS AFD:ASMCRS
DATA_0000 ASMDATA AFD:ASMDATA
9. Remove the UDEV rules file
[root@rac1 ~]# ll -hrt /etc/udev/rules.d/
total 12K
-rw-r--r-- 1 root root 297 Nov 3 17:04 99-oracle-asmdevices.rules.old
-rw-r--r-- 1 root root 224 Feb 15 17:13 53-afd.rules
-rw-r--r-- 1 root root 957 Feb 18 08:55 70-persistent-ipoib.rules
After 99-oracle-asmdevices.rules is renamed, the udev-bound disks can no longer be found:
[root@rac1 ~]# ll /dev/asm*
ls: cannot access /dev/asm*: No such file or directory
Disk groups using the AFD feature are unaffected:
[grid@rac2:/home/grid]$asmcmd afd_lsdsk
--------------------------------------------------------------------------------
Label Filtering Path
================================================================================
ASMCRS ENABLED /dev/mapper/mpathc
ASMDATA ENABLED /dev/mapper/mpathd
This completes the AFD configuration and loading.
Part 3. dd-formatting an ASM disk group
Oracle's AFD feature filters out "non-conforming" I/O: any I/O that does not originate from Oracle is rejected. The test below dd-formats an entire ASM disk group.
1. Add a disk group "asmtest" for the dd-formatting experiment
[root@rac1 ~]# asmcmd afd_label asmtest /dev/mapper/mpathe
[root@rac1 ~]# asmcmd afd_lsdsk
--------------------------------------------------------------------------------
Label Filtering Path
================================================================================
ASMCRS ENABLED /dev/mapper/mpathc
ASMDATA ENABLED /dev/mapper/mpathd
ASMTEST ENABLED /dev/mapper/mpathe
[root@rac1 ~]# su - grid
[grid@rac1:/home/grid]$sqlplus / as sysasm
Create the asmtest disk group:
SQL> create diskgroup asmtest external redundancy disk 'AFD:asmtest';
Diskgroup created.
2. Create a test tablespace asmtest and a test table
SQL> create tablespace asmtest datafile '+asmtest' size 100m;
SQL> create table afd (id number) tablespace asmtest;
SQL> insert into afd values (1);
1 row created.
SQL> commit;
Commit complete.
SQL> select * from afd;
ID
----------
1
3. dd-format the device
Format the entire disk group "asmtest" —> /dev/mapper/mpathe:
[root@rac1 ~]# dd if=/dev/zero of=/dev/mapper/mpathe
dd: writing to `/dev/mapper/mpathe': No space left on device
2097153+0 records in
2097152+0 records out
1073741824 bytes (1.1 GB) copied, 70.6 s, 15.2 MB/s
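Before moving on, it is worth checking that the ASM metadata survived the dd run; a sketch, mirroring the strings check a later section of this article performs on /dev/sdb3:
# If AFD filtered the writes, the ASM disk header string should still be present:
[root@rac1 ~]# strings -a /dev/mapper/mpathe | grep ORCLDISK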
4. Repeat the tablespace creation
SQL> create tablespace asmtest2 datafile '+asmtest' size 100m;
SQL> create table afd2 (id number) tablespace asmtest2;
SQL> insert into afd2 values (2);
1 row created.
SQL> commit;
Commit complete.
SQL> select * from afd2;
ID
----------
2
SQL> ALTER system checkpoint;
System altered.
No errors even after a checkpoint.
5. Disable the AFD filter
[root@rac1 ~]# asmcmd afd_filter -d
Note: -d disables filtering, -e enables it.
[root@rac1 ~]# asmcmd afd_lsdsk
--------------------------------------------------------------------------------
Label Filtering Path
================================================================================
ASMCRS DISABLED /dev/mapper/mpathc
ASMDATA DISABLED /dev/mapper/mpathd
ASMTEST DISABLED /dev/mapper/mpathe
6. dd-format again
[root@rac1 ~]# dd if=/dev/zero of=/dev/mapper/mpathe
dd: writing to `/dev/mapper/mpathe': No space left on device
2097153+0 records in
2097152+0 records out
1073741824 bytes (1.1 GB) copied, 89.2804 s, 12.0 MB/s
7. Test from the database
SQL> insert into afd values (3);
1 row created.
SQL> commit;
Commit complete.
SQL> alter system checkpoint;
alter system checkpoint
*
ERROR at line 1:
ORA-03113: end-of-file on communication channel
Process ID: 21718
Session ID: 35 Serial number: 63358
The database has crashed.
After the crash, the instance can no longer be started:
[oracle@rac1:/home/oracle]$sqlplus / as sysdba
SQL*Plus: Release 12.2.0.1.0 Production on Mon Feb 18 14:55:41 2019
Copyright © 1982, 2016, Oracle. All rights reserved.
Connected to an idle instance.
SQL> startup
ORA-39510: CRS error performing start on instance 'orcl1' on 'orcl'
CRS-2672: Attempting to start 'ora.AFDTEST.dg' on 'rac1'
CRS-2672: Attempting to start 'ora.AFDTEST.dg' on 'rac2'
CRS-2674: Start of 'ora.AFDTEST.dg' on 'rac1' failed
CRS-2674: Start of 'ora.AFDTEST.dg' on 'rac2' failed
CRS-0215: Could not start resource 'ora.orcl.db 1 1'.
clsr_start_resource:260 status:215
clsrapi_start_db:start_asmdbs status:215
Part 4. Further investigation
After AFD is configured, the labeled disks appear under /dev:
[root@rac1 ~]# ll /dev/oracleafd/disks/
total 8
-rwxrwx--- 1 grid oinstall 19 Feb 18 13:40 ASMCRS
-rwxrwx--- 1 grid oinstall 19 Feb 18 13:40 ASMDATA
Reading these device files shows that each maps to a multipath device:
[root@rac1 ~]# cat /dev/oracleafd/disks/ASMCRS
/dev/mapper/mpathc
[root@rac1 ~]# cat /dev/oracleafd/disks/ASMDATA
/dev/mapper/mpathd
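A small loop dumps every label-to-device mapping in one go; a sketch:
for d in /dev/oracleafd/disks/*; do
  printf '%s -> %s\n' "${d##*/}" "$(cat "$d")"
done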
An AFD rules file now exists under the udev rules directory:
[grid@rac1:/home/grid]$ll -hrt /etc/udev/rules.d/
total 12K
-rw-r--r-- 1 root root 297 Nov 3 17:04 99-oracle-asmdevices.rules.old
-rw-r--r-- 1 root root 224 Feb 15 17:13 53-afd.rules
-rw-r--r-- 1 root root 957 Feb 18 08:55 70-persistent-ipoib.rules
[grid@rac1:/home/grid]$cat /etc/udev/rules.d/53-afd.rules
#
# AFD devices
KERNEL=="oracleafd/.*", OWNER="grid", GROUP="asmadmin", MODE="0775"
KERNEL=="oracleafd/*", OWNER="grid", GROUP="asmadmin", MODE="0775"
KERNEL=="oracleafd/disks/*", OWNER="grid", GROUP="asmadmin", MODE="0664"
Part 5. Summary
Oracle's AFD feature redirects "dangerous" I/O operations. The exact mechanism is not documented, but in essence it still relies on the system kernel and udev rules to load the disk devices.
ASMFD was introduced in 12.1. It removes the need to configure ASM disks by hand and, more importantly, protects the disks from being overwritten by non-Oracle operations such as dd and echo.
For a more detailed introduction, see the official documentation:
-- 12.1 new feature: ASMFD
https://docs.oracle.com/database/121/OSTMG/GUID-2F5E344F-AFC2-4768-8C00-6F3C56302123.htm#OSTMG95729
In 12.2, ASMFD was enhanced: a single command configures a disk and makes it usable by ASM. Very convenient.
[root@rac1 software]# mkdir -p /u01/app/12.2.0/grid
[root@rac1 software]# chown grid:oinstall /u01/app/12.2.0/grid
Unzip as the grid user:
[grid@rac1 software]$ cd /u01/app/12.2.0/grid
[grid@rac1 grid]$ unzip -q /software/grid_home_image.zip
[root@rac1 grid]# su - root
[root@rac1 grid]# export ORACLE_HOME=/u01/app/12.2.0/grid
[root@rac1 grid]# export ORACLE_BASE=/tmp
[root@rac1 grid]# echo $ORACLE_BASE
/tmp
[root@rac1 grid]# echo $ORACLE_HOME
/u01/app/12.2.0/grid
Initialize the disks as follows; there is no need to bind them or grant permissions via udev or ASMLIB:
[root@rac1 grid]# /u01/app/12.2.0/grid/bin/asmcmd afd_label DATA1 /dev/sdb --init
[root@rac1 grid]#
[root@rac1 grid]# /u01/app/12.2.0/grid/bin/asmcmd afd_lslbl /dev/sdb
--------------------------------------------------------------------------------
Label                     Duplicate  Path
================================================================================
DATA1                                /dev/sdb
The disk /dev/sdb can then be used directly during the GRID installation below.
Note: the name /dev/sdb may change across reboots, which is why Oracle binds the disk with a label.
unset ORACLE_BASE
./gridSetup.sh
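To see which physical device a label currently resolves to (for example after a reboot has shuffled the /dev/sd* names), the by-label symlinks can be listed; a sketch based on the verification used later in this article:
ls -l /dev/disk/by-label/
# the symlink target may differ between boots; the label does not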
For more information, see the official documentation:
https://docs.oracle.com/en/database/oracle/oracle-database/12.2/tdprc/installing-oracle-grid.html#GUID-72D1994F-E838-415A-8E7D-30EA780D74A8
ASMFD 12.1.0.2 Supported Platforms

| Vendor | Version | Update/Kernel | Architecture | Bug or PSU |
|---|---|---|---|---|
| Oracle Linux – RedHat Compatible Kernel | 5 | Update 3 and later, 2.6.18 kernel series (RedHat Compatible Kernel) | X86_64 | 12.1.0.2.3 DB PSU Patch |
| Oracle Linux – Unbreakable Enterprise Kernel | 5 | Update 3 and later, 2.6.39-100 and later UEK 2.6.39 kernels | X86_64 | 12.1.0.2.3 DB PSU Patch (See Note 1) |
| Oracle Linux – RedHat Compatible Kernel | 6 | All Updates, 2.6.32-71 and later 2.6.32 RedHat Compatible kernels | X86_64 | 12.1.0.2.3 DB PSU Patch |
| Oracle Linux – Unbreakable Enterprise Kernel | 6 | All Updates, 2.6.39-100 and later UEK 2.6.39 kernels | X86_64 | 12.1.0.2.3 DB PSU Patch (See Note 1) |
| Oracle Linux – Unbreakable Enterprise Kernel | 6 | All Updates, 3.8.13-13 and later UEK 3.8.13 kernels | X86_64 | 12.1.0.2.3 DB PSU Patch (See Note 1) |
| Oracle Linux – Unbreakable Enterprise Kernel | 6 | All Updates, 4.1.12 and later UEK 4.1.12 kernels | X86_64 | 12.1.0.2.160719 (Base Bug 22810422) |
| Oracle Linux – RedHat Compatible Kernel | 7 | Update 0, RedHat Compatible Kernel 3.10.0-123 | X86_64 | 12.1.0.2.3 (Base Bug 18321597) |
| Oracle Linux – RedHat Compatible Kernel | 7 | Update 1 and later, 3.10.0-229 and later RedHat Compatible kernels | X86_64 | 12.1.0.2.160119 (Base Bug 21233961) |
| Oracle Linux – Unbreakable Enterprise Kernel | 7 | All Updates, 3.8.13-35 and later UEK 3.8.13 kernels | X86_64 | 12.1.0.2.3 (Base Bug 18321597) (See Note 1) |
| Oracle Linux – Unbreakable Enterprise Kernel | 7 | All Updates, 4.1.12 and later UEK 4.1.12 kernels | X86_64 | 12.1.0.2.160719 (Base Bug 22810422) |
| RedHat Linux | 5 | Update 3 and later, 2.6.18 kernel series | X86_64 | 12.1.0.2.3 DB PSU Patch |
| RedHat Linux | 6 | All Updates, 2.6.32-279 and later RedHat kernels | X86_64 | 12.1.0.2.3 DB PSU Patch |
| RedHat Linux | 7 | Update 0, 3.10.0-123 kernel | X86_64 | 12.1.0.2.3 (Base Bug 18321597) |
| RedHat Linux | 7 | Update 1 and later, 3.10.0-229 and later RedHat kernels | X86_64 | 12.1.0.2.160119 (Base Bug 21233961) |
| RedHat Linux | 7 | Update 4 | X86_64 | 12.1.0.2.170718 ACFS PSU + Patch 26247490 |
| Novell SLES | 11 | SP2 | X86_64 | Base |
| Novell SLES | 11 | SP3 | X86_64 | Base |
| Novell SLES | 11 | SP4 | X86_64 | 12.1.0.2.160419 (Base Bug 21231953) |
| Novell SLES | 12 | SP1 | X86_64 | 12.1.0.2.170117 ACFS PSU (Base Bug 23321114) |
Date: 2016-05-17 22:32:08 | Author: ohsdba
Starting with 12.1.0.2, Oracle introduced ASMFD (ASM Filter Driver), which is available only on Linux. After installing Grid Infrastructure, you can decide whether to configure it. If you previously used ASMLIB (roughly, labeling devices to identify disks) or udev (dynamic device management), then after migrating to ASMFD you need to uninstall ASMLIB or disable the udev rules. The filter driver screens out invalid requests, preventing accidental overwrites by non-Oracle I/O and thereby keeping the system safe and stable.
The official documentation describes ASMFD as follows:
This feature is available on Linux systems starting with Oracle Database 12c Release 1 (12.1.0.2).
Oracle ASM Filter Driver (Oracle ASMFD) is a kernel module that resides in the I/O path of the Oracle ASM disks. Oracle ASM uses the filter driver to validate write I/O requests to Oracle ASM disks. After installation of Oracle Grid Infrastructure, you can optionally configure Oracle ASMFD for your system. If ASMLIB is configured for an existing Oracle ASM installation, then you must explicitly migrate the existing ASMLIB configuration to Oracle ASMFD.
The Oracle ASMFD simplifies the configuration and management of disk devices by eliminating the need to rebind disk devices used with Oracle ASM each time the system is restarted.
The Oracle ASM Filter Driver rejects any I/O requests that are invalid. This action eliminates accidental overwrites of Oracle ASM disks that would cause corruption in the disks and files within the disk group. For example, the Oracle ASM Filter Driver filters out all non-Oracle I/Os which could cause accidental overwrites.
In short: ASMFD rejects all invalid I/O requests, which prevents accidental overwrites from corrupting ASM disks or the files in a disk group — for example, it filters out all non-Oracle I/O that could cause such overwrites.
This article uses an Oracle Restart environment (tested on 12.1.0.2.0) to show how to install and configure ASMFD: first install GI (software only), then configure ASMFD, label the ASMFD disks, create the ASM instance, create an ASM disk group (on ASMFD), and create an spfile and migrate it into the ASM disk group. Finally, tests are run with the filter enabled and with it disabled.
For details, see: http://docs.oracle.com/database/121/OSTMG/GUID-06B3337C-07A3-4B3F-B6CD-04F2916C11F6.htm
Configure Oracle Restart (SIHA)
[root@db1 ~]# /orgrid/oracle/product/121/root.sh
Performing root user operation.
The following environment variables are set as:
ORACLE_OWNER= orgrid
ORACLE_HOME= /orgrid/oracle/product/121
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
To configure Grid Infrastructure for a Stand-Alone Server run the following command as the root user:
/orgrid/oracle/product/121/perl/bin/perl -I/orgrid/oracle/product/121/perl/lib -I/orgrid/oracle/product/121/crs/install /orgrid/oracle/product/121/crs/install/roothas.pl
(Run this script to configure HAS; it does not have to be run in a GUI.)
To configure Grid Infrastructure for a Cluster execute the following command as orgrid user:
/orgrid/oracle/product/121/crs/config/config.sh
GI was installed with the software-only option. To configure RAC instead, you must run the config.sh script (which must run in GUI mode); it prompts for cluster and SCAN information. Try it if you are interested.
This command launches the Grid Infrastructure Configuration Wizard. The wizard also supports silent operation, and the parameters can be passed through the response file that is available in the installation media.
[root@db1 ~]#
[root@db1 ~]# /orgrid/oracle/product/121/perl/bin/perl -I/orgrid/oracle/product/121/perl/lib -I/orgrid/oracle/product/121/crs/install /orgrid/oracle/product/121/crs/install/roothas.pl
Using configuration parameter file: /orgrid/oracle/product/121/crs/install/crsconfig_params
LOCAL ADD MODE
Creating OCR keys for user 'orgrid', privgrp 'asmadmin'..
Operation successful.
LOCAL ONLY MODE
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4664: Node db1 successfully pinned.
2016/05/16 22:10:54 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.conf'
db1 2016/05/16 22:11:11 /orgrid/oracle/product/121/cdata/db1/backup_20160516_221111.olr 0
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'db1'
CRS-2673: Attempting to stop 'ora.evmd' on 'db1'
CRS-2677: Stop of 'ora.evmd' on 'db1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'db1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
2016/05/16 22:12:19 CLSRSC-327: Successfully configured Oracle Restart for a standalone server
Bind the disks with udev, load the rules, and verify:
[root@db1 ~]# cd /etc/udev/rules.d/
[root@db1 rules.d]# cat 99-oracle-asmdevices.rules
KERNEL=="sdb1",NAME="asmdisk1",OWNER="orgrid",GROUP="asmadmin",MODE="0660" KERNEL=="sdb2",NAME="asmdisk2",OWNER="orgrid",GROUP="asmadmin",MODE="0660" KERNEL=="sdb3",NAME="asmdisk3",OWNER="orgrid",GROUP="asmadmin",MODE="0660" KERNEL=="sdb4",NAME="asmdisk4",OWNER="orgrid",GROUP="asmadmin",MODE="0660" KERNEL=="sdc1",NAME="asmdisk5",OWNER="orgrid",GROUP="asmadmin",MODE="0660" KERNEL=="sdc2",NAME="asmdisk6",OWNER="orgrid",GROUP="asmadmin",MODE="0660" KERNEL=="sdc3",NAME="asmdisk7",OWNER="orgrid",GROUP="asmadmin",MODE="0660" KERNEL=="sdc4",NAME="asmdisk8",OWNER="orgrid",GROUP="asmadmin",MODE="0660" [root@db1 rules.d]# [root@db1 rules.d]# udevadm control --reload-rules [root@db1 rules.d]# udevadm trigger [root@db1 rules.d]# ls -l /dev/asmdisk* brw-rw---- 1 orgrid asmadmin 8, 17 May 16 23:03 /dev/asmdisk1 brw-rw---- 1 orgrid asmadmin 8, 18 May 16 23:03 /dev/asmdisk2 brw-rw---- 1 orgrid asmadmin 8, 19 May 16 23:03 /dev/asmdisk3 brw-rw---- 1 orgrid asmadmin 8, 20 May 16 23:03 /dev/asmdisk4 brw-rw---- 1 orgrid asmadmin 8, 33 May 16 23:03 /dev/asmdisk5 brw-rw---- 1 orgrid asmadmin 8, 34 May 16 23:03 /dev/asmdisk6 brw-rw---- 1 orgrid asmadmin 8, 35 May 16 23:03 /dev/asmdisk7 brw-rw---- 1 orgrid asmadmin 8, 36 May 16 23:03 /dev/asmdisk8 [root@db1 rules.d]#
For more about udev, see http://www.ibm.com/developerworks/cn/linux/l-cn-udev/
Check whether ASMFD is installed:
[root@db1 ~]# export ORACLE_HOME=/orgrid/oracle/product/121
[root@db1 ~]# export ORACLE_SID=+ASM
[root@db1 ~]# export PATH=$ORACLE_HOME/bin:$PATH
[root@db1 ~]# $ORACLE_HOME/bin/asmcmd afd_state
Connected to an idle instance.
ASMCMD-9526: The AFD state is 'NOT INSTALLED' and filtering is 'DEFAULT' on host 'db1'
[root@db1 ~]#
Install ASMFD (the CRS (RAC) / HAS (SIHA) stack must be stopped first):
[root@db1 ~]# $ORACLE_HOME/bin/asmcmd afd_configure
Connected to an idle instance.
ASMCMD-9523: command cannot be used when Oracle Clusterware stack is up
[root@db1 ~]# crsctl stop has
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'db1'
CRS-2673: Attempting to stop 'ora.evmd' on 'db1'
CRS-2677: Stop of 'ora.evmd' on 'db1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'db1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
[root@db1 ~]# $ORACLE_HOME/bin/asmcmd afd_configure
Connected to an idle instance.
AFD-627: AFD distribution files found.
AFD-636: Installing requested AFD software.
AFD-637: Loading installed AFD drivers.
AFD-9321: Creating udev for AFD.
AFD-9323: Creating module dependencies - this may take some time.
AFD-9154: Loading 'oracleafd.ko' driver.
AFD-649: Verifying AFD devices.
AFD-9156: Detecting control device '/dev/oracleafd/admin'.
AFD-638: AFD installation correctness verified.
Modifying resource dependencies - this may take some time.
[root@db1 ~]#
Check the ASMFD details:
[orgrid@db1 ~]$ $ORACLE_HOME/bin/asmcmd afd_state
Connected to an idle instance.
ASMCMD-9526: The AFD state is ' LOADED ' and filtering is 'DEFAULT' on host 'db1'
[root@db1 ~]# /orgrid/oracle/product/121/bin/crsctl start has
CRS-4123: Oracle High Availability Services has been started.
[root@db1 ~]#
[orgrid@db1 bin]$ pwd
/orgrid/oracle/product/121/bin
[orgrid@db1 bin]$ ls -ltr afd*
-rwxr-x--- 1 orgrid asmadmin     1000 May 23  2014 afdroot
-rwxr-xr-x 1 orgrid asmadmin 72836515 Jul  1  2014 afdboot
-rwxr-xr-x 1 orgrid asmadmin   184403 Jul  1  2014 afdtool.bin
-rwxr-x--- 1 orgrid asmadmin      766 May 16 23:29 afdload
-rwxr-x--- 1 orgrid asmadmin     1254 May 16 23:29 afddriverstate
-rwxr-xr-x 1 orgrid asmadmin     2829 May 16 23:29 afdtool
[root@db1 ~]# crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ons
               OFFLINE OFFLINE      db1                      STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cssd
      1        OFFLINE OFFLINE                               STABLE
ora.diskmon
      1        OFFLINE OFFLINE                               STABLE
ora.driver.afd
      1        ONLINE  ONLINE       db1                      STABLE
ora.evmd
      1        ONLINE  ONLINE       db1                      STABLE
--------------------------------------------------------------------------------
[root@db1 ~]#
After a successful installation you can see the afd* utilities and the new resource ora.driver.afd.
Label the disks with afd_label:
[orgrid@db1 bin]$ $ORACLE_HOME/bin/asmcmd afd_label ASMDISK1 /dev/asmdisk1
Connected to an idle instance.
[orgrid@db1 bin]$ $ORACLE_HOME/bin/asmcmd afd_lsdsk
Connected to an idle instance.
--------------------------------------------------------------------------------
Label                     Filtering   Path
================================================================================
ASMDISK1                    ENABLED   /dev/asmdisk1
[orgrid@db1 bin]$ $ORACLE_HOME/bin/asmcmd afd_label ASMDISK2 /dev/asmdisk2
Connected to an idle instance.
[orgrid@db1 bin]$ $ORACLE_HOME/bin/asmcmd afd_lsdsk
Connected to an idle instance.
--------------------------------------------------------------------------------
Label                     Filtering   Path
================================================================================
ASMDISK1                    ENABLED   /dev/asmdisk1
ASMDISK2                    ENABLED   /dev/asmdisk2
[orgrid@db1 bin]$ asmcmd
Connected to an idle instance.
ASMCMD> afd_label ASMDISK3 /dev/asmdisk3
ASMCMD> afd_lsdsk
--------------------------------------------------------------------------------
Label                     Filtering   Path
================================================================================
ASMDISK1                    ENABLED   /dev/asmdisk1
ASMDISK2                    ENABLED   /dev/asmdisk2
ASMDISK3                    ENABLED   /dev/asmdisk3
ASMCMD>
[root@db1 rules.d]# ls -ltr | tail -5
-rw-r--r--. 1 root root 789 Mar 10 05:18 70-persistent-cd.rules
-rw-r--r--. 1 root root 341 Mar 10 05:25 99-vmware-scsi-udev.rules
-rw-r--r--  1 root root 190 May 16 22:11 55-usm.rules
-rw-r--r--  1 root root 600 May 16 23:03 99-oracle-asmdevices.rules
-rw-r--r--  1 root root 230 May 17 00:31 53-afd.rules
[root@db1 rules.d]#
[orgrid@db1 rules.d]$ pwd
/etc/udev/rules.d
[root@db1 rules.d]# cat 53-afd.rules
#
# AFD devices
KERNEL=="oracleafd/.*", OWNER="orgrid", GROUP="asmadmin", MODE="0770"
KERNEL=="oracleafd/*", OWNER="orgrid", GROUP="asmadmin", MODE="0770"
KERNEL=="oracleafd/disks/*", OWNER="orgrid", GROUP="asmadmin", MODE="0660"
[root@db1 rules.d]# cat 55-usm.rules
#
# ADVM devices
KERNEL=="asm/*", GROUP="asmadmin", MODE="0770"
KERNEL=="asm/.*", GROUP="asmadmin", MODE="0770"
#
# ACFS devices
KERNEL=="ofsctl", GROUP="asmadmin", MODE="0664"
[root@db1 rules.d]#
After the installation, several new files appear under the udev rules directory; ASMFD in fact still uses udev.
Create the ASM instance (asmca can also do this):
[orgrid@db1 dbs]$ srvctl add asm
[orgrid@db1 dbs]$ ps -ef | grep pmon
orgrid 42414 36911  0 14:26 pts/2 00:00:00 grep pmon
[orgrid@db1 dbs]$ crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.asm
               OFFLINE OFFLINE      db1                      STABLE
ora.ons
               OFFLINE OFFLINE      db1                      STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cssd
      1        OFFLINE OFFLINE                               STABLE
ora.diskmon
      1        OFFLINE OFFLINE                               STABLE
ora.driver.afd
      1        ONLINE  ONLINE       db1                      STABLE
ora.evmd
      1        ONLINE  ONLINE       db1                      STABLE
--------------------------------------------------------------------------------
[orgrid@db1 dbs]$
[orgrid@db1 ~]$ cat $ORACLE_HOME/dbs/init*.ora
*.asm_power_limit=1
*.diagnostic_dest='/orgrid/grid_base'
*.instance_type='asm'
*.large_pool_size=12M
*.memory_target=1024M
*.remote_login_passwordfile='EXCLUSIVE'
[orgrid@db1 ~]$
[orgrid@db1 ~]$ ps -ef | grep pmon
orgrid 42724 42694  0 14:30 pts/2 00:00:00 grep pmon
[orgrid@db1 ~]$ srvctl start asm
[orgrid@db1 ~]$ ps -ef | grep pmon
orgrid 42807     1  0 14:30 ?     00:00:00 asm_pmon_+ASM
orgrid 42888 42694  0 14:31 pts/2 00:00:00 grep pmon
[orgrid@db1 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.asm
               ONLINE  ONLINE       db1                      Started,STABLE
ora.ons
               OFFLINE OFFLINE      db1                      STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cssd
      1        ONLINE  ONLINE       db1                      STABLE
ora.diskmon
      1        OFFLINE OFFLINE                               STABLE
ora.driver.afd
      1        ONLINE  ONLINE       db1                      STABLE
ora.evmd
      1        ONLINE  ONLINE       db1                      STABLE
--------------------------------------------------------------------------------
[orgrid@db1 ~]$
Create a disk group with asmca:
[orgrid@db1 ~]$ asmca -silent -sysAsmPassword oracle -asmsnmpPassword oracle -createDiskGroup -diskString 'AFD:*' -diskGroupName DATA_AFD -disk 'AFD:ASMDISK1' -disk 'AFD:ASMDISK2' -redundancy Normal -au_size 4 -compatible.asm 12.1 -compatible.rdbms 12.1
Disk Group DATA_AFD created successfully.
[orgrid@db1 ~]$
Create an spfile and migrate it into the disk group:
[orgrid@db1 ~]$ asmcmd spget
[orgrid@db1 ~]$
[orgrid@db1 ~]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Tue May 17 15:09:26 2016
Copyright (c) 1982, 2014, Oracle. All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Automatic Storage Management option

SQL> show parameter spf
NAME       TYPE        VALUE
---------- ----------- ------------------------------
spfile     string

SQL> create spfile='+DATA_AFD' from pfile;
File created.

SQL> show parameter spf
NAME       TYPE        VALUE
---------- ----------- ------------------------------
spfile     string

SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Automatic Storage Management option
[orgrid@db1 ~]$ asmcmd spget
+DATA_AFD/ASM/ASMPARAMETERFILE/registry.253.912092995
[orgrid@db1 ~]$
Back up and remove the udev rules file 99-oracle-asmdevices.rules
Rename 99-oracle-asmdevices.rules to 99-oracle-asmdevices.rules.bak (see the sketch after the listing below). If the file is not moved, the disks previously labeled by ASMFD will not be visible after the next reboot.
[orgrid@db1 ~]$ asmcmd afd_lsdsk
There are no labelled devices.
[root@db1 ~]# ls -l /dev/oracleafd/disks
total 0
[root@db1 ~]# ls -l /dev/oracleafd/
admin disks/
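A minimal sketch of the backup-and-remove step, assuming the rule file name used here; reloading udev makes the change effective without waiting for a reboot:
mv /etc/udev/rules.d/99-oracle-asmdevices.rules /root/99-oracle-asmdevices.rules.bak
udevadm control --reload-rules
udevadm trigger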
Set the disk discovery strings:
ASMCMD> afd_dsget
AFD discovery string:
ASMCMD> afd_dsset '/dev/sd*'        -- set the ASMFD discovery string to the original physical disks
ASMCMD> afd_dsget
AFD discovery string: '/dev/sd*'
ASMCMD>
[orgrid@db1 ~]$ asmcmd afd_dsget
AFD discovery string: '/dev/sd*'
[orgrid@db1 ~]$ asmcmd dsget        -- the ASM disk group discovery string is set to AFD:*
parameter:AFD:*
profile:AFD:*
[orgrid@db1 ~]$
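The two discovery strings are easy to confuse; as a sketch, the pairing is:
# AFD discovery string (stored in the OLR): which block devices the AFD driver may scan and label
asmcmd afd_dsset '/dev/sd*'
asmcmd afd_dsget
# ASM discovery string (the asm_diskstring parameter): which paths the ASM instance scans for disks
asmcmd dsset 'AFD:*'
asmcmd dsget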
Reboot the server and verify:
[root@db1 ~]# ls -l /dev/oracleafd/disks/
total 12
-rw-r--r-- 1 root root 10 May 17 00:15 ASMDISK1
-rw-r--r-- 1 root root 10 May 17 00:15 ASMDISK2
-rw-r--r-- 1 root root 10 May 17 00:15 ASMDISK3
[root@db1 ~]#
ASMCMD> lsdsk --candidate
Path
AFD:ASMDISK2
AFD:ASMDISK3
ASMCMD> afd_lsdsk
--------------------------------------------------------------------------------
Label                     Filtering   Path
================================================================================
ASMDISK1                   DISABLED   /dev/sdb1
ASMDISK2                   DISABLED   /dev/sdb2
ASMDISK3                   DISABLED   /dev/sdb3
ASMCMD>
[orgrid@db1 ~]$ ls -l /dev/disk/by-label/
total 0
lrwxrwxrwx 1 root root 10 May 17 00:30 ASMDISK1 -> ../../sdb1
lrwxrwxrwx 1 root root 10 May 17 00:30 ASMDISK2 -> ../../sdb2
lrwxrwxrwx 1 root root 10 May 17 00:31 ASMDISK3 -> ../../sdb3
[orgrid@db1 ~]$
After the reboot, the disks used by ASMFD are owned by root.
Enable the Filter function:
ASMCMD> help afd_filter
        afd_filter
        Sets the AFD filtering mode on a given disk path.
        If the command is executed without specifying a disk path then
        filtering is set at node level.
ASMCMD>
ASMCMD> afd_filter -e /dev/sdb2
ASMCMD> afd_lsdsk
--------------------------------------------------------------------------------
Label                     Filtering   Path
================================================================================
ASMDISK1                   DISABLED   /dev/sdb1
ASMDISK2                   DISABLED   /dev/sdb2
ASMDISK3                   DISABLED   /dev/sdb3
ASMCMD> afd_filter -e
ASMCMD> afd_lsdsk
--------------------------------------------------------------------------------
Label                     Filtering   Path
================================================================================
ASMDISK1                    ENABLED   /dev/sdb1
ASMDISK2                    ENABLED   /dev/sdb2
ASMDISK3                    ENABLED   /dev/sdb3
ASMCMD>
Create a new disk group DATA_PGOLD:
SQL> create diskgroup DATA_PGOLD external redundancy disk 'AFD:ASMDISK3';
Diskgroup created.
SQL>
[orgrid@db1 ~]$ kfed read AFD:ASMDISK3
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                            1 ; 0x002: KFBTYP_DISKHEAD
kfbh.datfmt:                          1 ; 0x003: 0x01
kfbh.block.blk:                       0 ; 0x004: blk=0
kfbh.block.obj:              2147483648 ; 0x008: disk=0
kfbh.check:                   771071217 ; 0x00c: 0x2df59cf1
kfbh.fcn.base:                        0 ; 0x010: 0x00000000
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
kfdhdb.driver.provstr: ORCLDISKASMDISK3 ; 0x000: length=16
kfdhdb.driver.reserved[0]:   1145918273 ; 0x008: 0x444d5341
kfdhdb.driver.reserved[1]:    843797321 ; 0x00c: 0x324b5349
kfdhdb.driver.reserved[2]:            0 ; 0x010: 0x00000000
kfdhdb.driver.reserved[3]:            0 ; 0x014: 0x00000000
kfdhdb.driver.reserved[4]:            0 ; 0x018: 0x00000000
kfdhdb.driver.reserved[5]:            0 ; 0x01c: 0x00000000
kfdhdb.compat:                168820736 ; 0x020: 0x0a100000
kfdhdb.dsknum:                        0 ; 0x024: 0x0000
kfdhdb.grptyp:                        1 ; 0x026: KFDGTP_EXTERNAL
kfdhdb.hdrsts:                        3 ; 0x027: KFDHDR_MEMBER
kfdhdb.dskname:                ASMDISK3 ; 0x028: length=8
kfdhdb.grpname:              DATA_PGOLD ; 0x048: length=10
kfdhdb.fgname:                 ASMDISK2 ; 0x068: length=8
kfdhdb.capname:                         ; 0x088: length=0
kfdhdb.crestmp.hi:             33035808 ; 0x0a8: HOUR=0x0 DAYS=0x11 MNTH=0x5 YEAR=0x7e0
kfdhdb.crestmp.lo:           3231790080 ; 0x0ac: USEC=0x0 MSEC=0x4d SECS=0xa MINS=0x30
kfdhdb.mntstmp.hi:             33035808 ; 0x0b0: HOUR=0x0 DAYS=0x11 MNTH=0x5 YEAR=0x7e0
kfdhdb.mntstmp.lo:           3239631872 ; 0x0b4: USEC=0x0 MSEC=0x237 SECS=0x11 MINS=0x30
kfdhdb.secsize:                     512 ; 0x0b8: 0x0200
kfdhdb.blksize:                    4096 ; 0x0ba: 0x1000
kfdhdb.ausize:                  1048576 ; 0x0bc: 0x00100000
kfdhdb.mfact:                    113792 ; 0x0c0: 0x0001bc80
kfdhdb.dsksize:                    2055 ; 0x0c4: 0x00000807
kfdhdb.pmcnt:                         2 ; 0x0c8: 0x00000002
kfdhdb.fstlocn:                       1 ; 0x0cc: 0x00000001
kfdhdb.altlocn:                       2 ; 0x0d0: 0x00000002
kfdhdb.f1b1locn:                      2 ; 0x0d4: 0x00000002
kfdhdb.redomirrors[0]:                0 ; 0x0d8: 0x0000
kfdhdb.redomirrors[1]:                0 ; 0x0da: 0x0000
kfdhdb.redomirrors[2]:                0 ; 0x0dc: 0x0000
kfdhdb.redomirrors[3]:                0 ; 0x0de: 0x0000
kfdhdb.dbcompat:              168820736 ; 0x0e0: 0x0a100000
kfdhdb.grpstmp.hi:             33035808 ; 0x0e4: HOUR=0x0 DAYS=0x11 MNTH=0x5 YEAR=0x7e0
kfdhdb.grpstmp.lo:           3231717376 ; 0x0e8: USEC=0x0 MSEC=0x6 SECS=0xa MINS=0x30
With the Filter enabled, test with dd:
[root@db1 log]# dd if=/dev/zero of=/dev/sdb3
dd: writing to `/dev/sdb3': No space left on device
4209031+0 records in
4209030+0 records out
2155023360 bytes (2.2 GB) copied, 235.599 s, 9.1 MB/s
[root@db1 log]#
[root@db1 ~]# strings -a /dev/sdb3
ORCLDISKASMDISK3
ASMDISK3
DATA_PGOLD
ASMDISK3
0
... (part of the output omitted)
ORCLDISKASMDISK3
ASMDISK3
DATA_PGOLD
ASMDISK3
[root@db1 ~]#
Inspecting /dev/sdb3 with strings shows its contents were not wiped.
Dismounting and mounting the disk group still works normally:
[orgrid@db1 ~]$ asmcmd
ASMCMD> lsdg
State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512   4096  1048576      4110     4058                0            4058              0             N  DATA_ADF/
MOUNTED  EXTERN  N         512   4096  1048576      2055     1993                0            1993              0             N  DATA_PGOLD/
ASMCMD> umount data_pgold
ASMCMD> lsdg
State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512   4096  1048576      4110     4058                0            4058              0             N  DATA_ADF/
ASMCMD> mount data_pgold
ASMCMD> lsdg
State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512   4096  1048576      4110     4058                0            4058              0             N  DATA_ADF/
MOUNTED  EXTERN  N         512   4096  1048576      2055     1993                0            1993              0             N  DATA_PGOLD/
ASMCMD>
The corresponding errors in /var/log/messages:
[root@db1 log]# tail -3 messages
May 17 01:10:34 db1 kernel: F 4297082.224/160516171034 flush-8:16[9173] afd_mkrequest_fn: write IO on ASM managed device (major=8/minor=18) not supported i=2 start=8418038 seccnt=2 pstart=4209030 pend=8418060
May 17 01:10:34 db1 kernel: F 4297082.224/160516171034 flush-8:16[9173] afd_mkrequest_fn: write IO on ASM managed device (major=8/minor=18) not supported i=2 start=8418040 seccnt=2 pstart=4209030 pend=8418060
May 17 01:10:34 db1 kernel: F 4297082.224/160516171034 flush-8:16[9173] afd_mkrequest_fn: write IO on ASM managed device (major=8/minor=18) not supported i=2 s
[root@db1 log]#
Test with the Filter disabled:
ASMCMD> afd_filter -d
ASMCMD> afd_lsdsk
--------------------------------------------------------------------------------
Label                     Filtering   Path
================================================================================
ASMDISK1                   DISABLED   /dev/sdb1
ASMDISK2                   DISABLED   /dev/sdb2
ASMDISK3                   DISABLED   /dev/sdb3
ASMCMD> exit
[orgrid@db1 ~]$
Back up the first 1024 bytes of the disk and then zero them out; an ordinary user has no permission to read the device:
[orgrid@db1 ~]$ dd if=/dev/sdb3 of=block1 bs=1024 count=1
dd: opening `/dev/sdb3': Permission denied
[orgrid@db1 ~]$ exit
logout
[root@db1 ~]# dd if=/dev/sdb3 of=block1 bs=1024 count=1
1+0 records in
1+0 records out
1024 bytes (1.0 kB) copied, 0.00236493 s, 433 kB/s
[root@db1 ~]# dd if=/dev/zero of=/dev/sdb3 bs=1024 count=1
1+0 records in
1+0 records out
1024 bytes (1.0 kB) copied, 0.000458225 s, 2.2 MB/s
[root@db1 ~]# su - orgrid
Dismount and mount the disk group DATA_PGOLD:
[orgrid@db1 ~]$ asmcmd
ASMCMD> lsdg
State Type Rebal Sector Block AU Total_MB Free_MB Req_mir_free_MB Usable_file_MB Offline_disks Voting_files Name
MOUNTED EXTERN N 512 4096 1048576 4110 4058 0 4058 0 N DATA_ADF/
MOUNTED EXTERN N 512 4096 1048576 2055 1993 0 1993 0 N DATA_PGOLD/
ASMCMD> umount data_pgold
ASMCMD> mount data_pgold
ORA-15032: not all alterations performed
ORA-15017: diskgroup "DATA_PGOLD" cannot be mounted
ORA-15040: diskgroup is incomplete (DBD ERROR: OCIStmtExecute)
ASMCMD>
As you can see, once the Filter is disabled the protection is lost.
Repair with kfed:
[root@db1 ~]# /orgrid/oracle/product/121/bin/kfed repair /dev/sdb3
[root@db1 ~]# su - orgrid
[orgrid@db1 ~]$ asmcmd
ASMCMD> lsdg
State Type Rebal Sector Block AU Total_MB Free_MB Req_mir_free_MB Usable_file_MB Offline_disks Voting_files Name
MOUNTED EXTERN N 512 4096 1048576 4110 4058 0 4058 0 N DATA_ADF/
ASMCMD> mount data_pgold
ASMCMD> lsdg
State Type Rebal Sector Block AU Total_MB Free_MB Req_mir_free_MB Usable_file_MB Offline_disks Voting_files Name
MOUNTED EXTERN N 512 4096 1048576 4110 4058 0 4058 0 N DATA_ADF/
MOUNTED EXTERN N 512 4096 1048576 2055 1993 0 1993 0 N DATA_PGOLD/
ASMCMD>
Repair using the block backed up earlier with dd (the backup was taken from /dev/sdb3 above, so it is restored to that same device):
[root@db1 ~]# dd if=block1 of=/dev/sdb3 bs=1024 count=1 conv=notrunc
1+0 records in
1+0 records out
1024 bytes (1.0 kB) copied, 0.000467297 s, 2.2 MB/s
[root@db1 ~]# su - orgrid
[orgrid@db1 ~]$ asmcmd
ASMCMD> mount data_pgold
ASMCMD> lsdg
State Type Rebal Sector Block AU Total_MB Free_MB Req_mir_free_MB Usable_file_MB Offline_disks Voting_files Name
MOUNTED EXTERN N 512 4096 1048576 4110 4058 0 4058 0 N DATA_ADF/
MOUNTED EXTERN N 512 4096 1048576 2055 1993 0 1993 0 N DATA_PGOLD/
ASMCMD> exit
[orgrid@db1 ~]$
Adding an AFD disk: an ordinary user has no permission to do this; root is required.
ASMCMD> help afd_label
afd_label
To set the given label to the specified disk
ASMCMD>
[orgrid@db1 ~]$ $ORACLE_HOME/bin/asmcmd afd_label ASMDISK4 /dev/sdb4
ORA-15227: could not perform label set/clear operation
ORA-15031: disk specification '/dev/sdb4' matches no disks (DBD ERROR: OCIStmtExecute)
ASMCMD-9513: ASM disk label set operation failed.
[root@db1 ~]# /orgrid/oracle/product/121/bin/asmcmd afd_label ASMDISK4 /dev/sdb4
Connected to an idle instance.
[root@db1 ~]#
Frequently asked questions
Q: Running afd_configure fails with ASMCMD-9524: AFD configuration failed 'ERROR: OHASD start failed'.
A: If you hit this error during installation, apply p19035573_121020_Generic.zip; the patch is essentially a single asmcmdsys.pm file.
Q: When is afd_label --migrate needed?
A: When migrating an existing disk group to ASMFD, the --migrate flag is required (see the sketch below); otherwise it is not needed.
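As a sketch, the two forms look like this (the label and device names are illustrative only):
# brand-new disk, not yet provisioned for ASM:
asmcmd afd_label DATA1 /dev/sdx
# disk already provisioned for ASM, being moved from ASMLIB/udev to ASMFD:
asmcmd afd_label DATA1 /dev/sdx --migrate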
Reference
Configure ASMFD
http://docs.oracle.com/database/121/OSTMG/GUID-2F5E344F-AFC2-4768-8C00-6F3C56302123.htm#OSTMG95729
http://docs.oracle.com/database/121/OSTMG/GUID-BB2B3A64-4B83-4A6D-816C-6472FAF9B27A.htm#OSTMG95909
Configure in Restart
http://docs.oracle.com/database/121/OSTMG/GUID-06B3337C-07A3-4B3F-B6CD-04F2916C11F6.htm
http://www.ibm.com/developerworks/cn/linux/l-cn-udev/
https://wiki.archlinux.org/index.php/Udev#Setting_static_device_names
ASMFD 12.2.0.1 Supported Platforms

| Vendor | Version | Update/Kernel | Architecture | Bug or PSU |
|---|---|---|---|---|
| Oracle Linux – RedHat Compatible Kernel | 6 | All Updates, 2.6.32-71 and later 2.6.32 RedHat Compatible kernels | X86_64 | Base |
| Oracle Linux – Unbreakable Enterprise Kernel | 6 | All Updates, 2.6.39-100 and later UEK 2.6.39 kernels | X86_64 | Base |
| Oracle Linux – Unbreakable Enterprise Kernel | 6 | All Updates, 3.8.13-13 and later UEK 3.8.13 kernels | X86_64 | Base |
| Oracle Linux – Unbreakable Enterprise Kernel | 6 | All Updates, 4.1 and later UEK 4.1 kernels | X86_64 | Base |
| Oracle Linux – RedHat Compatible Kernel | 7 | GA release, 3.10.0-123 through 3.10.0-513 | X86_64 | Base |
| Oracle Linux – RedHat Compatible Kernel | 7 | Update 3, 3.10.0-514 and later | X86_64 | Base + Patch 25078431 |
| Oracle Linux – RedHat Compatible Kernel | 7 | Update 4, 3.10.0-663 and later RedHat Compatible Kernels | X86_64 | 12.2.0.1.180116 (Base Bug 26247490) |
| Oracle Linux – Unbreakable Enterprise Kernel | 7 | All Updates, 3.8.13-35 and later UEK 3.8.13 kernels | X86_64 | Base |
| Oracle Linux – Unbreakable Enterprise Kernel | 7 | All Updates, 4.1 and later UEK 4.1 kernels | X86_64 | Base |
| RedHat Linux | 6 | All Updates, 2.6.32-279 and later RedHat kernels | X86_64 | Base |
| RedHat Linux | 7 | GA release, 3.10.0-123 through 3.10.0-513 | X86_64 | Base |
| RedHat Linux | 7 | Update 4, 3.10.0-663 and later RedHat Compatible Kernels | X86_64 | 12.2.0.1.180116 (Base Bug 26247490) |
| Novell SLES | 12 | GA, SP1 | X86_64 | Base |
| Solaris | 10 | Update 10 or later | X86_64 and SPARC64 | Base |
| Solaris | 11 | Update 10 and later | X86_64 and SPARC64 | Base |
Enabling AFD (Oracle ASMFD) on RHEL/CentOS 7.4 and later
On RedHat or CentOS 7.4 and above, the AFD configuration fails with "AFD is not 'supported'". According to MOS, kernels on 7.4 and above require a "kmod" upgrade.
1. Load AFD
[root@rac1 ~]# asmcmd afd_configure
ASMCMD-9520: AFD is not 'supported'     ----- the output only says it is unsupported, with no further detail
2. Check which kernel versions AFD supports
[root@rac1 ~]# afdroot install
AFD-620: AFD is not supported on this operating system version: '3.10.0-693.el7.x86_64'
[root@rac1 ~]# acfsdriverstate -orahome $ORACLE_HOME supported
ACFS-9459: ADVM/ACFS is not supported on this OS version: '3.10.0-693.el7.x86_64'
ACFS-9201: Not Supported
[root@rac1 ~]# uname -a
Linux rac1 3.10.0-693.el7.x86_64 #1 SMP Thu Jul 6 19:56:57 EDT 2017 x86_64 x86_64 x86_64 GNU/Linux
---- the current kernel version is indeed not supported
[root@rac1 ~]# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.4 (Maipo)
3. Check the kmod version
[root@rac2 ~]# rpm -qa|grep kmod
kmod-libs-20-15.el7.x86_64
kmod-20-15.el7.x86_64     ---- version 20-15
4. Upgrade kmod
[root@rac1 yum.repos.d]# yum install kmod
Loaded plugins: langpacks, product-id, search-disabled-repos, subscription-manager
This system is not registered with an entitlement server. You can use subscription-manager to register.
Local | 3.6 kB 00:00:00
(1/2): Local/group_gz | 166 kB 00:00:00
(2/2): Local/primary_db | 3.1 MB 00:00:00
Resolving Dependencies
--> Running transaction check
---> Package kmod.x86_64 0:20-15.el7 will be updated
---> Package kmod.x86_64 0:20-21.el7 will be an update
--> Finished Dependency Resolution
Dependencies Resolved
=======================================================================================================================
Package Arch Version Repository Size
=======================================================================================================================
Updating:
kmod x86_64 20-21.el7 Local 121 k
Transaction Summary
=======================================================================================================================
Upgrade 1 Package
Total download size: 121 k
Is this ok [y/d/N]: y
Downloading packages:
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Warning: RPMDB altered outside of yum.
Updating : kmod-20-21.el7.x86_64 1/2
Cleanup : kmod-20-15.el7.x86_64 2/2
Verifying : kmod-20-21.el7.x86_64 1/2
Verifying : kmod-20-15.el7.x86_64 2/2
Updated:
kmod.x86_64 0:20-21.el7
Complete!
5. Check kmod again
[grid@rac1:/home/grid]$rpm -qa|grep kmod
kmod-libs-20-15.el7.x86_64
kmod-20-21.el7.x86_64     ---> upgraded to version 20-21
6. Check the AFD driver state
[root@rac1 yum.repos.d]# acfsdriverstate -orahome $ORACLE_HOME supported
ACFS-9200: Supported      ---- after upgrading kmod, the AFD driver is supported
7. Reinstall AFD
Load and configure AFD:
[root@rac1 yum.repos.d]# asmcmd afd_configure
AFD loading output:
AFD-627: AFD distribution files found.
AFD-634: Removing previous AFD installation.
AFD-635: Previous AFD components successfully removed.
AFD-636: Installing requested AFD software.
AFD-637: Loading installed AFD drivers.
AFD-9321: Creating udev for AFD.
AFD-9323: Creating module dependencies - this may take some time.
AFD-9154: Loading 'oracleafd.ko' driver.
AFD-649: Verifying AFD devices.
AFD-9156: Detecting control device '/dev/oracleafd/admin'.
AFD-638: AFD installation correctness verified.
Modifying resource dependencies - this may take some time.
Query the AFD state:
[root@rac1 yum.repos.d]# asmcmd afd_state
ASMCMD-9526: The AFD state is 'LOADED' and filtering is 'ENABLED' on host 'rac1'
No errors; the configuration succeeded.
MOS Doc ID 2303388.1:
ACFS and AFD report "Not Supported" after installing appropriate Oracle Grid Infrastructure Patches on RedHat (Doc ID 2303388.1)
About kmod
https://www.linux.org/docs/man8/kmod.html
The link above is the Linux man page for kmod. In short, kmod is the tool that manages Linux kernel modules; users rarely call it directly — other system commands invoke it.
Looking in /sbin, many module commands are symlinks redirected to kmod:
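Once AFD is configured, the kernel side can be inspected with the standard module tools; a sketch (oracleafd is the driver that afd_configure loads, as the AFD-9154 message above shows):
lsmod | grep oracleafd
modinfo oracleafd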
[grid@rac1:/sbin]$pwd
/sbin
[grid@rac1:/sbin]$ls -alt | grep kmod
lrwxrwxrwx 1 root root 11 Feb 15 17:11 insmod -> ../bin/kmod
lrwxrwxrwx 1 root root 11 Feb 15 17:11 lsmod -> ../bin/kmod
lrwxrwxrwx 1 root root 11 Feb 15 17:11 modinfo -> ../bin/kmod
lrwxrwxrwx 1 root root 11 Feb 15 17:11 modprobe -> ../bin/kmod
lrwxrwxrwx 1 root root 11 Feb 15 17:11 rmmod -> ../bin/kmod
lrwxrwxrwx 1 root root 11 Feb 15 17:11 depmod -> ../bin/kmod
Author: 張樂奕
Published December 2014
Put simply, ASMFD is a new feature that can replace both ASMLIB and udev configuration, and it adds an I/O Filter capability, which is reflected in its name. ASMFD currently works only on Linux and requires the latest ASM release, 12.1.0.2. Previously, because Linux does not guarantee the discovery order of block devices, /dev/sda was not guaranteed to still be sda after a reboot, so raw device names could not be used directly as ASM disk paths. ASMLIB solved this by giving each device a fixed name via a label, and ASM created its disks against that fixed name. Later, downloading ASMLIB began to require a ULN account, so everyone switched to udev, and I wrote several articles on setting up udev rules in Linux, for example:
How to use udev for Oracle ASM in Oracle Linux 6
Oracle Datafiles & Block Device & Parted & Udev
Now Oracle has introduced ASMFD, which replaces both ASMLIB and the tedious hand-written udev rules in one stroke — and, most importantly, adds the I/O Filter capability.
The original documentation reads:
The Oracle ASM Filter Driver rejects any I/O requests that are invalid. This action eliminates accidental overwrites of Oracle ASM disks that would cause corruption in the disks and files within the disk group. For example, the Oracle ASM Filter Driver filters out all non-Oracle I/Os which could cause accidental overwrites.
In other words: the feature rejects all invalid I/O requests, chiefly to prevent accidental overwrites of the devices underlying ASM disks. The tests below show that even a full-disk dd wipe as root is filtered.
An existing ASM setup typically uses ASMLIB- or udev-bound devices, so the walkthrough below relabels those devices with new AFD device names.
-- Check the state of the ASMFD module (hereafter AFD): not installed.
[grid@dbserver1 ~]$ asmcmd afd_state
ASMCMD-9526: The AFD state is 'NOT INSTALLED' and filtering is 'DEFAULT' on host 'dbserver1.vbox.com'

-- Get the current ASM disk discovery path; here the disks are bound with udev.
[grid@dbserver1 ~]$ asmcmd dsget
parameter:/dev/asm*
profile:/dev/asm*

-- Set the ASM disk path, adding the new disk string.
-- When setting this parameter, ASM checks the disks already mounted; if they are not on the discovery path, it errors out.
[grid@dbserver1 ~]$ asmcmd dsset AFD:*
ORA-02097: parameter cannot be modified because specified value is invalid
ORA-15014: path '/dev/asm-disk7' is not in the discovery set (DBD ERROR: OCIStmtExecute)

-- So the new path must be appended after the original one, as multiple paths. This runs for a while, depending on the number of ASM disks.
[grid@dbserver1 ~]$ asmcmd dsset '/dev/asm*','AFD:*'
[grid@dbserver1 ~]$ asmcmd dsget
parameter:/dev/asm*, AFD:*
profile:/dev/asm*,AFD:*

-- Check the nodes in the GI environment.
[grid@dbserver1 ~]$ olsnodes -a
dbserver1 Hub
dbserver2 Hub

-- The following commands must be run on every Hub node, optionally in a rolling fashion. Some need root and some need grid; note the # and $ prompts for the different users.

-- First stop CRS.
[root@dbserver1 ~]# crsctl stop crs

-- Run the AFD configure step; it unpacks the software, installs it, and loads the driver, which takes some time.
[root@dbserver1 ~]# asmcmd afd_configure
Connected to an idle instance.
AFD-627: AFD distribution files found.
AFD-636: Installing requested AFD software.
AFD-637: Loading installed AFD drivers.
AFD-9321: Creating udev for AFD.
AFD-9323: Creating module dependencies - this may take some time.
AFD-9154: Loading 'oracleafd.ko' driver.
AFD-649: Verifying AFD devices.
AFD-9156: Detecting control device '/dev/oracleafd/admin'.
AFD-638: AFD installation correctness verified.
Modifying resource dependencies - this may take some time.

-- Afterwards, check the AFD state again: now loaded.
[root@dbserver1 ~]# asmcmd afd_state
Connected to an idle instance.
ASMCMD-9526: The AFD state is 'LOADED' and filtering is 'DEFAULT' on host 'dbserver1.vbox.com'

-- Next, set AFD's own disk discovery path, much like the old ASMLIB workflow.
-- CRS must be started first, or the errors below occur. Note that this setting lives in each node's OLR, so it must be set on every node.
[root@dbserver1 ~]# asmcmd afd_dsget
Connected to an idle instance.
ASMCMD-9511: failed to obtain required AFD disk string from Oracle Local Repository
[root@dbserver1 ~]#
[root@dbserver1 ~]# asmcmd afd_dsset '/dev/sd*'
Connected to an idle instance.
ASMCMD-9512: failed to update AFD disk string in Oracle Local Repository.

-- Start CRS.
[root@dbserver1 ~]# crsctl start crs
CRS-4123: Oracle High Availability Services has been started.
-- The ASM alert log shows the disks are still mounted under the original paths, but also that libafd loaded successfully.
2014-11-20 17:01:04.545000 +08:00
NOTE: Loaded library: /opt/oracle/extapi/64/asm/orcl/1/libafd12.so
ORACLE_BASE from environment = /u03/app/grid
SQL> ALTER DISKGROUP ALL MOUNT /* asm agent call crs *//* {0:9:3} */
NOTE: Diskgroup used for Voting files is: CRSDG
Diskgroup with spfile:CRSDG
NOTE: Diskgroup used for OCR is:CRSDG
NOTE: Diskgroups listed in ASM_DISKGROUP are DATADG
NOTE: cache registered group CRSDG 1/0xB8E8EA0B
NOTE: cache began mount (first) of group CRSDG 1/0xB8E8EA0B
NOTE: cache registered group DATADG 2/0xB8F8EA0C
NOTE: cache began mount (first) of group DATADG 2/0xB8F8EA0C
NOTE: Assigning number (1,2) to disk (/dev/asm-disk3)
NOTE: Assigning number (1,1) to disk (/dev/asm-disk2)
NOTE: Assigning number (1,0) to disk (/dev/asm-disk1)
NOTE: Assigning number (1,5) to disk (/dev/asm-disk10)
NOTE: Assigning number (1,3) to disk (/dev/asm-disk8)
NOTE: Assigning number (1,4) to disk (/dev/asm-disk9)
NOTE: Assigning number (2,3) to disk (/dev/asm-disk7)
NOTE: Assigning number (2,2) to disk (/dev/asm-disk6)
NOTE: Assigning number (2,1) to disk (/dev/asm-disk5)
NOTE: Assigning number (2,5) to disk (/dev/asm-disk12)
NOTE: Assigning number (2,0) to disk (/dev/asm-disk4)
NOTE: Assigning number (2,6) to disk (/dev/asm-disk13)
NOTE: Assigning number (2,4) to disk (/dev/asm-disk11)

-- Set afd_ds to the underlying disk device names of the ASM disks, so udev rules no longer need to be maintained by hand.
[grid@dbserver1 ~]$ asmcmd afd_dsset '/dev/sd*'
[grid@dbserver1 ~]$ asmcmd afd_dsget
AFD discovery string: /dev/sd*

-- I made a mistake during this test: I set the path to "dev/sd*", missing the leading slash, so no disks were discovered here. If this step already finds disks in your test, let me know.
[grid@dbserver1 ~]$ asmcmd afd_lsdsk
There are no labelled devices.

-- A reminder: every command so far must be executed on all nodes of the cluster.

-- Next, move the ASM disk paths from udev to AFD.
-- First check the current paths.
[root@dbserver1 ~]# ocrcheck -config
Oracle Cluster Registry configuration is :
Device/File Name : +CRSDG
[root@dbserver1 ~]# crsctl query css votedisk
## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
 1. ONLINE 4838a0ee7bfa4fbebf8ff9f58642c965 (/dev/asm-disk1) [CRSDG]
 2. ONLINE 72057097a36e4f02bfc7b5e23672e4cc (/dev/asm-disk2) [CRSDG]
 3. ONLINE 7906e2fb24d24faebf9b82bba6564be3 (/dev/asm-disk3) [CRSDG]
Located 3 voting disk(s).
[root@dbserver1 ~]# su - grid
[grid@dbserver1 ~]$ asmcmd lsdsk -G CRSDG
Path
/dev/asm-disk1
/dev/asm-disk10
/dev/asm-disk2
/dev/asm-disk3
/dev/asm-disk8
/dev/asm-disk9

-- Because the OCR disks are being modified, the cluster must be stopped first.
[root@dbserver1 ~]# crsctl stop cluster -all

-- Labeling directly fails, because /dev/asm-disk1 already exists in ASM.
[grid@dbserver1 ~]$ asmcmd afd_label asmdisk01 /dev/asm-disk1
Connected to an idle instance.
ASMCMD-9513: ASM disk label set operation failed.
disk /dev/asm-disk1 is already provisioned for ASM

-- The migrate keyword must be added for it to succeed.
[grid@dbserver1 ~]$ asmcmd afd_label asmdisk01 /dev/asm-disk1 --migrate
Connected to an idle instance.
[grid@dbserver1 ~]$ asmcmd afd_lsdsk
Connected to an idle instance.
--------------------------------------------------------------------------------
Label                     Filtering   Path
================================================================================
ASMDISK01                   ENABLED   /dev/asm-disk1

-- My test ASM has 13 disks in total, so label each in turn.
asmcmd afd_label asmdisk01 /dev/asm-disk1 --migrate
asmcmd afd_label asmdisk02 /dev/asm-disk2 --migrate
asmcmd afd_label asmdisk03 /dev/asm-disk3 --migrate
asmcmd afd_label asmdisk04 /dev/asm-disk4 --migrate
asmcmd afd_label asmdisk05 /dev/asm-disk5 --migrate
asmcmd afd_label asmdisk06 /dev/asm-disk6 --migrate
asmcmd afd_label asmdisk07 /dev/asm-disk7 --migrate
asmcmd afd_label asmdisk08 /dev/asm-disk8 --migrate
asmcmd afd_label asmdisk09 /dev/asm-disk9 --migrate
asmcmd afd_label asmdisk10 /dev/asm-disk10 --migrate
asmcmd afd_label asmdisk11 /dev/asm-disk11 --migrate
asmcmd afd_label asmdisk12 /dev/asm-disk12 --migrate
asmcmd afd_label asmdisk13 /dev/asm-disk13 --migrate

[grid@dbserver1 ~]$ asmcmd afd_lsdsk
Connected to an idle instance.
--------------------------------------------------------------------------------
Label                     Filtering   Path
================================================================================
ASMDISK01                   ENABLED   /dev/asm-disk1
ASMDISK02                   ENABLED   /dev/asm-disk2
ASMDISK03                   ENABLED   /dev/asm-disk3
ASMDISK04                   ENABLED   /dev/asm-disk4
ASMDISK05                   ENABLED   /dev/asm-disk5
ASMDISK06                   ENABLED   /dev/asm-disk6
ASMDISK07                   ENABLED   /dev/asm-disk7
ASMDISK08                   ENABLED   /dev/asm-disk8
ASMDISK09                   ENABLED   /dev/asm-disk9
ASMDISK10                   ENABLED   /dev/asm-disk10
ASMDISK11                   ENABLED   /dev/asm-disk11
ASMDISK12                   ENABLED   /dev/asm-disk12
ASMDISK13                   ENABLED   /dev/asm-disk13

-- On the other node there is no need to label again; a scan is enough, very much like the ASMLIB workflow.
[grid@dbserver2 ~]$ asmcmd afd_scan
Connected to an idle instance.
[grid@dbserver2 ~]$ asmcmd afd_lsdsk
Connected to an idle instance.
--------------------------------------------------------------------------------
Label                     Filtering   Path
================================================================================
ASMDISK12                   ENABLED   /dev/asm-disk12
ASMDISK09                   ENABLED   /dev/asm-disk9
ASMDISK08                   ENABLED   /dev/asm-disk8
ASMDISK11                   ENABLED   /dev/asm-disk11
ASMDISK10                   ENABLED   /dev/asm-disk10
ASMDISK13                   ENABLED   /dev/asm-disk13
ASMDISK01                   ENABLED   /dev/asm-disk1
ASMDISK04                   ENABLED   /dev/asm-disk4
ASMDISK06                   ENABLED   /dev/asm-disk6
ASMDISK07                   ENABLED   /dev/asm-disk7
ASMDISK05                   ENABLED   /dev/asm-disk5
ASMDISK03                   ENABLED   /dev/asm-disk3
ASMDISK02                   ENABLED   /dev/asm-disk2

-- Restart the cluster.
[root@dbserver1 ~]# crsctl start cluster -all

-- The ASM alert log now shows the new names in use. The WARNING means AFD does not yet support Advanced Format disks: ordinary disks use 512-byte sectors, Advanced Format uses 4K sectors.
2014-11-20 17:46:16.695000 +08:00
* allocate domain 1, invalid = TRUE
* instance 2 validates domain 1
NOTE: cache began mount (not first) of group CRSDG 1/0x508D0B98
NOTE: cache registered group DATADG 2/0x509D0B99
* allocate domain 2, invalid = TRUE
* instance 2 validates domain 2
NOTE: cache began mount (not first) of group DATADG 2/0x509D0B99
WARNING: Library 'AFD Library - Generic , version 3 (KABI_V3)' does not support advanced format disks
NOTE: Assigning number (1,0) to disk (AFD:ASMDISK01)
NOTE: Assigning number (1,1) to disk (AFD:ASMDISK02)
NOTE: Assigning number (1,2) to disk (AFD:ASMDISK03)
NOTE: Assigning number (1,3) to disk (AFD:ASMDISK08)
NOTE: Assigning number (1,4) to disk (AFD:ASMDISK09)
NOTE: Assigning number (1,5) to disk (AFD:ASMDISK10)
NOTE: Assigning number (2,0) to disk (AFD:ASMDISK04)
NOTE: Assigning number (2,1) to disk (AFD:ASMDISK05)
NOTE: Assigning number (2,2) to disk (AFD:ASMDISK06)
NOTE: Assigning number (2,3) to disk (AFD:ASMDISK07)
NOTE: Assigning number (2,4) to disk (AFD:ASMDISK11)
NOTE: Assigning number (2,5) to disk (AFD:ASMDISK12)
NOTE: Assigning number (2,6) to disk (AFD:ASMDISK13)

-- Check the disk paths: everything is in the AFD style now.
[grid@dbserver1 ~]$ asmcmd lsdsk
Path
AFD:ASMDISK01
AFD:ASMDISK02
AFD:ASMDISK03
AFD:ASMDISK04
AFD:ASMDISK05
AFD:ASMDISK06
AFD:ASMDISK07
AFD:ASMDISK08
AFD:ASMDISK09
AFD:ASMDISK10
AFD:ASMDISK11
AFD:ASMDISK12
AFD:ASMDISK13

-- But the data dictionary still shows the old disk paths.
SQL> select NAME,LABEL,PATH from V$ASM_DISK;

NAME         LABEL        PATH
------------ ------------ -----------------
                          /dev/asm-disk7
                          /dev/asm-disk6
                          /dev/asm-disk13
                          /dev/asm-disk12
                          /dev/asm-disk11
                          /dev/asm-disk4
                          /dev/asm-disk2
                          /dev/asm-disk9
                          /dev/asm-disk3
                          /dev/asm-disk5
                          /dev/asm-disk10
                          /dev/asm-disk8
                          /dev/asm-disk1
CRSDG_0000   ASMDISK01    AFD:ASMDISK01
CRSDG_0001   ASMDISK02    AFD:ASMDISK02
CRSDG_0002   ASMDISK03    AFD:ASMDISK03
DATADG_0000  ASMDISK04    AFD:ASMDISK04
DATADG_0001  ASMDISK05    AFD:ASMDISK05
DATADG_0002  ASMDISK06    AFD:ASMDISK06
DATADG_0003  ASMDISK07    AFD:ASMDISK07
CRSDG_0003   ASMDISK08    AFD:ASMDISK08
CRSDG_0004   ASMDISK09    AFD:ASMDISK09
CRSDG_0005   ASMDISK10    AFD:ASMDISK10
DATADG_0004  ASMDISK11    AFD:ASMDISK11
DATADG_0005  ASMDISK12    AFD:ASMDISK12
DATADG_0006  ASMDISK13    AFD:ASMDISK13

26 rows selected.
--須要將 ASM 磁盤發現路徑(注意,這跟設置 AFD 磁盤發現路徑不是一個命令)中原先的路徑去除,只保留 AFD 路徑。 [grid@dbserver1 ~]$ asmcmd dsset 'AFD:*' [grid@dbserver1 ~]$ asmcmd dsget parameter:AFD:* profile:AFD:* --再次重啓 ASM,一切正常了。 SQL> select NAME,LABEL,PATH from V$ASM_DISK; NAME LABEL PATH -------------------- ------------------------------- ------------------------------------------------------- CRSDG_0000 ASMDISK01 AFD:ASMDISK01 CRSDG_0001 ASMDISK02 AFD:ASMDISK02 CRSDG_0002 ASMDISK03 AFD:ASMDISK03 DATADG_0000 ASMDISK04 AFD:ASMDISK04 DATADG_0001 ASMDISK05 AFD:ASMDISK05 DATADG_0002 ASMDISK06 AFD:ASMDISK06 DATADG_0003 ASMDISK07 AFD:ASMDISK07 CRSDG_0003 ASMDISK08 AFD:ASMDISK08 CRSDG_0004 ASMDISK09 AFD:ASMDISK09 CRSDG_0005 ASMDISK10 AFD:ASMDISK10 DATADG_0004 ASMDISK11 AFD:ASMDISK11 DATADG_0005 ASMDISK12 AFD:ASMDISK12 DATADG_0006 ASMDISK13 AFD:ASMDISK13 13 rows selected. --收尾工做,將原先的 udev rules 文件移除。固然,這要在全部節點中都運行。之後若是服務器再次重啓,AFD 就會徹底接管了。 [root@dbserver1 ~]# mv /etc/udev/rules.d/99-oracle-asmdevices.rules ~oracle/
As it turns out, AFD itself also uses udev. Awkward.
[grid@dbserver1 ~]$ cat /etc/udev/rules.d/53-afd.rules
#
# AFD devices
KERNEL=="oracleafd/.*", OWNER="grid", GROUP="asmdba", MODE="0770"
KERNEL=="oracleafd/*", OWNER="grid", GROUP="asmdba", MODE="0770"
KERNEL=="oracleafd/disks/*", OWNER="grid", GROUP="asmdba", MODE="0660"
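If this rules file ever has to be edited by hand, udev can pick up the change without a reboot. A generic udev sketch (standard udevadm commands, nothing AFD-specific):

[root@dbserver1 ~]# udevadm control --reload-rules
[root@dbserver1 ~]# udevadm trigger --subsystem-match=block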
Disks that have been labeled can be found in the /dev/oracleafd/disks directory.
[root@dbserver2 disks]# ls -l /dev/oracleafd/disks
total 52
-rw-r--r-- 1 root root 9 Nov 20 18:52 ASMDISK01
-rw-r--r-- 1 root root 9 Nov 20 18:52 ASMDISK02
-rw-r--r-- 1 root root 9 Nov 20 18:52 ASMDISK03
-rw-r--r-- 1 root root 9 Nov 20 18:52 ASMDISK04
-rw-r--r-- 1 root root 9 Nov 20 18:52 ASMDISK05
-rw-r--r-- 1 root root 9 Nov 20 18:52 ASMDISK06
-rw-r--r-- 1 root root 9 Nov 20 18:52 ASMDISK07
-rw-r--r-- 1 root root 9 Nov 20 18:52 ASMDISK08
-rw-r--r-- 1 root root 9 Nov 20 18:52 ASMDISK09
-rw-r--r-- 1 root root 9 Nov 20 18:52 ASMDISK10
-rw-r--r-- 1 root root 9 Nov 20 18:52 ASMDISK11
-rw-r--r-- 1 root root 9 Nov 20 18:52 ASMDISK12
-rw-r--r-- 1 root root 9 Nov 20 18:52 ASMDISK13
There is one big difference here: the owner of all these disk files has become root, and only root has write permission. Many articles take this to be the embodiment of AFD's filter capability, since neither the oracle nor the grid user can now write directly to the ASM disks, which naturally adds a layer of protection. For example, the following command fails outright with a permission error.
[oracle@dbserver1 disks]$ echo "do some evil" > ASMDISK99
-bash: ASMDISK99: Permission denied
But if you think this is all there is to AFD's protection, you are underestimating Oracle; mere file permissions would hardly do justice to the word Filter in the name. Read on for the real story.
The mapping between AFD disks and the underlying devices can also be seen at the operating-system level.
[grid@dbserver1 /]$ ls -l /dev/disk/by-label/
total 0
lrwxrwxrwx 1 root root 9 Nov 20 19:17 ASMDISK01 -> ../../sdc
lrwxrwxrwx 1 root root 9 Nov 20 19:17 ASMDISK02 -> ../../sdd
lrwxrwxrwx 1 root root 9 Nov 20 19:17 ASMDISK03 -> ../../sde
lrwxrwxrwx 1 root root 9 Nov 20 19:17 ASMDISK04 -> ../../sdf
lrwxrwxrwx 1 root root 9 Nov 20 19:17 ASMDISK05 -> ../../sdg
lrwxrwxrwx 1 root root 9 Nov 20 19:17 ASMDISK06 -> ../../sdh
lrwxrwxrwx 1 root root 9 Nov 20 19:17 ASMDISK07 -> ../../sdi
lrwxrwxrwx 1 root root 9 Nov 20 19:17 ASMDISK08 -> ../../sdj
lrwxrwxrwx 1 root root 9 Nov 20 19:17 ASMDISK09 -> ../../sdk
lrwxrwxrwx 1 root root 9 Nov 20 19:17 ASMDISK10 -> ../../sdl
lrwxrwxrwx 1 root root 9 Nov 20 19:17 ASMDISK11 -> ../../sdm
lrwxrwxrwx 1 root root 9 Nov 20 19:17 ASMDISK12 -> ../../sdn
lrwxrwxrwx 1 root root 9 Nov 20 19:17 ASMDISK13 -> ../../sdo
After rebooting the server again, the paths shown by afd_lsdsk have all changed to the underlying devices, but Filtering has become DISABLED. Do not mind that the Label-to-Path mapping here differs from the one above: some results were taken on node 1 and some on node 2. That in itself demonstrates what AFD buys you: no matter in which order each machine discovers its block devices, everything works as long as the AFD label is bound.
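Since the filtering state is kept per node, it is worth repeating the same check on every node after a reboot. A minimal sketch reusing the afd_state command introduced earlier:

[grid@dbserver1 ~]$ asmcmd afd_state    # note the "filtering is '...'" part of the output
[grid@dbserver2 ~]$ asmcmd afd_state    # run on node 2 as well and compare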
ASMCMD> afd_lsdsk
--------------------------------------------------------------------------------
Label                     Filtering   Path
================================================================================
ASMDISK01                  DISABLED   /dev/sdd
ASMDISK02                  DISABLED   /dev/sde
ASMDISK03                  DISABLED   /dev/sdf
ASMDISK04                  DISABLED   /dev/sdg
ASMDISK05                  DISABLED   /dev/sdh
ASMDISK06                  DISABLED   /dev/sdi
ASMDISK07                  DISABLED   /dev/sdj
ASMDISK08                  DISABLED   /dev/sdk
ASMDISK09                  DISABLED   /dev/sdl
ASMDISK10                  DISABLED   /dev/sdm
ASMDISK11                  DISABLED   /dev/sdn
ASMDISK12                  DISABLED   /dev/sdo
ASMDISK13                  DISABLED   /dev/sdp
Right, and this is the real point.
First, look at how to enable or disable the Filter feature. In my tests, enabling or disabling it for a single disk did not take effect; only the global (node-level) setting worked. The help text below shows the per-disk syntax; a sketch of that form follows the listing.
[grid@dbserver1 ~]$ asmcmd help afd_filter
afd_filter
        Sets the AFD filtering mode on a given disk path.
        If the command is executed without specifying a disk path then
        filtering is set at node level.
Synopsis
        afd_filter {-e | -d } [<disk-path>]
Description
        The options for afd_filter are described below
        -e  -  enable AFD filtering mode
        -d  -  disable AFD filtering mode
Examples
        The following example uses afd_filter to enable AFD filtering
        on a given diskpath.
        ASMCMD [+] >afd_filter -e /dev/sdq
See Also
        afd_lsdsk afd_state
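For reference, a minimal sketch of the per-disk form taken from the help above, using the extra test disk /dev/sdo as the target; as noted, in this environment only the global form actually changed the filtering state:

[grid@dbserver1 ~]$ asmcmd afd_filter -e /dev/sdo   # per-disk attempt (did not stick in my tests)
[grid@dbserver1 ~]$ asmcmd afd_lsdsk                # verify whether Filtering changed for that disk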
Enable the Filter feature globally.
[grid@dbserver1 ~]$ asmcmd afd_filter -e
[grid@dbserver1 ~]$ asmcmd afd_lsdsk
Connected to an idle instance.
--------------------------------------------------------------------------------
Label                     Filtering   Path
================================================================================
ASMDISK01                   ENABLED   /dev/sdb
ASMDISK02                   ENABLED   /dev/sdc
ASMDISK03                   ENABLED   /dev/sdd
ASMDISK04                   ENABLED   /dev/sde
ASMDISK05                   ENABLED   /dev/sdf
ASMDISK06                   ENABLED   /dev/sdg
ASMDISK07                   ENABLED   /dev/sdh
ASMDISK08                   ENABLED   /dev/sdi
ASMDISK09                   ENABLED   /dev/sdj
ASMDISK10                   ENABLED   /dev/sdk
ASMDISK11                   ENABLED   /dev/sdl
ASMDISK12                   ENABLED   /dev/sdm
ASMDISK13                   ENABLED   /dev/sdn
Just to be safe and not wreck my own test environment, I added an extra disk for this test.
[root@dbserver1 ~]# asmcmd afd_label asmdisk99 /dev/sdo
Connected to an idle instance.
[root@dbserver1 ~]# asmcmd afd_lsdsk
Connected to an idle instance.
--------------------------------------------------------------------------------
Label                     Filtering   Path
================================================================================
ASMDISK01                   ENABLED   /dev/sdb
ASMDISK02                   ENABLED   /dev/sdc
ASMDISK03                   ENABLED   /dev/sdd
ASMDISK04                   ENABLED   /dev/sde
ASMDISK05                   ENABLED   /dev/sdf
ASMDISK06                   ENABLED   /dev/sdg
ASMDISK07                   ENABLED   /dev/sdh
ASMDISK08                   ENABLED   /dev/sdi
ASMDISK09                   ENABLED   /dev/sdj
ASMDISK10                   ENABLED   /dev/sdk
ASMDISK11                   ENABLED   /dev/sdl
ASMDISK12                   ENABLED   /dev/sdm
ASMDISK13                   ENABLED   /dev/sdn
ASMDISK99                   ENABLED   /dev/sdo
Create a new disk group.
[grid@dbserver1 ~]$ sqlplus / as sysasm

SQL> create diskgroup DGTEST external redundancy disk 'AFD:ASMDISK99';

Diskgroup created.
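As a quick sanity check that the group exists and contains the labeled disk, a minimal sketch using standard asmcmd commands (output layout omitted here):

[grid@dbserver1 ~]$ asmcmd lsdg DGTEST        # state, redundancy and size of the new group
[grid@dbserver1 ~]$ asmcmd lsdsk -G DGTEST    # should list AFD:ASMDISK99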
First read the disk header with kfed and verify that everything is intact.
[grid@dbserver1 ~]$ kfed read AFD:ASMDISK99
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                            1 ; 0x002: KFBTYP_DISKHEAD
kfbh.datfmt:                          1 ; 0x003: 0x01
kfbh.block.blk:                       0 ; 0x004: blk=0
kfbh.block.obj:              2147483648 ; 0x008: disk=0
kfbh.check:                  1854585587 ; 0x00c: 0x6e8abaf3
kfbh.fcn.base:                        0 ; 0x010: 0x00000000
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
kfdhdb.driver.provstr: ORCLDISKASMDISK99 ; 0x000: length=17
kfdhdb.driver.reserved[0]:   1145918273 ; 0x008: 0x444d5341
kfdhdb.driver.reserved[1]:    961237833 ; 0x00c: 0x394b5349
kfdhdb.driver.reserved[2]:           57 ; 0x010: 0x00000039
kfdhdb.driver.reserved[3]:            0 ; 0x014: 0x00000000
kfdhdb.driver.reserved[4]:            0 ; 0x018: 0x00000000
kfdhdb.driver.reserved[5]:            0 ; 0x01c: 0x00000000
kfdhdb.compat:                168820736 ; 0x020: 0x0a100000
kfdhdb.dsknum:                        0 ; 0x024: 0x0000
kfdhdb.grptyp:                        1 ; 0x026: KFDGTP_EXTERNAL
kfdhdb.hdrsts:                        3 ; 0x027: KFDHDR_MEMBER
kfdhdb.dskname:               ASMDISK99 ; 0x028: length=9
kfdhdb.grpname:                  DGTEST ; 0x048: length=6
kfdhdb.fgname:                ASMDISK99 ; 0x068: length=9
Now try to zero out the entire disk directly with dd. The dd command itself reports no I/O error; the "No space left on device" message simply means it wrote all the way to the end of the device.
[root@dbserver1 ~]# dd if=/dev/zero of=/dev/sdo
dd: writing to `/dev/sdo': No space left on device
409601+0 records in
409600+0 records out
209715200 bytes (210 MB) copied, 19.9602 s, 10.5 MB/s
Afterwards, dismount and remount the disk group. If the disk had really been zeroed, the mount would certainly fail; yet it mounts normally.
SQL> alter diskgroup DGTEST dismount;

Diskgroup altered.

SQL> alter diskgroup DGTEST mount;

Diskgroup altered.
Still not convinced? Create a tablespace, insert some data, and force a checkpoint; everything keeps working.
SQL> create tablespace test datafile '+DGTEST' size 100M;

Tablespace created.

SQL> create table t_afd (n number) tablespace test;

Table created.

SQL> insert into t_afd values(1);

1 row created.

SQL> commit;

Commit complete.

SQL> alter system checkpoint;

System altered.

SQL> select count(*) from t_afd;

  COUNT(*)
----------
         1
The eerie part: if at this point you read the contents of /dev/sdo directly at the operating-system level, everything appears to have been zeroed out.
[root@dbserver1 ~]# od -c -N 256 /dev/sdo
0000000  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0
*
0000400
The strings command likewise finds no meaningful text at all.
[root@dbserver1 disks]# strings /dev/sdo
[root@dbserver1 disks]#
But do not be fooled by this illusion into believing the disk was really wiped. While dd was running, /var/log/messages filled with entries explicitly stating that these I/O operations on an ASM-managed device are not supported. This is where the Filter actually earns its name.
afd_mkrequest_fn: write IO on ASM managed device (major=8/minor=224) not supported
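To watch these rejections live while a write is being attempted, ordinary log tools are enough. A minimal sketch (generic tail/grep/dmesg; hedging on the exact message text beyond the line shown above):

[root@dbserver1 ~]# tail -f /var/log/messages | grep afd_mkrequest_fn
[root@dbserver1 ~]# dmesg | grep -i "not supported"    # after the fact, from the kernel ring buffer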
With kfed, the correct information can still be read.
[grid@dbserver1 ~]$ kfed read AFD:ASMDISK99
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                            1 ; 0x002: KFBTYP_DISKHEAD
kfbh.datfmt:                          1 ; 0x003: 0x01
kfbh.block.blk:                       0 ; 0x004: blk=0
kfbh.block.obj:              2147483648 ; 0x008: disk=0
kfbh.check:                  1854585587 ; 0x00c: 0x6e8abaf3
kfbh.fcn.base:                        0 ; 0x010: 0x00000000
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
kfdhdb.driver.provstr: ORCLDISKASMDISK99 ; 0x000: length=17
......
Only after the server is rebooted do all the data come back (restarting just ASM or just the Cluster still shows the zeroed data at the operating-system level). I have not yet confirmed what redirection technique Oracle uses to pull off this magic.
[root@dbserver1 ~]# od -c -N 256 /dev/sdo
0000000 001 202 001 001  \0  \0  \0  \0  \0  \0  \0 200   u 177   D   I
0000020  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0
0000040   O   R   C   L   D   I   S   K   A   S   M   D   I   S   K   9
0000060   9  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0
0000100  \0  \0 020  \n  \0  \0 001 003   A   S   M   D   I   S   K   9
0000120   9  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0
0000140  \0  \0  \0  \0  \0  \0  \0  \0   D   G   T   E   S   T  \0  \0
0000160  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0
0000200  \0  \0  \0  \0  \0  \0  \0  \0   A   S   M   D   I   S   K   9
0000220   9  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0
0000240  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0
*
0000300  \0  \0  \0  \0  \0  \0  \0  \0 022 257 367 001  \0   X  \0 247
0000320 022 257 367 001  \0   h 036 344  \0 002  \0 020  \0  \0 020  \0
0000340 200 274 001  \0 310  \0  \0  \0 002  \0  \0  \0 001  \0  \0  \0
0000360 002  \0  \0  \0 002  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0
0000400
[root@dbserver1 ~]#
[root@dbserver1 ~]# strings /dev/sdo | grep ASM
ORCLDISKASMDISK99
ASMDISK99
ASMDISK99
ORCLDISKASMDISK99
ASMDISK99
ASMDISK99
ASMDISK99
ASMDISK99
ASMPARAMETERFILE
ASMPARAMETERBAKFILE
ASM_STALE
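One simple way to convince yourself of this round trip is to fingerprint the device at each stage. A minimal sketch (generic coreutils, hashing only the first megabyte to keep it quick); run it before the wipe, right after it, and again after the reboot:

[root@dbserver1 ~]# dd if=/dev/sdo bs=1M count=1 2>/dev/null | md5sum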
Finally, disable the Filter and test again.
[root@dbserver1 ~]# asmcmd afd_filter -d
Connected to an idle instance.
[root@dbserver1 ~]# asmcmd afd_lsdsk
Connected to an idle instance.
--------------------------------------------------------------------------------
Label                     Filtering   Path
================================================================================
ASMDISK01                  DISABLED   /dev/sdb
ASMDISK02                  DISABLED   /dev/sdc
ASMDISK03                  DISABLED   /dev/sdd
ASMDISK04                  DISABLED   /dev/sde
ASMDISK05                  DISABLED   /dev/sdf
ASMDISK06                  DISABLED   /dev/sdg
ASMDISK07                  DISABLED   /dev/sdh
ASMDISK08                  DISABLED   /dev/sdi
ASMDISK09                  DISABLED   /dev/sdj
ASMDISK10                  DISABLED   /dev/sdk
ASMDISK11                  DISABLED   /dev/sdl
ASMDISK12                  DISABLED   /dev/sdm
ASMDISK13                  DISABLED   /dev/sdn
ASMDISK99                  DISABLED   /dev/sdo
Zero out the entire disk with the same dd command.
[root@dbserver1 ~]# dd if=/dev/zero of=/dev/sdo
dd: writing to `/dev/sdo': No space left on device
409601+0 records in
409600+0 records out
209715200 bytes (210 MB) copied, 4.46444 s, 47.0 MB/s
Remount the disk group: this time it fails as expected, and the disk group cannot be mounted.
SQL> alter diskgroup DGTEST dismount;

Diskgroup altered.

SQL> alter diskgroup DGTEST mount;
alter diskgroup DGTEST mount
*
ERROR at line 1:
ORA-15032: not all alterations performed
ORA-15017: diskgroup "DGTEST" cannot be mounted
ORA-15040: diskgroup is incomplete
Restarting the database likewise shows that it cannot open normally, because the datafile in the new tablespace no longer exists.
SQL> startup
ORACLE instance started.

Total System Global Area  838860800 bytes
Fixed Size                  2929936 bytes
Variable Size             385878768 bytes
Database Buffers          226492416 bytes
Redo Buffers                5455872 bytes
In-Memory Area            218103808 bytes
Database mounted.
ORA-01157: cannot identify/lock data file 15 - see DBWR trace file
ORA-01110: data file 15: '+DGTEST/CDB12C/DATAFILE/test.256.864163075'