Installing Oracle 11gR2 Database on the AIX Platform

 

1. Prerequisites

1.1 Certified Operating Systems

Certification Information for Oracle Database on IBM AIX on Power systems (Doc ID 1307544.1)

Certification Information for Oracle Database on IBM Linux on System z (Doc ID 1309988.1)

AIX 5L V5.3 TL 09 SP1 ("5300-09-01") or higher, 64-bit kernel (Part Number E10854-01)

AIX 6.1 TL 02 SP1 ("6100-02-01") or higher, 64-bit kernel

AIX 7.1 TL 00 SP1 ("7100-00-01") or higher, 64-bit kernel

AIX 7.2 TL 0 SP 1 ("7200-00-01") or higher, 64-bit kernel (11.2.0.4 only)

-- Check the operating system version

# oslevel -s

-- Check whether the 64-bit kernel is in use

# bootinfo -K

1.2 System Hardware Environment Checks

1) At least 4 GB of physical memory

# lsattr -El sys0 -a realmem

2) Swap space requirements

RAM between 1 GB and 2 GB: swap equal to 1.5 times RAM

RAM between 2 GB and 16 GB: swap equal to RAM

RAM more than 16 GB: 16 GB of swap

# lsps -a

If the paging space is insufficient, it can be extended with smit chps.
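
As a worked example of the sizing rule above, a small ksh sketch (illustrative only) that derives the required swap from the installed RAM and compares it against the configured paging space:

ram_kb=$(lsattr -El sys0 -a realmem | awk '{print $2}')   # realmem is reported in KB
ram_mb=$(( ram_kb / 1024 ))
if [ $ram_mb -le 2048 ]; then
    req_mb=$(( ram_mb * 3 / 2 ))      # 1 GB - 2 GB RAM: 1.5 x RAM
elif [ $ram_mb -le 16384 ]; then
    req_mb=$ram_mb                    # 2 GB - 16 GB RAM: match RAM
else
    req_mb=16384                      # more than 16 GB RAM: 16 GB of swap
fi
echo "RAM: ${ram_mb} MB, required swap: ${req_mb} MB"
lsps -s                               # total paging space currently configured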

3) At least 1 GB of free space in /tmp

If there is not enough, /tmp can be extended with chfs -a size=5G /tmp.

4) At least 80 GB is recommended for the Oracle software directory
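
A quick free-space check for the software filesystem (the /oracle mount point below is simply the one used later in this guide):

# df -g /oracle /tmp     # sizes in GB; /oracle should have roughly 80 GB free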

5) System architecture: 64-bit hardware

# getconf HARDWARE_BITMODE

6) Required operating system filesets and patches

<1> AIX 5.3 required packages:

bos.adt.base

bos.adt.lib

bos.adt.libm

bos.perf.libperfstat 5.3.9.0 or later

bos.perf.perfstat

bos.perf.proctools

rsct.basic.rte (For RAC configurations only)

rsct.compat.clients.rte (For RAC configurations only)

xlC.aix50.rte:10.1.0.0 or later

xlC.rte.10.1.0.0 or later

gpfs.base 3.2.1.8 or later (Only for RAC systems that will use GPFS cluster filesystems)

<2> AIX 6.1 required packages:

bos.adt.base

bos.adt.lib

bos.adt.libm

bos.perf.libperfstat 6.1.2.1 or later

bos.perf.perfstat

bos.perf.proctools

rsct.basic.rte (For RAC configurations only)

rsct.compat.clients.rte (For RAC configurations only)

xlC.aix61.rte:10.1.0.0 or later

xlC.rte.10.1.0.0 or later

gpfs.base 3.2.1.8 or later (Only for RAC systems that will use GPFS cluster filesystems)

<3> AIX 7.1 required packages:

bos.adt.base

bos.adt.lib

bos.adt.libm

bos.perf.libperfstat

bos.perf.perfstat

bos.perf.proctools

xlC.rte.11.1.0.2 or later

gpfs.base 3.3.0.11 or later (Only for RAC systems that will use GPFS cluster filesystems)

<i> Authorized Problem Analysis Reports (APARs) for AIX 5.3:

IZ42940

IZ49516

IZ52331

IY84780

See Note:1379908.1 for other AIX 5.3 patches that may be required

<ii> APARs for AIX 6.1:

IZ41855

IZ51456

IZ52319

IZ97457

IZ89165

IY84780

See Note:1264074.1 and Note:1379908.1 for other AIX 6.1 patches that may be required

<iii> APARs for AIX 7.1:

IZ87216

IZ87564

IZ89165

IZ97035

IY84780 <<< On AIX 5.3, apply APAR IY84780 to fix a known kernel issue with the per-CPU free lists

See Note:1264074.1 and Note:1379908.1 for other AIX 7.1 patches that may be required

-- Use lslpp -l xxx to confirm whether the required filesets are installed

-- AIX 7.1

lslpp -l bos.adt.base bos.adt.lib bos.adt.libm bos.perf.libperfstat bos.perf.perfstat bos.perf.proctools xlC.rte gpfs.base

/usr/sbin/instfix -i -k "IZ87216 IZ87564 IZ89165 IZ97035"

1.3 Create Users and Groups and Grant Privileges

-- Create groups

# mkgroup -'A' id='1000' adms='root' oinstall

# mkgroup -'A' id='1100' adms='root' asmadmin

# mkgroup -'A' id='1200' adms='root' dba

# mkgroup -'A' id='1300' adms='root' asmdba

# mkgroup -'A' id='1301' adms='root' asmoper

-- Create users

# mkuser id='1100' pgrp='oinstall' groups='asmadmin,asmdba,asmoper' home='/home/grid' grid

# mkuser id='1101' pgrp='oinstall' groups='dba,asmdba' home='/home/oracle' oracle

-- Check user capabilities

# lsuser -a capabilities grid

# lsuser -a capabilities oracle

-- Set user capabilities

# chuser capabilities=CAP_NUMA_ATTACH,CAP_BYPASS_RAC_VMM,CAP_PROPAGATE grid

# chuser capabilities=CAP_NUMA_ATTACH,CAP_BYPASS_RAC_VMM,CAP_PROPAGATE oracle

-- Meaning of the capabilities

CAP_BYPASS_RAC_VMM: the process can bypass restrictions on VMM resource usage.

CAP_NUMA_ATTACH: the process can bind to specific resources.

CAP_PROPAGATE: child processes inherit all capabilities.

Note: in 11g, the GI and Oracle software owner accounts must have the CAP_NUMA_ATTACH, CAP_BYPASS_RAC_VMM, and CAP_PROPAGATE capabilities. If they are missing, running the root.sh script fails with an error like the following:

Creating trace directory

User oracle is missing the following capabilities required to run CSSD in realtime:

CAP_NUMA_ATTACH,CAP_BYPASS_RAC_VMM,CAP_PROPAGATE

To add the required capabilities, please run:

/usr/bin/chuser capabilities=CAP_NUMA_ATTACH,CAP_BYPASS_RAC_VMM,CAP_PROPAGATE oracle

CSS cannot be run in realtime mode at /grid/crs/install/crsconfig_lib.pm line 8119.

-- Set passwords for the grid and oracle users

# passwd oracle

# passwd grid

If login then fails with a message like the following:

[compat]: 3004-610 You are required to change your password.

Please choose a new one.

run the following commands:

# pwdadm -f NOCHECK oracle

# pwdadm -f NOCHECK grid

1.4 Configure Environment Variables for the grid and oracle Users

su - grid

vi ~/.profile

export ORACLE_SID=+ASM1

export ORACLE_BASE=/oracle/app/grid

export ORACLE_HOME=/oracle/app/11.2.0/grid

export PATH=$ORACLE_HOME/bin:$ORACLE_HOME/OPatch:$PATH:$HOME/bin

su - oracle

vi ~/.profile

export ORACLE_SID=orcl1

export ORACLE_BASE=/oracle/app/oracle

export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/db_1

export ORA_CRS_HOME=/oracle/app/11.2.0/grid

export LIBPATH=$ORACLE_HOME/lib

export PATH=$ORACLE_HOME/bin:$ORACLE_HOME/OPatch:/bin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/X11R6/bin:${HOME}/dba:$PATH

umask 022

export TNS_ADMIN=$ORA_CRS_HOME/network/admin

Configure the other nodes in the same way (adjusting ORACLE_SID per node, e.g. +ASM2 and orcl2 on node 2).

1.5 Create the Software Installation Directories

mkdir -p /oracle/app/grid

mkdir -p /oracle/app/11.2.0/grid

chown -R grid:oinstall /oracle

mkdir -p /oracle/app/oracle

mkdir -p /oracle/app/oracle/product/11.2.0/db_1

chown oracle:oinstall /oracle/app/oracle

chmod -R 775 /oracle

1.6 Adjust Shell Limits

vi /etc/security/limits

default:

fsize = -1

core = -1

cpu = -1

data = -1

rss = -1

stack_hard = -1

stack = -1

nofiles = 65536
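
The new limits apply only to sessions started after the change; a simple way to verify them (a sketch) is to log in again as grid and as oracle and run:

$ ulimit -a

-- fsize, data, stack, and core should report unlimited, and nofiles 65536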

1.7 Check and Configure Kernel Parameters

-- Check the maximum number of processes allowed per user:

lsattr -El sys0 |grep maxuproc

lsattr -E -l sys0 -a maxuproc

-- Change the maximum number of processes per user to 16384:

smitty chgsys

Maximum number of PROCESSES allowed per user [16384]

chdev -l sys0 -a maxuproc=16384

-- Check ncargs, the maximum size of the argument/environment list in 4 KB blocks (should be at least 128)

# lsattr -E -l sys0 -a ncargs

-- Set ncargs to 256

# chdev -l sys0 -a ncargs=256

-- Check the current values of minpout and maxpout:

lsattr -E -l sys0 -a minpout

lsattr -E -l sys0 -a maxpout

-- Change them:

# smitty chgsys

# chdev -l sys0 -a minpout=8 -a maxpout=12

Oracle testing has shown that starting values of minpout=8 and maxpout=12 are a good baseline for most Oracle customers.

1.8 Configure Asynchronous I/O (AIO)

-- On AIX 6.1 and later, AIO is enabled automatically and needs no configuration. The recommended aio_maxreqs value is 64k (65536).

# ioo -a | more

# ioo -o aio_maxreqs

# lsattr -El aio0 -a maxreqs #<< aix 5.3

1.9 Configure and Tune Virtual Memory Parameters

-- Check the VMM parameters

vmo -aF

-- Change the VMM parameters:

vmo -p -o minperm%=3

vmo -p -o maxclient%=15

vmo -p -o maxperm%=15

vmo -p -o strict_maxclient=1

vmo -p -o strict_maxperm=1 <<< Doc ID 1526555.1 recommends changing this value to 0

vmo -r -o page_steal_method=1 <<< requires a reboot to take effect

vmo -p -o lru_file_repage=0 <<< the default on AIX 7.1 is already 0

Note: lru_file_repage should be set to 0, so that the VMM steals only file buffer cache pages and preserves computational memory (the SGA). The lru_file_repage parameter is only available on AIX 5.2 ML04 or later and AIX 5.3 ML01 or later.

-- Setting vmm_klock_mode=2 ensures that AIX kernel memory is pinned in memory (this is the default on AIX 7.1). On AIX 6.1 this option requires AIX 6.1 TL06 or later.

-- Check the setting:

# vmo -L vmm_klock_mode

-- Set it:

# vmo -r -o vmm_klock_mode=2

-- AIX: Database Performance Gets Slower the Longer the Database Is Running (Doc ID 316533.1) recommends:

strict_maxperm = 0 (default)

strict_maxclient = 1 (default)

lru_file_repage = 0

maxperm% = 90 (IBM recommends not lowering the maxpin% value)

minperm% = 5 (physical RAM < 32 GB)

minperm% = 10 (physical RAM > 32 GB but < 64 GB)

minperm% = 20 (physical RAM > 64 GB)

v_pinshm = 1

maxpin% = ((SGA size / physical memory size) * 100) + 3
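
As a worked example of the maxpin% formula, assuming (purely for illustration) 64 GB of physical RAM and a 24 GB SGA: maxpin% = ((24 / 64) * 100) + 3 = 40.5, rounded up to 41, so the corresponding settings would be roughly:

vmo -p -o v_pinshm=1
vmo -p -o maxpin%=41     # ((24 GB SGA / 64 GB RAM) * 100) + 3, rounded up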

1.10 Configure Network Parameters

-- Check

# lsattr -El sys0 -a pre520tune

pre520tune disable Pre-520 tuning compatibility mode True << compatibility mode is disabled, so adjust the settings with the no commands below

# /usr/sbin/no -a | more

-- Run the following commands to change the settings

/usr/sbin/no -r -o ipqmaxlen=512

/usr/sbin/no -p -o rfc1323=1

/usr/sbin/no -p -o sb_max=4194304

/usr/sbin/no -p -o tcp_recvspace=65536

/usr/sbin/no -p -o tcp_sendspace=65536

/usr/sbin/no -p -o udp_recvspace=655360 <<< 10 times udp_sendspace, but must be less than sb_max

/usr/sbin/no -p -o udp_sendspace=65536 <<< ((DB_BLOCK_SIZE * DB_MULTIBLOCK_READ_COUNT) + 4 KB) but no lower than 65536

/usr/sbin/no -p -o tcp_ephemeral_low=9000

/usr/sbin/no -p -o udp_ephemeral_low=9000

/usr/sbin/no -p -o tcp_ephemeral_high=65500

/usr/sbin/no -p -o udp_ephemeral_high=65500
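
To confirm that the values took effect (and, after a reboot, that the -r settings persisted), a check along these lines can be used:

# /usr/sbin/no -a | egrep "ipqmaxlen|rfc1323|sb_max|tcp_recvspace|tcp_sendspace|udp_recvspace|udp_sendspace|ephemeral"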

Note: for GI 11.2.0.2 installations, failing to set udp_sendspace will cause root.sh to fail.

See MOS: 11.2.0.2 Grid Infrastructure Upgrade/Install on More Than One Node Cluster Fails With "gipchaLowerProcessNode: no valid interfaces found to node" in crsd.log (Doc ID 1280234.1)

If the installer's prerequisite checks fail here, you may have hit bug 13077654 (Doc ID 1373242.1), which can be worked around as follows:

1) vi /etc/rc.net

if [ -f /usr/sbin/no ]; then

/usr/sbin/no -r -o ipqmaxlen=512

/usr/sbin/no -p -o rfc1323=1

/usr/sbin/no -p -o sb_max=4194304

/usr/sbin/no -p -o tcp_recvspace=65536

/usr/sbin/no -p -o tcp_sendspace=65536

/usr/sbin/no -p -o udp_recvspace=655360

/usr/sbin/no -p -o udp_sendspace=65536

/usr/sbin/no -p -o tcp_ephemeral_low=9000

/usr/sbin/no -p -o udp_ephemeral_low=9000

/usr/sbin/no -p -o tcp_ephemeral_high=65500

/usr/sbin/no -p -o udp_ephemeral_high=65500

fi

2) As root, create a symbolic link

ln -s /usr/sbin/no /etc/no

1.11 NTP Configuration

1) The time zone is usually Asia/Shanghai

-- Check the host operating system time zone:

grep TZ /etc/environment

TZ=Asia/Shanghai

-- Check the Oracle clusterware time zone:

grep TZ $GRID_HOME/crs/install/s_crsconfig_$(hostname)_env.txt

TZ=Asia/Shanghai

2) NTP synchronization settings: edit /etc/ntp.conf with content like the following:

#broadcastclient

server 127.127.0.1

driftfile /etc/ntp.drift

tracefile /etc/ntp.trace

slewalways yes

Note on slewalways: its default is no, which means that if you do not set it, NTP may step the clock by up to 1000 seconds at once. According to IBM's documentation, if slewthreshold is not specified its default is 0.128 seconds; to change it, set slewthreshold explicitly (the unit is seconds).

3) Enable the slewing option for network time synchronization

-- In /etc/rc.tcpip, change the line start /usr/sbin/xntpd "$src_running" to:

start /usr/sbin/xntpd "$src_running" "-x"

4) Start the xntpd daemon on the NTP client

# startsrc -s xntpd -a "-x"

5) Query the xntpd status

-- When the system peer is no longer 'insane', the client has synchronized successfully with the server.

# lssrc -ls xntpd

-- While "sys peer" still shows "insane", xntpd has not yet completed synchronization; after a few minutes, "sys peer" will show the NTP server address (127.127.0.1 here), indicating that synchronization is complete.

6) Use ntpdate -d to check the time offset between the client and the time server

#ntpdate -d 127.127.0.1

offset: should be no more than a few seconds

Sign of the offset: a positive value means the NTP server is ahead of the NTP client; a negative value means the NTP server is behind the NTP client.

PS: if the server and client differ by more than 1000 seconds, first set the client's time manually so that the difference is within 1000 seconds before synchronizing.
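
If the offset really is larger than 1000 seconds, one way to handle it (a sketch, using the same server address as the ntpdate -d check above) is to stop xntpd, step the clock once, and restart xntpd with slewing:

# stopsrc -s xntpd
# ntpdate 127.127.0.1
# startsrc -s xntpd -a "-x"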

1.12 Edit the /etc/hosts File

cp /etc/hosts /etc/hosts_$(date +%Y%m%d)

cat >> /etc/hosts <<EOF

127.0.0.1 loopback localhost

::1 loopback localhost

# Public Network - (bond0)

192.168.8.145 orcl1

192.168.8.146 orcl2

# Private Interconnect - (bond1)

192.168.168.145 orcl1-priv

192.168.168.146 orcl2-priv

# Public Virtual IP (VIP) addresses - (bond0:X)

192.168.8.147 orcl1-vip

192.168.8.148 orcl2-vip

# SCAN IP - (bond0:X)

192.168.8.149 orcl-scan

EOF

Note: quoting the official documentation, "The host name of each node must conform to the RFC 952 standard, which permits alphanumeric characters. Host names using underscores ("_") are not allowed." (In 11g, host names must not contain underscores.)
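
A quick sanity check that every name just added resolves correctly on each node (host names as defined above):

for h in orcl1 orcl2 orcl1-priv orcl2-priv orcl1-vip orcl2-vip orcl-scan
do
    host $h
done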

1.13 Configure SSH User Equivalence

# su - grid

$ mkdir ~/.ssh

$ chmod 700 ~/.ssh

$ ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa

$ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa

Node 1:

-- Append the generated RSA and DSA public keys to authorized_keys

$ cat ~/.ssh/*.pub >> ~/.ssh/authorized_keys

-- Append node 2's public keys to node 1's authorized_keys; if there are more nodes, repeat for each

$ ssh node2 "cat ~/.ssh/*.pub" >> ~/.ssh/authorized_keys

-- Copy the final authorized_keys file to all other nodes

$ scp ~/.ssh/authorized_keys node2:~/.ssh/authorized_keys

-- Set the file permissions

grid@rac1:~/.ssh$ chmod 600 ~/.ssh/authorized_keys

-- Repeat the steps above for the oracle user

-- Verify user equivalence (oracle and grid); run on all nodes

su - grid

export SSH='ssh -o ConnectTimeout=3 -o ConnectionAttempts=5 -o PasswordAuthentication=no -o StrictHostKeyChecking=no'

$ ${SSH} rac1 date

$ ${SSH} rac1-priv date

$ ${SSH} rac2 date

$ ${SSH} rac2-priv date

1.14 Shared Storage Configuration

-- Capacity planning

1) OCR/voting: three 2 GB LUNs, mapped to one ASM disk group with normal redundancy

2) Control files: three 2 GB LUNs, mapped to three ASM disk groups

3) Redo: two 64 GB LUNs (RAID 1+0 recommended; avoid RAID 5 where possible), mapped to two ASM disk groups

4) Data files: LUNs of at most 500 GB each, mapped to one ASM disk group; a single disk group should not exceed 10 TB
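
Before mapping the LUNs into disk groups, it can be worth confirming that each one is visible with the expected size on every node; a sketch using the node 1 hdisk numbers from the commands below:

for d in hdisk4 hdisk5 hdisk6 hdisk7 hdisk8 hdisk9 hdisk10 hdisk11
do
    echo "$d: $(bootinfo -s $d) MB"
done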

-- Set disk attributes (RAC: the storage devices must allow concurrent read/write access from all nodes)

Error ORA-27091, ORA-27072 When Mounting Diskgroup (Doc ID 422075.1)

To enable simultaneous access to a disk device from multiple nodes, you must set the appropriate Object Data Manager (ODM) attribute listed in the following table to the value shown, depending on the disk type:

Disk Type                                        Attribute        Value
SSA, FAStT, or non-MPIO-capable disks            reserve_lock     no
ESS, EMC, HDS, CLARiiON, or MPIO-capable disks   reserve_policy   no_reserve

To determine whether the attribute has the correct value, enter a command similar to the following on all cluster nodes for each disk device that you want to use:

# /usr/sbin/lsattr -E -l hdiskn

If the required attribute is not set to the correct value on any node, then enter a command similar to one of the following on that node:

■ SSA and FAStT devices

# /usr/sbin/chdev -l hdiskn -a reserve_lock=no

■ ESS, EMC, HDS, CLARiiON, and MPIO-capable devices

# /usr/sbin/chdev -l hdiskn -a reserve_policy=no_reserve

-- Check the disk attributes

lsattr -El hdisk4 | grep reserve

-- Change the disk attributes

-- node1

chdev -l hdisk4 -a reserve_policy=no_reserve

chdev -l hdisk5 -a reserve_policy=no_reserve

chdev -l hdisk6 -a reserve_policy=no_reserve

chdev -l hdisk7 -a reserve_policy=no_reserve

chdev -l hdisk8 -a reserve_policy=no_reserve

chdev -l hdisk9 -a reserve_policy=no_reserve

chdev -l hdisk10 -a reserve_policy=no_reserve

chdev -l hdisk11 -a reserve_policy=no_reserve

-- node2

chdev -l hdisk2 -a reserve_policy=no_reserve

chdev -l hdisk3 -a reserve_policy=no_reserve

chdev -l hdisk4 -a reserve_policy=no_reserve

chdev -l hdisk5 -a reserve_policy=no_reserve

chdev -l hdisk6 -a reserve_policy=no_reserve

chdev -l hdisk7 -a reserve_policy=no_reserve

chdev -l hdisk8 -a reserve_policy=no_reserve

chdev -l hdisk9 -a reserve_policy=no_reserve
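
To confirm the attribute on every candidate disk in one pass (adjust the hdisk list per node), something like:

for d in hdisk4 hdisk5 hdisk6 hdisk7 hdisk8 hdisk9 hdisk10 hdisk11
do
    echo "== $d =="
    lsattr -El $d -a reserve_policy
done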

-- Disk ownership and permissions

-- Check permissions

ls -l /dev/rhdisk*

-- Change ownership and permissions

-- node1

chown grid:asmadmin /dev/rhdisk4

chown grid:asmadmin /dev/rhdisk5

chown grid:asmadmin /dev/rhdisk6

chown grid:asmadmin /dev/rhdisk7

chown grid:asmadmin /dev/rhdisk8

chown grid:asmadmin /dev/rhdisk9

chown grid:asmadmin /dev/rhdisk10

chown grid:asmadmin /dev/rhdisk11

chmod 775 /dev/rhdisk4

chmod 775 /dev/rhdisk5

chmod 775 /dev/rhdisk6

chmod 775 /dev/rhdisk7

chmod 775 /dev/rhdisk8

chmod 775 /dev/rhdisk9

chmod 775 /dev/rhdisk10

chmod 775 /dev/rhdisk11

-- node2

chown grid:asmadmin /dev/rhdisk2

chown grid:asmadmin /dev/rhdisk3

chown grid:asmadmin /dev/rhdisk4

chown grid:asmadmin /dev/rhdisk5

chown grid:asmadmin /dev/rhdisk6

chown grid:asmadmin /dev/rhdisk7

chown grid:asmadmin /dev/rhdisk8

chown grid:asmadmin /dev/rhdisk9

chmod 775 /dev/rhdisk2

chmod 775 /dev/rhdisk3

chmod 775 /dev/rhdisk4

chmod 775 /dev/rhdisk5

chmod 775 /dev/rhdisk6

chmod 775 /dev/rhdisk7

chmod 775 /dev/rhdisk8

chmod 775 /dev/rhdisk9

-- Create symbolic links to fix the device names (optional)

-- node1

mkdir /sharedisk

chown grid:oinstall /sharedisk

chmod 775 /sharedisk

su - grid

ln -s /dev/rhdisk4 /sharedisk/asm_data1

ln -s /dev/rhdisk5 /sharedisk/asm_data2

ln -s /dev/rhdisk6 /sharedisk/asm_data3

ln -s /dev/rhdisk7 /sharedisk/asm_data4

ln -s /dev/rhdisk8 /sharedisk/asm_data5

ln -s /dev/rhdisk9 /sharedisk/asm_grid1

ln -s /dev/rhdisk10 /sharedisk/asm_grid2

ln -s /dev/rhdisk11 /sharedisk/asm_grid3

-- node2

mkdir /sharedisk

chown grid:oinstall /sharedisk

chmod 775 /sharedisk

su - grid

ln -s /dev/rhdisk2 /sharedisk/asm_data1

ln -s /dev/rhdisk3 /sharedisk/asm_data2

ln -s /dev/rhdisk4 /sharedisk/asm_data3

ln -s /dev/rhdisk5 /sharedisk/asm_data4

ln -s /dev/rhdisk6 /sharedisk/asm_data5

ln -s /dev/rhdisk7 /sharedisk/asm_grid1

ln -s /dev/rhdisk8 /sharedisk/asm_grid2

ln -s /dev/rhdisk9 /sharedisk/asm_grid3
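
As a final sanity check on both nodes, the grid user should be able to read each candidate device through the links just created; a harmless one-off read test (it only reads a few blocks):

su - grid
for d in /sharedisk/asm_*
do
    dd if=$d of=/dev/null bs=4096 count=16 && echo "$d is readable"
done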

2. GUI Installation

2.1 Pre-installation Check

su - grid

export AIXTHREAD_SCOPE=S << only needed on AIX 5L; on AIX 6.1 and later the default is already S

./runcluvfy.sh stage -pre crsinst -n node1,node2 -fixup -verbose

2.2 Install the GI Software via the GUI

Log in as the grid user and change to the directory where the installation media was extracted.

./runInstaller

 

-- The subsequent GUI steps are omitted here.
