DB Name |
DB Id |
Instance |
Inst num |
Release |
RAC |
Host |
ICCI |
1314098396 |
ICCI1 |
1 |
10.2.0.3.0 |
YES |
HPGICCI1 |
|
Snap Id |
Snap Time |
Sessions |
Cursors/Session |
Begin Snap: |
2678 |
25-Dec-08 14:04:50 |
24 |
1.5 |
End Snap: |
2680 |
25-Dec-08 15:23:37 |
26 |
1.5 |
Elapsed: |
|
78.79 (mins) |
|
|
DB Time: |
|
11.05 (mins) |
|
|
DB Time does not include time consumed by Oracle background processes. If DB Time is far smaller than Elapsed time, the database is relatively idle.
DB time = CPU time + wait time (excluding idle waits, excluding background processes). Put simply, DB Time records the time the server spends on database work (non-background) and on non-idle waits: DB Time = CPU time + all non-idle wait event time.
Over the 79 minutes (during which 3 snapshots were collected), the database consumed 11 minutes. The RDA data shows the system has 8 logical CPUs (4 physical CPUs), so each CPU averaged about 1.4 minutes, a CPU utilization of only roughly 2% (1.4/79). The system is under very little load.
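The arithmetic above can be sketched as a small helper. The figures are the ones from this report's header; the function name is just illustrative:

```python
def cpu_utilization(db_time_min, elapsed_min, cpu_count):
    """Approximate host CPU utilization implied by an AWR header.

    DB Time is spread across all CPUs, so per-CPU busy time is
    db_time / cpu_count, and utilization is that divided by elapsed time.
    """
    per_cpu_min = db_time_min / cpu_count
    return per_cpu_min / elapsed_min

# Figures from this report: DB Time 11.05 min, Elapsed 78.79 min, 8 logical CPUs
util = cpu_utilization(11.05, 78.79, 8)
print(f"{util:.1%}")  # prints 1.8% -- the ~2% quoted above
```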
The following two reports illustrate the point:
Report A:
Snap Id Snap Time Sessions Curs/Sess
--------- ------------------- -------- ---------
Begin Snap: 4610 24-Jul-08 22:00:54 68 19.1
End Snap: 4612 24-Jul-08 23:00:25 17 1.7
Elapsed: 59.51 (mins)
DB Time: 466.37 (mins)
Report B:
Snap Id Snap Time Sessions Curs/Sess
--------- ------------------- -------- ---------
Begin Snap: 3098 13-Nov-07 21:00:37 39 13.6
End Snap: 3102 13-Nov-07 22:00:15 40 16.4
Elapsed: 59.63 (mins)
DB Time: 19.49 (mins)
The server runs AIX, with 4 dual-core CPUs, 8 cores in total:
/sbin>bindprocessor -q
The available processors are: 0 1 2 3 4 5 6 7
First, Report A: the snapshot interval is about 60 minutes, so total available CPU time is 60*8 = 480 minutes. DB Time is 466.37 minutes, so:
the CPUs spent 466.37 minutes on Oracle non-idle waits and computation (for example logical reads);
that is, 466.37/480*100% of CPU capacity went to Oracle work, not even counting background processes.
Now Report B: again about 60 minutes in total, so the CPUs spent 19.49/480*100% on Oracle work.
Clearly, the average load on the second server is very low.
So the Elapsed Time and DB Time of an AWR report give a rough picture of database load.
For batch systems, however, the database workload is concentrated within a window of time. If the snapshot period misses that window, or spans so long that it includes large amounts of idle database time, the resulting analysis is meaningless. This shows that choosing the analysis window is critical: pick a window that represents the performance problem.
|
Begin |
End |
|
|
Buffer Cache: |
3,344M |
3,344M |
Std Block Size: |
8K |
Shared Pool Size: |
704M |
704M |
Log Buffer: |
14,352K |
Shows the size of each SGA region (after AMM has resized them), which can be compared against the initialization parameter values.
The shared pool consists mainly of the library cache and the dictionary cache. The library cache stores recently parsed (or compiled) SQL, PL/SQL, and Java classes; the dictionary cache stores recently referenced data dictionary information. A cache miss in the library cache or dictionary cache costs far more than one in the buffer cache, so the shared pool should be sized to keep recently used data cached.
|
Per Second |
Per Transaction |
Redo size: |
918,805.72 |
775,912.72 |
Logical reads: |
3,521.77 |
2,974.06 |
Block changes: |
1,817.95 |
1,535.22 |
Physical reads: |
68.26 |
57.64 |
Physical writes: |
362.59 |
306.20 |
User calls: |
326.69 |
275.88 |
Parses: |
38.66 |
32.65 |
Hard parses: |
0.03 |
0.03 |
Sorts: |
0.61 |
0.51 |
Logons: |
0.01 |
0.01 |
Executes: |
354.34 |
299.23 |
Transactions: |
1.18 |
|
% Blocks changed per Read: |
51.62 |
Recursive Call %: |
51.72 |
Rollback per transaction %: |
85.49 |
Rows per Sort: |
######## |
Shows the database workload profile. It means the most when compared against baseline data: if per-second and per-transaction loads change little, the application is running steadily. A single report only describes the load, and most of these figures have no single "correct" value; however, Logons above 1–2 per second, Hard parses above 100 per second, or total parses above 300 per second point to possible contention.
Redo size: redo generated per second (in bytes); indicates the data-change frequency and how busy the database is.
Logical reads: blocks logically read per second / per transaction. Logical Reads = Consistent Gets + DB Block Gets.
Block changes: blocks modified per second / per transaction.
Physical reads: blocks physically read per second / per transaction.
Physical writes: blocks physically written per second / per transaction.
User calls: user calls per second / per transaction.
Parses: parses per second / per transaction — the sum of fast, soft, and hard parses. More than 300 soft parses per second suggests the "application" is inefficient; consider tuning session_cached_cursors. Here, a fast parse is a hit directly in the PGA (with session_cached_cursors=n set); a soft parse is a hit in the shared pool; a hard parse misses both.
Hard parses: hard parses per second. Too many means poor SQL reuse; above 100 per second probably means bind variables are used poorly, or the shared pool is mis-configured. The parameter cursor_sharing=similar|force (default exact) can help, but similar has known bugs that may yield suboptimal execution plans.
Sorts: sorts per second / per transaction.
Logons: logons per second / per transaction.
Executes: SQL executions per second / per transaction.
Transactions: transactions per second; reflects how busy the database is.
% Blocks changed per Read: the share of logical reads that were for modifying blocks — the percentage of blocks changed per logical read.
Recursive Call %: the percentage of all operations that are recursive calls; this will be high if there is a lot of PL/SQL.
Rollback per transaction %: the per-transaction rollback rate. Rollbacks are resource-hungry; a high rate may mean the database is doing too much wasted work, and excessive rollback can also bring undo block contention. The formula is:
Round(user rollbacks / (user commits + user rollbacks), 4) * 100%.
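As a quick sketch of that formula — the function name and sample counter values are illustrative, not taken from this report's raw statistics:

```python
def rollback_per_txn_pct(user_rollbacks, user_commits):
    """Rollback per transaction %, following the AWR formula above
    (rounded to 2 decimal places for display)."""
    return round(100 * user_rollbacks / (user_commits + user_rollbacks), 2)

# Hypothetical counter values that reproduce the 85.49% seen in this report
print(rollback_per_txn_pct(8549, 1451))  # -> 85.49
```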
Rows per Sort: rows per sort.
Note:
Oracle hard parses and soft parses
To explain soft parses and hard parses, we must first cover how Oracle processes SQL. When you submit a SQL statement, Oracle goes through several steps before executing it and returning results:
1. Syntax check
Checks that the statement is syntactically valid.
2. Semantic check
For example, checks that the objects referenced in the statement exist and that the user has the required privileges.
3. Parse
An internal algorithm parses the SQL, producing a parse tree and an execution plan.
4. Execute and return results
Soft and hard parsing both happen in step 3.
Oracle computes the statement's hash with an internal hashing algorithm, then looks for that hash in the library cache;
if it exists, the statement is compared with the cached one;
if they are "identical", the existing parse tree and execution plan are reused and the optimizer's work is skipped. That is the soft parse path.
Naturally, if either of those two checks fails, the optimizer must build the parse tree and generate an execution plan. That process is called a hard parse.
Building a parse tree and generating an execution plan are expensive operations, so hard parses should be avoided as much as possible in favor of soft parses.
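The library-cache lookup just described can be sketched as a toy hash-keyed plan cache — purely illustrative, not Oracle's actual implementation. Note how resubmitting the exact same text (e.g. with a bind placeholder) gets a soft parse, while a textually different statement forces a hard parse:

```python
import hashlib

library_cache = {}  # stands in for the shared pool's library cache

def parse(sql_text):
    """Toy model: key the 'plan' by a hash of the SQL text.
    A hit reuses the cached plan (soft parse); a miss pays for
    the expensive plan build (hard parse)."""
    key = hashlib.sha1(sql_text.encode()).hexdigest()
    if key in library_cache:
        return "soft parse", library_cache[key]
    plan = f"plan for: {sql_text}"   # stands in for the optimizer's work
    library_cache[key] = plan
    return "hard parse", plan

print(parse("select * from emp where id = :1")[0])  # hard parse
print(parse("select * from emp where id = :1")[0])  # soft parse
print(parse("select * from emp where id = 42")[0])  # different text -> hard parse
```

This is also why bind variables matter: literal values change the text (and hash) of every statement, defeating reuse.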
Buffer Nowait %: |
100.00 |
Redo NoWait %: |
100.00 |
Buffer Hit %: |
98.72 |
In-memory Sort %: |
99.86 |
Library Hit %: |
99.97 |
Soft Parse %: |
99.92 |
Execute to Parse %: |
89.09 |
Latch Hit %: |
99.99 |
Parse CPU to Parse Elapsd %: |
7.99 |
% Non-Parse CPU: |
99.95 |
This section covers Oracle's key memory hit ratios and other measures of instance operating efficiency. Buffer Hit Ratio is also called Cache Hit Ratio; Library Hit Ratio is also called Library Cache Hit Ratio. As with the Load Profile section, there are no universally "correct" values here — suitability depends on the application. In a DSS environment running large parallel queries with direct reads, a 20% Buffer Hit Ratio may be acceptable; that would be completely unacceptable for an OLTP system. In Oracle's experience, an OLTP system should ideally have a Buffer Hit Ratio above 90%.
Buffer Nowait %: the proportion of buffer gets satisfied from memory without waiting. It generally needs to exceed 99%; otherwise contention may exist — confirm it in the wait events later in the report.
Buffer Hit %: the rate at which processes find data blocks already in memory. Watching this value for significant change matters more than the value itself. For a typical OLTP system it should usually be above 95%; below 95%, important parameters need adjusting, and below 90%, db_cache_size probably needs to grow (below 80%, the database should be given more memory). A high hit ratio does not guarantee optimal performance — frequent access to highly non-selective indexes can fake a high ratio (lots of db file sequential read) — but a low ratio generally does hurt performance and needs tuning. A sudden change is often bad news: if the ratio suddenly rises, check the top buffer-gets SQL for the statements and indexes causing heavy logical reads; if it suddenly falls, check the top physical-reads SQL for statements causing heavy physical reads, mainly those not using indexes or whose indexes were dropped.
Redo NoWait %: the proportion of redo entries that obtained log buffer space without waiting. If it is low (a 90% threshold is a reasonable reference), consider enlarging the LOG BUFFER. Since the redo buffer is flushed to the redo log once it reaches 1 MB, a redo buffer set above 1 MB is unlikely to see space-allocation waits; a 2 MB redo buffer is a common setting and not a large value relative to total memory.
Library Hit %: the rate at which Oracle finds an already-parsed SQL or PL/SQL statement in the library cache. When an application issues SQL or calls a stored procedure, Oracle checks the library cache for a parsed version; if one exists, Oracle executes immediately, otherwise it parses the statement and allocates a shared SQL area for it. A low library hit ratio causes excessive parsing, higher CPU use, and lower performance. Below 90%, the shared pool may need to grow. The statement hit rate in the shared area should usually stay above 95%; otherwise consider enlarging the shared pool, using bind variables, or adjusting parameters such as cursor_sharing.
Latch Hit %: a latch is a lock protecting memory structures — think of it as a server process's permission to access an in-memory data structure. Keep Latch Hit above 99%; otherwise there is latch contention (e.g. shared pool latch contention from unshared SQL or a too-small library cache), which bind variables or a larger shared pool can address. Below 99% signals a serious performance problem; when this value goes wrong, use the later wait event and latch analysis sections to find and fix the cause.
Parse CPU to Parse Elapsd %: actual parse CPU time over total parse elapsed time (CPU plus resource waits) — the higher the better. Formula: Parse CPU to Parse Elapsd % = 100 * (parse time cpu / parse time elapsed). A ratio of 100% means zero CPU wait time — no waiting at all during parsing.
% Non-Parse CPU: the share of CPU spent on actual SQL execution rather than parsing; a low value means parsing consumes too much time. Formula: % Non-Parse CPU = round(100 * (1 - PARSE_CPU / TOT_CPU), 2). If TOT_CPU is high relative to PARSE_CPU, the ratio approaches 100%, which is good: the machine spends most of its work executing queries rather than analyzing them.
Execute to Parse %: the ratio of statement executions to parses; high SQL reuse makes it high — the higher the value, the more times each parsed statement is re-executed. Formula: Execute to Parse = 100 * (1 - Parses/Executions). In this report, there are roughly 9 executions per parse. If the system has Parses > Executions, the ratio goes below 0; a negative value usually points to shared pool sizing or statement efficiency problems causing repeated reparsing (or can be a snapshot artifact), and usually indicates a database performance problem.
In-memory Sort %: the proportion of sorts done in memory. If it is too low, many sorts are spilling to the temporary tablespace; consider a larger PGA (10g). Below 95%, raising the initialization parameter PGA_AGGREGATE_TARGET or SORT_AREA_SIZE appropriately can help; note that their scopes differ — SORT_AREA_SIZE is set per session, while PGA_AGGREGATE_TARGET covers all sessions.
Soft Parse %: the percentage of soft parses, softs/(softs+hards), which approximates the SQL hit rate in the shared area. If it is low, tune the application to use bind variables. Below 95%, consider binding; below 80%, SQL is essentially not being reused at all.
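A minimal sketch of the parse-related ratios above, plugging in this report's Load Profile figures (Parses 38.66/s, Hard parses 0.03/s, Executes 354.34/s); function names are illustrative:

```python
def execute_to_parse(parses, executions):
    """Execute to Parse % = 100 * (1 - Parses/Executions)."""
    return 100 * (1 - parses / executions)

def soft_parse_pct(soft_parses, hard_parses):
    """Soft Parse % = softs / (softs + hards) * 100."""
    return 100 * soft_parses / (soft_parses + hard_parses)

# Per-second figures from the Load Profile section of this report
print(round(execute_to_parse(38.66, 354.34), 2))     # -> 89.09, matching the report
print(round(soft_parse_pct(38.66 - 0.03, 0.03), 2))  # -> 99.92, matching the report
```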
|
Begin |
End |
Memory Usage %: |
47.19 |
47.50 |
% SQL with executions>1: |
88.48 |
79.81 |
% Memory for SQL w/exec>1: |
79.99 |
73.52 |
Memory Usage %: for a database that has been running for a while, shared pool memory usage should stabilize between 75% and 90%. If it is lower, the shared pool is wasting memory — an oversized pool brings extra management overhead that can degrade performance under some conditions. If it is above 90%, there is contention in the shared pool and memory is insufficient: components age out of the pool, and re-executed SQL must then be hard parsed. A well-sized system sits between 75% and just under 90%.
% SQL with executions>1: the share of SQL executed more than once. If it is low, the application should use more bind variables to avoid excessive SQL parsing. Interpret this carefully in systems that run in cycles, where different sets of SQL run during different parts of the day: some SQL in the shared pool will simply not have been executed during the observation window because the statements that run it did not run then. Only a system continuously running the same set of SQL statements approaches 100%.
% Memory for SQL w/exec>1: the share of memory consumed by SQL executed more than once — a measure of how much memory frequently-used SQL consumes compared with rarely-used SQL. This number will generally track % SQL with executions>1 closely, unless some queries consume memory irregularly. In steady state you will generally see about 75%–85% of the shared pool in use over time. If the report window is large enough to cover all cycles, the percentage of SQL executed more than once should approach 100%. This statistic is affected by the observation interval and can be expected to grow as the interval lengthens.
Summary: the instance efficiency statistics give us a rough overall impression, but they cannot by themselves establish how the database is performing. Current performance problems are confirmed mainly through the wait events below. Think of the two parts this way: hit statistics help us spot and predict performance problems before they strike, so we can prepare in advance; wait events indicate that the database is already experiencing performance problems that need solving — a matter of repair after the fact.
Event |
Waits |
Time(s) |
Avg Wait(ms) |
% Total Call Time |
Wait Class |
CPU time |
|
515 |
|
77.6 |
|
SQL*Net more data from client |
27,319 |
64 |
2 |
9.7 |
Network |
log file parallel write |
5,497 |
47 |
9 |
7.1 |
System I/O |
db file sequential read |
7,900 |
35 |
4 |
5.3 |
User I/O |
db file parallel write |
4,806 |
34 |
7 |
5.1 |
System I/O |
This is the last part of the report summary, showing the five most significant waits in the system, in descending order of share of wait time. When tuning, we want the most visible payoff, so start here to decide what to do next. For example, if 'buffer busy waits' is a serious wait event, continue to the Buffer Wait and File/Tablespace IO sections of the report to identify which files cause the problem. If the top wait is an I/O event, study the SQL ordered by physical reads to identify statements doing heavy I/O, and the Tablespace and I/O sections to spot files with slow response times. If latch waits are high, examine the detailed latch statistics to identify which latches are the problem.
On a well-performing system, CPU time should lead the Top 5; otherwise the system spends most of its time waiting.
Here, log file parallel write is a relatively large wait, accounting for about 7% of total call time.
Normally, in a problem-free database, CPU time is always listed first.
For more wait events, see the Wait Events section of this report.
|
Begin |
End |
Number of Instances: |
2 |
2 |
|
Per Second |
Per Transaction |
Global Cache blocks received: |
4.16 |
3.51 |
Global Cache blocks served: |
5.97 |
5.04 |
GCS/GES messages received: |
408.47 |
344.95 |
GCS/GES messages sent: |
258.03 |
217.90 |
DBWR Fusion writes: |
0.05 |
0.05 |
Estd Interconnect traffic (KB) |
211.16 |
|
Buffer access - local cache %: |
98.60 |
Buffer access - remote cache %: |
0.12 |
Buffer access - disk %: |
1.28 |
Avg global enqueue get time (ms): |
0.1 |
Avg global cache cr block receive time (ms): |
1.1 |
Avg global cache current block receive time (ms): |
0.8 |
Avg global cache cr block build time (ms): |
0.0 |
Avg global cache cr block send time (ms): |
0.0 |
Global cache log flushes for cr blocks served %: |
3.5 |
Avg global cache cr block flush time (ms): |
3.9 |
Avg global cache current block pin time (ms): |
0.0 |
Avg global cache current block send time (ms): |
0.0 |
Global cache log flushes for current blocks served %: |
0.4 |
Avg global cache current block flush time (ms): |
3.0 |
Avg message sent queue time (ms): |
0.0 |
Avg message sent queue time on ksxp (ms): |
0.3 |
Avg message received queue time (ms): |
0.5 |
Avg GCS message process time (ms): |
0.0 |
Avg GES message process time (ms): |
0.0 |
% of direct sent messages: |
14.40 |
% of indirect sent messages: |
77.04 |
% of flow controlled messages: |
8.56 |
Oracle wait events are a key gauge and indicator of how Oracle is running. They fall into two classes: idle wait events and non-idle wait events. With TIMED_STATISTICS = TRUE, wait events are ordered by time waited; with FALSE, by number of waits. TIMED_STATISTICS = TRUE must be set in the session while running Statspack, or the collected statistics will be distorted. Idle wait events mean Oracle is waiting for some work to do; when diagnosing and tuning the database, they deserve little attention. Non-idle wait events are specific to Oracle activity — waits that occur while database tasks or applications run — and these are what we should focus on when tuning.
Common wait events, explained:
1) db file scattered read
Usually associated with full table scans or fast full index scans. Because a full table scan is read into memory — and, for performance reasons, often cannot obtain a long enough run of contiguous memory — blocks are read scattered into the buffer cache. Excessive waits here may mean missing or unsuitable indexes (optimizer_index_cost_adj can be adjusted), though the situation can also be normal: a full table scan may well be more efficient than an index scan. When the system shows these waits, verify whether the full scans are actually necessary before adjusting. Because full table scans are placed at the cold end of the LRU (Least Recently Used) list, frequently accessed small tables can be cached in memory to avoid repeated reads. When this wait event is significant, diagnose with the v$session_longops dynamic performance view, which records long-running operations (running over 6 seconds); many of them will be full table scans — either way, this information is worth our attention.
On the parameter OPTIMIZER_INDEX_COST_ADJ=n: this is a percentage, default 100, which can be read as FULL SCAN COST / INDEX SCAN COST. When n% * INDEX SCAN COST < FULL SCAN COST, Oracle chooses the index. Tune it against specific statements: if we want a statement to use an index but it actually does a full scan, compare the COST of the two execution plans and set a more suitable value.
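That selection rule can be sketched as a toy check (made-up costs, illustrative function name):

```python
def chooses_index(n_pct, index_scan_cost, full_scan_cost):
    """Per the rule above: Oracle prefers the index when
    n% * INDEX SCAN COST < FULL SCAN COST."""
    return n_pct / 100 * index_scan_cost < full_scan_cost

# Index costed at 120, full scan at 100:
print(chooses_index(100, 120, 100))  # False: at the default 100, the full scan wins
print(chooses_index(50, 120, 100))   # True: lowering the parameter tips it to the index
```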
2) db file sequential read (single-block reads, notably in table joins): this event indicates heavy waiting on single data blocks; a high value usually means a poor join order between tables (the driving row source badly chosen) or the use of non-selective indexes. Tie this wait to other known problems in the Statspack report (such as inefficient SQL), verify that the index scans are necessary, and check the join order of multi-table joins to tune it.
3) buffer busy waits: increase DB_CACHE_SIZE, speed up checkpoints, tune the code:
This wait appears when a buffer is held in an incompatible mode, or is in the middle of being read into the cache. The value should not exceed 1%. When a wait problem appears, check the buffer wait statistics section (or V$WAITSTAT) to determine where the waits occur:
a) If the waits are on segment headers: the segments' freelists have few blocks. Consider adding freelists (for Oracle8i DMT) or freelist groups (often an immediate win: alter table tablename storage (freelists 2); before 8.1.6 the freelists parameter could not be changed dynamically; from 8.1.6 on, changing freelists dynamically requires COMPATIBLE of at least 8.1.6). You can also widen the PCTUSED-to-PCTFREE gap — in effect, lower PCTUSED — so blocks return to the freelist for reuse sooner. If automatic segment space management (ASSM) is supported, use ASSM mode instead; it is a feature added in Oracle 9.2 and later.
b) If the waits are on undo headers, add rollback segments to resolve the buffer contention.
c) If the waits are on undo blocks: commit more frequently so blocks can be reused sooner; use larger rollback segments; reduce the density of consistently-read data in the selected tables; increase DB_CACHE_SIZE.
d) If the waits are on data blocks, hot blocks have appeared. Options: ① move the frequently and concurrently accessed tables or data to other data blocks, or spread them over a wider range (raise pctfree to spread the data and reduce contention), to steer around the "hot" block; ② use a smaller block size, so each block holds fewer rows, lowering the block's heat and the contention; ③ examine and tune the SQL statements operating on those hot blocks; ④ raise initrans on the hot blocks — but not too high; 5 is usually enough, since each extra ITL transaction slot occupies 24 bytes in the block (by default each data or index block has 2 ITL slots). When raising initrans, consider also raising the table's PCTFREE, so Oracle can use the PCTFREE space to grow the ITL slot count up to the maxtrans limit.
e) If the waits are on index blocks, consider rebuilding the index, splitting it, or using a reverse-key index. To prevent buffer busy waits tied to data blocks, smaller blocks also help (fewer rows per block, so the block is less "busy"), as does a larger PCTFREE (spreading the data physically and reducing hot-row contention). During DML (insert/update/delete), Oracle writes into the data block; for tables accessed by many concurrent transactions, ITL contention and waits can appear — raise initrans to provide more ITL slots. In Oracle9i, the new ASSM feature lets Oracle manage space with bitmaps, reducing contention.
【
When a process needs to access a buffer in the SGA, it performs the following steps in order:
1. Acquire the cache buffers chains latch and walk the buffer chain until it finds the needed buffer header.
2. Depending on the operation type (read or write), acquire a shared- or exclusive-mode buffer pin (buffer lock) on the buffer header.
3. If the process gets the buffer header pin, it releases the cache buffers chains latch it acquired and performs its operation on the buffer block.
4. If the process cannot get the buffer header pin, it waits on the buffer busy waits event.
A process fails to get the pin because, to guarantee data consistency, only one process at a time may pin a given block for access; so when a process needs a block in the buffer cache that another process is using, it posts a buffer busy waits event for that block.
Through Oracle 9i, the p1, p2 and p3 parameters of the buffer busy waits event are file#, block# and id — the file number and block number of the waited-on buffer block, and the specific wait-reason code. In Oracle 10g the first two are unchanged, and the third became the block class number, which can be verified by querying the v$event_name view:
Oracle 9i
SQL> select parameter1,parameter2,parameter3 from v$event_name where name='buffer busy waits'
PARAMETER1 PARAMETER2 PARAMETER3
------------------------------------------------ ------------------------
file# block# id
Oracle 10g
PARAMETER1 PARAMETER2 PARAMETER3
------------------------------------------------ ------------------------
file# block# class#
In diagnosing the buffer busy waits event, the following information is very useful:
1. The wait-reason code for the event, obtainable from its p3 parameter value.
2. The SQL statement producing the event, obtainable with the following query:
select sql_text from v$sql t1, v$session t2, v$session_wait t3
where t1.address = t2.sql_address and t1.hash_value = t2.sql_hash_value
and t2.sid = t3.sid and t3.event = 'buffer busy waits';
3. The class of the block being waited on and the segment it belongs to, obtainable with the following query:
select 'Segment Header' class, a.segment_type, a.segment_name, a.partition_name
from dba_segments a, v$session_wait b
where a.header_file = b.p1 and a.header_block = b.p2 and b.event = 'buffer busy waits'
union
select 'Freelist Groups' class, a.segment_type, a.segment_name, a.partition_name
from dba_segments a, v$session_wait b
where a.header_file = b.p1 and b.p2 between a.header_block + 1 and (a.header_block + a.freelist_groups)
and a.freelist_groups > 1 and b.event = 'buffer busy waits'
union
select a.segment_type || ' block' class, a.segment_type, a.segment_name, a.partition_name
from dba_extents a, v$session_wait b
where a.file_id = b.p1 and b.p2 between a.block_id and a.block_id + a.blocks - 1
and b.event = 'buffer busy waits'
and not exists (select 1 from dba_segments where header_file = b.p1 and header_block = b.p2);
First branch of the query: if the waited-on block class is a segment header, take the buffer busy waits event's p1 and p2 directly to the dba_segments view, matching header_file and header_block, to find the waited-on segment's name and type, and adjust accordingly.
Second branch: if the waited-on block class is freelist groups, dba_segments likewise yields the segment name and type; note that here p2 locates the freelist group between header_block + 1 and header_block + freelist_groups, with freelist_groups greater than 1.
Third branch: if the waited-on block is an ordinary data block, join p1 and p2 with dba_extents to obtain the name and type of the segment the block belongs to.
For the different waited-on block classes, we take different remedies:
1. data segment header:
Processes access the data segment header frequently for two main reasons: reading or updating process freelist information, and extending the high-water mark. In the first case, frequent access to process freelist information causes freelist contention; we can raise the segment object's freelists or freelist groups storage parameters. If blocks move in and out of the freelist so often that processes must constantly update it, widen the gap between pctfree and pctused so blocks churn through the freelist less. In the second case, the segment consumes space quickly while next extent is set too small, so the high-water mark is extended constantly; the fix is to raise the segment object's next extent storage parameter, or set extent size uniform directly when creating the tablespace.
2. data block:
One or more data blocks read and written by several processes simultaneously become hot blocks. Remedies:
(1) Lower the application's concurrency; if the program uses parallel queries, lower the parallel degree so multiple parallel slaves do not access the same data object at once and wait on each other, degrading performance.
(2) Adjust the application to obtain the needed data from fewer blocks, reducing buffer gets and physical reads.
(3) Reduce the rows per block so the data spreads across more blocks. Several routes achieve this: raise the segment object's pctfree; rebuild the segment into a tablespace with a smaller block size; or use alter table ... minimize records_per_block to reduce the rows per block.
(4) If the hot object is an index on something like an auto-increment id column, convert it to a reverse-key index to scatter the data distribution and break up the hot blocks.
3. undo segment header:
Undo segment header contention means the system has too few undo segments and needs more. Under manual management, modify the rollback_segments initialization parameter to add rollback segments; under automatic management, lower the transactions_per_rollback_segment initialization parameter so Oracle automatically creates more rollback segments.
4. undo block:
Undo block contention arises when the application reads and writes the same data simultaneously: reading processes must visit the undo segment for consistent data. The fix is to stagger heavy data modification and heavy querying in time.
Summary: buffer busy waits is one of the more complex Oracle wait events, with many causes. Diagnose it via the p3 parameter against Oracle's wait-reason code table; from 10g onward, analyze by the class of the waited-on block together with the specific SQL causing the waits, and adjust accordingly.
】
4) latch free: address this when the latch miss rate exceeds 0.5%. Details are covered later in the Latch Activity for DB section.
A latch is a low-level serialization mechanism protecting shared memory structures in the SGA — essentially a memory lock that is acquired and released very quickly, preventing a shared memory structure from being accessed by multiple users at once. If a latch is unavailable, a latch free miss is recorded. There are two latch-related types:
■ immediate.
■ willing-to-wait.
If a process tries to acquire a latch in immediate mode while another process holds it, and the latch is not immediately available, the process does not wait for it; it goes on to perform another operation.
Most latch problems relate to the following: failure to use bind variables well (library cache latch), redo generation issues (redo allocation latch), buffer cache contention (cache buffers LRU chain), and hot blocks in the buffer cache (cache buffers chains).
It is often said that if you want to design a failing system, ignoring bind variables is all it takes; for highly heterogeneous workloads, the consequences of not using bind variables are extremely severe.
Some latch waits are also bug-related; watch Metalink for relevant bug notices and patch releases. When latch miss ratios exceed 0.5%, investigate.
Oracle's latch mechanism is competitive, handled much like CSMA/CD in networking: all user processes contend for the latch. For a willing-to-wait latch, a process that fails to get the latch on its first attempt waits and tries again; if it cannot obtain the latch after _spin_count attempts, it goes to sleep for a specified interval, wakes, and repeats the earlier steps in order. In 8i/9i the default is _spin_count=2000.
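The willing-to-wait loop described above can be sketched roughly as follows — a toy model only: the spin count and sleep interval are illustrative, and a Python threading.Lock stands in for the latch:

```python
import threading
import time

def acquire_willing_to_wait(latch: threading.Lock, spin_count=2000, sleep_s=0.01):
    """Toy model of a willing-to-wait latch acquisition: spin up to
    spin_count attempts, then sleep and start the sequence over,
    as in the description above."""
    while True:
        for _ in range(spin_count):
            if latch.acquire(blocking=False):  # one non-blocking 'test-and-set' try
                return
        time.sleep(sleep_s)                    # back off, then spin again

latch = threading.Lock()
acquire_willing_to_wait(latch)  # uncontended: acquired on the first spin
print(latch.locked())           # True
latch.release()
```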
If the SQL statements cannot be changed, from release 8.1.6 Oracle provides a new initialization parameter, CURSOR_SHARING; setting CURSOR_SHARING = force enforces bind variables on the server side. Setting this parameter can bring certain side effects — there are related bugs for Java programs — so specific applications should watch the Metalink bug notices.
*** Latch problems and possible remedies
------------------------------
* Library Cache and Shared Pool (unbound SQL → use bind variables; adjust shared_pool_size)
This latch is taken whenever SQL, or PL/SQL stored procedures, packages, functions and triggers execute; it is also used heavily during parse operations.
* Redo Copy (increase the _LOG_SIMULTANEOUS_COPIES parameter)
The redo copy latch is used to copy redo records from the PGA into the redo log buffer.
* Redo Allocation (minimize redo generation; avoid unnecessary commits)
This latch allocates space in the redo log buffer; NOLOGGING can ease the contention.
* Row Cache Objects (increase the shared pool)
Data dictionary contention; excessive parsing.
* Cache Buffers Chains (_DB_BLOCK_HASH_BUCKETS should be increased or set to a prime)
"Overheated" data blocks cause contention on the buffer chain latches.
* Cache Buffers LRU Chain (tune SQL, set DB_BLOCK_LRU_LATCHES, or use multiple buffer pools)
The buffer LRU chain latch is taken when scanning the LRU (least recently used) chain covering all buffer blocks. A too-small buffer cache, excessive buffer cache throughput, too many in-memory sorts, and DBWR falling behind the workload can all cause contention on this latch.
5) enqueue: an enqueue is a lock protecting shared resources against concurrent DML operations. Enqueues use a FIFO policy; note that latches do not. Three types of enqueue are common: the ST, HW, and TX4 enqueues.
ST enqueue waits arise mainly during space management and allocation in dictionary-managed tablespaces. Fixes: 1) convert the dictionary-managed tablespaces to locally managed mode; 2) pre-allocate extents, or set a larger next extent for the problematic dictionary-managed tablespaces.
The HW enqueue covers a segment's HWM (high-water mark); when this wait appears, allocating extents manually can resolve it.
TX4 enqueue waits are the most common, usually arising from one of three situations: 1) duplicate values in a unique index — the first session must commit or rollback to release the enqueue; 2) multiple updates to the same bitmap index fragment — since one bitmap index fragment can cover many rowids, when multiple users update it, one user may lock the fragment and block the others; same fix as above; 3) multiple users updating the same data block at once (possibly different rows of that block) when no free ITL slot exists, producing a block-level lock. Fixes: raise the table's initrans to create more ITL slots; raise the table's pctfree, so Oracle can create more ITL slots as needed in the pctfree space; or use a smaller block size, so each block holds fewer rows, reducing the chance of collision.
AWR報告分析--等待事件-隊列.doc
6) free buffer waits: this event indicates the system is waiting for available space in memory — the buffer cache currently has no free memory. If the application is well designed, the SQL well written, and bind variables fully used, the wait may mean the buffer cache is set too small and DB_CACHE_SIZE should grow. It can also mean DBWR cannot write out fast enough, or the disks suffer severe contention; consider adding checkpoints, using more DBWR processes, or adding physical disks to spread the load and balance I/O.
7) log file single write: this event relates only to writing the log file header block, usually when adding a new group member or advancing sequence numbers. Header blocks are written one at a time, because part of the header information is the file number, which differs per file. Updating the log file header happens in the background, rarely waits, and needs little attention.
8) log file parallel write: writing redo records from the log buffer to the redo log files — mainly the routine write (as opposed to log file sync). If your log group has multiple members, the flush of the log buffer writes them in parallel, and this wait event can appear; although the write proceeds in parallel, it does not complete until all its I/O operations complete (with async I/O or I/O slaves, the wait can appear even with a single redo log file member). Compared against log file sync times, this parameter measures the cost of log file writes — commonly called the sync cost rate. To improve this wait, put the redo logs on fast disks, avoid RAID 5 where possible, make sure no tablespace is stuck in hot backup mode, and keep the redo logs and data files on different disks.
9) log file sync: when a user commits or rolls back data, LGWR flushes the session's redo records from the log buffer to the log file, and the user's process must wait for that flush to complete. It appears on every commit. If this wait event affects database performance, modify the application's commit frequency: to reduce the wait, commit more records per transaction, or place the REDO LOG files on different physical disks to improve I/O performance.
When a user commits or rolls back, LGWR writes the session's redo from the log buffer into the redo log, and the log file sync must wait for that to complete successfully. To reduce this wait event, try committing more records at a time (frequent commits bring more system overhead). Place the redo logs on faster disks, or alternate redo logs across different physical disks, to reduce the impact of archiving on LGWR.
For software RAID, generally avoid RAID 5: it imposes a large performance penalty on write-heavy systems. Consider file-system direct input/output, or raw devices, which deliver better write performance.
11) log file switch: usually because archiving is not fast enough; it means all commit requests must wait for the "log file switch" to complete. Log file switch has two main sub-events:
log file switch (archiving needed): this wait usually appears when the log groups have wrapped around before the first log's archiving has finished; its appearance may indicate an I/O problem. Fixes: ① consider larger log files and more log groups; ② move archive files to fast disks; ③ tune log_archive_max_processes.
log file switch (checkpoint incomplete): when all log groups have been written and LGWR tries to reuse the first log file while the database has not yet finished writing out the dirty blocks recorded in it (i.e. the first checkpoint is incomplete), this wait event appears. It usually means your DBWR writes too slowly or there are I/O problems. To solve it, you may need additional DBWR processes, more log groups or larger log files, or more frequent checkpoints.
12) db file parallel write: occurs while files are being written in parallel by DBWR. Fix: improve I/O performance.
When handling this event, note:
1) The db file parallel write event belongs only to the DBWR process.
2) A slow DBWR can affect foreground processes.
3) Large db file parallel write wait times are very likely caused by I/O problems. (Having confirmed the OS supports async I/O, check the disk_asynch_io parameter in the system and make sure it is TRUE. You can raise the DBWR process count via db_writer_processes, provided it does not exceed the number of CPUs.)
The DBWR process performs all database writes that pass through the SGA. When a write begins, DBWR compiles a batch of dirty blocks and issues the system write calls to the operating system. DBWR looks for blocks to write at various times, including: a scan every 3 seconds; when a foreground process posts it to clear buffers; at checkpoints; and when the _DB_LARGE_DIRTY_QUEUE, _DB_BLOCK_MAX_DIRTY_TARGET and FAST_START_MTTR_TARGET thresholds are met; and so on.
Although user sessions never experience the db file parallel write wait event themselves, that does not mean they are unaffected by it. Slow DBWR write performance can leave foreground sessions waiting on the write complete waits or free buffer waits events. DBWR write performance can be affected by: the type of I/O operation (synchronous or asynchronous), the storage device (raw device or full file system), the database layout, and the I/O subsystem configuration. The key database statistics to examine are the system-wide TIME_WAITED and AVERAGE_WAIT when the db file parallel write, free buffer waits and write complete waits events correlate with one another.
If the average db file parallel write wait exceeds 10 cs (i.e. 100 ms), it usually indicates slow I/O throughput. Many methods can improve the average wait; the main one is using the correct type of I/O operation. If the data files sit on raw devices and the platform supports async I/O, use asynchronous writes. But if the database sits on a file system, use synchronous writes with direct I/O (that is, operating-system direct I/O). Besides ensuring the right I/O type, review your database layout and monitor I/O throughput from the operating system with the usual commands, for example sar -d or iostat -dxnC.
When the average db file parallel write wait time is high and the system is busy, user sessions may start waiting on the free buffer waits event, because DBWR cannot keep up with the demand to free buffers. If the free buffer waits event's TIME_WAITED is high, address the DBWR I/O throughput problem before adding buffers to the cache.
Another repercussion of high average db file parallel write waits is high TIME_WAITED on the write complete waits event. Foreground processes are not allowed to modify blocks in transit to disk — in other words, blocks in DBWR's batch write — so foreground sessions wait on write complete waits. The appearance of the write complete waits event therefore always marks a slow DBWR process; the delay can be corrected by improving DBWR I/O throughput.
13) db file single write: occurs when a file header or another individual block is written; the wait lasts until all the I/O calls complete. Fix: improve I/O performance.
14) db file scattered read: occurs when an entire segment is scanned, reading multiple blocks per call according to the initialization parameter db_file_multiblock_read_count. Because the data may be spread across different areas (related to striping or segmenting), several scattered reads are usually needed to fetch all of it. The wait time is the time to complete all the I/O calls. Fix: improve I/O performance.
This situation usually shows waits related to full table scans.
When the database performs a full table scan, for performance reasons the data is read scattered into the buffer cache. If this wait event is significant, some tables being fully scanned may lack indexes, or lack suitable ones; we may need to examine those tables to confirm they are set up correctly.
However, this wait event does not necessarily mean poor performance. Under some conditions, Oracle actively substitutes a full table scan for an index scan to improve performance, which depends on the volume of data accessed; under the CBO, Oracle makes the smarter choice, while under the RBO, Oracle leans toward indexes.
Because full table scans go to the cold end of the LRU (Least Recently Used) list, frequently accessed small tables can be cached in memory to avoid repeated reads.
When this wait event is significant, diagnose with the v$session_longops dynamic performance view, which records long-running operations (over 6 seconds); many of them will likely be full table scans — either way, this information deserves our attention.
15) db file sequential read: occurs when a foreground process performs a regular read of a data file, including index lookups and other non-full-segment scans, plus data file block discard waits. The wait time is the time to complete all the I/O calls. Fix: improve I/O performance.
If this wait event is significant, it may indicate that in multi-table joins the join order is wrong (the driving table not chosen correctly), or that index use is the problem — an index is not always the best choice. In most cases, records are fetched faster through an index, so for well-coded, well-tuned databases a large value here is usually normal. Sometimes an excessive wait here relates to discontiguous storage, or to runs of contiguous blocks being only partially cached; particularly for DML-heavy tables, discontiguity of data and storage space can cause excessive single-block reads, and periodic data reorganization and space reclamation are sometimes necessary.
Note that in many situations an index is not the best choice: for example, when reading a large volume of data from a big table, a full table scan can be clearly faster than an index scan. Keep this in mind during development, and avoid forcing index scans for such queries.
16) direct path read: generally, a direct path read means reading data blocks straight into the PGA, typically for sorts, parallel query, and read-ahead operations. This wait is often caused by I/O. Using async I/O, or restricting how much sorting happens on disk, may reduce the wait time here.
This is the wait event associated with direct reads: ORACLE reads data blocks directly into the session's PGA (process global area), bypassing the SGA (system global area). Data in the PGA is not shared with other sessions — that is, the data read is used by that session alone and is not placed in the shared SGA.
During sort operations (order by / group by / union / distinct / rollup / merge joins), when the SORT_AREA_SIZE space in the PGA is insufficient, the temporary tablespace must hold the intermediate results; reading the sort results back from the temporary tablespace produces direct path read waits.
SQL statements using HASH joins flush hash partitions that do not fit in memory to the temporary tablespace. To identify rows matching the SQL predicates, the hash partitions in the temporary tablespace are read back into memory, and the ORACLE session waits on the direct path read event.
SQL statements using parallel scans also affect the system-wide direct path read waits: during parallel execution, the direct path read waits belong to the slave queries rather than the parent query — the session running the parent query mostly waits on PX Deq: Execute Reply, while the slave queries produce the direct path read waits.
Direct reads may be performed synchronously or asynchronously, depending on the platform and the value of the disk_asynch_io initialization parameter. With async I/O, the system-wide wait event statistics may be inaccurate and misleading.
17) direct path write: this wait occurs when the system waits to confirm that all outstanding asynchronous I/O has been written to disk. For this write wait, find the data files with the most frequent I/O (with heavy sorting, it is very likely the temporary files), spread the load, and speed up their writes. If the system does excessive disk sorting, the temporary tablespace gets busy; in that case, consider locally managed tablespaces split into several smaller files, written to different disks or to raw devices.
In a DSS system, heavy direct path read is perfectly normal; in an OLTP system, however, significant direct path reads usually mean an application problem is causing heavy disk-sort read operations.
Direct path writes (direct path write) usually occur when Oracle writes data directly from the PGA to data files or temporary files, a write that can bypass the SGA.
Such writes are typically used in these situations:
· direct-path loads;
· parallel DML operations;
· disk sorts;
· writes to uncached "LOB" segments (subsequently recorded as direct path write (lob) waits).
The most common direct path writes are mostly due to disk sorts. For this write wait, find the data files with the most frequent I/O (with heavy sorting, very likely the temporary files), spread the load, and speed up their writes.
18) control file parallel write: this event can appear while the server process updates all control files. If the waits are short, ignore them. If the wait times are long, check the physical disks holding the control files for I/O bottlenecks.
Multiple control files are identical copies, mirrored for safety. For a business system, multiple control files should be stored on different disks; three is generally sufficient, and with only two physical disks, two control files are also acceptable. Keeping multiple control files on the same disk has no practical value. To reduce this wait, consider: ① reducing the number of control files (while preserving safety); ② using async I/O if the system supports it; ③ moving the control files to physical disks with a light I/O load.
19) control file sequential read /
control file single write: control file sequential read / control file single write — these two events appear when a single control file has I/O problems. If the waits are noticeable, check the individual control file to see whether its storage location has an I/O bottleneck.
20) library cache pin
This event usually occurs when one session is running a PL/SQL unit, VIEW, TYPE or similar object while another session recompiles that object: a shared lock is first placed on the object, then an exclusive lock is requested, so the session adding the exclusive lock shows this wait. P1 and P2 can be related to the x$kglpn and x$kglob tables.
X$KGLOB (Kernel Generic Library Cache Manager Object)
X$KGLPN (Kernel Generic Library Cache Manager Object Pins)
-- Query X$KGLOB to find the related object, with the following SQL
-- (i.e. join V$SESSION_WAIT.p1raw to X$KGLOB.KGLHDADR):
select kglnaown,kglnaobj from X$KGLOB
where KGLHDADR =(select p1raw from v$session_wait
where event='library cache pin')
-- Find the sid of the blocker causing this wait event:
select sid from x$kglpn , v$session
where KGLPNHDL in
(select p1raw from v$session_wait
where wait_time=0 and event like 'library cache pin%')
and KGLPNMOD <> 0
and v$session.saddr=x$kglpn.kglpnuse
-- Find the SQL statement the blocker is executing:
select sid,sql_text
from v$session, v$sqlarea
where v$session.sql_address=v$sqlarea.address
and sid = <blocker's sid>
This locates the root of the "library cache pin" wait, and the resulting performance problem can then be solved.
21) library cache lock
This event is usually caused by executing multiple DDL operations: after an exclusive lock is added on a library cache object, another session adds another exclusive lock on it, so the second session waits. The corresponding object can be found via the x$kgllk base table.
-- Find the blocker's sid, session user, and the locked object causing this wait event:
select b.sid,a.user_name,a.kglnaobj
from x$kgllk a , v$session b
where a.kgllkhdl in
(select p1raw from v$session_wait
where wait_time=0 and event = 'library cache lock')
and a.kgllkmod <> 0
and b.saddr=a.kgllkuse
You can of course also look at v$locked_objects directly, but the query above is more direct. Given the sid, look up the pid in v$process, then kill that process or handle it otherwise.
22)
Examples of common IDLE wait events:
dispatcher timer
lock element cleanup
null event
parallel query dequeue wait
parallel query idle wait - Slaves
pipe get
PL/SQL lock timer
pmon timer
rdbms ipc message
slave wait
smon timer
SQL*Net break/reset to client
SQL*Net message from client
SQL*Net message to client
SQL*Net more data to client
virtual circuit status
client message
Below is a quick preview of the common wait events here and their remedies:
Wait Event |
Typical remedy |
Sequential Read |
Tune the relevant indexes and choose a suitable driving row source |
Scattered Read |
Indicates many full table scans; tune the code, cache small tables in memory |
Free Buffer |
Increase DB_CACHE_SIZE, checkpoint more frequently, tune the code |
Buffer Busy Segment header |
Add freelists or freelist groups |
Buffer Busy Data block |
Isolate hot blocks; use reverse-key indexes; use smaller blocks; raise the table's initrans |
Buffer Busy Undo header |
Add rollback segments or enlarge them |
Buffer Busy Undo block |
Commit more often; add rollback segments or enlarge them |
Latch Free |
Check the specific latch type being waited on; remedies are covered later |
Enqueue–ST |
Use locally managed tablespaces or increase the pre-allocated extent size |
Enqueue–HW |
Pre-allocate extents above the HWM |
Enqueue–TX4 |
Raise initrans on the table or index, or use smaller blocks |
Log Buffer Space |
Increase LOG_BUFFER; improve I/O |
Log File Switch |
Add or enlarge log files |
Log file sync |
Commit less frequently; use faster I/O; or use raw devices |
Write complete waits |
Add DBWR processes; checkpoint more frequently |
Statistic Name |
Time (s) |
% of DB Time |
DB CPU |
514.50 |
77.61 |
sql execute elapsed time |
482.27 |
72.74 |
parse time elapsed |
3.76 |
0.57 |
PL/SQL execution elapsed time |
0.50 |
0.08 |
hard parse elapsed time |
0.34 |
0.05 |
connection management call elapsed time |
0.08 |
0.01 |
hard parse (sharing criteria) elapsed time |
0.00 |
0.00 |
repeated bind elapsed time |
0.00 |
0.00 |
PL/SQL compilation elapsed time |
0.00 |
0.00 |
failed parse elapsed time |
0.00 |
0.00 |
DB time |
662.97 |
|
background elapsed time |
185.19 |
|
background cpu time |
67.48 |
|
This section shows the CPU time consumed by each type of database processing task.
DB time = the DB Time shown in the report header = CPU time + all non-idle wait event time.
Querying the 12 wait event classes provided in Oracle 10gR1:
select wait_class#, wait_class_id, wait_class from v$event_name group by wait_class#, wait_class_id, wait_class order by wait_class#;
Wait Class |
Waits |
%Time -outs |
Total Wait Time (s) |
Avg wait (ms) |
Waits /txn |
User I/O |
66,837 |
0.00 |
120 |
2 |
11.94 |
System I/O |
28,295 |
0.00 |
93 |
3 |
5.05 |
Network |
1,571,450 |
0.00 |
66 |
0 |
280.72 |
Cluster |
210,548 |
0.00 |
29 |
0 |
37.61 |
Other |
81,783 |
71.82 |
28 |
0 |
14.61 |
Application |
333,155 |
0.00 |
16 |
0 |
59.51 |
Concurrency |
5,182 |
0.04 |
5 |
1 |
0.93 |
Commit |
919 |
0.00 |
4 |
4 |
0.16 |
Configuration |
25,427 |
99.46 |
1 |
0 |
4.54 |
(1) Query all wait events and their attributes:
select event#, name, parameter1, parameter2, parameter3 from v$event_name order by name;
(2) Query the 12 wait event classes provided in Oracle 10gR1:
select wait_class#, wait_class_id, wait_class from v$event_name group by wait_class#, wait_class_id, wait_class order by wait_class#;
wait_event.doc
The content shown below may come from the following views:
The V$EVENT_NAME view contains all wait events defined for the database instance.
The V$SYSTEM_EVENT view shows aggregate statistics for all wait events encountered by all Oracle sessions since instance startup.
The V$SESSION_EVENT view contains aggregate wait event statistics for all sessions currently connected to the instance. It includes every column that appears in V$SYSTEM_EVENT and records, for each event in each session, the total waits, time waited, and maximum wait. The SID column identifies the individual session, and each event's maximum wait per session is tracked in the MAX_WAIT column. By joining V$SESSION_EVENT with V$SESSION on SID, you can get more information about the session and user.
The V$SESSION_WAIT view provides detailed information about the event or resource each session is waiting for. At any given time, it contains just one row of active or inactive information per session.
Since OWI was introduced in Oracle 7.0.12, there have been the following four V$ views:
· V$EVENT_NAME
· V$SESSION_WAIT
· V$SESSION_EVENT
· V$SYSTEM_EVENT
Beyond these wait event views, Oracle 10gR1 introduced the following new views to show wait information from multiple angles:
· V$SYSTEM_WAIT_CLASS
· V$SESSION_WAIT_CLASS
· V$SESSION_WAIT_HISTORY
· V$EVENT_HISTOGRAM
· V$ACTIVE_SESSION_HISTORY
Still, V$SESSION_WAIT, V$SESSION_EVENT and V$SYSTEM_EVENT remain the three important views, providing wait event statistics and timing information at different levels of granularity. They relate as:
V$SESSION_WAIT ⊂ V$SESSION_EVENT ⊂ V$SYSTEM_EVENT
Event |
Waits |
%Time -outs |
Total Wait Time (s) |
Avg wait (ms) |
Waits /txn |
SQL*Net more data from client |
27,319 |
0.00 |
64 |
2 |
4.88 |
log file parallel write |
5,497 |
0.00 |
47 |
9 |
0.98 |
db file sequential read |
7,900 |
0.00 |
35 |
4 |
1.41 |
db file parallel write |
4,806 |
0.00 |
34 |
7 |
0.86 |
db file scattered read |
10,310 |
0.00 |
31 |
3 |
1.84 |
direct path write |
42,724 |
0.00 |
30 |
1 |
7.63 |
reliable message |
355 |
2.82 |
18 |
49 |
0.06 |
SQL*Net break/reset to client |
333,084 |
0.00 |
16 |
0 |
59.50 |
db file parallel read |
3,732 |
0.00 |
13 |
4 |
0.67 |
gc current multi block request |
175,710 |
0.00 |
10 |
0 |
31.39 |
control file sequential read |
15,974 |
0.00 |
10 |
1 |
2.85 |
direct path read temp |
1,873 |
0.00 |
9 |
5 |
0.33 |
gc cr multi block request |
20,877 |
0.00 |
8 |
0 |
3.73 |
log file sync |
919 |
0.00 |
4 |
4 |
0.16 |
gc cr block busy |
526 |
0.00 |
3 |
6 |
0.09 |
enq: FB - contention |
10,384 |
0.00 |
3 |
0 |
1.85 |
DFS lock handle |
3,517 |
0.00 |
3 |
1 |
0.63 |
control file parallel write |
1,946 |
0.00 |
3 |
1 |
0.35 |
gc current block 2-way |
4,165 |
0.00 |
2 |
0 |
0.74 |
library cache lock |
432 |
0.00 |
2 |
4 |
0.08 |
name-service call wait |
22 |
0.00 |
2 |
76 |
0.00 |
row cache lock |
3,894 |
0.00 |
2 |
0 |
0.70 |
gcs log flush sync |
1,259 |
42.02 |
2 |
1 |
0.22 |
os thread startup |
18 |
5.56 |
2 |
89 |
0.00 |
gc cr block 2-way |
3,671 |
0.00 |
2 |
0 |
0.66 |
gc current block busy |
113 |
0.00 |
1 |
12 |
0.02 |
SQL*Net message to client |
1,544,115 |
0.00 |
1 |
0 |
275.83 |
gc buffer busy |
15 |
6.67 |
1 |
70 |
0.00 |
gc cr disk read |
3,272 |
0.00 |
1 |
0 |
0.58 |
direct path write temp |
159 |
0.00 |
1 |
5 |
0.03 |
gc current grant busy |
898 |
0.00 |
1 |
1 |
0.16 |
log file switch completion |
29 |
0.00 |
1 |
17 |
0.01 |
CGS wait for IPC msg |
48,739 |
99.87 |
0 |
0 |
8.71 |
gc current grant 2-way |
1,142 |
0.00 |
0 |
0 |
0.20 |
kjbdrmcvtq lmon drm quiesce: ping completion |
9 |
0.00 |
0 |
19 |
0.00 |
enq: US - contention |
567 |
0.00 |
0 |
0 |
0.10 |
direct path read |
138 |
0.00 |
0 |
1 |
0.02 |
enq: WF - contention |
14 |
0.00 |
0 |
9 |
0.00 |
ksxr poll remote instances |
13,291 |
58.45 |
0 |
0 |
2.37 |
library cache pin |
211 |
0.00 |
0 |
1 |
0.04 |
ges global resource directory to be frozen |
9 |
100.00 |
0 |
10 |
0.00 |
wait for scn ack |
583 |
0.00 |
0 |
0 |
0.10 |
log file sequential read |
36 |
0.00 |
0 |
2 |
0.01 |
undo segment extension |
25,342 |
99.79 |
0 |
0 |
4.53 |
rdbms ipc reply |
279 |
0.00 |
0 |
0 |
0.05 |
ktfbtgex |
6 |
100.00 |
0 |
10 |
0.00 |
enq: HW - contention |
44 |
0.00 |
0 |
1 |
0.01 |
gc cr grant 2-way |
158 |
0.00 |
0 |
0 |
0.03 |
enq: TX - index contention |
1 |
0.00 |
0 |
34 |
0.00 |
enq: CF - contention |
64 |
0.00 |
0 |
1 |
0.01 |
PX Deq: Signal ACK |
37 |
21.62 |
0 |
1 |
0.01 |
latch free |
3 |
0.00 |
0 |
10 |
0.00 |
buffer busy waits |
625 |
0.16 |
0 |
0 |
0.11 |
KJC: Wait for msg sends to complete |
154 |
0.00 |
0 |
0 |
0.03 |
log buffer space |
11 |
0.00 |
0 |
2 |
0.00 |
enq: PS - contention |
46 |
0.00 |
0 |
1 |
0.01 |
enq: TM - contention |
70 |
0.00 |
0 |
0 |
0.01 |
IPC send completion sync |
40 |
100.00 |
0 |
0 |
0.01 |
PX Deq: reap credit |
1,544 |
99.81 |
0 |
0 |
0.28 |
log file single write |
36 |
0.00 |
0 |
0 |
0.01 |
enq: TT - contention |
46 |
0.00 |
0 |
0 |
0.01 |
enq: TD - KTF dump entries |
12 |
0.00 |
0 |
1 |
0.00 |
read by other session |
1 |
0.00 |
0 |
12 |
0.00 |
LGWR wait for redo copy |
540 |
0.00 |
0 |
0 |
0.10 |
PX Deq Credit: send blkd |
17 |
5.88 |
0 |
0 |
0.00 |
enq: TA - contention |
14 |
0.00 |
0 |
0 |
0.00 |
latch: ges resource hash list |
44 |
0.00 |
0 |
0 |
0.01 |
enq: PI - contention |
8 |
0.00 |
0 |
0 |
0.00 |
write complete waits |
1 |
0.00 |
0 |
2 |
0.00 |
enq: DR - contention |
3 |
0.00 |
0 |
0 |
0.00 |
enq: MW - contention |
3 |
0.00 |
0 |
0 |
0.00 |
enq: TS - contention |
3 |
0.00 |
0 |
0 |
0.00 |
PX qref latch |
150 |
100.00 |
0 |
0 |
0.03 |
PX qref latch

The PX qref latch wait event occasionally shows up under parallel execution; it appears most readily when high concurrency coincides with system peak load, so it deserves special attention.

Concept and mechanism: in a parallel execution environment, the query slaves and the query coordinator exchange data and messages through queues, and the PX qref latch protects those queues. This wait event generally indicates that messages are being sent faster than they are consumed, in which case the buffer size should be adjusted (via the parallel_execution_message_size parameter). In some situations, however, it is hard to avoid, for example when the consumer must wait a long time for data to be processed because large volumes of data packets are being returned; that is normal.

Tuning and remedies: when system load is high, lower the degree of parallelism; if the default DOP is in use, the same effect can be achieved by reducing the value of parallel_threads_per_cpu:

DEFAULT degree = PARALLEL_THREADS_PER_CPU * #CPUs

Tuning parallel_execution_message_size is a trade-off between performance and memory. For parallel query, the connection topology between slaves and the QC requires at most (n^2 + 2n) connections (where n is the DOP, not the actual number of slaves). If each connection has 3 buffers associated with it, memory consumption can grow very quickly on large machines running high-DOP queries.
enq: MD - contention |
2 |
0.00 |
0 |
0 |
0.00 |
latch: KCL gc element parent latch |
11 |
0.00 |
0 |
0 |
0.00 |
enq: JS - job run lock - synchronize |
1 |
0.00 |
0 |
1 |
0.00 |
SQL*Net more data to client |
16 |
0.00 |
0 |
0 |
0.00 |
latch: cache buffers lru chain |
1 |
0.00 |
0 |
0 |
0.00 |
enq: UL - contention |
1 |
0.00 |
0 |
0 |
0.00 |
gc current split |
1 |
0.00 |
0 |
0 |
0.00 |
enq: AF - task serialization |
1 |
0.00 |
0 |
0 |
0.00 |
latch: object queue header operation |
3 |
0.00 |
0 |
0 |
0.00 |
latch: cache buffers chains |
1 |
0.00 |
0 |
0 |
0.00 |
latch: enqueue hash chains |
2 |
0.00 |
0 |
0 |
0.00 |
SQL*Net message from client |
1,544,113 |
0.00 |
12,626 |
8 |
275.83 |
gcs remote message |
634,884 |
98.64 |
9,203 |
14 |
113.41 |
DIAG idle wait |
23,628 |
0.00 |
4,616 |
195 |
4.22 |
ges remote message |
149,591 |
93.45 |
4,612 |
31 |
26.72 |
Streams AQ: qmn slave idle wait |
167 |
0.00 |
4,611 |
27611 |
0.03 |
Streams AQ: qmn coordinator idle wait |
351 |
47.86 |
4,611 |
13137 |
0.06 |
Streams AQ: waiting for messages in the queue |
488 |
100.00 |
4,605 |
9436 |
0.09 |
virtual circuit status |
157 |
100.00 |
4,596 |
29272 |
0.03 |
PX Idle Wait |
1,072 |
97.11 |
2,581 |
2407 |
0.19 |
jobq slave wait |
145 |
97.93 |
420 |
2896 |
0.03 |
Streams AQ: waiting for time management or cleanup tasks |
1 |
100.00 |
270 |
269747 |
0.00 |
PX Deq: Parse Reply |
40 |
40.00 |
0 |
3 |
0.01 |
PX Deq: Execution Msg |
121 |
26.45 |
0 |
0 |
0.02 |
PX Deq: Join ACK |
38 |
42.11 |
0 |
1 |
0.01 |
PX Deq: Execute Reply |
34 |
32.35 |
0 |
0 |
0.01 |
PX Deq: Msg Fragment |
16 |
0.00 |
0 |
0 |
0.00 |
Streams AQ: RAC qmn coordinator idle wait |
351 |
100.00 |
0 |
0 |
0.06 |
class slave wait |
2 |
0.00 |
0 |
0 |
0.00 |
The db file scattered read wait event occurs when a session waits for multi-block I/O, usually caused by full table scans or index fast full scans. Segments with excessive reads can be identified in the "Segments by Physical Reads" and "SQL ordered by Reads" sections (the names may differ in other report versions). An OLTP application should not perform many full scans; selective index access should be used instead.
The db file sequential read wait indicates sequential-read I/O waits (usually single-block reads into a contiguous memory area). If this wait is severe, use the method from the previous paragraph to identify the hot segments being read, then reduce the I/O volume by partitioning large tables, or tune the execution plan (via stored outlines or fresh statistics) to avoid the single-block reads that cause the waits. In batch applications, db file sequential read hurts performance badly and should always be minimized.
The log file parallel write event occurs while waiting for the LGWR process to write redo records from the log buffer to the online log files. Although the writes may be issued concurrently, LGWR must wait for the last I/O to reach disk before the parallel write is considered complete, so the wait time depends on how long the OS takes to finish all the requests. If this wait is significant, it can be reduced by moving the log files to faster disks or striping them across disks (to reduce contention).
The buffer busy waits event occurs when a session needs a block in the buffer cache but cannot access it. A buffer is "busy" for two reasons: 1) another session is reading the block into the buffer; 2) another session holds the requested buffer in exclusive mode. The segments incurring these waits can be found in the "Segments by Buffer Busy Waits" section; the waits can then be reduced by using reverse-key indexes and partitioning the hot tables.
The log file sync event: after a user session performs a transaction operation (COMMIT, ROLLBACK, etc.), it notifies LGWR to write all required redo from the log buffer to the log files; this wait occurs while the session waits for LGWR to confirm the write has safely reached disk. The remedies are the same as for the log file parallel write event.
Enqueue waits are local locks serializing access to local resources, indicating a wait for a resource held in exclusive mode by one or more other sessions. How to reduce the wait depends on the lock type producing it. The main lock types causing enqueue waits are TX (transaction lock), TM (DML lock), and ST (space management lock).
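To see which enqueue types (TX, TM, ST, and the rest) are actually accumulating wait time, V$ENQUEUE_STAT can be queried directly — a sketch, assuming a 10g instance:

```sql
SELECT eq_type, total_req#, total_wait#, failed_req#, cum_wait_time
  FROM v$enqueue_stat
 WHERE cum_wait_time > 0
 ORDER BY cum_wait_time DESC;
```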
Event |
Waits |
%Time -outs |
Total Wait Time (s) |
Avg wait (ms) |
Waits /txn |
log file parallel write |
5,497 |
0.00 |
47 |
9 |
0.98 |
db file parallel write |
4,806 |
0.00 |
34 |
7 |
0.86 |
events in waitclass Other |
69,002 |
83.25 |
22 |
0 |
12.33 |
control file sequential read |
9,323 |
0.00 |
7 |
1 |
1.67 |
control file parallel write |
1,946 |
0.00 |
3 |
1 |
0.35 |
os thread startup |
18 |
5.56 |
2 |
89 |
0.00 |
direct path read |
138 |
0.00 |
0 |
1 |
0.02 |
db file sequential read |
21 |
0.00 |
0 |
5 |
0.00 |
direct path write |
138 |
0.00 |
0 |
0 |
0.02 |
log file sequential read |
36 |
0.00 |
0 |
2 |
0.01 |
gc cr block 2-way |
96 |
0.00 |
0 |
0 |
0.02 |
gc current block 2-way |
78 |
0.00 |
0 |
0 |
0.01 |
log buffer space |
11 |
0.00 |
0 |
2 |
0.00 |
row cache lock |
59 |
0.00 |
0 |
0 |
0.01 |
log file single write |
36 |
0.00 |
0 |
0 |
0.01 |
buffer busy waits |
151 |
0.66 |
0 |
0 |
0.03 |
gc current grant busy |
29 |
0.00 |
0 |
0 |
0.01 |
library cache lock |
4 |
0.00 |
0 |
1 |
0.00 |
enq: TM - contention |
10 |
0.00 |
0 |
0 |
0.00 |
gc current grant 2-way |
8 |
0.00 |
0 |
0 |
0.00 |
gc cr multi block request |
7 |
0.00 |
0 |
0 |
0.00 |
gc cr grant 2-way |
5 |
0.00 |
0 |
0 |
0.00 |
rdbms ipc message |
97,288 |
73.77 |
50,194 |
516 |
17.38 |
gcs remote message |
634,886 |
98.64 |
9,203 |
14 |
113.41 |
DIAG idle wait |
23,628 |
0.00 |
4,616 |
195 |
4.22 |
pmon timer |
1,621 |
100.00 |
4,615 |
2847 |
0.29 |
ges remote message |
149,591 |
93.45 |
4,612 |
31 |
26.72 |
Streams AQ: qmn slave idle wait |
167 |
0.00 |
4,611 |
27611 |
0.03 |
Streams AQ: qmn coordinator idle wait |
351 |
47.86 |
4,611 |
13137 |
0.06 |
smon timer |
277 |
6.50 |
4,531 |
16356 |
0.05 |
Streams AQ: waiting for time management or cleanup tasks |
1 |
100.00 |
270 |
269747 |
0.00 |
PX Deq: Parse Reply |
40 |
40.00 |
0 |
3 |
0.01 |
PX Deq: Join ACK |
38 |
42.11 |
0 |
1 |
0.01 |
PX Deq: Execute Reply |
34 |
32.35 |
0 |
0 |
0.01 |
Streams AQ: RAC qmn coordinator idle wait |
351 |
100.00 |
0 |
0 |
0.06 |
Statistic | Total
NUM_LCPUS | 0
NUM_VCPUS | 0
AVG_BUSY_TIME | 101,442
AVG_IDLE_TIME | 371,241
AVG_IOWAIT_TIME | 5,460
AVG_SYS_TIME | 25,795
AVG_USER_TIME | 75,510
BUSY_TIME | 812,644
IDLE_TIME | 2,971,077
IOWAIT_TIME | 44,794
SYS_TIME | 207,429
USER_TIME | 605,215
LOAD | 0
OS_CPU_WAIT_TIME | 854,100
RSRC_MGR_CPU_WAIT_TIME | 0
PHYSICAL_MEMORY_BYTES | 8,589,934,592
NUM_CPUS | 8
NUM_CPU_CORES | 4
NUM_LCPUS: 0 here because LPARs are not configured
NUM_VCPUS: same as above
AVG_BUSY_TIME: BUSY_TIME / NUM_CPUS
AVG_IDLE_TIME: IDLE_TIME / NUM_CPUS
AVG_IOWAIT_TIME: IOWAIT_TIME / NUM_CPUS
AVG_SYS_TIME: SYS_TIME / NUM_CPUS
AVG_USER_TIME: USER_TIME / NUM_CPUS
BUSY_TIME: time equivalent of %usr + %sys in sar output
IDLE_TIME: time equivalent of %idle in sar
IOWAIT_TIME: time equivalent of %wio in sar
SYS_TIME: time equivalent of %sys in sar
USER_TIME: time equivalent of %usr in sar
LOAD: unknown
OS_CPU_WAIT_TIME: supposedly time spent waiting on run queues
RSRC_MGR_CPU_WAIT_TIME: time waited because of Resource Manager throttling
PHYSICAL_MEMORY_BYTES: supposedly total memory in use
NUM_CPUS: number of CPUs reported by the OS (8 here)
NUM_CPU_CORES: number of CPU cores (4 here)
The total elapsed time can also be derived from these counters:
BUSY_TIME + IDLE_TIME + IOWAIT_TIME
or: SYS_TIME + USER_TIME + IDLE_TIME + IOWAIT_TIME
(since BUSY_TIME = SYS_TIME + USER_TIME)
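The raw counters behind this table come from V$OSSTAT, so the identity can be checked directly against a live instance; with this report's numbers, SYS_TIME + USER_TIME = 207,429 + 605,215 = 812,644 = BUSY_TIME. A sketch:

```sql
SELECT stat_name, value
  FROM v$osstat
 WHERE stat_name IN ('BUSY_TIME', 'IDLE_TIME', 'IOWAIT_TIME',
                     'SYS_TIME', 'USER_TIME', 'NUM_CPUS')
 ORDER BY stat_name;
```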
Service Name | DB Time (s) | DB CPU (s) | Physical Reads | Logical Reads
ICCI | 608.10 | 496.60 | 315,849 | 16,550,972
SYS$USERS | 54.70 | 17.80 | 6,539 | 58,929
ICCIXDB | 0.00 | 0.00 | 0 | 0
SYS$BACKGROUND | 0.00 | 0.00 | 282 | 38,990
Service Name | User I/O Total Wts | User I/O Wt Time | Concurcy Total Wts | Concurcy Wt Time | Admin Total Wts | Admin Wt Time | Network Total Wts | Network Wt Time
ICCI | 59826 | 8640 | 4621 | 338 | 0 | 0 | 1564059 | 6552
SYS$USERS | 6567 | 3238 | 231 | 11 | 0 | 0 | 7323 | 3
SYS$BACKGROUND | 443 | 115 | 330 | 168 | 0 | 0 | 0 | 0
本節按各類資源分別列出對資源消耗最嚴重的SQL語句,並顯示它們所佔統計期內所有資源的比例,這給出咱們調優指南。例如在一個系統中,CPU資源是系統性能瓶頸所在,那麼優化buffergets最多的SQL語句將得到最大效果。在一個I/O等待是最嚴重事件的系統中,調優的目標應該是physicalIOs最多的SQL語句。
在STATSPACK報告中,沒有完整的SQL語句,可以使用報告中的HashValue經過下面語句從數據庫中查到:
SELECT sql_text
FROM stats$sqltext
WHERE hash_value = &hash_value
ORDER BY piece;
Elapsed Time (s) |
CPU Time (s) |
Executions |
Elap per Exec (s) |
% Total DB Time |
SQL Id |
SQL Module |
SQL Text |
93 |
57 |
1 |
93.50 |
14.10 |
cuidmain@HPGICCI1 (TNS V1-V3) |
insert into CUID select CUID_... |
|
76 |
75 |
172,329 |
0.00 |
11.52 |
load_fnsact@HPGICCI1 (TNS V1-V3) |
insert into ICCICCS values (:... |
|
58 |
42 |
1 |
58.04 |
8.75 |
cumimain@HPGICCI1 (TNS V1-V3) |
insert into CUMI select CUSV_... |
|
51 |
42 |
1 |
50.93 |
7.68 |
cusmmain@HPGICCI1 (TNS V1-V3) |
insert into CUSM select CUSM_... |
|
38 |
36 |
166,069 |
0.00 |
5.67 |
|
select c.name, u.name from co... |
|
35 |
3 |
1 |
35.00 |
5.28 |
SQL*Plus |
SELECT F.TABLESPACE_NAME, TO_... |
|
23 |
23 |
172,329 |
0.00 |
3.46 |
load_fnsact@HPGICCI1 (TNS V1-V3) |
insert into iccifnsact values... |
|
15 |
11 |
5 |
2.98 |
2.25 |
|
DECLARE job BINARY_INTEGER := ... |
|
14 |
14 |
172,983 |
0.00 |
2.16 |
load_fnsact@HPGICCI1 (TNS V1-V3) |
update ICCIFNSACT set BORM_AD... |
|
13 |
13 |
172,337 |
0.00 |
2.00 |
load_oldnewact@HPGICCI1 (TNS V1-V3) |
insert into OLDNEWACT values ... |
|
13 |
13 |
166,051 |
0.00 |
1.89 |
icci_migact@HPGICCI1 (TNS V1-V3) |
insert into ICCICCS values (:... |
|
10 |
4 |
1 |
9.70 |
1.46 |
cuidmain@HPGICCI1 (TNS V1-V3) |
select CUID_CUST_NO , CUID_ID_... |
|
10 |
8 |
5 |
1.91 |
1.44 |
|
INSERT INTO STATS$SGA_TARGET_A... |
|
8 |
8 |
172,329 |
0.00 |
1.25 |
load_fnsact@HPGICCI1 (TNS V1-V3) |
update ICCICCS set CCSMAXOVER... |
|
8 |
8 |
172,983 |
0.00 |
1.16 |
load_fnsact@HPGICCI1 (TNS V1-V3) |
select * from ICCIPRODCODE wh... |
CPU Time (s) |
Elapsed Time (s) |
Executions |
CPU per Exec (s) |
% Total DB Time |
SQL Id |
SQL Module |
SQL Text |
75 |
76 |
172,329 |
0.00 |
11.52 |
load_fnsact@HPGICCI1 (TNS V1-V3) |
insert into ICCICCS values (:... |
|
57 |
93 |
1 |
57.31 |
14.10 |
cuidmain@HPGICCI1 (TNS V1-V3) |
insert into CUID select CUID_... |
|
42 |
51 |
1 |
42.43 |
7.68 |
cusmmain@HPGICCI1 (TNS V1-V3) |
insert into CUSM select CUSM_... |
|
42 |
58 |
1 |
42.01 |
8.75 |
cumimain@HPGICCI1 (TNS V1-V3) |
insert into CUMI select CUSV_... |
|
36 |
38 |
166,069 |
0.00 |
5.67 |
|
select c.name, u.name from co... |
|
23 |
23 |
172,329 |
0.00 |
3.46 |
load_fnsact@HPGICCI1 (TNS V1-V3) |
insert into iccifnsact values... |
|
14 |
14 |
172,983 |
0.00 |
2.16 |
load_fnsact@HPGICCI1 (TNS V1-V3) |
update ICCIFNSACT set BORM_AD... |
|
13 |
13 |
172,337 |
0.00 |
2.00 |
load_oldnewact@HPGICCI1 (TNS V1-V3) |
insert into OLDNEWACT values ... |
|
13 |
13 |
166,051 |
0.00 |
1.89 |
icci_migact@HPGICCI1 (TNS V1-V3) |
insert into ICCICCS values (:... |
|
11 |
15 |
5 |
2.23 |
2.25 |
|
DECLARE job BINARY_INTEGER := ... |
|
8 |
8 |
172,329 |
0.00 |
1.25 |
load_fnsact@HPGICCI1 (TNS V1-V3) |
update ICCICCS set CCSMAXOVER... |
|
8 |
10 |
5 |
1.60 |
1.44 |
|
INSERT INTO STATS$SGA_TARGET_A... |
|
8 |
8 |
172,983 |
0.00 |
1.16 |
load_fnsact@HPGICCI1 (TNS V1-V3) |
select * from ICCIPRODCODE wh... |
|
4 |
10 |
1 |
3.54 |
1.46 |
cuidmain@HPGICCI1 (TNS V1-V3) |
select CUID_CUST_NO , CUID_ID_... |
|
3 |
35 |
1 |
3.13 |
5.28 |
SQL*Plus |
SELECT F.TABLESPACE_NAME, TO_... |
This section sorts SQL statements by buffer gets, i.e. by how many logical I/Os each performed. Note that a PL/SQL unit's buffer gets include those of every SQL statement the block executed, so PL/SQL procedures often appear at the top of this list, since the gets of their individual statements are summed. Buffer gets is a cumulative value, so a large number does not by itself mean the statement has a performance problem. A useful check is to compare a statement's buffer gets with its physical reads: if the two are close, the statement almost certainly has a problem, and the execution plan will show why the physical reads are so high. Also watch the gets per exec value: if it is large, the statement may be using a poor index or an inappropriate join.
One more note: heavy logical reads usually come with high CPU consumption. So when system CPU is pegged near 100%, SQL statements are very often the cause, and that is the time to examine the statements with large logical reads listed here.
SELECT *
FROM ( SELECT SUBSTR (sql_text, 1, 40) sql,
buffer_gets,
executions,
buffer_gets / executions "Gets/Exec",
hash_value,
address
FROM v$sqlarea
WHERE buffer_gets > 0 AND executions > 0
ORDER BY buffer_gets DESC)
WHERE ROWNUM <= 10;
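The buffer gets vs. physical reads comparison described above can also be done in a single pass over v$sqlarea — a sketch; the 10,000-gets floor and the 0.8 ratio are arbitrary assumptions, not recommended values:

```sql
SELECT SUBSTR (sql_text, 1, 40) sql,
       buffer_gets,
       disk_reads,
       ROUND (disk_reads / buffer_gets, 2) "Reads/Gets"
  FROM v$sqlarea
 WHERE buffer_gets > 10000
   AND disk_reads / buffer_gets > 0.8   -- reads close to gets: suspicious
 ORDER BY buffer_gets DESC;
```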
Buffer Gets |
Executions |
Gets per Exec |
%Total |
CPU Time (s) |
Elapsed Time (s) |
SQL Id |
SQL Module |
SQL Text |
3,305,363 |
172,329 |
19.18 |
19.85 |
74.57 |
76.41 |
load_fnsact@HPGICCI1 (TNS V1-V3) |
insert into ICCICCS values (:... |
|
2,064,414 |
1 |
2,064,414.00 |
12.40 |
57.31 |
93.50 |
cuidmain@HPGICCI1 (TNS V1-V3) |
insert into CUID select CUID_... |
|
1,826,869 |
166,069 |
11.00 |
10.97 |
35.84 |
37.60 |
|
select c.name, u.name from co... |
|
1,427,648 |
172,337 |
8.28 |
8.58 |
12.97 |
13.29 |
load_oldnewact@HPGICCI1 (TNS V1-V3) |
insert into OLDNEWACT values ... |
|
1,278,667 |
172,329 |
7.42 |
7.68 |
22.85 |
22.94 |
load_fnsact@HPGICCI1 (TNS V1-V3) |
insert into iccifnsact values... |
|
1,216,367 |
1 |
1,216,367.00 |
7.31 |
42.43 |
50.93 |
cusmmain@HPGICCI1 (TNS V1-V3) |
insert into CUSM select CUSM_... |
|
1,107,305 |
1 |
1,107,305.00 |
6.65 |
42.01 |
58.04 |
cumimain@HPGICCI1 (TNS V1-V3) |
insert into CUMI select CUSV_... |
|
898,868 |
172,983 |
5.20 |
5.40 |
14.28 |
14.34 |
load_fnsact@HPGICCI1 (TNS V1-V3) |
update ICCIFNSACT set BORM_AD... |
|
711,450 |
166,051 |
4.28 |
4.27 |
12.52 |
12.55 |
icci_migact@HPGICCI1 (TNS V1-V3) |
insert into ICCICCS values (:... |
|
692,996 |
172,329 |
4.02 |
4.16 |
8.31 |
8.31 |
load_fnsact@HPGICCI1 (TNS V1-V3) |
update ICCICCS set CCSMAXOVER... |
|
666,748 |
166,052 |
4.02 |
4.00 |
6.36 |
6.36 |
icci_migact@HPGICCI1 (TNS V1-V3) |
select NEWACTNO into :b0 from... |
|
345,357 |
172,983 |
2.00 |
2.07 |
7.70 |
7.71 |
load_fnsact@HPGICCI1 (TNS V1-V3) |
select * from ICCIPRODCODE wh... |
|
231,756 |
51,633 |
4.49 |
1.39 |
5.75 |
5.83 |
load_fnsact@HPGICCI1 (TNS V1-V3) |
insert into ICCIRPYV values (... |
This section sorts SQL statements by physical reads, showing the SQL responsible for most of the system's read activity, i.e. physical I/O. When the system has an I/O bottleneck, focus on the statements doing the most I/O here.
SELECT *
FROM ( SELECT SUBSTR (sql_text, 1, 40) sql,
disk_reads,
executions,
disk_reads / executions "Reads/Exec",
hash_value,
address
FROM v$sqlarea
WHERE disk_reads > 0 AND executions > 0
ORDER BY disk_reads DESC)
WHERE ROWNUM <= 10;
Physical Reads |
Executions |
Reads per Exec |
%Total |
CPU Time (s) |
Elapsed Time (s) |
SQL Id |
SQL Module |
SQL Text |
66,286 |
1 |
66,286.00 |
20.54 |
57.31 |
93.50 |
cuidmain@HPGICCI1 (TNS V1-V3) |
insert into CUID select CUID_... |
|
50,646 |
1 |
50,646.00 |
15.70 |
3.54 |
9.70 |
cuidmain@HPGICCI1 (TNS V1-V3) |
select CUID_CUST_NO , CUID_ID_... |
|
24,507 |
1 |
24,507.00 |
7.59 |
42.01 |
58.04 |
cumimain@HPGICCI1 (TNS V1-V3) |
insert into CUMI select CUSV_... |
|
21,893 |
1 |
21,893.00 |
6.78 |
42.43 |
50.93 |
cusmmain@HPGICCI1 (TNS V1-V3) |
insert into CUSM select CUSM_... |
|
19,761 |
1 |
19,761.00 |
6.12 |
2.14 |
6.04 |
cumimain@HPGICCI1 (TNS V1-V3) |
select CUSV_CUST_NO from CUMI... |
|
19,554 |
1 |
19,554.00 |
6.06 |
1.27 |
3.83 |
SQL*Plus |
select count(*) from CUSVAA_T... |
|
6,342 |
1 |
6,342.00 |
1.97 |
3.13 |
35.00 |
SQL*Plus |
SELECT F.TABLESPACE_NAME, TO_... |
|
4,385 |
1 |
4,385.00 |
1.36 |
1.59 |
2.43 |
cusmmain@HPGICCI1 (TNS V1-V3) |
select CUSM_CUST_ACCT_NO from... |
|
63 |
5 |
12.60 |
0.02 |
11.17 |
14.91 |
|
DECLARE job BINARY_INTEGER := ... |
|
35 |
1 |
35.00 |
0.01 |
0.08 |
0.67 |
SQL*Plus |
BEGIN dbms_workload_repository... |
This section shows the SQL executed most often during the interval. It can be useful for isolating frequently executed queries and asking whether some change in logic could avoid running them so often. Perhaps a query is executing inside a loop when it could run once outside it; a simple algorithmic change may cut the number of executions dramatically. Even if a statement runs blindingly fast, anything executed millions of times starts to consume serious time.
SELECT *
FROM ( SELECT SUBSTR (sql_text, 1, 40) sql,
executions,
rows_processed,
rows_processed / executions "Rows/Exec",
hash_value,
address
FROM v$sqlarea
WHERE executions > 0
ORDER BY executions DESC)
WHERE ROWNUM <= 10;
Executions |
Rows Processed |
Rows per Exec |
CPU per Exec (s) |
Elap per Exec (s) |
SQL Id |
SQL Module |
SQL Text |
172,983 |
172,329 |
1.00 |
0.00 |
0.00 |
load_fnsact@HPGICCI1 (TNS V1-V3) |
select * from ICCIPRODCODE wh... |
|
172,983 |
172,329 |
1.00 |
0.00 |
0.00 |
load_fnsact@HPGICCI1 (TNS V1-V3) |
update ICCIFNSACT set BORM_AD... |
|
172,337 |
172,337 |
1.00 |
0.00 |
0.00 |
load_oldnewact@HPGICCI1 (TNS V1-V3) |
insert into OLDNEWACT values ... |
|
172,329 |
172,329 |
1.00 |
0.00 |
0.00 |
load_fnsact@HPGICCI1 (TNS V1-V3) |
insert into iccifnsact values... |
|
172,329 |
172,329 |
1.00 |
0.00 |
0.00 |
load_fnsact@HPGICCI1 (TNS V1-V3) |
update ICCICCS set CCSMAXOVER... |
|
172,329 |
6,286 |
0.04 |
0.00 |
0.00 |
load_fnsact@HPGICCI1 (TNS V1-V3) |
insert into ICCICCS values (:... |
|
166,069 |
166,069 |
1.00 |
0.00 |
0.00 |
|
select c.name, u.name from co... |
|
166,052 |
166,052 |
1.00 |
0.00 |
0.00 |
icci_migact@HPGICCI1 (TNS V1-V3) |
select NEWACTNO into :b0 from... |
|
166,051 |
166,051 |
1.00 |
0.00 |
0.00 |
icci_migact@HPGICCI1 (TNS V1-V3) |
insert into ICCICCS values (:... |
|
51,740 |
51,740 |
1.00 |
0.00 |
0.00 |
load_fnsact@HPGICCI1 (TNS V1-V3) |
select count(*) into :b0 fro... |
|
51,633 |
51,633 |
1.00 |
0.00 |
0.00 |
load_fnsact@HPGICCI1 (TNS V1-V3) |
insert into ICCIRPYV values (... |
This section compares PARSE calls with EXECUTIONS. If parse_calls/executions > 1, the statement often has a problem: bind variables are not being used, the shared pool is too small, cursor_sharing is set to exact, session_cached_cursors is not set, and so on.
SELECT *
FROM ( SELECT SUBSTR (sql_text, 1, 40) sql,
parse_calls,
executions,
hash_value,
address
FROM v$sqlarea
WHERE parse_calls > 0
ORDER BY parse_calls DESC)
WHERE ROWNUM <= 10;
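Whether soft parses are being absorbed by the session cursor cache can be checked from the system statistics — a sketch using the standard v$sysstat statistic names:

```sql
SELECT name, value
  FROM v$sysstat
 WHERE name IN ('parse count (total)', 'parse count (hard)',
                'session cursor cache hits', 'execute count');
```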
Parse Calls |
Executions |
% Total Parses |
SQL Id |
SQL Module |
SQL Text |
166,069 |
166,069 |
90.86 |
|
select c.name, u.name from co... |
|
6,304 |
6,304 |
3.45 |
|
select type#, blocks, extents,... |
|
2,437 |
2,438 |
1.33 |
|
select file# from file$ where ... |
|
1,568 |
1,568 |
0.86 |
|
update seg$ set type#=:4, bloc... |
|
1,554 |
1,554 |
0.85 |
|
update tsq$ set blocks=:3, max... |
|
444 |
444 |
0.24 |
|
select blocks, maxblocks, gran... |
|
421 |
421 |
0.23 |
|
lock table sys.mon_mods$ in ex... |
|
421 |
421 |
0.23 |
|
update sys.mon_mods$ set inser... |
|
86 |
86 |
0.05 |
|
INSERT INTO sys.wri$_adv_messa... |
|
81 |
81 |
0.04 |
|
SELECT sys.wri$_adv_seq_msggro... |
No data exists for this section of the report.
This section sorts statements by the amount of shared memory they occupy.
SELECT *
FROM ( SELECT SUBSTR (sql_text, 1, 40) sql,
sharable_mem,
executions,
hash_value,
address
FROM v$sqlarea
WHERE sharable_mem > 1048576
ORDER BY sharable_mem DESC)
WHERE ROWNUM <= 10;
Running Time top 10 SQL
SELECT *
FROM ( SELECT t.sql_fulltext,
(t.last_active_time
- TO_DATE (t.first_load_time, 'yyyy-mm-dd hh24:mi:ss'))
* 24
* 60,
disk_reads,
buffer_gets,
rows_processed,
t.last_active_time,
t.last_load_time,
t.first_load_time
FROM v$sqlarea t
ORDER BY t.first_load_time DESC)
WHERE ROWNUM < 10;
No data exists for this section of the report.
This section sorts SQL statements by version count. Statements with identical text but different properties — different object owner, different session optimizer mode, different type, length, or bind variables, and so on — cannot be shared, so multiple versions of them exist in the cache, which of course consumes extra resources.
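High-version-count statements can be pulled straight from v$sqlarea — a sketch; the version_count threshold of 20 is an arbitrary assumption:

```sql
SELECT SUBSTR (sql_text, 1, 40) sql,
       version_count,
       executions,
       hash_value,
       address
  FROM v$sqlarea
 WHERE version_count > 20
 ORDER BY version_count DESC;
```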
Cluster Wait Time (s) |
CWT % of Elapsd Time |
Elapsed Time(s) |
CPU Time(s) |
Executions |
SQL Id |
SQL Module |
SQL Text |
10.96 |
11.72 |
93.50 |
57.31 |
1 |
cuidmain@HPGICCI1 (TNS V1-V3) |
insert into CUID select CUID_... |
|
4.21 |
7.25 |
58.04 |
42.01 |
1 |
cumimain@HPGICCI1 (TNS V1-V3) |
insert into CUMI select CUSV_... |
|
3.62 |
7.12 |
50.93 |
42.43 |
1 |
cusmmain@HPGICCI1 (TNS V1-V3) |
insert into CUSM select CUSM_... |
|
2.39 |
6.35 |
37.60 |
35.84 |
166,069 |
|
select c.name, u.name from co... |
|
2.38 |
3.12 |
76.41 |
74.57 |
172,329 |
load_fnsact@HPGICCI1 (TNS V1-V3) |
insert into ICCICCS values (:... |
|
1.64 |
16.91 |
9.70 |
3.54 |
1 |
cuidmain@HPGICCI1 (TNS V1-V3) |
select CUID_CUST_NO , CUID_ID_... |
|
1.06 |
3.02 |
35.00 |
3.13 |
1 |
SQL*Plus |
SELECT F.TABLESPACE_NAME, TO_... |
|
0.83 |
13.76 |
6.04 |
2.14 |
1 |
cumimain@HPGICCI1 (TNS V1-V3) |
select CUSV_CUST_NO from CUMI... |
|
0.66 |
87.90 |
0.75 |
0.42 |
444 |
|
select blocks, maxblocks, gran... |
|
0.50 |
13.01 |
3.83 |
1.27 |
1 |
SQL*Plus |
select count(*) from CUSVAA_T... |
|
0.50 |
51.75 |
0.96 |
0.79 |
1,554 |
|
update tsq$ set blocks=:3, max... |
|
0.33 |
91.11 |
0.36 |
0.33 |
187 |
|
select obj#, type#, ctime, mti... |
|
0.33 |
2.47 |
13.29 |
12.97 |
172,337 |
load_oldnewact@HPGICCI1 (TNS V1-V3) |
insert into OLDNEWACT values ... |
|
0.29 |
1.26 |
22.94 |
22.85 |
172,329 |
load_fnsact@HPGICCI1 (TNS V1-V3) |
insert into iccifnsact values... |
|
0.25 |
10.14 |
2.43 |
1.59 |
1 |
cusmmain@HPGICCI1 (TNS V1-V3) |
select CUSM_CUST_ACCT_NO from... |
|
0.21 |
27.92 |
0.74 |
0.74 |
1,568 |
|
update seg$ set type#=:4, bloc... |
|
0.20 |
3.49 |
5.83 |
5.75 |
51,633 |
load_fnsact@HPGICCI1 (TNS V1-V3) |
insert into ICCIRPYV values (... |
|
0.17 |
1.39 |
12.55 |
12.52 |
166,051 |
icci_migact@HPGICCI1 (TNS V1-V3) |
insert into ICCICCS values (:... |
|
0.16 |
57.64 |
0.28 |
0.24 |
39 |
cusvaamain@HPGICCI1 (TNS V1-V3) |
BEGIN BEGIN IF (xdb.DBMS... |
|
0.14 |
74.58 |
0.19 |
0.14 |
121 |
|
select o.owner#, o.name, o.nam... |
|
0.11 |
64.72 |
0.18 |
0.15 |
80 |
cusvaamain@HPGICCI1 (TNS V1-V3) |
SELECT /*+ ALL_ROWS */ COUNT(*... |
|
0.11 |
94.54 |
0.12 |
0.01 |
17 |
|
delete from con$ where owner#=... |
|
0.11 |
80.26 |
0.14 |
0.14 |
327 |
|
select intcol#, nvl(pos#, 0), ... |
|
0.08 |
19.20 |
0.42 |
0.24 |
1 |
|
begin prvt_hdm.auto_execute( :... |
|
0.07 |
54.97 |
0.13 |
0.13 |
83 |
|
select i.obj#, i.ts#, i.file#,... |
|
0.06 |
5.22 |
1.13 |
0.72 |
77 |
|
select obj#, type#, flags, ... |
|
0.06 |
86.50 |
0.06 |
0.06 |
45 |
|
select owner#, name from con$... |
|
0.06 |
8.19 |
0.67 |
0.08 |
1 |
SQL*Plus |
BEGIN dbms_workload_repository... |
|
0.04 |
75.69 |
0.06 |
0.06 |
87 |
|
select pos#, intcol#, col#, sp... |
|
0.04 |
48.05 |
0.09 |
0.07 |
7 |
|
select file#, block# from seg... |
|
0.04 |
8.84 |
0.40 |
0.40 |
6,304 |
|
select type#, blocks, extents,... |
|
0.03 |
28.15 |
0.12 |
0.12 |
49 |
|
delete from RecycleBin$ ... |
|
0.03 |
66.23 |
0.05 |
0.05 |
85 |
|
select t.ts#, t.file#, t.block... |
|
0.03 |
67.03 |
0.05 |
0.05 |
38 |
DBMS_SCHEDULER |
update obj$ set obj#=:6, type#... |
|
0.02 |
66.73 |
0.04 |
0.04 |
86 |
|
INSERT INTO sys.wri$_adv_messa... |
|
0.02 |
26.94 |
0.09 |
0.09 |
38 |
|
delete from RecycleBin$ ... |
|
0.02 |
76.76 |
0.03 |
0.03 |
51 |
|
select con# from con$ where ow... |
|
0.02 |
51.91 |
0.05 |
0.05 |
84 |
|
select name, intcol#, segcol#,... |
|
0.02 |
0.15 |
14.91 |
11.17 |
5 |
|
DECLARE job BINARY_INTEGER := ... |
|
0.02 |
2.12 |
1.00 |
0.99 |
8,784 |
load_fnsact@HPGICCI1 (TNS V1-V3) |
update ICCIFNSACT set BORM_FA... |
|
0.02 |
53.82 |
0.03 |
0.03 |
39 |
cusvaamain@HPGICCI1 (TNS V1-V3) |
SELECT count(*) FROM user_poli... |
|
0.01 |
0.10 |
14.34 |
14.28 |
172,983 |
load_fnsact@HPGICCI1 (TNS V1-V3) |
update ICCIFNSACT set BORM_AD... |
|
0.01 |
8.29 |
0.16 |
0.13 |
421 |
|
update sys.mon_mods$ set inser... |
|
0.01 |
1.65 |
0.56 |
0.54 |
2 |
|
insert into wrh$_latch (snap... |
|
0.01 |
22.33 |
0.04 |
0.02 |
26 |
load_curmmast@HPGICCI1 (TNS V1-V3) |
insert into ICCICURMMAST valu... |
|
0.01 |
0.08 |
7.71 |
7.70 |
172,983 |
load_fnsact@HPGICCI1 (TNS V1-V3) |
select * from ICCIPRODCODE wh... |
For the suspicious SQL statements appearing above, we can inspect their execution plans and then check whether the related indexes and other structures are reasonable.
To view the execution plan for a statement:
SELECT id,
parent_id,
LPAD (' ', 4 * (LEVEL - 1))
|| operation
|| ' '
|| options
|| ' '
|| object_name
"Execution plan",
cost,
CARDINALITY,
bytes
FROM (SELECT p.*
FROM v$sql_plan p, v$sql s
WHERE p.address = s.ADDRESS
AND p.hash_value = s.HASH_VALUE
AND p.hash_value = '&hash_value')
CONNECT BY PRIOR id = parent_id
START WITH id = 0;
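On 10g, DBMS_XPLAN is usually an easier way to get the same plan, using the SQL Id printed in the report tables — a sketch:

```sql
SELECT *
  FROM TABLE (DBMS_XPLAN.DISPLAY_CURSOR ('&sql_id', NULL, 'TYPICAL'));
```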
Examining, analyzing, and tuning the indexes themselves is beyond the scope of this note.
SQL Id |
SQL Text |
select obj#, type#, ctime, mtime, stime, status, dataobj#, flags, oid$, spare1, spare2 from obj$ where owner#=:1 and name=:2 and namespace=:3 and remoteowner is null and linkname is null and subname is null |
|
select obj#, type#, flags, related, bo, purgeobj, con# from RecycleBin$ where ts#=:1 and to_number(bitand(flags, 16)) = 16 order by dropscn |
|
delete from RecycleBin$ where purgeobj=:1 |
|
select file#, block# from seg$ where type# = 3 and ts# = :1 |
|
select CUID_CUST_NO , CUID_ID_TYPE , CUID_ID_RECNO from CUID_TMP where CHGFLAG='D' |
|
select blocks, maxblocks, grantor#, priv1, priv2, priv3 from tsq$ where ts#=:1 and user#=:2 |
|
INSERT INTO STATS$SGA_TARGET_ADVICE ( SNAP_ID , DBID , INSTANCE_NUMBER , SGA_SIZE , SGA_SIZE_FACTOR , ESTD_DB_TIME , ESTD_DB_TIME_FACTOR , ESTD_PHYSICAL_READS ) SELECT :B3 , :B2 , :B1 , SGA_SIZE , SGA_SIZE_FACTOR , ESTD_DB_TIME , ESTD_DB_TIME_FACTOR , ESTD_PHYSICAL_READS FROM V$SGA_TARGET_ADVICE |
|
insert into iccifnsact values (:b0, :b1, :b2, null , null , :b3, :b4, GREATEST(:b5, :b6), null , :b7, :b8, null , :b9, :b10, :b6, null , null , null , null , null , :b12, null , null , null , :b13, :b14, null , null , :b15, :b16, :b17) |
|
select t.ts#, t.file#, t.block#, nvl(t.bobj#, 0), nvl(t.tab#, 0), t.intcols, nvl(t.clucols, 0), t.audit$, t.flags, t.pctfree$, t.pctused$, t.initrans, t.maxtrans, t.rowcnt, t.blkcnt, t.empcnt, t.avgspc, t.chncnt, t.avgrln, t.analyzetime, t.samplesize, t.cols, t.property, nvl(t.degree, 1), nvl(t.instances, 1), t.avgspc_flb, t.flbcnt, t.kernelcols, nvl(t.trigflag, 0), nvl(t.spare1, 0), nvl(t.spare2, 0), t.spare4, t.spare6, ts.cachedblk, ts.cachehit, ts.logicalread from tab$ t, tab_stats$ ts where t.obj#= :1 and t.obj# = ts.obj# (+) |
|
BEGIN dbms_workload_repository.create_snapshot; END; |
|
select type#, blocks, extents, minexts, maxexts, extsize, extpct, user#, iniexts, NVL(lists, 65535), NVL(groups, 65535), cachehint, hwmincr, NVL(spare1, 0), NVL(scanhint, 0) from seg$ where ts#=:1 and file#=:2 and block#=:3 |
|
lock table sys.mon_mods$ in exclusive mode nowait |
|
update ICCICCS set CCSMAXOVERDUE=GREATEST(:b0, CCSMAXOVERDUE) where FNSACTNO=:b1 |
|
select count(*) from CUSVAA_TMP |
|
INSERT INTO sys.wri$_adv_message_groups (task_id, id, seq, message#, fac, hdr, lm, nl, p1, p2, p3, p4, p5) VALUES (:1, :2, :3, :4, :5, :6, :7, :8, :9, :10, :11, :12, :13) |
|
insert into ICCICURMMAST values (:b0, :b1, :b2) |
|
insert into ICCIRPYV values (:b0, :b1, :b2, :b3, :b4, :b5, :b6, :b7, :b8, :b9, :b10, :b11, :b12, :b13, :b14, :b15, :b16, :b17, :b18, :b19, :b20, :b21, :b22, :b23, :b24, :b25, :b26, :b27, :b28, :b29, :b30, :b31, :b32, :b33, :b34, :b35, :b36, :b37, :b38, :b39, :b40, :b41, :b42, :b43, :b44, :b45, :b46, :b47, :b48, :b49, :b50, :b51) |
|
insert into ICCICCS values (:b0, '////////////////////////', 0, 0, 0, 0, 0, ' ', 0, 0, 0, ' ', '0', null ) |
|
update ICCIFNSACT set BORM_FACILITY_NO=:b0 where BORM_MEMB_CUST_AC=:b1 |
|
select intcol#, nvl(pos#, 0), col#, nvl(spare1, 0) from ccol$ where con#=:1 |
|
insert into CUMI select CUSV_CUST_NO , CUSV_EDUCATION_CODE , CHGDATE from CUMI_TMP where CHGFLAG<>'D' |
|
select * from ICCIPRODCODE where PRODCODE=to_char(:b0) |
|
select o.owner#, o.name, o.namespace, o.remoteowner, o.linkname, o.subname, o.dataobj#, o.flags from obj$ o where o.obj#=:1 |
|
select pos#, intcol#, col#, spare1, bo#, spare2 from icol$ where obj#=:1 |
|
SELECT F.TABLESPACE_NAME, TO_CHAR ((T.TOTAL_SPACE - F.FREE_SPACE), '999, 999') "USED (MB)", TO_CHAR (F.FREE_SPACE, '999, 999') "FREE (MB)", TO_CHAR (T.TOTAL_SPACE, '999, 999') "TOTAL (MB)", TO_CHAR ((ROUND ((F.FREE_SPACE/T.TOTAL_SPACE)*100)), '999')||' %' PER_FREE FROM ( SELECT TABLESPACE_NAME, ROUND (SUM (BLOCKS*(SELECT VALUE/1024 FROM V$PARAMETER WHERE NAME = 'db_block_size')/1024) ) FREE_SPACE FROM DBA_FREE_SPACE GROUP BY TABLESPACE_NAME ) F, ( SELECT TABLESPACE_NAME, ROUND (SUM (BYTES/1048576)) TOTAL_SPACE FROM DBA_DATA_FILES GROUP BY TABLESPACE_NAME ) T WHERE F.TABLESPACE_NAME = T.TABLESPACE_NAME |
|
SELECT /*+ ALL_ROWS */ COUNT(*) FROM ALL_POLICIES V WHERE V.OBJECT_OWNER = :B3 AND V.OBJECT_NAME = :B2 AND (POLICY_NAME LIKE '%xdbrls%' OR POLICY_NAME LIKE '%$xd_%') AND V.FUNCTION = :B1 |
|
select c.name, u.name from con$ c, cdef$ cd, user$ u where c.con# = cd.con# and cd.enabled = :1 and c.owner# = u.user# |
|
select i.obj#, i.ts#, i.file#, i.block#, i.intcols, i.type#, i.flags, i.property, i.pctfree$, i.initrans, i.maxtrans, i.blevel, i.leafcnt, i.distkey, i.lblkkey, i.dblkkey, i.clufac, i.cols, i.analyzetime, i.samplesize, i.dataobj#, nvl(i.degree, 1), nvl(i.instances, 1), i.rowcnt, mod(i.pctthres$, 256), i.indmethod#, i.trunccnt, nvl(c.unicols, 0), nvl(c.deferrable#+c.valid#, 0), nvl(i.spare1, i.intcols), i.spare4, i.spare2, i.spare6, decode(i.pctthres$, null, null, mod(trunc(i.pctthres$/256), 256)), ist.cachedblk, ist.cachehit, ist.logicalread from ind$ i, ind_stats$ ist, (select enabled, min(cols) unicols, min(to_number(bitand(defer, 1))) deferrable#, min(to_number(bitand(defer, 4))) valid# from cdef$ where obj#=:1 and enabled > 1 group by enabled) c where i.obj#=c.enabled(+) and i.obj# = ist.obj#(+) and i.bo#=:1 order by i.obj# |
|
select NEWACTNO into :b0 from OLDNEWACT where OLDACTNO=:b1 |
|
update ICCIFNSACT set BORM_ADV_DATE=:b0, BOIS_MATURITY_DATE=:b1, BOIS_UNPD_BAL=:b2, BOIS_UNPD_INT=:b3, BOIS_BAL_FINE=:b4, BOIS_INT_FINE=:b5, BOIS_FINE_FINE=:b6, BORM_LOAN_TRM=:b7, BORM_FIVE_STAT=:b8, BOIS_ARREARS_CTR=:b9, BOIS_ARREARS_SUM=:b10 where BORM_MEMB_CUST_AC=:b11 |
|
select name, intcol#, segcol#, type#, length, nvl(precision#, 0), decode(type#, 2, nvl(scale, -127/*MAXSB1MINAL*/), 178, scale, 179, scale, 180, scale, 181, scale, 182, scale, 183, scale, 231, scale, 0), null$, fixedstorage, nvl(deflength, 0), default$, rowid, col#, property, nvl(charsetid, 0), nvl(charsetform, 0), spare1, spare2, nvl(spare3, 0) from col$ where obj#=:1 order by intcol# |
|
insert into wrh$_latch (snap_id, dbid, instance_number, latch_hash, level#, gets, misses, sleeps, immediate_gets, immediate_misses, spin_gets, sleep1, sleep2, sleep3, sleep4, wait_time) select :snap_id, :dbid, :instance_number, hash, level#, gets, misses, sleeps, immediate_gets, immediate_misses, spin_gets, sleep1, sleep2, sleep3, sleep4, wait_time from v$latch order by hash |
|
update seg$ set type#=:4, blocks=:5, extents=:6, minexts=:7, maxexts=:8, extsize=:9, extpct=:10, user#=:11, iniexts=:12, lists=decode(:13, 65535, NULL, :13), groups=decode(:14, 65535, NULL, :14), cachehint=:15, hwmincr=:16, spare1=DECODE(:17, 0, NULL, :17), scanhint=:18 where ts#=:1 and file#=:2 and block#=:3 |
|
select con# from con$ where owner#=:1 and name=:2 |
|
select owner#, name from con$ where con#=:1 |
|
select CUSV_CUST_NO from CUMI_TMP where CHGFLAG='D' |
|
insert into CUSM select CUSM_CUST_ACCT_NO , CUSM_STAT_POST_ADD_NO , CHGDATE from CUSM_TMP where CHGFLAG<>'D' |
|
update tsq$ set blocks=:3, maxblocks=:4, grantor#=:5, priv1=:6, priv2=:7, priv3=:8 where ts#=:1 and user#=:2 |
|
delete from RecycleBin$ where bo=:1 |
|
SELECT count(*) FROM user_policies o WHERE o.object_name = :tablename AND (policy_name LIKE '%xdbrls%' OR policy_name LIKE '%$xd_%') AND o.function='CHECKPRIVRLS_SELECTPF' |
|
select file# from file$ where ts#=:1 |
|
update obj$ set obj#=:6, type#=:7, ctime=:8, mtime=:9, stime=:10, status=:11, dataobj#=:13, flags=:14, oid$=:15, spare1=:16, spare2=:17 where owner#=:1 and name=:2 and namespace=:3 and(remoteowner=:4 or remoteowner is null and :4 is null)and(linkname=:5 or linkname is null and :5 is null)and(subname=:12 or subname is null and :12 is null) |
|
select count(*) into :b0 from ICCIFNSACT where BORM_MEMB_CUST_AC=:b1 |
|
delete from con$ where owner#=:1 and name=:2 |
|
insert into ICCICCS values (:b0, :b1, :b2, :b3, :b4, :b5, :b6, :b7, :b8, :b9, :b10, :b11, :b12, :b13) |
|
BEGIN BEGIN IF (xdb.DBMS_XDBZ0.is_hierarchy_enabled_internal(sys.dictionary_obj_owner, sys.dictionary_obj_name, sys.dictionary_obj_owner)) THEN xdb.XDB_PITRIG_PKG.pitrig_truncate(sys.dictionary_obj_owner, sys.dictionary_obj_name); END IF; EXCEPTION WHEN OTHERS THEN null; END; BEGIN IF (xdb.DBMS_XDBZ0.is_hierarchy_enabled_internal(sys.dictionary_obj_owner, sys.dictionary_obj_name, sys.dictionary_obj_owner, xdb.DBMS_XDBZ.IS_ENABLED_RESMETADATA)) THEN xdb.XDB_PITRIG_PKG.pitrig_dropmetadata(sys.dictionary_obj_owner, sys.dictionary_obj_name); END IF; EXCEPTION WHEN OTHERS THEN null; END; END; |
|
select CUSM_CUST_ACCT_NO from CUSM_TMP where CHGFLAG='D' |
|
insert into CUID select CUID_CUST_NO , CUID_ID_MAIN , CUID_ID_TYPE , CUID_ID_RECNO , CUID_ID_NUMBER , CHGDATE from CUID_TMP where CHGFLAG<>'D' |
|
begin prvt_hdm.auto_execute( :db_id, :inst_id, :end_snap ); end; |
|
DECLARE job BINARY_INTEGER := :job; next_date DATE := :mydate; broken BOOLEAN := FALSE; BEGIN statspack.snap; :mydate := next_date; IF broken THEN :b := 1; ELSE :b := 0; END IF; END; |
|
SELECT sys.wri$_adv_seq_msggroup.nextval FROM dual |
|
update sys.mon_mods$ set inserts = inserts + :ins, updates = updates + :upd, deletes = deletes + :del, flags = (decode(bitand(flags, :flag), :flag, flags, flags + :flag)), drop_segments = drop_segments + :dropseg, timestamp = :time where obj# = :objn |
|
insert into OLDNEWACT values (:b0, :b1) |
Back to SQL Statistics
Back to Top
Statistic |
Total |
per Second |
per Trans |
CPU used by this session |
23,388 |
4.95 |
4.18 |
CPU used when call started |
21,816 |
4.61 |
3.90 |
CR blocks created |
2,794 |
0.59 |
0.50 |
Cached Commit SCN referenced |
237,936 |
50.33 |
42.50 |
Commit SCN cached |
3 |
0.00 |
0.00 |
DB time |
583,424 |
123.41 |
104.22 |
DBWR checkpoint buffers written |
402,781 |
85.20 |
71.95 |
DBWR checkpoints |
9 |
0.00 |
0.00 |
DBWR fusion writes |
255 |
0.05 |
0.05 |
DBWR object drop buffers written |
0 |
0.00 |
0.00 |
DBWR thread checkpoint buffers written |
221,341 |
46.82 |
39.54 |
DBWR transaction table writes |
130 |
0.03 |
0.02 |
DBWR undo block writes |
219,272 |
46.38 |
39.17 |
DFO trees parallelized |
16 |
0.00 |
0.00 |
PX local messages recv'd |
40 |
0.01 |
0.01 |
PX local messages sent |
40 |
0.01 |
0.01 |
PX remote messages recv'd |
80 |
0.02 |
0.01 |
PX remote messages sent |
80 |
0.02 |
0.01 |
Parallel operations not downgraded |
16 |
0.00 |
0.00 |
RowCR - row contention |
9 |
0.00 |
0.00 |
RowCR attempts |
14 |
0.00 |
0.00 |
RowCR hits |
5 |
0.00 |
0.00 |
SMON posted for undo segment recovery |
0 |
0.00 |
0.00 |
SMON posted for undo segment shrink |
9 |
0.00 |
0.00 |
SQL*Net roundtrips to/from client |
1,544,063 |
326.62 |
275.82 |
active txn count during cleanout |
276,652 |
58.52 |
49.42 |
application wait time |
1,620 |
0.34 |
0.29 |
auto extends on undo tablespace |
0 |
0.00 |
0.00 |
background checkpoints completed |
7 |
0.00 |
0.00 |
background checkpoints started |
9 |
0.00 |
0.00 |
background timeouts |
21,703 |
4.59 |
3.88 |
branch node splits |
337 |
0.07 |
0.06 |
buffer is not pinned count |
1,377,184 |
291.32 |
246.01 |
buffer is pinned count |
20,996,139 |
4,441.37 |
3,750.65 |
bytes received via SQL*Net from client |
7,381,397,183 |
1,561,408.36 |
1,318,577.56 |
bytes sent via SQL*Net to client |
149,122,035 |
31,544.22 |
26,638.45 |
calls to get snapshot scn: kcmgss |
1,696,712 |
358.91 |
303.09 |
calls to kcmgas |
433,435 |
91.69 |
77.43 |
calls to kcmgcs |
142,482 |
30.14 |
25.45 |
change write time |
4,707 |
1.00 |
0.84 |
cleanout - number of ktugct calls |
282,045 |
59.66 |
50.38 |
cleanouts and rollbacks - consistent read gets |
55 |
0.01 |
0.01 |
cleanouts only - consistent read gets |
2,406 |
0.51 |
0.43 |
cluster key scan block gets |
21,886 |
4.63 |
3.91 |
cluster key scans |
10,540 |
2.23 |
1.88 |
cluster wait time |
2,855 |
0.60 |
0.51 |
commit batch/immediate performed |
294 |
0.06 |
0.05 |
commit batch/immediate requested |
294 |
0.06 |
0.05 |
commit cleanout failures: block lost |
2,227 |
0.47 |
0.40 |
commit cleanout failures: callback failure |
750 |
0.16 |
0.13 |
commit cleanout failures: cannot pin |
4 |
0.00 |
0.00 |
commit cleanouts |
427,610 |
90.45 |
76.39 |
commit cleanouts successfully completed |
424,629 |
89.82 |
75.85 |
commit immediate performed |
294 |
0.06 |
0.05 |
commit immediate requested |
294 |
0.06 |
0.05 |
commit txn count during cleanout |
111,557 |
23.60 |
19.93 |
concurrency wait time |
515 |
0.11 |
0.09 |
consistent changes |
1,716 |
0.36 |
0.31 |
consistent gets |
5,037,471 |
1,065.59 |
899.87 |
From the three values consistent gets, db block gets and physical reads, we can also compute the buffer hit ratio. The formula is: buffer hit ratio = 100*(1 - physical reads / (consistent gets + db block gets)). For example: buffer hit ratio = 100*(1 - 26524/(16616758 + 2941398)) = 99.86 |
|||
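As a cross-check, the same formula applied to the values in the statistics above (physical reads 322,678; consistent gets 5,037,471; db block gets 11,611,321) can be sketched in Python:

```python
# Buffer hit ratio = 100 * (1 - physical reads / (consistent gets + db block gets))
def buffer_hit_ratio(physical_reads, consistent_gets, db_block_gets):
    return 100 * (1 - physical_reads / (consistent_gets + db_block_gets))

# Values taken from the Instance Activity Statistics in this report
ratio = buffer_hit_ratio(322_678, 5_037_471, 11_611_321)
print(round(ratio, 2))  # 98.06
```

Note that consistent gets + db block gets = 16,648,792, which matches the session logical reads statistic further down.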
consistent gets - examination |
2,902,016 |
613.87 |
518.40 |
consistent gets direct |
0 |
0.00 |
0.00 |
consistent gets from cache |
5,037,471 |
1,065.59 |
899.87 |
current blocks converted for CR |
0 |
0.00 |
0.00 |
cursor authentications |
434 |
0.09 |
0.08 |
data blocks consistent reads - undo records applied |
1,519 |
0.32 |
0.27 |
db block changes |
8,594,158 |
1,817.95 |
1,535.22 |
db block gets |
11,611,321 |
2,456.18 |
2,074.19 |
db block gets direct |
1,167,830 |
247.03 |
208.62 |
db block gets from cache |
10,443,491 |
2,209.14 |
1,865.58 |
deferred (CURRENT) block cleanout applications |
20,786 |
4.40 |
3.71 |
dirty buffers inspected |
25,007 |
5.29 |
4.47 |
dirty buffers inspected: the number of dirty (modified) data buffers that were aged out on the LRU list. A non-zero value here indicates that DBWR is not keeping up; if it is greater than 0, consider adding more DBWR processes. |
|||
drop segment calls in space pressure |
0 |
0.00 |
0.00 |
enqueue conversions |
6,734 |
1.42 |
1.20 |
enqueue releases |
595,149 |
125.89 |
106.31 |
enqueue requests |
595,158 |
125.90 |
106.32 |
enqueue timeouts |
9 |
0.00 |
0.00 |
enqueue waits |
7,901 |
1.67 |
1.41 |
exchange deadlocks |
1 |
0.00 |
0.00 |
execute count |
1,675,112 |
354.34 |
299.23 |
free buffer inspected |
536,832 |
113.56 |
95.90 |
This count includes dirty, pinned and busy buffers. If free buffer inspected - dirty buffers inspected - buffer is pinned count is still fairly large, it indicates that many blocks in memory cannot be reused, which will lead to latch contention; the buffer cache should be enlarged. |
|||
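A minimal sketch of that check, using the inspected counts from this report (assuming the formula intends pinned buffers inspected rather than buffer is pinned count, since the latter, 20,996,139, would dwarf the other terms):

```python
free_inspected = 536_832    # free buffer inspected
dirty_inspected = 25_007    # dirty buffers inspected
pinned_inspected = 10       # pinned buffers inspected
# Buffers examined but not immediately reusable for other reasons
not_reusable = free_inspected - dirty_inspected - pinned_inspected
print(not_reusable)  # 511815
```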
free buffer requested |
746,999 |
158.01 |
133.44 |
gc CPU used by this session |
9,099 |
1.92 |
1.63 |
gc cr block build time |
13 |
0.00 |
0.00 |
gc cr block flush time |
143 |
0.03 |
0.03 |
gc cr block receive time |
474 |
0.10 |
0.08 |
gc cr block send time |
36 |
0.01 |
0.01 |
gc cr blocks received |
4,142 |
0.88 |
0.74 |
gc cr blocks served |
10,675 |
2.26 |
1.91 |
gc current block flush time |
23 |
0.00 |
0.00 |
gc current block pin time |
34 |
0.01 |
0.01 |
gc current block receive time |
1,212 |
0.26 |
0.22 |
gc current block send time |
52 |
0.01 |
0.01 |
gc current blocks received |
15,502 |
3.28 |
2.77 |
gc current blocks served |
17,534 |
3.71 |
3.13 |
gc local grants |
405,329 |
85.74 |
72.41 |
gc remote grants |
318,630 |
67.40 |
56.92 |
gcs messages sent |
1,129,094 |
238.84 |
201.70 |
ges messages sent |
90,695 |
19.18 |
16.20 |
global enqueue get time |
1,707 |
0.36 |
0.30 |
global enqueue gets async |
12,731 |
2.69 |
2.27 |
global enqueue gets sync |
190,492 |
40.30 |
34.03 |
global enqueue releases |
190,328 |
40.26 |
34.00 |
global undo segment hints helped |
0 |
0.00 |
0.00 |
global undo segment hints were stale |
0 |
0.00 |
0.00 |
heap block compress |
108,758 |
23.01 |
19.43 |
hot buffers moved to head of LRU |
18,652 |
3.95 |
3.33 |
immediate (CR) block cleanout applications |
2,462 |
0.52 |
0.44 |
immediate (CURRENT) block cleanout applications |
325,184 |
68.79 |
58.09 |
index crx upgrade (positioned) |
4,663 |
0.99 |
0.83 |
index fast full scans (full) |
13 |
0.00 |
0.00 |
index fetch by key |
852,181 |
180.26 |
152.23 |
index scans kdiixs1 |
339,583 |
71.83 |
60.66 |
leaf node 90-10 splits |
34 |
0.01 |
0.01 |
leaf node splits |
106,552 |
22.54 |
19.03 |
lob reads |
11 |
0.00 |
0.00 |
lob writes |
83 |
0.02 |
0.01 |
lob writes unaligned |
83 |
0.02 |
0.01 |
local undo segment hints helped |
0 |
0.00 |
0.00 |
local undo segment hints were stale |
0 |
0.00 |
0.00 |
logons cumulative |
61 |
0.01 |
0.01 |
messages received |
20,040 |
4.24 |
3.58 |
messages sent |
19,880 |
4.21 |
3.55 |
no buffer to keep pinned count |
0 |
0.00 |
0.00 |
no work - consistent read gets |
1,513,070 |
320.06 |
270.29 |
opened cursors cumulative |
183,375 |
38.79 |
32.76 |
parse count (failures) |
1 |
0.00 |
0.00 |
parse count (hard) |
143 |
0.03 |
0.03 |
parse count (total) |
182,780 |
38.66 |
32.65 |
From parse count (hard) and parse count (total), the soft parse ratio can be computed as: 100*(1 - parse count (hard)/parse count (total)) = 100*(1 - 6090/191531) = 96.82 |
|||
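Applied to the parse counts in this report (hard = 143, total = 182,780), the calculation can be sketched as:

```python
# Soft parse ratio = 100 * (1 - hard parses / total parses)
def soft_parse_pct(hard, total):
    return 100 * (1 - hard / total)

print(round(soft_parse_pct(143, 182_780), 2))  # 99.92
```

A soft parse ratio above roughly 95% is generally considered healthy; this system reuses cursors very well.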
parse time cpu |
27 |
0.01 |
0.00 |
parse time elapsed |
338 |
0.07 |
0.06 |
physical read IO requests |
82,815 |
17.52 |
14.79 |
physical read bytes |
2,643,378,176 |
559,161.45 |
472,200.46 |
physical read total IO requests |
98,871 |
20.91 |
17.66 |
physical read total bytes |
2,905,491,456 |
614,607.04 |
519,023.13 |
physical read total multi block requests |
24,089 |
5.10 |
4.30 |
physical reads |
322,678 |
68.26 |
57.64 |
physical reads cache |
213,728 |
45.21 |
38.18 |
physical reads cache prefetch |
191,830 |
40.58 |
34.27 |
physical reads direct |
108,950 |
23.05 |
19.46 |
physical reads direct temporary tablespace |
108,812 |
23.02 |
19.44 |
physical reads prefetch warmup |
0 |
0.00 |
0.00 |
physical write IO requests |
223,456 |
47.27 |
39.92 |
physical write bytes |
14,042,071,040 |
2,970,360.02 |
2,508,408.55 |
physical write total IO requests |
133,835 |
28.31 |
23.91 |
physical write total bytes |
23,114,268,672 |
4,889,428.30 |
4,129,022.63 |
physical write total multi block requests |
116,135 |
24.57 |
20.75 |
physical writes |
1,714,120 |
362.59 |
306.20 |
physical writes direct |
1,276,780 |
270.08 |
228.08 |
physical writes direct (lob) |
0 |
0.00 |
0.00 |
physical writes direct temporary tablespace |
108,812 |
23.02 |
19.44 |
physical writes from cache |
437,340 |
92.51 |
78.12 |
physical writes non checkpoint |
1,673,703 |
354.04 |
298.98 |
pinned buffers inspected |
10 |
0.00 |
0.00 |
prefetch clients - default |
0 |
0.00 |
0.00 |
prefetch warmup blocks aged out before use |
0 |
0.00 |
0.00 |
prefetch warmup blocks flushed out before use |
0 |
0.00 |
0.00 |
prefetched blocks aged out before use |
0 |
0.00 |
0.00 |
process last non-idle time |
4,730 |
1.00 |
0.84 |
queries parallelized |
16 |
0.00 |
0.00 |
recursive calls |
1,654,650 |
350.01 |
295.58 |
recursive cpu usage |
2,641 |
0.56 |
0.47 |
redo blocks written |
8,766,094 |
1,854.32 |
1,565.93 |
redo buffer allocation retries |
24 |
0.01 |
0.00 |
redo entries |
4,707,068 |
995.70 |
840.85 |
redo log space requests |
34 |
0.01 |
0.01 |
redo log space wait time |
50 |
0.01 |
0.01 |
redo ordering marks |
277,042 |
58.60 |
49.49 |
redo size |
4,343,559,400 |
918,805.72 |
775,912.72 |
redo subscn max counts |
2,693 |
0.57 |
0.48 |
redo synch time |
408 |
0.09 |
0.07 |
redo synch writes |
6,984 |
1.48 |
1.25 |
redo wastage |
1,969,620 |
416.64 |
351.84 |
redo write time |
5,090 |
1.08 |
0.91 |
redo writer latching time |
1 |
0.00 |
0.00 |
redo writes |
5,494 |
1.16 |
0.98 |
rollback changes - undo records applied |
166,609 |
35.24 |
29.76 |
rollbacks only - consistent read gets |
1,463 |
0.31 |
0.26 |
rows fetched via callback |
342,159 |
72.38 |
61.12 |
session connect time |
1,461 |
0.31 |
0.26 |
session cursor cache hits |
180,472 |
38.18 |
32.24 |
session logical reads |
16,648,792 |
3,521.77 |
2,974.06 |
session pga memory |
37,393,448 |
7,909.94 |
6,679.79 |
session pga memory max |
45,192,232 |
9,559.64 |
8,072.92 |
session uga memory |
30,067,312,240 |
6,360,225.77 |
5,371,081.14 |
session uga memory max |
61,930,448 |
13,100.33 |
11,062.96 |
shared hash latch upgrades - no wait |
6,364 |
1.35 |
1.14 |
shared hash latch upgrades - wait |
0 |
0.00 |
0.00 |
sorts (disk) |
4 |
0.00 |
0.00 |
Disk sorts should generally not exceed 5% of total sorts. If they do, tune the parameter PGA_AGGREGATE_TARGET or SORT_AREA_SIZE; note that SORT_AREA_SIZE is allocated per user session, while PGA_AGGREGATE_TARGET is a single total across all sessions. |
|||
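For the figures above (sorts (disk) = 4, sorts (memory) = 2,857), the disk-sort percentage works out well under the 5% guideline:

```python
# Disk sort percentage = disk sorts / (disk sorts + memory sorts)
disk, memory = 4, 2_857
pct = 100 * disk / (disk + memory)
print(round(pct, 2))  # 0.14
```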
sorts (memory) |
2,857 |
0.60 |
0.51 |
The number of sorts performed in memory. |
|||
sorts (rows) |
42,379,505 |
8,964.66 |
7,570.47 |
space was found by tune down |
0 |
0.00 |
0.00 |
space was not found by tune down |
0 |
0.00 |
0.00 |
sql area evicted |
7 |
0.00 |
0.00 |
sql area purged |
44 |
0.01 |
0.01 |
steps of tune down ret. in space pressure |
0 |
0.00 |
0.00 |
summed dirty queue length |
35,067 |
7.42 |
6.26 |
switch current to new buffer |
17 |
0.00 |
0.00 |
table fetch by rowid |
680,469 |
143.94 |
121.56 |
These are rows fetched through an index or a where rowid= statement; naturally, the higher this value the better. |
|||
table fetch continued row |
0 |
0.00 |
0.00 |
These are fetches of migrated rows. When row migration is severe, this area needs optimization. To check for migrated rows: 1) run $ORACLE_HOME/rdbms/admin/utlchain.sql; 2) analyze table table_name list chained rows into CHAINED_ROWS; 3) select * from CHAINED_ROWS where table_name='table_name';. To clean them up: Method 1: create table table_name_tmp as select * from table_name where rowid in (select head_rowid from chained_rows); delete from table_name where rowid in (select head_rowid from chained_rows); insert into table_name select * from table_name_tmp;. Method 2: create table table_name_tmp as select * from table_name; truncate table table_name; insert into table_name select * from table_name_tmp;. Method 3: export the table with exp, drop it, then re-import it with imp. Method 4: alter table table_name move tablespace tablespace_name, then rebuild the table's indexes. These four methods remove existing migrated rows, but row migration is very often caused by PCTFREE being set too small, so the PCTFREE parameter should also be adjusted. |
|||
table scan blocks gotten |
790,986 |
167.32 |
141.30 |
table scan rows gotten |
52,989,363 |
11,208.99 |
9,465.77 |
table scans (long tables) |
4 |
0.00 |
0.00 |
Long tables are tables whose size exceeds the _SMALL_TABLE_THRESHOLD fraction of the buffer cache. If a database does many large-table scans, the db file scattered read wait event is likely to be equally prominent. If the per Trans value of table scans (long tables) is greater than 0, you may need to add appropriate indexes to optimize your SQL statements. |
|||
table scans (short tables) |
169,201 |
35.79 |
30.23 |
Short tables are tables whose size is below 2% of the buffer cache (the 2% is defined by the hidden parameter _SMALL_TABLE_THRESHOLD, whose meaning differs between Oracle versions: in 9i and 10g it is 2%, in 8i it was 20 blocks, and in v7 it was 5 blocks). Such tables are preferentially accessed by full table scans and generally do not use indexes. The _SMALL_TABLE_THRESHOLD value can be computed as follows (9i, 8K block size): (db_cache_size/8192)*2%. Note: modifying _SMALL_TABLE_THRESHOLD is a rather dangerous operation. |
|||
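A rough sketch of that calculation for this report's 3,344M default buffer cache (the exact rounding Oracle applies internally may differ):

```python
# _SMALL_TABLE_THRESHOLD estimate (9i, 8K block size):
# (db_cache_size / block_size) * 2%
db_cache_size = 3_344 * 1024 * 1024   # 3,344M default buffer cache, in bytes
block_size = 8192                     # 8K standard block size
threshold_blocks = int((db_cache_size / block_size) * 0.02)
print(threshold_blocks)  # 8560
```

So on this system a table under roughly 8,560 blocks would be treated as "short" and favored for full scans.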
total number of times SMON posted |
259 |
0.05 |
0.05 |
transaction lock background get time |
0 |
0.00 |
0.00 |
transaction lock background gets |
0 |
0.00 |
0.00 |
transaction lock foreground requests |
0 |
0.00 |
0.00 |
transaction lock foreground wait time |
0 |
0.00 |
0.00 |
transaction rollbacks |
294 |
0.06 |
0.05 |
tune down retentions in space pressure |
0 |
0.00 |
0.00 |
undo change vector size |
1,451,085,596 |
306,952.35 |
259,215.00 |
user I/O wait time |
11,992 |
2.54 |
2.14 |
user calls |
1,544,383 |
326.69 |
275.88 |
user commits |
812 |
0.17 |
0.15 |
user rollbacks |
4,786 |
1.01 |
0.85 |
workarea executions - onepass |
1 |
0.00 |
0.00 |
workarea executions - optimal |
1,616 |
0.34 |
0.29 |
write clones created in background |
0 |
0.00 |
0.00 |
write clones created in foreground |
11 |
0.00 |
0.00 |
Back to Instance Activity Statistics
Back to Top
Statistic |
Begin Value |
End Value |
session cursor cache count |
3,024 |
3,592 |
opened cursors current |
37 |
39 |
logons current |
24 |
26 |
Back to Instance Activity Statistics
Back to Top
Statistic |
Total |
per Hour |
log switches (derived) |
9 |
6.85 |
Back to Instance Activity Statistics
Back to Top
Normally, we expect reads and writes here to be evenly distributed across devices. Identify which files may be "hot". Once the DBA understands how this data is read and written, some performance gain may be obtained by distributing I/O more evenly across disks.
The main column to watch here is Av Rd(ms) (average read time in milliseconds). In general, most disk systems can be tuned to below 14 ms, and Oracle considers anything over 20 ms unnecessary. If the value exceeds 1000 ms, an I/O performance bottleneck is almost certain. If ###### appears in this column, your system may have a severe I/O problem, or it may simply be a display-format issue.
When the problems above appear, consider the following approaches:
1) Tune the statements that access this tablespace or file.
2) If the tablespace contains indexes, consider compressing them to shrink the space they occupy and thereby reduce I/O.
3) Spread the tablespace across multiple logical volumes to balance the I/O load.
4) The parameter DB_FILE_MULTIBLOCK_READ_COUNT can be raised to increase read parallelism, which improves full-table-scan efficiency. But this brings a problem: Oracle will then use full table scans more and abandon some index accesses. To address this, set the additional parameter OPTIMIZER_INDEX_COST_ADJ=30 (values of 10-50 are generally recommended).
About OPTIMIZER_INDEX_COST_ADJ=n: this parameter is a percentage, with a default of 100, and can be understood in terms of FULL SCAN COST versus INDEX SCAN COST: when n% * INDEX SCAN COST < FULL SCAN COST, Oracle chooses the index. In practice the value can be tuned per statement: if you want a statement to use an index but it actually does a full table scan, compare the costs of the two execution plans and set a more suitable value.
5) Check and tune the performance of the I/O devices.
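The decision rule above can be sketched with hypothetical plan costs (the 120/100 figures below are illustrative, not taken from this report):

```python
def prefers_index(index_scan_cost, full_scan_cost, optimizer_index_cost_adj=100):
    # Oracle scales the index access cost by optimizer_index_cost_adj/100;
    # the index path wins when the adjusted cost is below the full-scan cost.
    return (optimizer_index_cost_adj / 100) * index_scan_cost < full_scan_cost

# With the default of 100 the full scan wins; with the suggested 30 the index wins.
print(prefers_index(120, 100, 100))  # False
print(prefers_index(120, 100, 30))   # True
```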
Tablespace |
Reads |
Av Reads/s |
Av Rd(ms) |
Av Blks/Rd |
Writes |
Av Writes/s |
Buffer Waits |
Av Buf Wt(ms) |
ICCIDAT01 |
67,408 |
14 |
3.76 |
3.17 |
160,261 |
34 |
6 |
0.00 |
UNDOTBS1 |
10 |
0 |
12.00 |
1.00 |
57,771 |
12 |
625 |
0.02 |
TEMP |
15,022 |
3 |
8.74 |
7.24 |
3,831 |
1 |
0 |
0.00 |
USERS |
68 |
0 |
5.44 |
1.00 |
971 |
0 |
0 |
0.00 |
SYSAUX |
263 |
0 |
5.48 |
1.00 |
458 |
0 |
0 |
0.00 |
SYSTEM |
32 |
0 |
5.94 |
1.00 |
158 |
0 |
3 |
23.33 |
UNDOTBS2 |
6 |
0 |
16.67 |
1.00 |
6 |
0 |
0 |
0.00 |
Shows I/O statistics per tablespace. As an Oracle rule of thumb, Av Rd(ms) (average read time in milliseconds) should not exceed 30; otherwise I/O contention is assumed.
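A small sketch that applies the thresholds discussed above (20 ms and 1000 ms) to an Av Rd(ms) value:

```python
def classify_avg_read_ms(av_rd_ms):
    # Thresholds from the discussion above: <= 20 ms acceptable,
    # > 1000 ms an almost certain I/O bottleneck.
    if av_rd_ms > 1000:
        return "severe I/O bottleneck"
    if av_rd_ms > 20:
        return "worth investigating"
    return "acceptable"

print(classify_avg_read_ms(3.76))   # acceptable (ICCIDAT01 above)
print(classify_avg_read_ms(16.67))  # acceptable (UNDOTBS2 above)
```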
Tablespace |
Filename |
Reads |
Av Reads/s |
Av Rd(ms) |
Av Blks/Rd |
Writes |
Av Writes/s |
Buffer Waits |
Av Buf Wt(ms) |
ICCIDAT01 |
/dev/rora_icci01 |
5,919 |
1 |
4.30 |
3.73 |
15,161 |
3 |
1 |
0.00 |
ICCIDAT01 |
/dev/rora_icci02 |
7,692 |
2 |
4.12 |
3.18 |
16,555 |
4 |
0 |
0.00 |
ICCIDAT01 |
/dev/rora_icci03 |
6,563 |
1 |
2.59 |
3.80 |
15,746 |
3 |
0 |
0.00 |
ICCIDAT01 |
/dev/rora_icci04 |
8,076 |
2 |
2.93 |
3.11 |
16,164 |
3 |
0 |
0.00 |
ICCIDAT01 |
/dev/rora_icci05 |
6,555 |
1 |
2.61 |
3.31 |
21,958 |
5 |
0 |
0.00 |
ICCIDAT01 |
/dev/rora_icci06 |
6,943 |
1 |
4.03 |
3.41 |
20,574 |
4 |
0 |
0.00 |
ICCIDAT01 |
/dev/rora_icci07 |
7,929 |
2 |
4.12 |
2.87 |
18,263 |
4 |
0 |
0.00 |
ICCIDAT01 |
/dev/rora_icci08 |
7,719 |
2 |
3.83 |
2.99 |
17,361 |
4 |
0 |
0.00 |
ICCIDAT01 |
/dev/rora_icci09 |
6,794 |
1 |
4.79 |
3.29 |
18,425 |
4 |
0 |
0.00 |
ICCIDAT01 |
/dev/rora_icci10 |
211 |
0 |
5.31 |
1.00 |
6 |
0 |
0 |
0.00 |
ICCIDAT01 |
/dev/rora_icci11 |
1,168 |
0 |
4.45 |
1.00 |
6 |
0 |
0 |
0.00 |
ICCIDAT01 |
/dev/rora_icci12 |
478 |
0 |
4.23 |
1.00 |
6 |
0 |
0 |
0.00 |
ICCIDAT01 |
/dev/rora_icci13 |
355 |
0 |
5.13 |
1.00 |
6 |
0 |
0 |
0.00 |
ICCIDAT01 |
/dev/rora_icci14 |
411 |
0 |
4.91 |
1.00 |
6 |
0 |
1 |
0.00 |
ICCIDAT01 |
/dev/rora_icci15 |
172 |
0 |
5.29 |
1.00 |
6 |
0 |
1 |
0.00 |
ICCIDAT01 |
/dev/rora_icci16 |
119 |
0 |
7.23 |
1.00 |
6 |
0 |
1 |
0.00 |
ICCIDAT01 |
/dev/rora_icci17 |
227 |
0 |
6.26 |
1.00 |
6 |
0 |
1 |
0.00 |
ICCIDAT01 |
/dev/rora_icci18 |
77 |
0 |
8.44 |
1.00 |
6 |
0 |
1 |
0.00 |
SYSAUX |
/dev/rora_SYSAUX |
263 |
0 |
5.48 |
1.00 |
458 |
0 |
0 |
0.00 |
SYSTEM |
/dev/rora_SYSTEM |
32 |
0 |
5.94 |
1.00 |
158 |
0 |
3 |
23.33 |
TEMP |
/dev/rora_TEMP |
3,653 |
1 |
5.67 |
6.61 |
827 |
0 |
0 |
|
TEMP |
/dev/rora_TEMP2 |
2,569 |
1 |
4.42 |
6.70 |
556 |
0 |
0 |
|
TEMP |
/dev/rora_TEMP3 |
1,022 |
0 |
2.50 |
16.86 |
557 |
0 |
0 |
|
TEMP |
/dev/rora_TEMP5 |
7,778 |
2 |
12.43 |
6.46 |
1,891 |
0 |
0 |
|
UNDOTBS1 |
/dev/rora_UNDO0101 |
10 |
0 |
12.00 |
1.00 |
57,771 |
12 |
625 |
0.02 |
UNDOTBS2 |
/dev/rora_UNDO0201 |
6 |
0 |
16.67 |
1.00 |
6 |
0 |
0 |
0.00 |
USERS |
/dev/rora_USERS |
68 |
0 |
5.44 |
1.00 |
971 |
0 |
0 |
0.00 |
P |
Number of Buffers |
Pool Hit% |
Buffer Gets |
Physical Reads |
Physical Writes |
Free Buff Wait |
Writ Comp Wait |
Buffer Busy Waits |
D |
401,071 |
99 |
15,480,754 |
213,729 |
437,340 |
0 |
0 |
634 |
This section breaks the buffer pool down into the default, keep and recycle pool types. In this report only the default-size buffer pool is used. The three wait statistics here are already included in the wait events discussed earlier, so refer to those descriptions; the hit ratio has likewise been discussed above. This section therefore rarely needs much attention.
Back to Top
|
Targt MTTR (s) |
Estd MTTR (s) |
Recovery Estd IOs |
Actual Redo Blks |
Target Redo Blks |
Log File Size Redo Blks |
Log Ckpt Timeout Redo Blks |
Log Ckpt Interval Redo Blks |
B |
0 |
11 |
369 |
2316 |
5807 |
1883700 |
5807 |
|
E |
0 |
98 |
116200 |
1828613 |
1883700 |
1883700 |
5033355 |
|
Back to Advisory Statistics
Back to Top
This is Oracle's advisory for sizing the buffer pool. The advisory data shows that a larger buffer means fewer physical reads, but as the buffer grows, the improvement in physical reads diminishes. The current buffer is 3,344M, with a physical-read factor of 1.00. We can see that growing the buffer pool up to about 3G improves physical reads markedly; beyond that, the degree of improvement gets smaller and smaller.
P |
Size for Est (M) |
Size Factor |
Buffers for Estimate |
Est Phys Read Factor |
Estimated Physical Reads |
D |
320 |
0.10 |
38,380 |
1.34 |
10,351,726 |
D |
640 |
0.19 |
76,760 |
1.25 |
9,657,000 |
D |
960 |
0.29 |
115,140 |
1.08 |
8,365,242 |
D |
1,280 |
0.38 |
153,520 |
1.04 |
8,059,415 |
D |
1,600 |
0.48 |
191,900 |
1.02 |
7,878,202 |
D |
1,920 |
0.57 |
230,280 |
1.01 |
7,841,140 |
D |
2,240 |
0.67 |
268,660 |
1.01 |
7,829,141 |
D |
2,560 |
0.77 |
307,040 |
1.01 |
7,817,370 |
D |
2,880 |
0.86 |
345,420 |
1.01 |
7,804,884 |
D |
3,200 |
0.96 |
383,800 |
1.00 |
7,784,014 |
D |
3,344 |
1.00 |
401,071 |
1.00 |
7,748,403 |
D |
3,520 |
1.05 |
422,180 |
0.99 |
7,702,243 |
D |
3,840 |
1.15 |
460,560 |
0.99 |
7,680,429 |
D |
4,160 |
1.24 |
498,940 |
0.99 |
7,663,046 |
D |
4,480 |
1.34 |
537,320 |
0.99 |
7,653,232 |
D |
4,800 |
1.44 |
575,700 |
0.99 |
7,645,544 |
D |
5,120 |
1.53 |
614,080 |
0.98 |
7,630,008 |
D |
5,440 |
1.63 |
652,460 |
0.98 |
7,616,886 |
D |
5,760 |
1.72 |
690,840 |
0.98 |
7,614,591 |
D |
6,080 |
1.82 |
729,220 |
0.98 |
7,613,191 |
D |
6,400 |
1.91 |
767,600 |
0.98 |
7,599,930 |
Back to Advisory Statistics
Back to Top
PGA Cache Hit % |
W/A MB Processed |
Extra W/A MB Read/Written |
87.91 |
1,100 |
151 |
Back to Advisory Statistics
Back to Top
|
PGA Aggr Target(M) |
Auto PGA Target(M) |
PGA Mem Alloc(M) |
W/A PGA Used(M) |
%PGA W/A Mem |
%Auto W/A Mem |
%Man W/A Mem |
Global Mem Bound(K) |
B |
1,024 |
862 |
150.36 |
0.00 |
0.00 |
0.00 |
0.00 |
104,850 |
E |
1,024 |
860 |
154.14 |
0.00 |
0.00 |
0.00 |
0.00 |
104,850 |
Back to Advisory Statistics
Back to Top
Low Optimal |
High Optimal |
Total Execs |
Optimal Execs |
1-Pass Execs |
M-Pass Execs |
2K |
4K |
1,385 |
1,385 |
0 |
0 |
64K |
128K |
28 |
28 |
0 |
0 |
128K |
256K |
5 |
5 |
0 |
0 |
256K |
512K |
79 |
79 |
0 |
0 |
512K |
1024K |
108 |
108 |
0 |
0 |
1M |
2M |
7 |
7 |
0 |
0 |
8M |
16M |
1 |
1 |
0 |
0 |
128M |
256M |
3 |
2 |
1 |
0 |
256M |
512M |
1 |
1 |
0 |
0 |
Back to Advisory Statistics
Back to Top
PGA Target Est (MB) |
Size Factr |
W/A MB Processed |
Estd Extra W/A MB Read/ Written to Disk |
Estd PGA Cache Hit % |
Estd PGA Overalloc Count |
128 |
0.13 |
4,652.12 |
2,895.99 |
62.00 |
0 |
256 |
0.25 |
4,652.12 |
2,857.13 |
62.00 |
0 |
512 |
0.50 |
4,652.12 |
2,857.13 |
62.00 |
0 |
768 |
0.75 |
4,652.12 |
2,857.13 |
62.00 |
0 |
1,024 |
1.00 |
4,652.12 |
717.82 |
87.00 |
0 |
1,229 |
1.20 |
4,652.12 |
717.82 |
87.00 |
0 |
1,434 |
1.40 |
4,652.12 |
717.82 |
87.00 |
0 |
1,638 |
1.60 |
4,652.12 |
717.82 |
87.00 |
0 |
1,843 |
1.80 |
4,652.12 |
717.82 |
87.00 |
0 |
2,048 |
2.00 |
4,652.12 |
717.82 |
87.00 |
0 |
3,072 |
3.00 |
4,652.12 |
717.82 |
87.00 |
0 |
4,096 |
4.00 |
4,652.12 |
717.82 |
87.00 |
0 |
6,144 |
6.00 |
4,652.12 |
717.82 |
87.00 |
0 |
8,192 |
8.00 |
4,652.12 |
717.82 |
87.00 |
0 |
Back to Advisory Statistics
Back to Top
Shared Pool Size(M) |
SP Size Factr |
Est LC Size (M) |
Est LC Mem Obj |
Est LC Time Saved (s) |
Est LC Time Saved Factr |
Est LC Load Time (s) |
Est LC Load Time Factr |
Est LC Mem Obj Hits |
304 |
0.43 |
78 |
7,626 |
64,842 |
1.00 |
31 |
1.00 |
3,206,955 |
384 |
0.55 |
78 |
7,626 |
64,842 |
1.00 |
31 |
1.00 |
3,206,955 |
464 |
0.66 |
78 |
7,626 |
64,842 |
1.00 |
31 |
1.00 |
3,206,955 |
544 |
0.77 |
78 |
7,626 |
64,842 |
1.00 |
31 |
1.00 |
3,206,955 |
624 |
0.89 |
78 |
7,626 |
64,842 |
1.00 |
31 |
1.00 |
3,206,955 |
704 |
1.00 |
78 |
7,626 |
64,842 |
1.00 |
31 |
1.00 |
3,206,955 |
784 |
1.11 |
78 |
7,626 |
64,842 |
1.00 |
31 |
1.00 |
3,206,955 |
864 |
1.23 |
78 |
7,626 |
64,842 |
1.00 |
31 |
1.00 |
3,206,955 |
944 |
1.34 |
78 |
7,626 |
64,842 |
1.00 |
31 |
1.00 |
3,206,955 |
1,024 |
1.45 |
78 |
7,626 |
64,842 |
1.00 |
31 |
1.00 |
3,206,955 |
1,104 |
1.57 |
78 |
7,626 |
64,842 |
1.00 |
31 |
1.00 |
3,206,955 |
1,184 |
1.68 |
78 |
7,626 |
64,842 |
1.00 |
31 |
1.00 |
3,206,955 |
1,264 |
1.80 |
78 |
7,626 |
64,842 |
1.00 |
31 |
1.00 |
3,206,955 |
1,344 |
1.91 |
78 |
7,626 |
64,842 |
1.00 |
31 |
1.00 |
3,206,955 |
1,424 |
2.02 |
78 |
7,626 |
64,842 |
1.00 |
31 |
1.00 |
3,206,955 |
Back to Advisory Statistics
Back to Top
SGA Target Size (M) |
SGA Size Factor |
Est DB Time (s) |
Est Physical Reads |
1,024 |
0.25 |
9,060 |
9,742,760 |
2,048 |
0.50 |
7,612 |
7,948,245 |
3,072 |
0.75 |
7,563 |
7,886,258 |
4,096 |
1.00 |
7,451 |
7,748,338 |
5,120 |
1.25 |
7,423 |
7,713,470 |
6,144 |
1.50 |
7,397 |
7,680,927 |
7,168 |
1.75 |
7,385 |
7,666,980 |
8,192 |
2.00 |
7,385 |
7,666,980 |
Back to Advisory Statistics
Back to Top
No data exists for this section of the report.
Back to Advisory Statistics
Back to Top
No data exists for this section of the report.
Back to Advisory Statistics
Back to Top
Class |
Waits |
Total Wait Time (s) |
Avg Time (ms) |
data block |
3 |
0 |
23 |
undo header |
616 |
0 |
0 |
file header block |
8 |
0 |
0 |
undo block |
7 |
0 |
0 |
Back to Wait Statistics
Back to Top
Enqueue Type (Request Reason) |
Requests |
Succ Gets |
Failed Gets |
Waits |
Wt Time (s) |
Av Wt Time(ms) |
FB-Format Block |
14,075 |
14,075 |
0 |
7,033 |
3 |
0.43 |
US-Undo Segment |
964 |
964 |
0 |
556 |
0 |
0.32 |
WF-AWR Flush |
24 |
24 |
0 |
14 |
0 |
9.00 |
HW-Segment High Water Mark |
4,223 |
4,223 |
0 |
37 |
0 |
1.22 |
CF-Controlfile Transaction |
10,548 |
10,548 |
0 |
58 |
0 |
0.67 |
TX-Transaction (index contention) |
1 |
1 |
0 |
1 |
0 |
35.00 |
TM-DML |
121,768 |
121,761 |
6 |
70 |
0 |
0.43 |
PS-PX Process Reservation |
103 |
103 |
0 |
46 |
0 |
0.65 |
TT-Tablespace |
9,933 |
9,933 |
0 |
39 |
0 |
0.54 |
TD-KTF map table enqueue (KTF dump entries) |
12 |
12 |
0 |
12 |
0 |
1.42 |
TA-Instance Undo |
18 |
18 |
0 |
13 |
0 |
0.38 |
PI-Remote PX Process Spawn Status |
16 |
16 |
0 |
8 |
0 |
0.50 |
MW-MWIN Schedule |
3 |
3 |
0 |
3 |
0 |
0.67 |
DR-Distributed Recovery |
3 |
3 |
0 |
3 |
0 |
0.33 |
TS-Temporary Segment |
14 |
11 |
3 |
3 |
0 |
0.33 |
AF-Advisor Framework (task serialization) |
14 |
14 |
0 |
1 |
0 |
1.00 |
JS-Job Scheduler (job run lock - synchronize) |
2 |
2 |
0 |
1 |
0 |
1.00 |
UL-User-defined |
2 |
2 |
0 |
1 |
0 |
1.00 |
MD-Materialized View Log DDL |
6 |
6 |
0 |
2 |
0 |
0.00 |
Back to Wait Statistics
Back to Top
From 9i onward, undo (rollback) segments are generally automatically managed, so under normal circumstances this section does not need much attention.
The main thing to watch here is pct waits: if there are relatively many, the number of rollback segments or their space should be increased. Also observe how each rollback segment is used; ideally, Avg Active is fairly balanced across the rollback segments.
Before Oracle 9i, rollback segments were managed manually, and an optimal value could be set as the size to which a rollback segment shrinks; if not set, the default was initial+(minextents-1)*nextextents. The effect is to keep rollback segments from growing without limit: once past the optimal setting, Oracle shrinks them back to optimal at an appropriate time. From 9i onward, however, undo is usually set to auto mode, in which optimal cannot be specified and there appears to be no default, so no shrinking takes place and the rollback segments grow without limit until tablespace utilization reaches 100%; if the tablespace is set to autoextend, the situation is even worse, and undo will grow without bound. Here, too, we can see that the shrinks value is 0, i.e. no shrink has ever occurred.
Undo TS# |
Num Undo Blocks (K) |
Number of Transactions |
Max Qry Len (s) |
Max Tx Concurcy |
Min/Max TR (mins) |
STO/ OOS |
uS/uR/uU/ eS/eR/eU |
1 |
219.12 |
113,405 |
0 |
6 |
130.95/239.25 |
0/0 |
0/0/0/13/24256/0 |
Back to Undo Statistics
Back to Top
End Time |
Num Undo Blocks |
Number of Transactions |
Max Qry Len (s) |
Max Tx Concy |
Tun Ret (mins) |
STO/ OOS |
uS/uR/uU/ eS/eR/eU |
25-Dec 15:18 |
182,021 |
74,309 |
0 |
5 |
131 |
0/0 |
0/0/0/13/24256/0 |
25-Dec 15:08 |
57 |
170 |
0 |
3 |
239 |
0/0 |
0/0/0/0/0/0 |
25-Dec 14:58 |
68 |
31 |
0 |
2 |
229 |
0/0 |
0/0/0/0/0/0 |
25-Dec 14:48 |
194 |
4,256 |
0 |
4 |
219 |
0/0 |
0/0/0/0/0/0 |
25-Dec 14:38 |
570 |
12,299 |
0 |
5 |
209 |
0/0 |
0/0/0/0/0/0 |
25-Dec 14:28 |
36,047 |
21,328 |
0 |
6 |
200 |
0/0 |
0/0/0/0/0/0 |
25-Dec 14:18 |
70 |
907 |
0 |
3 |
162 |
0/0 |
0/0/0/0/0/0 |
25-Dec 14:08 |
91 |
105 |
0 |
3 |
154 |
0/0 |
0/0/0/0/0/0 |
Back to Undo Statistics
Back to Top
Latch是一種低級排隊機制,用於防止對內存結構的並行訪問,保護系統全局區(SGA)共享內存結構。Latch是一種快速地被獲取和釋放的內存鎖。若是latch不可用,就會記錄latch free miss 。
有兩種類型的Latch:willing to wait和(immediate)notwilling to wait。
對於願意等待類型(willing-to-wait)的latch,若是一個進程在第一次嘗試中沒有得到latch,那麼它會等待而且再嘗試一次,若是通過_spin_count次爭奪不能得到latch, 而後該進程轉入睡眠狀態,百分之一秒以後醒來,按順序重複之前的步驟。在8i/9i中默認值是_spin_count=2000。睡眠的時間會愈來愈長。
對於不肯意等待類型(not-willing-to-wait)的latch,若是該閂不能當即獲得的話,那麼該進程就不會爲得到該閂而等待。它將繼續執行另外一個操做。
大多數Latch問題均可以歸結爲如下幾種:
沒有很好的是用綁定變量(library cache latch和shared pool cache)、重做生成問題(redo allocation latch)、緩衝存儲競爭問題(cache buffers LRU chain),以及buffer cache中的存在"熱點"塊(cache buffers chain)。
另外也有一些latch等待與bug有關,應當關注Metalink相關bug的公佈及補丁的發佈。
當latchmiss ratios大於0.5%時,就須要檢查latch的等待問題。
若是SQL語句不能調整,在8.1.6版本以上,能夠經過設置CURSOR_SHARING = force 在服務器端強制綁定變量。設置該參數可能會帶來必定的反作用,可能會致使執行計劃不優,另外對於Java的程序,有相關的bug,具體應用應該關注Metalink的bug公告。
下面對幾個重要類型的latch等待加以說明:
1) latch free:當‘latch free’在報告的高等待事件中出現時,就表示可能出現了性能問題,就須要在這一部分詳細分析出現等待的具體的latch的類型,而後再調整。
2) cache buffers chain:cbclatch代表熱塊。爲何這會表示存在熱塊?爲了理解這個問題,先要理解cbc的做用。ORACLE對buffer cache管理是以hash鏈表的方式來實現的(oracle稱爲buckets,buckets的數量由_db_block_hash_buckets定義)。cbc latch就是爲了保護buffer cache而設置的。當有併發的訪問需求時,cbc會將這些訪問串行化,當咱們得到cbc latch的控制權時,就能夠開始訪問數據,若是咱們所請求的數據正好的某個buckets中,那就直接從內存中讀取數據,完成以後釋放cbc latch,cbc latch就能夠被其餘的用戶獲取了。cbc latch獲取和釋放是很是快速的,因此這種狀況下就通常不會存在等待。可是若是咱們請求的數據在內存中不存在,就須要到物理磁盤上讀取數據,這相對於latch來講就是一個至關長的時間了,當找到對應的數據塊時,若是有其餘用戶正在訪問這個數據塊,而且數據塊上也沒有空閒的ITL槽來接收本次請求,就必須等待。在這過程當中,咱們由於沒有獲得請求的數據,就一直佔有cbc latch,其餘的用戶也就沒法獲取cbc latch,因此就出現了cbc latch等待的狀況。因此這種等待歸根結底就是因爲數據塊比較hot的形成的。
解決方法能夠參考前面在等待事件中的3) bufferbusy wait中關於熱塊的解決方法。
3) cache buffers lru chain:該latch用於掃描buffer的LRU鏈表。三種狀況可致使爭用:1)buffer cache過小;2)buffercache的過分使用,或者太多的基於cache的排序操做;3)DBWR不及時。解決方法:查找邏輯讀太高的statement,增大buffer cache。
4) Library cache and shared pool 爭用:
library cache是一個hash table,咱們須要經過一個hash buckets數組來訪問(相似buffer cache)。library cache latch就是將對library cache的訪問串行化。當有一個sql(或者PL/SQL procedure,package,function,trigger)須要執行的時候,首先須要獲取一個latch,而後library cache latch就會去查詢library cache以重用這些語句。在8i中,library cache latch只有一個。在9i中,有7個child latch,這個數量能夠經過參數_KGL_LATCH_ COUNT修改(最大能夠達到66個)。當共享池過小或者語句的reuse低的時候,會出現‘shared pool’、‘library cache pin’或者 ‘library cache’ latch的爭用。解決的方法是:增大共享池或者設置CURSOR_SHARING=FORCE|SIMILAR ,固然咱們也須要tuning SQL statement。爲減小爭用,咱們也能夠把一些比較大的SQL或者過程利用DBMS_SHARED_POOL.KEEP包來pinning在sharedpool中。
shared pool內存結構與buffer cache相似,也採用的是hash方式來管理的。共享池有一個固定數量的hash buckets,經過固定數量的library cache latch來串行化保護這段內存的使用。在數據啓動的時候,會分配509個hashbuctets,2*CPU_COUNT個library cache latch。當在數據庫的使用中,共享池中的對象愈來愈多,oracle就會以如下的遞增方式增長hash buckets的數量:509,1021,4093,8191,32749,65521,131071,4292967293。咱們能夠經過設置下面的參數來實現_KGL_BUCKET_COUNT,參數的默認值是0,表明數量509,最大咱們能夠設置爲8,表明數量131071。
咱們能夠經過x$ksmsp來查看具體的共享池內存段狀況,主要關注下面幾個字段:
KSMCHCOM—表示內存段的類型
ksmchptr—表示內存段的物理地址
ksmchsiz—表示內存段的大小
ksmchcls—表示內存段的分類。recr表示a recreatable piece currently in use that can be a candidate forflushing when the shared pool is low in available memory; freeabl表示當前正在使用的,可以被釋放的段; free表示空閒的未分配的段; perm表示不能被釋放永久分配段。
To reduce latch contention on the shared pool, consider mainly the following:
1. Use bind variables
2. Use cursor sharing
3. Set the session_cached_cursors parameter. It moves cursors from the shared pool into the PGA, reducing contention on the shared pool. A common starting value is 100, adjusted later as needed.
4. Size the shared pool appropriately
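Item 3 above can be put in place and then checked for effect; a sketch (the value 100 is the illustrative starting point from the text, and the spfile change needs an instance restart to take effect):

```sql
ALTER SYSTEM SET session_cached_cursors = 100 SCOPE = SPFILE;

-- after restart, compare cursor cache hits with total parses
SELECT name, value
  FROM v$sysstat
 WHERE name IN ('session cursor cache hits', 'parse count (total)');
```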
5) Redo Copy: this latch is used to copy redo records from the PGA into the redo log buffer. The initial number of these latches is 2*CPU_COUNT, and it can be increased by setting the parameter _LOG_SIMULTANEOUS_COPIES to reduce contention.
6) Redo allocation: this latch serializes allocation of space in the redo log buffer. Three ways to reduce this contention:
Enlarge the redo log buffer
Use the nologging option where appropriate
Avoid unnecessary commits
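Whether the redo log buffer is undersized shows up in the 'redo buffer allocation retries' statistic, which should stay near zero relative to 'redo entries'; a sketch against v$sysstat:

```sql
-- retries close to zero (vs. redo entries) means the buffer is large enough
SELECT name, value
  FROM v$sysstat
 WHERE name IN ('redo entries', 'redo buffer allocation retries');
```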
7) Row cache objects: contention on this latch usually indicates contention on the dictionary cache, which in turn often points to excessive parsing that relies on public synonyms. Remedies: 1) enlarge the shared pool; 2) use locally managed tablespaces, especially for index tablespaces.
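Dictionary cache contention behind row cache objects can be narrowed down per cache with v$rowcache; a sketch:

```sql
-- dictionary caches with the worst miss rates
SELECT parameter, gets, getmisses,
       ROUND(getmisses * 100 / GREATEST(gets, 1), 2) pct_miss
  FROM v$rowcache
 WHERE gets > 0
 ORDER BY getmisses DESC;
```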
Latch event |
Suggested remedy |
Library cache |
Use bind variables; tune shared_pool_size. |
Shared pool |
Use bind variables; tune shared_pool_size. |
Redo allocation |
Reduce redo generation; avoid unnecessary commits. |
Redo copy |
Increase _log_simultaneous_copies. |
Row cache objects |
Increase shared_pool_size |
Cache buffers chain |
Increase _DB_BLOCK_HASH_BUCKETS; make it prime. |
Cache buffers LRU chain |
Use multiple buffer pools; tune the queries that cause heavy logical reads |
Note: quite a few hidden parameters are mentioned here, including ways of using them to relieve latch contention, but in practice it is strongly recommended not to change the default values of hidden parameters.
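For reference, hidden parameter values can at least be inspected, without changing them, through the x$ksppi / x$ksppcv internals; this sketch assumes SYSDBA access:

```sql
-- current values of hidden (underscore) parameters
SELECT i.ksppinm parameter, v.ksppstvl value
  FROM x$ksppi i, x$ksppcv v
 WHERE i.indx = v.indx
   AND i.ksppinm LIKE '\_%' ESCAPE '\';
```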
Latch Name |
Get Requests |
Pct Get Miss |
Avg Slps /Miss |
Wait Time (s) |
NoWait Requests |
Pct NoWait Miss |
ASM db client latch |
11,883 |
0.00 |
|
0 |
0 |
|
AWR Alerted Metric Element list |
18,252 |
0.00 |
|
0 |
0 |
|
Consistent RBA |
5,508 |
0.02 |
0.00 |
0 |
0 |
|
FOB s.o list latch |
731 |
0.00 |
|
0 |
0 |
|
JS broadcast add buf latch |
6,193 |
0.00 |
|
0 |
0 |
|
JS broadcast drop buf latch |
6,194 |
0.00 |
|
0 |
0 |
|
JS broadcast load blnc latch |
6,057 |
0.00 |
|
0 |
0 |
|
JS mem alloc latch |
8 |
0.00 |
|
0 |
0 |
|
JS queue access latch |
8 |
0.00 |
|
0 |
0 |
|
JS queue state obj latch |
218,086 |
0.00 |
|
0 |
0 |
|
JS slv state obj latch |
31 |
0.00 |
|
0 |
0 |
|
KCL gc element parent latch |
2,803,392 |
0.04 |
0.01 |
0 |
108 |
0.00 |
KJC message pool free list |
43,168 |
0.06 |
0.00 |
0 |
14,532 |
0.01 |
KJCT flow control latch |
563,875 |
0.00 |
0.00 |
0 |
0 |
|
KMG MMAN ready and startup request latch |
1,576 |
0.00 |
|
0 |
0 |
|
KSXR large replies |
320 |
0.00 |
|
0 |
0 |
|
KTF sga latch |
23 |
0.00 |
|
0 |
1,534 |
0.00 |
KWQMN job cache list latch |
352 |
0.00 |
|
0 |
0 |
|
KWQP Prop Status |
5 |
0.00 |
|
0 |
0 |
|
MQL Tracking Latch |
0 |
|
|
0 |
94 |
0.00 |
Memory Management Latch |
0 |
|
|
0 |
1,576 |
0.00 |
OS process |
207 |
0.00 |
|
0 |
0 |
|
OS process allocation |
1,717 |
0.00 |
|
0 |
0 |
|
OS process: request allocation |
73 |
0.00 |
|
0 |
0 |
|
PL/SQL warning settings |
226 |
0.00 |
|
0 |
0 |
|
SGA IO buffer pool latch |
20,679 |
0.06 |
0.00 |
0 |
20,869 |
0.00 |
SQL memory manager latch |
7 |
0.00 |
|
0 |
1,575 |
0.00 |
SQL memory manager workarea list latch |
439,442 |
0.00 |
|
0 |
0 |
|
Shared B-Tree |
182 |
0.00 |
|
0 |
0 |
|
Undo Hint Latch |
0 |
|
|
0 |
12 |
0.00 |
active checkpoint queue latch |
7,835 |
0.00 |
|
0 |
0 |
|
active service list |
50,936 |
0.00 |
|
0 |
1,621 |
0.00 |
archive control |
5 |
0.00 |
|
0 |
0 |
|
begin backup scn array |
72,901 |
0.00 |
0.00 |
0 |
0 |
|
business card |
32 |
0.00 |
|
0 |
0 |
|
cache buffer handles |
331,153 |
0.02 |
0.00 |
0 |
0 |
|
cache buffers chains |
48,189,073 |
0.00 |
0.00 |
0 |
1,201,379 |
0.00 |
cache buffers lru chain |
891,796 |
0.34 |
0.00 |
0 |
991,605 |
0.23 |
cache table scan latch |
0 |
|
|
0 |
10,309 |
0.01 |
channel handle pool latch |
99 |
0.00 |
|
0 |
0 |
|
channel operations parent latch |
490,324 |
0.01 |
0.00 |
0 |
0 |
|
checkpoint queue latch |
671,856 |
0.01 |
0.00 |
0 |
555,469 |
0.02 |
client/application info |
335 |
0.00 |
|
0 |
0 |
|
commit callback allocation |
12 |
0.00 |
|
0 |
0 |
|
compile environment latch |
173,428 |
0.00 |
|
0 |
0 |
|
dml lock allocation |
243,087 |
0.00 |
0.00 |
0 |
0 |
|
dummy allocation |
134 |
0.00 |
|
0 |
0 |
|
enqueue hash chains |
1,539,499 |
0.01 |
0.03 |
0 |
263 |
0.00 |
enqueues |
855,207 |
0.02 |
0.00 |
0 |
0 |
|
error message lists |
64 |
0.00 |
|
0 |
0 |
|
event group latch |
38 |
0.00 |
|
0 |
0 |
|
file cache latch |
4,694 |
0.00 |
|
0 |
0 |
|
gcs drop object freelist |
8,451 |
0.19 |
0.00 |
0 |
0 |
|
gcs opaque info freelist |
38,584 |
0.00 |
0.00 |
0 |
0 |
|
gcs partitioned table hash |
9,801,867 |
0.00 |
|
0 |
0 |
|
gcs remaster request queue |
31 |
0.00 |
|
0 |
0 |
|
gcs remastering latch |
1,014,198 |
0.00 |
0.33 |
0 |
0 |
|
gcs resource freelist |
1,154,551 |
0.03 |
0.00 |
0 |
771,650 |
0.00 |
gcs resource hash |
3,815,373 |
0.02 |
0.00 |
0 |
2 |
0.00 |
gcs resource scan list |
4 |
0.00 |
|
0 |
0 |
|
gcs shadows freelist |
795,482 |
0.00 |
0.00 |
0 |
779,648 |
0.00 |
ges caches resource lists |
209,655 |
0.02 |
0.00 |
0 |
121,613 |
0.01 |
ges deadlock list |
840 |
0.00 |
|
0 |
0 |
|
ges domain table |
366,702 |
0.00 |
|
0 |
0 |
|
ges enqueue table freelist |
487,875 |
0.00 |
|
0 |
0 |
|
ges group table |
543,887 |
0.00 |
|
0 |
0 |
|
ges process hash list |
59,503 |
0.00 |
|
0 |
0 |
|
ges process parent latch |
908,232 |
0.00 |
|
0 |
1 |
0.00 |
ges process table freelist |
73 |
0.00 |
|
0 |
0 |
|
ges resource hash list |
862,590 |
0.02 |
0.28 |
0 |
72,266 |
0.01 |
ges resource scan list |
534 |
0.00 |
|
0 |
0 |
|
ges resource table freelist |
135,406 |
0.00 |
0.00 |
0 |
0 |
|
ges synchronous data |
160 |
0.63 |
0.00 |
0 |
2,954 |
0.07 |
ges timeout list |
3,256 |
0.00 |
|
0 |
4,478 |
0.00 |
global KZLD latch for mem in SGA |
21 |
0.00 |
|
0 |
0 |
|
hash table column usage latch |
59 |
0.00 |
|
0 |
1,279 |
0.00 |
hash table modification latch |
116 |
0.00 |
|
0 |
0 |
|
job workq parent latch |
0 |
|
|
0 |
14 |
0.00 |
job_queue_processes parameter latch |
86 |
0.00 |
|
0 |
0 |
|
kks stats |
384 |
0.00 |
|
0 |
0 |
|
ksuosstats global area |
329 |
0.00 |
|
0 |
0 |
|
ktm global data |
296 |
0.00 |
|
0 |
0 |
|
kwqbsn:qsga |
182 |
0.00 |
|
0 |
0 |
|
lgwr LWN SCN |
6,547 |
0.18 |
0.00 |
0 |
0 |
|
library cache |
235,060 |
0.00 |
0.00 |
0 |
22 |
0.00 |
library cache load lock |
486 |
0.00 |
|
0 |
0 |
|
library cache lock |
49,284 |
0.00 |
|
0 |
0 |
|
library cache lock allocation |
566 |
0.00 |
|
0 |
0 |
|
library cache pin |
27,863 |
0.00 |
0.00 |
0 |
0 |
|
library cache pin allocation |
204 |
0.00 |
|
0 |
0 |
|
list of block allocation |
10,101 |
0.00 |
|
0 |
0 |
|
loader state object freelist |
108 |
0.00 |
|
0 |
0 |
|
longop free list parent |
6 |
0.00 |
|
0 |
6 |
0.00 |
message pool operations parent latch |
1,424 |
0.00 |
|
0 |
0 |
|
messages |
222,581 |
0.00 |
0.00 |
0 |
0 |
|
mostly latch-free SCN |
6,649 |
1.43 |
0.00 |
0 |
0 |
|
multiblock read objects |
29,230 |
0.03 |
0.00 |
0 |
0 |
|
name-service memory objects |
18,842 |
0.00 |
|
0 |
0 |
|
name-service namespace bucket |
56,712 |
0.00 |
|
0 |
0 |
|
name-service namespace objects |
15 |
0.00 |
|
0 |
0 |
|
name-service pending queue |
6,436 |
0.00 |
|
0 |
0 |
|
name-service request |
44 |
0.00 |
|
0 |
0 |
|
name-service request queue |
57,312 |
0.00 |
|
0 |
0 |
|
ncodef allocation latch |
77 |
0.00 |
|
0 |
0 |
|
object queue header heap |
37,721 |
0.00 |
|
0 |
7,457 |
0.00 |
object queue header operation |
2,706,992 |
0.06 |
0.00 |
0 |
0 |
|
object stats modification |
22 |
0.00 |
|
0 |
0 |
|
parallel query alloc buffer |
939 |
0.00 |
|
0 |
0 |
|
parallel query stats |
72 |
0.00 |
|
0 |
0 |
|
parallel txn reco latch |
630 |
0.00 |
|
0 |
0 |
|
parameter list |
193 |
0.00 |
|
0 |
0 |
|
parameter table allocation management |
68 |
0.00 |
|
0 |
0 |
|
post/wait queue |
4,205 |
0.00 |
|
0 |
2,712 |
0.00 |
process allocation |
46,895 |
0.00 |
|
0 |
38 |
0.00 |
process group creation |
73 |
0.00 |
|
0 |
0 |
|
process queue |
175 |
0.00 |
|
0 |
0 |
|
process queue reference |
2,621 |
0.00 |
|
0 |
240 |
62.50 |
qmn task queue latch |
668 |
0.15 |
1.00 |
0 |
0 |
|
query server freelists |
159 |
0.00 |
|
0 |
0 |
|
query server process |
8 |
0.00 |
|
0 |
7 |
0.00 |
queued dump request |
23,628 |
0.00 |
|
0 |
0 |
|
redo allocation |
21,206 |
0.57 |
0.00 |
0 |
4,706,826 |
0.02 |
redo copy |
0 |
|
|
0 |
4,707,106 |
0.01 |
redo writing |
29,944 |
0.01 |
0.00 |
0 |
0 |
|
resmgr group change latch |
69 |
0.00 |
|
0 |
0 |
|
resmgr:actses active list |
137 |
0.00 |
|
0 |
0 |
|
resmgr:actses change group |
52 |
0.00 |
|
0 |
0 |
|
resmgr:free threads list |
130 |
0.00 |
|
0 |
0 |
|
resmgr:schema config |
7 |
0.00 |
|
0 |
0 |
|
row cache objects |
1,644,149 |
0.00 |
0.00 |
0 |
321 |
0.00 |
rules engine rule set statistics |
500 |
0.00 |
|
0 |
0 |
|
sequence cache |
360 |
0.00 |
|
0 |
0 |
|
session allocation |
535,514 |
0.00 |
0.00 |
0 |
0 |
|
session idle bit |
3,262,141 |
0.00 |
0.00 |
0 |
0 |
|
session state list latch |
166 |
0.00 |
|
0 |
0 |
|
session switching |
77 |
0.00 |
|
0 |
0 |
|
session timer |
1,620 |
0.00 |
|
0 |
0 |
|
shared pool |
60,359 |
0.00 |
0.00 |
0 |
0 |
|
shared pool sim alloc |
13 |
0.00 |
|
0 |
0 |
|
shared pool simulator |
4,246 |
0.00 |
|
0 |
0 |
|
simulator hash latch |
1,862,803 |
0.00 |
|
0 |
0 |
|
simulator lru latch |
1,719,480 |
0.01 |
0.00 |
0 |
46,053 |
0.00 |
slave class |
2 |
0.00 |
|
0 |
0 |
|
slave class create |
8 |
12.50 |
1.00 |
0 |
0 |
|
sort extent pool |
1,284 |
0.00 |
|
0 |
0 |
|
state object free list |
4 |
0.00 |
|
0 |
0 |
|
statistics aggregation |
280 |
0.00 |
|
0 |
0 |
|
temp lob duration state obj allocation |
2 |
0.00 |
|
0 |
0 |
|
threshold alerts latch |
202 |
0.00 |
|
0 |
0 |
|
transaction allocation |
211 |
0.00 |
|
0 |
0 |
|
transaction branch allocation |
77 |
0.00 |
|
0 |
0 |
|
undo global data |
779,759 |
0.07 |
0.00 |
0 |
0 |
|
user lock |
102 |
0.00 |
|
0 |
0 |
|
Back to Latch Statistics
Back to Top
Latch Name |
Get Requests |
Misses |
Sleeps |
Spin Gets |
Sleep1 |
Sleep2 |
Sleep3 |
cache buffers lru chain |
891,796 |
3,061 |
1 |
3,060 |
0 |
0 |
0 |
object queue header operation |
2,706,992 |
1,755 |
3 |
1,752 |
0 |
0 |
0 |
KCL gc element parent latch |
2,803,392 |
1,186 |
11 |
1,176 |
0 |
0 |
0 |
cache buffers chains |
48,189,073 |
496 |
1 |
495 |
0 |
0 |
0 |
ges resource hash list |
862,590 |
160 |
44 |
116 |
0 |
0 |
0 |
enqueue hash chains |
1,539,499 |
79 |
2 |
78 |
0 |
0 |
0 |
gcs remastering latch |
1,014,198 |
3 |
1 |
2 |
0 |
0 |
0 |
qmn task queue latch |
668 |
1 |
1 |
0 |
0 |
0 |
0 |
slave class create |
8 |
1 |
1 |
0 |
0 |
0 |
0 |
Latch Name |
Where |
NoWait Misses |
Sleeps |
Waiter Sleeps |
KCL gc element parent latch |
kclrwrite |
0 |
8 |
0 |
KCL gc element parent latch |
kclnfndnewm |
0 |
4 |
6 |
KCL gc element parent latch |
KCLUNLNK |
0 |
1 |
1 |
KCL gc element parent latch |
kclbla |
0 |
1 |
0 |
KCL gc element parent latch |
kclulb |
0 |
1 |
1 |
KCL gc element parent latch |
kclzcl |
0 |
1 |
0 |
cache buffers chains |
kcbnew: new latch again |
0 |
2 |
0 |
cache buffers chains |
kclwrt |
0 |
1 |
0 |
cache buffers lru chain |
kcbzgws |
0 |
1 |
0 |
enqueue hash chains |
ksqcmi: if lk mode not requested |
0 |
2 |
0 |
event range base latch |
No latch |
0 |
1 |
1 |
gcs remastering latch |
69 |
0 |
1 |
0 |
ges resource hash list |
kjlmfnd: search for lockp by rename and inst id |
0 |
23 |
0 |
ges resource hash list |
kjakcai: search for resp by resname |
0 |
13 |
0 |
ges resource hash list |
kjrmas1: lookup master node |
0 |
5 |
0 |
ges resource hash list |
kjlrlr: remove lock from resource queue |
0 |
2 |
33 |
ges resource hash list |
kjcvscn: remove from scan queue |
0 |
1 |
0 |
object queue header operation |
kcbo_switch_q_bg |
0 |
3 |
0 |
object queue header operation |
kcbo_switch_mq_bg |
0 |
2 |
4 |
object queue header operation |
kcbw_unlink_q |
0 |
2 |
0 |
object queue header operation |
kcbw_link_q |
0 |
1 |
0 |
slave class create |
ksvcreate |
0 |
1 |
0 |
No data exists for this section of the report.
No data exists for this section of the report.
The data in this section comes from DBA_HIST_SEG_STAT (see desc DBA_HIST_SEG_STAT); related views are v$sesstat and v$statname.
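The session-level counterpart of these numbers can be read by joining the two views just mentioned; for example, for the current session:

```sql
-- non-zero statistics for the current session
SELECT sn.name, ss.value
  FROM v$sesstat ss, v$statname sn
 WHERE sn.statistic# = ss.statistic#
   AND ss.sid = SYS_CONTEXT('USERENV', 'SID')
   AND ss.value > 0
 ORDER BY ss.value DESC;
```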
Owner |
Tablespace Name |
Object Name |
Subobject Name |
Obj. Type |
Logical Reads |
%Total |
ICCI01 |
ICCIDAT01 |
ICCICCS_PK |
|
INDEX |
1,544,848 |
9.28 |
ICCI01 |
ICCIDAT01 |
CUSCAD_TMP |
|
TABLE |
1,349,536 |
8.11 |
ICCI01 |
ICCIDAT01 |
ICCIFNSACT_PK |
|
INDEX |
1,268,400 |
7.62 |
ICCI01 |
ICCIDAT01 |
IND_OLDNEWACT |
|
INDEX |
1,071,072 |
6.43 |
ICCI01 |
ICCIDAT01 |
CUID_PK |
|
INDEX |
935,584 |
5.62 |
Owner |
Tablespace Name |
Object Name |
Subobject Name |
Obj. Type |
Physical Reads |
%Total |
ICCI01 |
ICCIDAT01 |
CUID_TMP |
|
TABLE |
116,417 |
36.08 |
ICCI01 |
ICCIDAT01 |
CUMI_TMP |
|
TABLE |
44,086 |
13.66 |
ICCI01 |
ICCIDAT01 |
CUSM_TMP |
|
TABLE |
26,078 |
8.08 |
ICCI01 |
ICCIDAT01 |
CUSVAA_TMP_PK |
|
INDEX |
19,554 |
6.06 |
ICCI01 |
ICCIDAT01 |
CUID |
|
TABLE |
259 |
0.08 |
This wait occurs when a process tries to acquire an exclusive lock on rows that are already locked by another process. It is frequently caused by heavy INSERT activity against a table with a primary key index.
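ITL-related contention can also be checked per segment directly in v$segment_statistics; a sketch:

```sql
-- segments that have actually recorded ITL waits
SELECT owner, object_name, object_type, value itl_waits
  FROM v$segment_statistics
 WHERE statistic_name = 'ITL waits'
   AND value > 0
 ORDER BY value DESC;
```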
No data exists for this section of the report.
No data exists for this section of the report.
No data exists for this section of the report.
Owner |
Tablespace Name |
Object Name |
Subobject Name |
Obj. Type |
GC Buffer Busy |
% of Capture |
SYS |
SYSTEM |
TSQ$ |
|
TABLE |
2 |
100.00 |
Owner |
Tablespace Name |
Object Name |
Subobject Name |
Obj. Type |
CR Blocks Received |
%Total |
SYS |
SYSTEM |
USER$ |
|
TABLE |
1,001 |
24.17 |
SYS |
SYSTEM |
TSQ$ |
|
TABLE |
722 |
17.43 |
SYS |
SYSTEM |
SEG$ |
|
TABLE |
446 |
10.77 |
SYS |
SYSTEM |
OBJ$ |
|
TABLE |
264 |
6.37 |
SYS |
SYSTEM |
I_OBJ2 |
|
INDEX |
174 |
4.20 |
Owner |
Tablespace Name |
Object Name |
Subobject Name |
Obj. Type |
Current Blocks Received |
%Total |
ICCI01 |
ICCIDAT01 |
CUSM_TMP |
|
TABLE |
5,764 |
37.18 |
ICCI01 |
ICCIDAT01 |
CUMI_TMP |
|
TABLE |
2,794 |
18.02 |
ICCI01 |
ICCIDAT01 |
CUID_TMP |
|
TABLE |
2,585 |
16.68 |
SYS |
SYSTEM |
SEG$ |
|
TABLE |
361 |
2.33 |
SYS |
SYSTEM |
TSQ$ |
|
TABLE |
361 |
2.33 |
Library cache details:
Get Requests: a get is a type of lock, the parse lock, which is placed on an object during the parse phase of a SQL statement that references it. Get Requests increases by 1 each time a statement is parsed.
Pin Requests: a pin is likewise a type of lock, taken at execution time. Pin Requests increases by 1 each time a statement is executed.
Reloads: the number of times a previously executed statement had to be re-parsed because the library cache had aged out or invalidated its parsed version.
Invalidations: an invalidation occurs when a cached SQL statement, although still present in the library cache, is marked invalid and therefore forced to be re-parsed. Cached statements are marked invalid whenever an object they reference is modified in some way.
Pct Miss should be no higher than 1%.
Reloads / Pin Requests should be < 1%; otherwise consider increasing SHARED_POOL_SIZE.
This section is compiled from the v$librarycache view:
SELECT namespace,
       gethitratio,
       pinhitratio,
       reloads,
       invalidations
  FROM v$librarycache
 WHERE namespace IN ('SQL AREA', 'TABLE/PROCEDURE', 'BODY', 'TRIGGER', 'INDEX');
Dictionary Cache Stats
Cache |
Get Requests |
Pct Miss |
Scan Reqs |
Pct Miss |
Mod Reqs |
Final Usage |
dc_awr_control |
86 |
0.00 |
0 |
|
4 |
1 |
dc_constraints |
59 |
91.53 |
0 |
|
20 |
1,350 |
dc_files |
23 |
0.00 |
0 |
|
0 |
23 |
dc_global_oids |
406 |
0.00 |
0 |
|
0 |
35 |
dc_histogram_data |
673 |
0.15 |
0 |
|
0 |
1,555 |
dc_histogram_defs |
472 |
24.36 |
0 |
|
0 |
4,296 |
dc_object_grants |
58 |
0.00 |
0 |
|
0 |
154 |
dc_object_ids |
1,974 |
6.13 |
0 |
|
0 |
1,199 |
dc_objects |
955 |
19.58 |
0 |
|
56 |
2,064 |
dc_profiles |
30 |
0.00 |
0 |
|
0 |
1 |
dc_rollback_segments |
3,358 |
0.00 |
0 |
|
0 |
37 |
dc_segments |
2,770 |
2.56 |
0 |
|
1,579 |
1,312 |
dc_sequences |
9 |
33.33 |
0 |
|
9 |
5 |
dc_table_scns |
6 |
100.00 |
0 |
|
0 |
0 |
dc_tablespace_quotas |
1,558 |
28.50 |
0 |
|
1,554 |
3 |
dc_tablespaces |
346,651 |
0.00 |
0 |
|
0 |
7 |
dc_usernames |
434 |
0.00 |
0 |
|
0 |
14 |
dc_users |
175,585 |
0.00 |
0 |
|
0 |
43 |
outstanding_alerts |
57 |
71.93 |
0 |
|
0 |
1 |
Cache |
GES Requests |
GES Conflicts |
GES Releases |
dc_awr_control |
8 |
0 |
0 |
dc_constraints |
88 |
22 |
0 |
dc_histogram_defs |
115 |
0 |
0 |
dc_object_ids |
143 |
101 |
0 |
dc_objects |
253 |
111 |
0 |
dc_segments |
3,228 |
49 |
0 |
dc_sequences |
17 |
3 |
0 |
dc_table_scns |
6 |
0 |
0 |
dc_tablespace_quotas |
3,093 |
441 |
0 |
dc_users |
8 |
1 |
0 |
outstanding_alerts |
113 |
41 |
0 |
Namespace |
Get Requests |
Pct Miss |
Pin Requests |
Pct Miss |
Reloads |
Invali- dations |
BODY |
105 |
0.00 |
247 |
0.00 |
0 |
0 |
CLUSTER |
3 |
0.00 |
4 |
0.00 |
0 |
0 |
INDEX |
13 |
46.15 |
26 |
42.31 |
5 |
0 |
SQL AREA |
56 |
100.00 |
1,857,002 |
0.02 |
32 |
12 |
TABLE/PROCEDURE |
179 |
35.75 |
3,477 |
8.02 |
63 |
0 |
TRIGGER |
323 |
0.00 |
386 |
0.00 |
0 |
0 |
Namespace |
GES Lock Requests |
GES Pin Requests |
GES Pin Releases |
GES Inval Requests |
GES Invali- dations |
BODY |
5 |
0 |
0 |
0 |
0 |
CLUSTER |
4 |
0 |
0 |
0 |
0 |
INDEX |
26 |
22 |
6 |
17 |
0 |
TABLE/PROCEDURE |
1,949 |
285 |
63 |
244 |
0 |
|
Category |
Alloc (MB) |
Used (MB) |
Avg Alloc (MB) |
Std Dev Alloc (MB) |
Max Alloc (MB) |
Hist Max Alloc (MB) |
Num Proc |
Num Alloc |
B |
Other |
136.42 |
|
5.25 |
8.55 |
24 |
27 |
26 |
26 |
|
Freeable |
13.50 |
0.00 |
1.50 |
1.11 |
3 |
|
9 |
9 |
|
SQL |
0.33 |
0.16 |
0.03 |
0.03 |
0 |
2 |
12 |
10 |
|
PL/SQL |
0.12 |
0.06 |
0.01 |
0.01 |
0 |
0 |
24 |
24 |
E |
Other |
138.65 |
|
4.78 |
8.20 |
24 |
27 |
29 |
29 |
|
Freeable |
14.94 |
0.00 |
1.36 |
1.04 |
3 |
|
11 |
11 |
|
SQL |
0.39 |
0.19 |
0.03 |
0.03 |
0 |
2 |
15 |
12 |
|
PL/SQL |
0.18 |
0.11 |
0.01 |
0.01 |
0 |
0 |
27 |
26 |
This part describes how the SGA is allocated; the same information can be viewed with commands such as show sga.
Fixed Size:
May differ across Oracle platforms and versions, but is a fixed value for a given environment. It stores information about the SGA's components and can be regarded as the bootstrap area used to build the SGA.
Variable Size:
Contains the memory configured by shared_pool_size, java_pool_size, large_pool_size and so on.
Database Buffers:
The data buffer cache. In 8i it comprises db_block_buffers*db_block_size plus buffer_pool_keep and buffer_pool_recycle; in 9i it comprises db_cache_size, db_keep_cache_size, db_recycle_cache_size and db_nk_cache_size.
Redo Buffers:
The log buffer, log_buffer. For the log buffer, the values reported by v$parameter, v$sgastat and v$sga differ. v$parameter holds the value we set ourselves; it can even be 0, in which case Oracle falls back to its default minimum for v$sgastat, and v$sga likewise shows the minimum. The v$sgastat value is derived from the log_buffer setting by a fixed formula. The v$sga value is then derived from the v$sgastat value by adding roughly 8K-11K and rounding up to the smallest multiple of 4K; in other words, v$sga grows in 4K increments.
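The figures described above can be cross-checked directly against the views mentioned; a sketch:

```sql
-- overall SGA components (matches the output of 'show sga')
SELECT name, value FROM v$sga;

-- the views that report the log buffer differently
SELECT value FROM v$parameter WHERE name = 'log_buffer';
SELECT bytes FROM v$sgastat  WHERE name = 'log_buffer';
```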
SGA regions |
Begin Size (Bytes) |
End Size (Bytes) (if different) |
Database Buffers |
3,506,438,144 |
|
Fixed Size |
2,078,368 |
|
Redo Buffers |
14,696,448 |
|
Variable Size |
771,754,336 |
|
Pool |
Name |
Begin MB |
End MB |
% Diff |
java |
free memory |
16.00 |
16.00 |
0.00 |
large |
PX msg pool |
1.03 |
1.03 |
0.00 |
large |
free memory |
14.97 |
14.97 |
0.00 |
shared |
ASH buffers |
15.50 |
15.50 |
0.00 |
shared |
CCursor |
8.58 |
8.85 |
3.09 |
shared |
KQR L PO |
8.75 |
8.80 |
0.55 |
shared |
db_block_hash_buckets |
22.50 |
22.50 |
0.00 |
shared |
free memory |
371.80 |
369.61 |
-0.59 |
shared |
gcs resources |
66.11 |
66.11 |
0.00 |
shared |
gcs shadows |
41.65 |
41.65 |
0.00 |
shared |
ges big msg buffers |
13.75 |
13.75 |
0.00 |
shared |
ges enqueues |
7.44 |
7.56 |
1.63 |
shared |
ges reserved msg buffers |
7.86 |
7.86 |
0.00 |
shared |
library cache |
10.78 |
10.93 |
1.41 |
shared |
row cache |
7.16 |
7.16 |
0.00 |
shared |
sql area |
27.49 |
28.50 |
3.67 |
|
buffer_cache |
3,344.00 |
3,344.00 |
0.00 |
|
fixed_sga |
1.98 |
1.98 |
0.00 |
|
log_buffer |
14.02 |
14.02 |
0.00 |
No data exists for this section of the report.
No data exists for this section of the report.
No data exists for this section of the report.
No data exists for this section of the report.
No data exists for this section of the report.
No data exists for this section of the report.
Resource Name |
Current Utilization |
Maximum Utilization |
Initial Allocation |
Limit |
gcs_resources |
349,392 |
446,903 |
450063 |
450063 |
gcs_shadows |
400,300 |
447,369 |
450063 |
450063 |
Parameter Name |
Begin value |
End value (if different) |
audit_file_dest |
/oracle/app/oracle/admin/ICCI/adump |
|
background_dump_dest |
/oracle/app/oracle/admin/ICCI/bdump |
|
cluster_database |
TRUE |
|
cluster_database_instances |
2 |
|
compatible |
10.2.0.3.0 |
|
control_files |
/dev/rora_CTL01, /dev/rora_CTL02, /dev/rora_CTL03 |
|
core_dump_dest |
/oracle/app/oracle/admin/ICCI/cdump |
|
db_block_size |
8192 |
|
db_domain |
|
|
db_file_multiblock_read_count |
16 |
|
db_name |
ICCI |
|
dispatchers |
(PROTOCOL=TCP) (SERVICE=ICCIXDB) |
|
instance_number |
1 |
|
job_queue_processes |
10 |
|
open_cursors |
800 |
|
pga_aggregate_target |
1073741824 |
|
processes |
500 |
|
remote_listener |
LISTENERS_ICCI |
|
remote_login_passwordfile |
EXCLUSIVE |
|
sga_max_size |
4294967296 |
|
sga_target |
4294967296 |
|
sort_area_size |
196608 |
|
spfile |
/dev/rora_SPFILE |
|
thread |
1 |
|
undo_management |
AUTO |
|
undo_retention |
900 |
|
undo_tablespace |
UNDOTBS1 |
|
user_dump_dest |
/oracle/app/oracle/admin/ICCI/udump |
|
Statistic |
Total |
per Second |
per Trans |
acks for commit broadcast(actual) |
18,537 |
3.92 |
3.31 |
acks for commit broadcast(logical) |
21,016 |
4.45 |
3.75 |
broadcast msgs on commit(actual) |
5,193 |
1.10 |
0.93 |
broadcast msgs on commit(logical) |
5,491 |
1.16 |
0.98 |
broadcast msgs on commit(wasted) |
450 |
0.10 |
0.08 |
dynamically allocated gcs resources |
0 |
0.00 |
0.00 |
dynamically allocated gcs shadows |
0 |
0.00 |
0.00 |
false posts waiting for scn acks |
0 |
0.00 |
0.00 |
flow control messages received |
0 |
0.00 |
0.00 |
flow control messages sent |
2 |
0.00 |
0.00 |
gcs assume cvt |
0 |
0.00 |
0.00 |
gcs assume no cvt |
9,675 |
2.05 |
1.73 |
gcs ast xid |
1 |
0.00 |
0.00 |
gcs blocked converts |
7,099 |
1.50 |
1.27 |
gcs blocked cr converts |
8,442 |
1.79 |
1.51 |
gcs compatible basts |
45 |
0.01 |
0.01 |
gcs compatible cr basts (global) |
273 |
0.06 |
0.05 |
gcs compatible cr basts (local) |
12,593 |
2.66 |
2.25 |
gcs cr basts to PIs |
0 |
0.00 |
0.00 |
gcs cr serve without current lock |
0 |
0.00 |
0.00 |
gcs dbwr flush pi msgs |
223 |
0.05 |
0.04 |
gcs dbwr write request msgs |
223 |
0.05 |
0.04 |
gcs error msgs |
0 |
0.00 |
0.00 |
gcs forward cr to pinged instance |
0 |
0.00 |
0.00 |
gcs immediate (compatible) converts |
2,998 |
0.63 |
0.54 |
gcs immediate (null) converts |
170,925 |
36.16 |
30.53 |
gcs immediate cr (compatible) converts |
0 |
0.00 |
0.00 |
gcs immediate cr (null) converts |
722,748 |
152.88 |
129.11 |
gcs indirect ast |
306,817 |
64.90 |
54.81 |
gcs lms flush pi msgs |
0 |
0.00 |
0.00 |
gcs lms write request msgs |
189 |
0.04 |
0.03 |
gcs msgs process time(ms) |
16,164 |
3.42 |
2.89 |
gcs msgs received |
1,792,132 |
379.09 |
320.14 |
gcs out-of-order msgs |
0 |
0.00 |
0.00 |
gcs pings refused |
0 |
0.00 |
0.00 |
gcs pkey conflicts retry |
0 |
0.00 |
0.00 |
gcs queued converts |
2 |
0.00 |
0.00 |
gcs recovery claim msgs |
0 |
0.00 |
0.00 |
gcs refuse xid |
0 |
0.00 |
0.00 |
gcs regular cr |
0 |
0.00 |
0.00 |
gcs retry convert request |
0 |
0.00 |
0.00 |
gcs side channel msgs actual |
437 |
0.09 |
0.08 |
gcs side channel msgs logical |
21,086 |
4.46 |
3.77 |
gcs stale cr |
3,300 |
0.70 |
0.59 |
gcs undo cr |
5 |
0.00 |
0.00 |
gcs write notification msgs |
23 |
0.00 |
0.00 |
gcs writes refused |
3 |
0.00 |
0.00 |
ges msgs process time(ms) |
1,289 |
0.27 |
0.23 |
ges msgs received |
138,891 |
29.38 |
24.81 |
global posts dropped |
0 |
0.00 |
0.00 |
global posts queue time |
0 |
0.00 |
0.00 |
global posts queued |
0 |
0.00 |
0.00 |
global posts requested |
0 |
0.00 |
0.00 |
global posts sent |
0 |
0.00 |
0.00 |
implicit batch messages received |
81,181 |
17.17 |
14.50 |
implicit batch messages sent |
19,561 |
4.14 |
3.49 |
lmd msg send time(ms) |
0 |
0.00 |
0.00 |
lms(s) msg send time(ms) |
0 |
0.00 |
0.00 |
messages flow controlled |
15,306 |
3.24 |
2.73 |
messages queue sent actual |
108,411 |
22.93 |
19.37 |
messages queue sent logical |
222,518 |
47.07 |
39.75 |
messages received actual |
474,202 |
100.31 |
84.71 |
messages received logical |
1,931,144 |
408.50 |
344.97 |
messages sent directly |
25,742 |
5.45 |
4.60 |
messages sent indirectly |
137,725 |
29.13 |
24.60 |
messages sent not implicit batched |
88,859 |
18.80 |
15.87 |
messages sent pbatched |
1,050,224 |
222.16 |
187.61 |
msgs causing lmd to send msgs |
61,682 |
13.05 |
11.02 |
msgs causing lms(s) to send msgs |
85,978 |
18.19 |
15.36 |
msgs received queue time (ms) |
911,013 |
192.71 |
162.74 |
msgs received queued |
1,931,121 |
408.50 |
344.97 |
msgs sent queue time (ms) |
5,651 |
1.20 |
1.01 |
msgs sent queue time on ksxp (ms) |
66,767 |
14.12 |
11.93 |
msgs sent queued |
215,124 |
45.51 |
38.43 |
msgs sent queued on ksxp |
243,729 |
51.56 |
43.54 |
process batch messages received |
120,003 |
25.38 |
21.44 |
process batch messages sent |
181,019 |
38.29 |
32.34 |
Statistic |
Total |
CR Block Requests |
10,422 |
CURRENT Block Requests |
251 |
Data Block Requests |
10,422 |
Undo Block Requests |
2 |
TX Block Requests |
20 |
Current Results |
10,664 |
Private results |
4 |
Zero Results |
5 |
Disk Read Results |
0 |
Fail Results |
0 |
Fairness Down Converts |
1,474 |
Fairness Clears |
0 |
Free GC Elements |
0 |
Flushes |
370 |
Flushes Queued |
0 |
Flush Queue Full |
0 |
Flush Max Time (us) |
0 |
Light Works |
2 |
Errors |
0 |
Statistic |
Total |
% <1ms |
% <10ms |
% <100ms |
% <1s |
% <10s |
Pins |
17,534 |
99.96 |
0.01 |
0.03 |
0.00 |
0.00 |
Flushes |
77 |
48.05 |
46.75 |
5.19 |
0.00 |
0.00 |
Writes |
255 |
5.49 |
53.73 |
40.00 |
0.78 |
0.00 |
|
|
CR |
Current |
Inst No |
Block Class |
Blocks Received |
% Immed |
% Busy |
% Congst |
Blocks Received |
% Immed |
% Busy |
% Congst |
2 |
data block |
3,945 |
87.20 |
12.80 |
0.00 |
13,324 |
99.71 |
0.26 |
0.04 |
2 |
Others |
191 |
100.00 |
0.00 |
0.00 |
2,190 |
96.48 |
3.52 |
0.00 |
2 |
undo header |
11 |
100.00 |
0.00 |
0.00 |
2 |
100.00 |
0.00 |
0.00 |
End of Report