SQL> select event#,name,parameter1,parameter2,parameter3 from v$event_name where name = 'db file parallel read';

    EVENT# NAME                           PARAMETER1      PARAMETER2      PARAMETER3
---------- ------------------------------ --------------- --------------- ---------------
       120 db file parallel read          files           blocks          requests
1. db file parallel read
Contrary to what the name suggests, the db file parallel read event is not related to any parallel operation, neither parallel DML nor parallel query. This event occurs during database recovery, when blocks that need changes as part of recovery are read in parallel from the datafiles. It also occurs when a process reads multiple noncontiguous single blocks from one or more datafiles.
Wait Parameters

Wait parameters for db file parallel read are described here:

- P1: Number of files to read from
- P2: Total number of blocks to read
- P3: Total number of I/O requests (the same as P2, since multiblock read is not used)
Wait Time

No timeouts. The session waits until all of the I/Os are completed.
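These parameters can be watched live while the event is firing; the following is a minimal sketch against the standard v$session_wait view:

-- Sessions currently waiting on db file parallel read, with the
-- P1/P2/P3 meanings documented above (files, blocks, requests).
select sid,
       p1 as files,
       p2 as blocks,
       p3 as requests,
       seconds_in_wait
  from v$session_wait
 where event = 'db file parallel read';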
Case 1

http://www.itpub.net/thread-1586802-1-1.html
Although the execution plan is correct, the index's clustering factor is so high that the prefetch feature ends up pre-reading a large amount of data.

One statement's execution plan shows no problem: extracted and run on its own, it takes only 0.8 seconds, but inside the stored procedure it is very slow, needing around 10 seconds or even more.
Execution Plan

-----------------------------------------------------------------------------------------------------
| Id | Operation                      | Name                | Rows | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT               |                     |      |       | 1839  (100)|          |
|  1 |  SORT AGGREGATE                |                     |    1 |    37 |            |          |
|  2 |   FILTER                       |                     |      |       |            |          |
|  3 |    TABLE ACCESS BY INDEX ROWID | V_RPT_PLYEDR_INSRNC |    1 |    37 | 1839    (1)| 00:00:23 |
|  4 |     INDEX RANGE SCAN           | ACIF_INDEX_001      | 2147 |       |   18    (0)| 00:00:01 |
-----------------------------------------------------------------------------------------------------
Yet with the same data substituted in and an identical execution plan, the statement on its own takes only about 1 second. This should not be a bind-variable plan problem, since the plan is visibly optimal, so I checked the wait events:
       491 db file parallel read                 2041
       491 db file sequential read                526
       491 db file scattered read                  23
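The thread shows only this output; a query along the following lines against v$session_event would produce it (an assumption, since the original statement is omitted):

-- Per-session wait totals for the session with SID 491.
select sid, event, total_waits
  from v$session_event
 where sid = 491
   and event like 'db file%read'
 order by total_waits desc;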
SQL> select distinct(file_id) from dba_extents where segment_name='V_RPT_PLYEDR_INSRNC';

   FILE_ID
----------
        25
        22

SQL> select name from v$datafile where file# in ('22','25');

NAME
--------------------------------------------------------------------------------
/repdata/ora9i/CIRCI.dbf
/repdata/ora9i/circi01.dbf
Top 5 Timed Events

Event                   | Waits   | Time(s) | Avg Wait(ms) | % Total Call Time | Wait Class
------------------------|---------|---------|--------------|-------------------|-----------
db file parallel read   | 351,880 |   2,891 |            8 |              68.3 | User I/O
db file sequential read | 463,984 |   1,216 |            3 |              28.7 | User I/O
CPU time                |         |     184 |              |               4.4 |
log file parallel write |   1,346 |       3 |            2 |                .1 | System I/O
db file parallel write  |     512 |       3 |            6 |                .1 | System I/O
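The system-wide totals behind a Top 5 listing like this can also be checked live; a minimal sketch against the standard v$system_event view:

-- System-wide totals for the I/O waits that dominate the Top 5 list.
-- TIME_WAITED is in centiseconds, hence the division by 100.
select event, total_waits, time_waited / 100 as time_waited_s
  from v$system_event
 where event in ('db file parallel read',
                 'db file sequential read',
                 'db file scattered read')
 order by time_waited desc;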
File IO Stats

Tablespace | Filename                    | Reads     | Av Reads/s | Av Rd(ms) | Av Blks/Rd | Writes | Av Writes/s | Buffer Waits | Av Buf Wt(ms)
-----------|-----------------------------|-----------|------------|-----------|------------|--------|-------------|--------------|--------------
CIRCI      | /repdata/ora9i/CIRCI.dbf    | 2,847,514 |        787 |     12.59 |       1.02 |  1,258 |           0 |            3 |         20.00
CIRCI      | /repdata/ora9i/circi01.dbf  |   915,158 |        253 |      8.63 |       1.00 |     13 |           0 |            0 |          0.00
REPORT     | /repdata/ora9i/REPORT01.dbf |   257,679 |         71 |      0.75 |      15.15 |      0 |           0 |      186,811 |          0.45
REPORT     | /repdata/ora9i/REPORT02.dbf |   255,701 |         71 |      0.71 |      15.21 |      0 |           0 |      187,164 |          0.43
REPORT     | /repdata/ora9i/REPORT03.dbf |   135,105 |         37 |      0.72 |      15.35 |      0 |           0 |      125,856 |          0.39
Av Rd(ms) is far too high. Having ruled out a problem with the disk array itself, could this be related to an oversized clustering factor, which makes the lookups by ROWID jump wildly across table blocks?

The results show the clustering factor is indeed huge: the table holds just over 30 million rows and the clustering factor reaches 26 million, yet the index has only 846 distinct values; the table is only ever loaded by batch INSERTs.
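A quick way to confirm this kind of skew (a sketch using the standard dictionary views; the index name is the one from the plan above):

-- Compare the index clustering factor with the table's row and block counts.
-- A clustering factor near NUM_ROWS (26M vs 30M here) means adjacent index
-- entries point at scattered table blocks, so each range scan drags in many
-- single-block or prefetched table reads.
select i.index_name,
       i.clustering_factor,
       i.distinct_keys,
       t.num_rows,
       t.blocks
  from dba_indexes i
  join dba_tables t
    on t.owner = i.table_owner
   and t.table_name = i.table_name
 where i.index_name = 'ACIF_INDEX_001';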
So the following approaches are worth considering:

1. Build a BITMAP index to replace the previous B-TREE index.
2. Or build a wide composite index, so the query can be answered by an INDEX FAST FULL SCAN alone.
3. Also create a new tablespace to hold the indexes, and preferably MOVE the table into the new tablespace as well.
I have already created the composite index, and the effect is remarkably good: the query no longer goes through TABLE ACCESS BY INDEX ROWID at all; scanning just the index is enough.
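The thread does not show the index DDL; a hypothetical sketch of what option 2 looks like, with placeholder column names (the real filter and aggregate columns are not given):

-- Hypothetical DDL for the covering composite index TEST01 (the name that
-- appears in the new plan below). COL_FILTER and COL_AGG are placeholders;
-- the original thread does not list the actual columns.
create index TEST01 on V_RPT_PLYEDR_INSRNC (COL_FILTER, COL_AGG)
  tablespace IDX_TBS_NEW   -- option 3: a dedicated tablespace for indexes
  nologging;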
The new Execution Plan is as follows:
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
Plan hash value: 2161530321

-----------------------------------------------------------------------------
| Id  | Operation         | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |        |     1 |    37 |   156   (1)| 00:00:02 |
|   1 |  SORT AGGREGATE   |        |     1 |    37 |            |          |
|*  2 |   INDEX RANGE SCAN| TEST01 |     2 |    74 |   156   (1)| 00:00:02 |
-----------------------------------------------------------------------------
The mechanism:

high clustering factor + index range scan -----> a huge number of ROWID lookups back to the table, touching scattered blocks -----> buffer prefetching (11g) -----> db file parallel read ...

When a database shows excessive db file parallel read waits, go tune the SQL.
Case 2

http://yangtingkun.net/?p=695 (Eliminating db file parallel read on 11.2)
A customer running a stress test on an 11.2.0.3 environment found large numbers of db file parallel read wait events.

Seeing this wait here is an 11g phenomenon: before 11g, this wait event generally occurred only during datafile recovery, but 11g added a prefetch feature that can also produce it.

While the stress test was running, the wait events in the background looked like this:
SQL> SELECT event, COUNT(*) FROM v$session WHERE username = USER GROUP BY event ORDER BY 2;

EVENT                                                              COUNT(*)
---------------------------------------------------------------- ----------
SQL*Net message FROM client                                               1
SQL*Net message TO client                                                 1
db file sequential READ                                                  24
db file scattered READ                                                   33
db file parallel READ                                                    42
The user processes are clearly suffering fairly severe I/O waits, and in this situation db file parallel read brings no performance benefit.

The prefetch feature can be switched off by setting hidden parameters, which prevents the db file parallel read wait event from arising:
_db_block_prefetch_limit=0
_db_block_prefetch_quota=0
_db_file_noncontig_mblock_read_count=0
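A minimal sketch of applying these settings (hidden underscore parameters should normally only be changed under Oracle Support guidance; the quoting and SCOPE syntax are standard ALTER SYSTEM):

-- Disable block prefetching via the three hidden parameters above.
-- SCOPE=spfile means the change takes effect after the next restart.
ALTER SYSTEM SET "_db_block_prefetch_limit" = 0 SCOPE = spfile;
ALTER SYSTEM SET "_db_block_prefetch_quota" = 0 SCOPE = spfile;
ALTER SYSTEM SET "_db_file_noncontig_mblock_read_count" = 0 SCOPE = spfile;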
After these three hidden parameters were changed, the db file parallel read wait event disappeared:
SQL> SELECT event, COUNT(*) FROM v$session WHERE username = USER GROUP BY event ORDER BY 2;

EVENT                                                              COUNT(*)
---------------------------------------------------------------- ----------
SQL*Net message TO client                                                 1
db file scattered READ                                                   30
db file sequential READ                                                  70