SQL Performance Analyzer (SPA) Tool Overview
As part of the Oracle Real Application Testing option, this article gives a brief overview of the SQL Performance Analyzer (SPA) tool. It is the first part of a series; the second part, next month, covers Database Capture and Replay. For details on SPA, see:
Database Testing Guide
An important part of a DBA's job is to make sure that, after a planned change, the current production workload and its SQL execution plans keep running smoothly. Changes may include a database upgrade, adding a new index, or altering a particular database parameter. The SPA tool, delivered as part of the Oracle Real Application Testing option, lets you take the SQL from a production workload and run it against a target database in a test environment, so regressions can be identified by comparing the results and fixed before the migration, upgrade, or other system change. If you plan to use the Database Replay feature, running SPA first is an Oracle-recommended best practice: the goal is to identify and fix all SQL performance regressions before the replay, so the replay can focus purely on concurrency and throughput. SQL Performance Analyzer takes a SQL Tuning Set (STS) as input. STSs have been around for a long time and let a DBA extract the SQL workload from an existing production system and easily compare before-and-after execution results while saving time and resources. A SQL Tuning Set (STS) is a database object that contains a set of SQL statements captured from a workload together with their execution context (for example the parsing user and bind variables, execution statistics, and execution plans). For more on STSs, see: Managing SQL Tuning Sets

Note: SQL Performance Analyzer requires the Oracle Real Application Testing license. For more information, see: Oracle Database Licensing Information.

The list below shows some common scenarios in which a DBA might consider using SPA.

Use cases
1. Database upgrade – a new database release means a new version of the optimizer. A DBA can proactively find any SQL performance regressions before upgrading the production system.
2. Deploying a patch – you may be applying a patch that contains a specific performance- or optimizer-related fix. Checking your production SQL workload with SPA helps you verify that the patch does not introduce any SQL regressions.
3. Database initialization parameter changes – many database parameters can affect performance, so this is another good fit for SPA.
4. Schema changes such as adding an index – schema changes such as adding an index directly affect the optimizer's decisions and plans. SPA can be used to test these changes and make sure they introduce no negative impact.
5. Changing or refreshing optimizer statistics – optimizer statistics drive the optimizer's decisions and the plans it generates; SPA lets you test new statistics and settings to make sure they do not cause SQL regressions.
Using SPA means working through the workflow below. The tool is fully integrated into Oracle 12c Cloud Control, and Oracle also provides a PL/SQL package, DBMS_SQLPA, that lets a DBA perform the same steps from PL/SQL. The workflow is iterative: execute, compare and analyze, then fix the regressions. A DBA can use tools/features such as SQL Plan Baselines or the SQL Tuning Advisor to fix the bad or regressed statements SPA finds.

SPA workflow
1. Capture the SQL workload you want to analyze on the production system and save it as a SQL Tuning Set.
2. Set up the target test system (it should match production as closely as possible).
3. Create a SPA task on the test system.
4. Build the before-change SPA trial.
5. Make the system change.
6. Build the after-change SPA trial.
7. Compare and analyze the before and after performance data.
8. Tune or fix any regressed SQL statements.
9. Repeat steps 6–8 until SQL performance on the test system is acceptable.

For the purposes of this article we will walk through a simple example: a schema change that adds an index to a table.
• Source database version: 12.1.0.2.0
• Target test system: 12.1.0.2.0
• The system change is adding an index on table t1
• The performance report will be generated as detailed HTML

SPA – a quick tour using the PL/SQL API
Note: for more information on the DBMS_SQLPA package and its usage, see: Using DBMS_SQLPA
1. Capture the SQL Workload into a SQL Tuning Set
Create and populate the STS
BEGIN
  DBMS_SQLTUNE.DROP_SQLSET (sqlset_name => 'MYSIMPLESTSUSINGAPI');
END;
/

BEGIN
  DBMS_SQLTUNE.CREATE_SQLSET (sqlset_name => 'MYSIMPLESTSUSINGAPI', description => 'My Simple STS Using the API');
END;
/
1a. Run the following PL/SQL as the SCOTT user to execute the SQL statements. (The PL/SQL simulates a SQL workload that uses bind variables.)

var b1 number;

declare
  v_num number;
begin
  for i in 1..10000 loop
    :b1 := i;
    select c1 into v_num from t1 where c1 = :b1;
  end loop;
end;
/

1b. Populate the STS from the cursor cache with the statements whose parsing schema is SCOTT
DECLARE
  c_sqlarea_cursor DBMS_SQLTUNE.SQLSET_CURSOR;
BEGIN
  OPEN c_sqlarea_cursor FOR
    SELECT VALUE(p)
      FROM TABLE(DBMS_SQLTUNE.SELECT_CURSOR_CACHE('parsing_schema_name = ''SCOTT''',
                                                  NULL, NULL, NULL, NULL, 1, NULL, 'ALL')) p;
  DBMS_SQLTUNE.LOAD_SQLSET (sqlset_name => 'MYSIMPLESTSUSINGAPI', populate_cursor => c_sqlarea_cursor);
END;
/
1c. Check how many SQL statements were captured into the STS
COLUMN NAME FORMAT a20
COLUMN COUNT FORMAT 99999
COLUMN DESCRIPTION FORMAT a30
SELECT NAME, STATEMENT_COUNT AS "SQLCNT", DESCRIPTION FROM USER_SQLSET;
Results:
NAME                     SQLCNT DESCRIPTION
-------------------- ---------- ------------------------------
MYSIMPLESTSUSINGAPI          12 My Simple STS Using the API
1d. Display the contents of the STS
COLUMN SQL_TEXT FORMAT a30
COLUMN SCH FORMAT a3
COLUMN ELAPSED FORMAT 999999999
SELECT SQL_ID, PARSING_SCHEMA_NAME AS "SCOTT", SQL_TEXT, ELAPSED_TIME AS "ELAPSED", BUFFER_GETS FROM TABLE( DBMS_SQLTUNE.SELECT_SQLSET( 'MYSIMPLESTSUSINGAPI' ) );
Results: (partial)
SQL_ID        SCOTT  SQL_TEXT                        ELAPSED    BUFFER_GETS
------------- ------ ------------------------------ ---------- -----------
0af4p26041xkv SCOTT  SELECT C1 FROM T1 WHERE C1 = :  169909252    18185689

2. Set Up the Target System
For demonstration purposes, the system the STS was captured on is also used as the target test system.
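In a real project the STS would be moved from production to the test system rather than re-captured there. A minimal sketch of that transport using the DBMS_SQLTUNE staging-table API (the staging table and schema names here are illustrative assumptions; later sections of these notes use the same pattern with the DBMGR schema):

-- On the source system: create a staging table and pack the STS into it
EXEC DBMS_SQLTUNE.CREATE_STGTAB_SQLSET(table_name => 'MYSTS_STGTAB', schema_name => 'SCOTT');
EXEC DBMS_SQLTUNE.PACK_STGTAB_SQLSET(sqlset_name => 'MYSIMPLESTSUSINGAPI', staging_table_name => 'MYSTS_STGTAB', staging_schema_owner => 'SCOTT');
-- Move SCOTT.MYSTS_STGTAB to the test system (Data Pump or exp/imp), then unpack it there:
EXEC DBMS_SQLTUNE.UNPACK_STGTAB_SQLSET(sqlset_name => 'MYSIMPLESTSUSINGAPI', replace => TRUE, staging_table_name => 'MYSTS_STGTAB', staging_schema_owner => 'SCOTT');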
3. Create the SPA Task
VARIABLE t_name VARCHAR2(100);
EXEC :t_name := DBMS_SQLPA.CREATE_ANALYSIS_TASK(sqlset_name => 'MYSIMPLESTSUSINGAPI', task_name => 'MYSPATASKUSINGAPI');
print t_name
Results:
T_NAME ----------------- MYSPATASKUSINGAPI
4. Build and Execute the Before-Change SPA Trial
EXEC DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(task_name => 'MYSPATASKUSINGAPI', execution_type => 'TEST EXECUTE', execution_name => 'MY_BEFORE_CHANGE');
5. Make the System Change
CREATE INDEX t1_idx ON t1 (c1);
6. Build and Execute the After-Change SPA Trial
EXEC DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(task_name => 'MYSPATASKUSINGAPI', execution_type => 'TEST EXECUTE', execution_name => 'MY_AFTER_CHANGE');

7. Compare and Analyze the Before and After Performance
EXEC DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(task_name => 'MYSPATASKUSINGAPI', execution_type => 'COMPARE PERFORMANCE', execution_name => 'MY_EXEC_COMPARE', execution_params => dbms_advisor.arglist('comparison_metric', 'elapsed_time'));
-- Generate the Report
set long 100000000 longchunksize 100000000 linesize 200 head off feedback off echo off TRIMSPOOL ON TRIM ON
VAR rep CLOB;
EXEC :rep := DBMS_SQLPA.REPORT_ANALYSIS_TASK('MYSPATASKUSINGAPI', 'html', 'typical', 'all');
SPOOL C:\mydir\SPA_detailed.html
PRINT :rep
SPOOL off
Sample report in HTML format:
Below is a partial screenshot of the SPA report. The report has three sections. The first covers the before-change and after-change trials, including scope, status, execution start time, number of errors, and the comparison metric. The second is a summary of the workload impact of the change. The third contains per-SQL detail, such as the SQL ID and the compared metrics: impact on the workload, execution frequency, and, in this example, the elapsed-time metric before and after the change. For SQL ID 0af4p26041xkv, for instance, the workload impact is 97%; the change improved performance, reducing elapsed time from 12766 to 29, and the execution plan changed as well. This information helps the DBA focus on a specific problem or regression; in this case the impact is an improvement.

The next screenshot shows how the execution plan changed after the index was added. This part of the report lets the DBA focus on the before and after execution plans of a particular SQL statement. For any regressed SQL it makes clear exactly how the plan changed, so the DBA can take further action, for example running the SQL Tuning Advisor or creating an SPM baseline.
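When the full report is large, a report restricted to regressed statements can be pulled the same way; a small sketch reusing the task above ('regressed' is one of the report levels, as also used later in these notes):

EXEC :rep := DBMS_SQLPA.REPORT_ANALYSIS_TASK('MYSPATASKUSINGAPI', 'html', 'regressed', 'all');
SPOOL C:\mydir\SPA_regressed.html
PRINT :rep
SPOOL off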
Recommended resources on Real Application Testing:
• Oracle Real Application Testing Product Information
• Master Note for Real Application Testing Option (Doc ID 1464274.1)
• Database Testing: Best Practices (Doc ID 1535885.1)
• Mandatory Patches for Database Testing Functionality for Current and Earlier Releases (Doc ID 560977.1)
##### sample 0
Notes:
SPA only makes sense between two databases whose data is identical; if the production and test databases differ, the comparison is meaningless because the data does not match.
The test side therefore needs two environments: one mirroring production (the old release) and one mirroring the new release.
section 1:
1. Recovery platform – build the 10g performance test environment: restore the 10g production database on the recovery platform as the performance test environment, keeping parameters identical to production.
2. 11g performance test database – build the 11g performance test environment: migrate cross-platform to the Linux host and build the 11g environment there.
3. Production – capture the SQL set: run the attached SQL on production; its impact on production performance is small.
   The capture runs for about a week; if anything abnormal happens it can be stopped immediately.
---------------------------------------------------
--Step1: Create a SQL set named STS_SQLSET.
---------------------------------------------------
BEGIN
DBMS_SQLTUNE.DROP_SQLSET(
sqlset_name => 'STS_SQLSET'
);
END;
/
BEGIN
DBMS_SQLTUNE.CREATE_SQLSET(SQLSET_NAME => 'STS_SQLSET',
DESCRIPTION => 'COMPLETE APPLICATION WORKLOAD',
SQLSET_OWNER =>'DBMGR');
END;
/
---------------------------------------------------
--Step2: Initial load of the SQL currently in the database.
---------------------------------------------------
DECLARE
STSCUR DBMS_SQLTUNE.SQLSET_CURSOR;
BEGIN
OPEN STSCUR FOR
SELECT VALUE(P)
FROM TABLE(DBMS_SQLTUNE.SELECT_CURSOR_CACHE('PARSING_SCHEMA_NAME NOT IN (''SYS'', ''SYSTEM'', ''OUTLN'', ''DBSNMP'', ''WMSYS'', ''CTXSYS'', ''XDB'', ''MDSYS'',
''ORDPLUGINS'', ''ORDSYS'', ''OLAPSYS'', ''HR'', ''OE'', ''SCOTT'', ''QS_CB'', ''QS_CBADM'', ''QS_ES'', ''QS_OS'', ''QS_CS'', ''QS'', ''QS_ADM'', ''QS_WS'', ''DIP'', ''TSMSYS'', ''EXFSYS'',
''LBACSYS'', ''TRACESVR'', ''AURORA$JIS$UTILITY$'', ''OSE$HTTP$ADMIN'', ''DBMGR'', ''OVSEE'', ''DBMONOPR'')
AND PLAN_HASH_VALUE <> 0 AND UPPER(SQL_TEXT) NOT LIKE ''%TABLE_NAME%''',
NULL, NULL, NULL, NULL, 1, NULL,
'ALL')) P;
-- POPULATE THE SQLSET
DBMS_SQLTUNE.LOAD_SQLSET(SQLSET_NAME => 'STS_SQLSET',
POPULATE_CURSOR => STSCUR,
COMMIT_ROWS => 100,
SQLSET_OWNER => 'DBMGR');
CLOSE STSCUR;
COMMIT;
EXCEPTION
WHEN OTHERS THEN
RAISE;
END;
/
select owner,name,STATEMENT_COUNT from dba_sqlset;
--Check the size of the initial collection; if it is too large, the process can be stopped.
---------------------------------------------------
--Step3: Incrementally capture SQL from the database. The capture samples the cursor cache once an hour for the duration of TIME_LIMIT (345600 seconds = 4 days below; extend it for a full week), and the session stays open the whole time. Run this step from a shell script in the background, or in SQL*Plus if the session will not be killed.
BEGIN
DBMS_SQLTUNE.CAPTURE_CURSOR_CACHE_SQLSET(SQLSET_NAME => 'STS_SQLSET',
TIME_LIMIT => 345600,
REPEAT_INTERVAL => 3600,
CAPTURE_OPTION => 'MERGE',
CAPTURE_MODE => DBMS_SQLTUNE.MODE_ACCUMULATE_STATS,
BASIC_FILTER => 'PARSING_SCHEMA_NAME NOT IN (''SYS'', ''SYSTEM'', ''OUTLN'', ''DBSNMP'', ''WMSYS'', ''CTXSYS'', ''XDB'', ''MDSYS'',
''ORDPLUGINS'', ''ORDSYS'', ''OLAPSYS'', ''HR'', ''OE'', ''SCOTT'', ''QS_CB'', ''QS_CBADM'', ''QS_ES'', ''QS_OS'', ''QS_CS'', ''QS'', ''QS_ADM'', ''QS_WS'', ''DIP'', ''TSMSYS'', ''EXFSYS'',''LBACSYS'', ''TRACESVR'', ''AURORA$JIS$UTILITY$'', ''OSE$HTTP$ADMIN'', ''DBMGR'', ''OVSEE'', ''DBMONOPR'')
AND PLAN_HASH_VALUE <> 0 AND UPPER(SQL_TEXT) NOT LIKE ''%TABLE_NAME%''',
SQLSET_OWNER => 'DBMGR');
END;
/
--script for running in the background; collects from 8/2 to 8/6, 4 days total
cd /db/cps/app/opcps/dba
vi collect_spq.sh
sqlplus / as sysdba <<eof
select instance_name from v\$instance;
BEGIN
DBMS_SQLTUNE.CAPTURE_CURSOR_CACHE_SQLSET(SQLSET_NAME => 'STS_SQLSET',
TIME_LIMIT => 345600,
REPEAT_INTERVAL => 3600,
CAPTURE_OPTION => 'MERGE',
CAPTURE_MODE => DBMS_SQLTUNE.MODE_ACCUMULATE_STATS,
BASIC_FILTER => 'PARSING_SCHEMA_NAME NOT IN (''SYS'', ''SYSTEM'', ''OUTLN'', ''DBSNMP'', ''WMSYS'', ''CTXSYS'', ''XDB'', ''MDSYS'',
''ORDPLUGINS'', ''ORDSYS'', ''OLAPSYS'', ''HR'', ''OE'', ''SCOTT'', ''QS_CB'', ''QS_CBADM'', ''QS_ES'', ''QS_OS'', ''QS_CS'', ''QS'', ''QS_ADM'', ''QS_WS'', ''DIP'', ''TSMSYS'', ''EXFSYS'',''LBACSYS'', ''TRACESVR'', ''AURORA$JIS$UTILITY$'', ''OSE$HTTP$ADMIN'', ''DBMGR'', ''OVSEE'', ''DBMONOPR'')
AND PLAN_HASH_VALUE <> 0 AND UPPER(SQL_TEXT) NOT LIKE ''%TABLE_NAME%''',
SQLSET_OWNER => 'DBMGR');
END;
/
eof
4. Production – after step 3 completes, create the staging table and pack the SQL set into it:
exec dbms_sqltune.create_stgtab_sqlset(table_name => 'STS_STBTAB' ,schema_name => 'DBMGR');
exec dbms_sqltune.pack_stgtab_sqlset(sqlset_name =>'STS_SQLSET' ,sqlset_owner =>'DBMGR' ,staging_table_name =>'STS_STBTAB' ,staging_schema_owner => 'DBMGR' );
Method 1:
(
After converting into the staging table we can deduplicate once more; cursors that are not needed can also be deleted by module, for example:
delete from SPA.SQLSET_TAB a where rowid != (select max(rowid) from SQLSET_TAB b where a.FORCE_MATCHING_SIGNATURE = b.FORCE_MATCHING_SIGNATURE and a.FORCE_MATCHING_SIGNATURE <> 0);
delete from SPA.SQLSET_TAB where MODULE = 'PL/SQL Developer';
)
Method 2:
(
exec dbms_sqltune.create_stgtab_sqlset(table_name => 'STS_STBTAB_08' ,schema_name => 'DBMGR');
exec dbms_sqltune.pack_stgtab_sqlset(sqlset_name =>'STS_SQLSET' ,sqlset_owner =>'DBMGR' ,staging_table_name =>'STS_STBTAB_08' ,staging_schema_owner => 'DBMGR' );
create table sts_b as select distinct(to_char(s.force_matching_signature)) b from DBMGR.STS_STBTAB_08 s;
create index sts_b_1_force on STS_STBTAB_08(to_char(force_matching_signature));
DECLARE
T VARCHAR2(50);
cursor STSCUR IS select b from sts_b;
STSCUR_1 STS_STBTAB_08%ROWTYPE;
BEGIN
OPEN STSCUR;
LOOP
fetch STSCUR into T;
EXIT WHEN STSCUR%NOTFOUND;
-- DBMS_OUTPUT.PUT_LINE(T);
select * into STSCUR_1 from ( (select * from STS_STBTAB_08 st where (to_char(st.force_matching_signature)) =T ) order by cpu_time desc) where rownum < 2;
delete STS_STBTAB_08 where (to_char(force_matching_signature)) =T;
insert into STS_STBTAB_08 values STSCUR_1;
commit;
-- DBMS_OUTPUT.PUT_LINE(STSCUR_1.SQL_ID);
END LOOP;
CLOSE STSCUR;
END;
)
5. Production – export the SQL set (the note says expdp, but the command below uses classic exp; a Data Pump variant is sketched after it):
select count(*) from DBMGR.STS_STBTAB;
exp dbmgr/db1234DBA tables=STS_STBTAB file=/db/cps/archivelog/exp_SQLSET_TAB.dmp log=/db/cps/archivelog/exp_SQLSET_TAB.log FEEDBACK=1000 BUFFER=5000000
section 2:
Import the SQL set into the 11g performance test database
6. 11g performance test database – import DBMGR.STS_STBTAB.
Because the staging table holds about 700,000 rows, the run would be very slow, so the data is filtered first: statements that differ only in literal values are collapsed by FORCE_MATCHING_SIGNATURE, keeping one representative row per signature (the code below picks the top row ordered by cpu_time), and the surviving sql_ids are stored in table sts_b_2. That leaves roughly 20,000 statements (a sketch of pruning the STS itself follows the PL/SQL block below). The method is as follows:
drop table sts_b;
drop table sts_b_1;
drop table sts_b_2;
drop table sts_b_3;
create table sts_b as select distinct(to_char(s.force_matching_signature)) b from dba_sqlset_statements s;
create table sts_b_1 as select * from dba_sqlset_statements;
create index sts_b_1_force on sts_b_1(to_char(force_matching_signature));
create table sts_b_2 (sql_id varchar2(50));
create table sts_b_3 as select * from dba_sqlset_statements;
DECLARE
T VARCHAR2(50);
cursor STSCUR IS select b from sts_b;
STSCUR_1 dba_sqlset_statements%ROWTYPE;
BEGIN
OPEN STSCUR;
LOOP
fetch STSCUR into T;
EXIT WHEN STSCUR%NOTFOUND;
-- DBMS_OUTPUT.PUT_LINE(T);
select * into STSCUR_1 from ( (select * from sts_b_1 st where (to_char(st.force_matching_signature)) =T ) order by cpu_time desc) where rownum < 2;
delete sts_b_1 where (to_char(force_matching_signature)) =T;
insert into sts_b_2 values(STSCUR_1.SQL_ID);
commit;
-- DBMS_OUTPUT.PUT_LINE(STSCUR_1.SQL_ID);
END LOOP;
CLOSE STSCUR;
END;
--select * into STSCUR_1 from ( (select * from dba_sqlset_statements st where (to_char(st.force_matching_signature)) =6018318222786944325 ) order by cpu_time ---desc) where rownum < 2;
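The block above only records the surviving sql_ids in sts_b_2; it does not shrink the SQL set itself. One possible follow-up (an assumption, not part of the original notes) is DBMS_SQLTUNE.DELETE_SQLSET with a filter that drops everything outside that list:

BEGIN
  -- assumption: the basic_filter accepts a subquery against DBMGR.STS_B_2;
  -- if it does not, loop over sts_b_2 and delete the unwanted statements group by group
  DBMS_SQLTUNE.DELETE_SQLSET(
    sqlset_name  => 'STS_SQLSET',
    basic_filter => 'SQL_ID NOT IN (SELECT sql_id FROM dbmgr.sts_b_2)',
    sqlset_owner => 'DBMGR');
END;
/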
7. 11g performance test database – unpack the SQL set:
exec dbms_sqltune.unpack_stgtab_sqlset(sqlset_name => 'STS_SQLSET',sqlset_owner => 'DBMGR' ,replace => True,staging_table_name =>'STS_STBTAB' ,staging_schema_owner => 'DBMGR');
section 3:
Import the SQL set into the 10g test database
8. 10g test environment – grants:
grant all on spa_sqlpa to dbmgr;
grant all on dbms_sqlpa to dbmgr;
9. 10g test environment – import DBMGR.STS_STBTAB with imp, e.g.:
imp dbmgr/dbmgr fromuser=dbmgr touser=dbmgr file=/oraclelv/exp_SQLSET_TAB.dmp feedback=1000 log=/oraclelv/imp_SQLSET_TAB.log BUFFER=5000000
imp dbmgr/db1234DBA fromuser=dbmgr touser=dbmgr file=/datalv03/afa/exp_SQLSET_TAB.dmp feedback=1000 log=/datalv03/afa/imp_SQLSET_TAB.log BUFFER=5000000
10. 10g test environment – unpack the SQL set:
exec dbms_sqltune.unpack_stgtab_sqlset(sqlset_name => 'STS_SQLSET',sqlset_owner => 'DBMGR' ,replace => True,staging_table_name =>'STS_STBTAB' ,staging_schema_owner => 'DBMGR');
11. 11g performance test database – create a database link to the 10g test database:
create public database link to_10g connect to dbmgr identified by xxxxx using 'xxxxxxx';
create public database link to_10g connect to dbmgr identified by db1234DBA
using '(DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = 25.10.0.199)(PORT = 1539)) (CONNECT_DATA = (sid = afa)))';
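Before replaying over the link, a quick sanity check that TO_10G resolves and connects (a trivial sketch):

select * from dual@to_10g;
-- the following also confirms which instance the link lands on (requires access to v$instance on the remote side)
select instance_name from v$instance@to_10g;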
section 4:
First SPA replay: collect the performance baseline of the 10g test environment
12. 11g performance test database – create the task:
declare
  mytask varchar2(100);
begin
  mytask := dbms_sqlpa.create_analysis_task(sqlset_name => 'STS_SQLSET',sqlset_owner => 'DBMGR',task_name => 'TASK_10G');
END;
/
13. 11g performance test database – generate the script for the first trial; it runs remotely against the 10g test database through the TO_10G database link:
vi spa_10g.sh
sqlplus -S /nolog <<SQLEnd
conn / as sysdba
exec DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(task_name=>'TASK_10G',execution_type=>'test execute',execution_name=>'spa10g',execution_params =>dbms_advisor.argList('DATABASE_LINK','TO_10G','EXECUTE_COUNT',5) ,execution_desc => 'before_change');
exit
SQLEnd
The BEF_TASK_SQLSET_NO_i.sh scripts can be seen under SPA_DIR.
14. 11g performance test database – run the task (first SPA replay):
nohup sh spa_10g.sh > spa_10g.log &
This starts the first SPA replay against the target database.
15. 11g performance test database – check replay progress and estimate how long the ~20,000 statements will take (see the sketch below):
SELECT owner,task_name,execution_name,a.execution_type,execution_start,execution_last_modified,execution_end,status,b.SOFAR,b.START_TIME,b.LAST_UPDATE_TIME
  from dba_advisor_executions a, v$advisor_progress b where a.execution_name like 'spa10g%';
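To turn SOFAR/TOTALWORK into a rough completion estimate for the replay, something like the following can be used (a sketch; V$ADVISOR_PROGRESS.TIME_REMAINING is reported in seconds):

SELECT b.sofar, b.totalwork,
       ROUND(100 * b.sofar / NULLIF(b.totalwork, 0), 1) AS pct_done,
       ROUND(b.time_remaining / 60)                     AS est_minutes_left
  FROM dba_advisor_executions a, v$advisor_progress b
 WHERE a.task_id = b.task_id
   AND a.execution_name LIKE 'spa10g%';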
section 5:
Second SPA replay: collect the performance data of the 11g performance test environment
16. 11g performance test database – create the task:
declare
  mytask varchar2(100);
begin
  mytask := dbms_sqlpa.create_analysis_task(sqlset_name => 'STS_SQLSET',sqlset_owner => 'DBMGR',task_name => 'TASK_11G');
END;
/
17. 11g performance test database – generate the script for the second trial:
vi spa_11g.sh
sqlplus -S /nolog <<SQLEnd
conn / as sysdba
exec DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(task_name=>'TASK_11G',execution_type=>'test execute',execution_name=>'spa11g_1',execution_params =>dbms_advisor.argList('EXECUTE_COUNT',5) ,execution_desc => 'after_change');
exit
SQLEnd
The AFT_TASK_SQLSET_NO_i.sh scripts can be seen under SPA_DIR.
18. 11g performance test database – run the task (second SPA replay):
nohup sh spa_11g.sh > spa_11g.log &
This starts the second SPA replay against the target database; it takes about five hours.
19. 11g performance test database – check replay progress:
SELECT owner,task_name,execution_name,a.execution_type,execution_start,execution_last_modified,execution_end,status,b.SOFAR,b.START_TIME,b.LAST_UPDATE_TIME
  from dba_advisor_executions a, v$advisor_progress b where a.execution_name like 'spa11g%';
section 6:
20. 11g performance test database – pull out the SQL whose buffer gets increased and analyze it.
20.1. The analysis SQL is below; the first query uses the upper-case execution names (SPA10G/SPA11G), the second the actual lower-case ones (spa10g/spa11g_1):
Select *
From (Select b.*, dbms_lob.substr(st.sql_text, 3400) sql_text
From SYS.Wrh$_Sqltext st,
(Select task_name,
sql_id,
executions,
detal_buffer_gets,
bf_buffer_gets,
af_buffer_gets,
bf_plan_hash_value,
bf_rows_processed,
af_plan_hash_value,
af_ROWS_PROCESSED
From (Select task_name,
sql_id,
bf_executions executions,
round(af_buffer_gets / af_EXECUTIONS) -
round(bf_buffer_gets / bf_executions) detal_buffer_gets,
round(bf_buffer_gets / bf_executions) bf_buffer_gets,
round(af_buffer_gets / af_EXECUTIONS) af_buffer_gets,
bf_plan_hash_value,
bf_rows_processed / bf_executions bf_rows_processed,
af_plan_hash_value,
af_ROWS_PROCESSED / af_EXECUTIONS af_ROWS_PROCESSED
From (Select bf.task_name,
bf.sql_id,
bf.executions bf_executions,
bf.plan_hash_value bf_plan_hash_value,
bf.buffer_gets bf_buffer_gets,
bf.rows_processed bf_rows_processed,
af.plan_hash_value af_plan_hash_value,
af.buffer_gets af_buffer_gets,
af.executions af_executions,
af.rows_processed af_rows_processed
From dba_advisor_sqlstats af,
dba_advisor_sqlstats bf
Where af.execution_name='SPA11G'
And af.task_name like '%TASK_11G%'
And bf.task_name like '%TASK_10G%'
And bf.execution_name='SPA10G'
And bf.sql_id = af.sql_id))
Where detal_buffer_gets > 0) b
Where st.sql_id = b.sql_id
Order By detal_buffer_gets Desc)
Where sql_text Not Like '%Analyze(%'
And sql_text Not Like '%SELECT /* DS_SVC */%'
And sql_text Not Like '%/* OPT_DYN_SAMP */%';
Select *
From (Select b.*, dbms_lob.substr(st.sql_text, 3400) sql_text
From SYS.Wrh$_Sqltext st,
(Select task_name,
sql_id,
executions,
detal_buffer_gets,
bf_buffer_gets,
af_buffer_gets,
bf_plan_hash_value,
bf_rows_processed,
af_plan_hash_value,
af_ROWS_PROCESSED
From (Select task_name,
sql_id,
bf_executions executions,
round(af_buffer_gets / af_EXECUTIONS) -
round(bf_buffer_gets / bf_executions) detal_buffer_gets,
round(bf_buffer_gets / bf_executions) bf_buffer_gets,
round(af_buffer_gets / af_EXECUTIONS) af_buffer_gets,
bf_plan_hash_value,
bf_rows_processed / bf_executions bf_rows_processed,
af_plan_hash_value,
af_ROWS_PROCESSED / af_EXECUTIONS af_ROWS_PROCESSED
From (Select bf.task_name,
bf.sql_id,
bf.executions bf_executions,
bf.plan_hash_value bf_plan_hash_value,
bf.buffer_gets bf_buffer_gets,
bf.rows_processed bf_rows_processed,
af.plan_hash_value af_plan_hash_value,
af.buffer_gets af_buffer_gets,
af.executions af_executions,
af.rows_processed af_rows_processed
From dba_advisor_sqlstats af,
dba_advisor_sqlstats bf
Where af.execution_name='spa11g_1'
And af.task_name like '%TASK_11G%'
And bf.task_name like '%TASK_10G%'
And bf.execution_name='spa10g'
And bf.sql_id = af.sql_id))
Where detal_buffer_gets > 0) b
Where st.sql_id = b.sql_id
Order By detal_buffer_gets Desc)
Where sql_text Not Like '%Analyze(%'
And sql_text Not Like '%SELECT /* DS_SVC */%'
And sql_text Not Like '%/* OPT_DYN_SAMP */%';
20.2." 1.最後檢查的結果SQL大於10%的有將近400多條。在過濾一遍buffer_get 大於1000以上的,只有30多條,所以重點分析這30條sql數據。
2.把這30條SQL依次放在PL/SQL developer裏格式化後,在放入同一個文件,按照sql_id 編好號。"
20.3."1.若是spa數據過多,每次都要查詢好久的話 ,能夠考慮建立臨時表 a,加快查詢速度 。
create table a as
(
Select *
From (Select b.*, dbms_lob.substr(st.sql_text, 3400) sql_text
From SYS.Wrh$_Sqltext st,
(Select task_name,
sql_id,
executions,
detal_buffer_gets,
bf_buffer_gets,
af_buffer_gets,
bf_plan_hash_value,
bf_rows_processed,
af_plan_hash_value,
af_ROWS_PROCESSED
From (Select task_name,
sql_id,
bf_executions executions,
round(af_buffer_gets / af_EXECUTIONS) -
round(bf_buffer_gets / bf_executions) detal_buffer_gets,
round(bf_buffer_gets / bf_executions) bf_buffer_gets,
round(af_buffer_gets / af_EXECUTIONS) af_buffer_gets,
bf_plan_hash_value,
bf_rows_processed / bf_executions bf_rows_processed,
af_plan_hash_value,
af_ROWS_PROCESSED / af_EXECUTIONS af_ROWS_PROCESSED
From (Select bf.task_name,
bf.sql_id,
bf.executions bf_executions,
bf.plan_hash_value bf_plan_hash_value,
bf.buffer_gets bf_buffer_gets,
bf.rows_processed bf_rows_processed,
af.plan_hash_value af_plan_hash_value,
af.buffer_gets af_buffer_gets,
af.executions af_executions,
af.rows_processed af_rows_processed
From dba_advisor_sqlstats af,
dba_advisor_sqlstats bf
Where af.execution_name='spa11g_1'
And af.task_name like '%TASK_11G%'
And bf.task_name like '%TASK_10G%'
And bf.execution_name='spa10g'
And bf.sql_id = af.sql_id))
Where detal_buffer_gets > 0) b
Where st.sql_id = b.sql_id
Order By detal_buffer_gets Desc)
Where sql_text Not Like '%Analyze(%'
And sql_text Not Like '%SELECT /* DS_SVC */%'
And sql_text Not Like '%/* OPT_DYN_SAMP */%');
2. The query below pulls from table a the SQL whose buffer gets changed by more than 10% across the upgrade; detal_buffer_gets is the delta:
select sql_id,sql_text,bf_buffer_gets,af_buffer_gets,(detal_buffer_gets/bf_buffer_gets *100) change from a where detal_buffer_gets/bf_buffer_gets *100 > 10;
21. 11g performance test database – SQL whose execution plan changed; as in 20.1, the first query uses the upper-case execution names and the second the lower-case ones:
Select st.sql_id,
sst.executions,
dbms_lob.substr(st.sql_text, 3000) sql_text
From sys.wrh$_sqltext st,
dba_sqlset_statements sst,
(Select Distinct sql_id
From (Select sql_id,
operation,
options,
object_name,
object_alias,
object_type,
Id,
parent_id,
depth
From dba_advisor_sqlplans af
Where af.task_name = 'TASK_11G' And af.execution_name = 'SPA11G'
Minus
Select sql_id,
operation,
options,
object_name,
object_alias,
object_type,
Id,
parent_id,
depth
From dba_advisor_sqlplans bf
Where bf.task_name = 'TASK_10G' And bf.execution_name = 'SPA10G'
)) cp
Where st.sql_id = cp.sql_id
And sst.sql_id = cp.sql_id
And st.sql_text Not Like '%Analyze(%'
And st.sql_text Not Like '%SELECT /* DS_SVC */%'
And st.sql_text Not Like '%/* OPT_DYN_SAMP */%'
And sst.sqlset_name Like 'STS_SQLSETNO%'
;
Select st.sql_id,
sst.executions,
dbms_lob.substr(st.sql_text, 3000) sql_text
From sys.wrh$_sqltext st,
dba_sqlset_statements sst,
(Select Distinct sql_id
From (Select sql_id,
operation,
options,
object_name,
object_alias,
object_type,
Id,
parent_id,
depth
From dba_advisor_sqlplans af
Where af.task_name = 'TASK_11G' And af.execution_name = 'spa11g_1'
Minus
Select sql_id,
operation,
options,
object_name,
object_alias,
object_type,
Id,
parent_id,
depth
From dba_advisor_sqlplans bf
Where bf.task_name = 'TASK_10G' And bf.execution_name = 'spa10g'
)) cp
Where st.sql_id = cp.sql_id
And sst.sql_id = cp.sql_id
And st.sql_text Not Like '%Analyze(%'
And st.sql_text Not Like '%SELECT /* DS_SVC */%'
And st.sql_text Not Like '%/* OPT_DYN_SAMP */%'
And sst.sqlset_name Like '%STS_SQLSET%'
22. 11g performance test database – generate the report for task TASK_10G (before change) and for task TASK_11G (after change):
conn / as sysdba
SQL> SET LONG 999999 longchunksize 100000 linesize 200 head off feedback off echo off
SQL> spool task_10g_before_change.html
SQL> SELECT DBMS_SQLPA.REPORT_ANALYSIS_TASK('TASK_10G', 'HTML', 'ALL', 'ALL') FROM dual;
SQL> spool off
SQL> alter session set events '31156 trace name context forever,level 0x400';
SQL> spool task_11g_after_change.html
SQL> SELECT DBMS_SQLPA.REPORT_ANALYSIS_TASK('TASK_11G', 'HTML', 'ALL', 'ALL') FROM dual;
SQL> spool off
23. 11g performance test database – analyze why the regressed SQL got slower and tune it. Root-causing a sudden regression takes relatively long; SQLT reports are the main analysis tool, and the focus here is tuning the SQL whose performance changed. A newly generated SQL profile can be migrated to the new database by export and import, which carries the tuning fix over.
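A sketch of that profile migration using the DBMS_SQLTUNE SQL-profile staging API (the staging table name PROF_STG is an assumption):

-- On the database where the profile was accepted
exec dbms_sqltune.create_stgtab_sqlprof(table_name => 'PROF_STG', schema_name => 'DBMGR');
exec dbms_sqltune.pack_stgtab_sqlprof(staging_table_name => 'PROF_STG', staging_schema_owner => 'DBMGR');
-- exp/imp (or expdp/impdp) DBMGR.PROF_STG to the target database, then on the target:
exec dbms_sqltune.unpack_stgtab_sqlprof(replace => TRUE, staging_table_name => 'PROF_STG', staging_schema_owner => 'DBMGR');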
-- Manual tuning method: in the 11g database, identify the SQL_ID and then bind a profile to that SQL_ID
DECLARE
my_task_name VARCHAR2(30);
my_sqltext CLOB;
BEGIN
select dbms_lob.substr(sql_fulltext,4000) sql_text into my_sqltext from v$sqlarea where sql_id='cybxr1trru31n';
my_task_name := DBMS_SQLTUNE.CREATE_TUNING_TASK(
sql_text=> my_sqltext,
user_name => 'AFA',
scope => 'COMPREHENSIVE',
time_limit => 60,
task_name => 'my_sql_tuning_task_test1',
description => 'Task to tune a query on a specified table');
END;
/
exec DBMS_SQLTUNE.EXECUTE_TUNING_TASK( task_name => 'my_sql_tuning_task_test1');
set long 2000
SELECT DBMS_SQLTUNE.REPORT_TUNING_TASK( 'my_sql_tuning_task_test1') from DUAL;
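If the report recommends a SQL profile, the manual run above can be finished the same way the automated script below does, by accepting the profile (a hedged sketch; task name as above):

execute dbms_sqltune.accept_sql_profile(task_name => 'my_sql_tuning_task_test1', replace => TRUE, force_match => TRUE);
-- add task_owner => 'SYS' if the tuning task was created while connected as sysdba, as in the script below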
-- Automated tuning method: run the SQL manually to get its SQL_ID, call DBMS_SQLTUNE to analyze it, review the resulting findings and recommendations, and then implement them.
## automated tuning - begin
rm t.log
sqlplus afa/afa <<eof
spool t.log
SELECT *
  FROM (SELECT t.*, ROWNUM RN
          FROM (SELECT agentserialno
                  FROM v_beps_returnticketinfo
                 WHERE brno = '756045'
                   AND workdate >= '20180701'
                 ORDER BY workdate desc, agentserialno desc) t)
 WHERE RN <= 15
   AND RN BETWEEN 1 AND 15
/
select * from table(dbms_xplan.display_cursor());
spool off
eof
sql_id=`grep SQL_ID t.log|awk '{print $2}'|awk -F, '{print $1}'`
sqlplus / as sysdba <<eof1
set pagesize 0 linesize 300
select * from dual;
exec DBMS_SQLTUNE.DROP_TUNING_TASK(task_name => 'my_sql_tuning_task_test1');
select * from dual;
DECLARE
my_task_name VARCHAR2(30);
my_sqltext CLOB;
BEGIN
select dbms_lob.substr(sql_fulltext,4000) sql_text into my_sqltext from v\$sqlarea where sql_id='$sql_id';
my_task_name := DBMS_SQLTUNE.CREATE_TUNING_TASK(
sql_text=> my_sqltext,
user_name => 'AFA',
scope => 'COMPREHENSIVE',
time_limit => 1600,
task_name => 'my_sql_tuning_task_test1',
description => 'Task to tune a query on a specified table');
END;
/
exec DBMS_SQLTUNE.EXECUTE_TUNING_TASK( task_name => 'my_sql_tuning_task_test1');
/
set long 200000
SELECT DBMS_SQLTUNE.REPORT_TUNING_TASK( 'my_sql_tuning_task_test1') from DUAL;
execute dbms_sqltune.accept_sql_profile(task_name => 'my_sql_tuning_task_test1', task_owner => 'SYS', replace => TRUE,force_match=>true);
/
eof1
#### automated tuning - end
Appendix: to analyze why a SQL statement got slower, use the xplore utility shipped with SQLT, as follows.
Method 3) How to find which optimizer change caused the SQL performance change. If the database was upgraded, for example from 10g to 11g, and a statement now runs slowly, the following approach can be used:
Install xplore standalone by running install.sql under sqlt/utl/xplore
Generate create_xplore_script.sql and run it
Edit the SQL statement, add the hint /* ^^unique_id */, and save it to a file
Choose the XECUTE mode
Choose CBO Parameters: Y
Choose EXADATA Parameters: N
Choose Fix Control: Y
Choose SQL Monitor: N
Run @xplore_script_1.sql, supplying the user, password, and SQL file name
Analyze the generated HTML report
##sample
Upload and unzip SQLT; a full SQLT install is not required. Proceed directly as follows:
cd sqlt/utl/xplore
Install:
~~~~~~~
1. Connect as SYS and execute install script:
# sqlplus / as sysdba
SQL> START install.sql
Installation completed.
You are now connected as afa.
1. Set CBO env if needed
2. Execute @create_xplore_script.sql
--
2. Generate the xplore_script in the same session within which you executed step one.
cd /home/oracle/sqlt/sqlt/utl/xplore
Edit create_xplore_script.sql as follows: comment out each ACC prompt line and add a corresponding define line with the chosen value instead.
--#ACC xplore_method PROMPT 'Enter "XPLORE Method" [XECUTE]: ';
define xplore_method="XECUTE"
PRO Parameter 2:
PRO Include CBO Parameters: Y (default) or N
--ACC include_cbo_parameters PROMPT 'Enter "CBO Parameters" [Y]: ';
define include_cbo_parameters="Y"
PRO
PRO Parameter 3:
PRO Include Exadata Parameters: Y (default) or N
--ACC include_exadata_parameters PROMPT 'Enter "EXADATA Parameters" [Y]: ';
define include_exadata_parameters="N"
PRO
PRO Parameter 4:
PRO Include Fix Control: Y (default) or N
--ACC include_fix_control PROMPT 'Enter "Fix Control" [Y]: ';
define include_fix_control="Y"
PRO
PRO Parameter 5:
PRO Generate SQL Monitor Reports: N (default) or Y
PRO Only applicable when XPLORE Method is XECUTE
--ACC generate_sql_monitor_reports PROMPT 'Enter "SQL Monitor" [N]: ';
define generate_sql_monitor_reports="Y"
SQL> conn app/passwd
sqlplus afa/afa <<eof
@create_xplore_script.sql
eof
3. Execute generated xplore_script. It will ask for two parameters:
Edit the SQL statement, add the /* ^^unique_id */ hint, and save it to a file.
conn app/passwd
@xplore_script_1.sql   (supply the user, password, and SQL file name)
P1. Name of the script to be executed. It must contain the /* ^^unique_id */ marker; the run ends by producing a zip file with the results.
Notes:
Example:
SELECT /* ^^unique_id */ t1.col1, etc.
P2. Password for <user>
4. After you are done using XPLORE you may want to bounce the
database, since it executed some ALTER SYSTEM commands:
(if you hit "ORA-01422: exact fetch returns more than requested number of rows", restart the database)
# sqlplus / as sysdba
SQL> shutdown immediate
SQL> startup
Uninstall:
~~~~~~~~~
1. Connect as SYS and execute uninstall script:
# sqlplus <user>
SQL> START uninstall.sql
Note:
You will be asked for the test case user.
##### sample 1
This sample walks through an actual case: an upgrade from 10.2.0.4 to 11.2.0.4, following the SPA workflow described above.
• Source database version 10.2.0.4 (25.10.0.199)
• Target test system 11.2.0.4 (25.10.0.31)
• The performance report will be generated as detailed HTML
SPA – using the PL/SQL API
Note: for more information on the DBMS_SQLPA package and its usage, see: Using DBMS_SQLPA
1. Capture the SQL workload into a SQL Tuning Set
Create and populate the STS
BEGIN
DBMS_SQLTUNE.DROP_SQLSET (sqlset_name => 'MYSIMPLESTSUSINGAPI');
END;
/
BEGIN
DBMS_SQLTUNE.CREATE_SQLSET (sqlset_name => 'MYSIMPLESTSUSINGAPI', description => 'My Simple STS Using the API' );
END;
/
PL/SQL procedure successfully completed.
2. Check that SYSAUX has enough free space:
09:26:38 sys@LUNAR>@ts
SELECT a.tablespace_name ,b.maxbytes/1024/1024/1024 "maxbyes_GB",total/1024/1024/1024 "bytes_GB",free/1024/1024/1024 "free_GB",(total-free) /1024/1024/1024 "use_GB",
ROUND((total-free)/total,4)*100 "use_%",ROUND((total-free)/b.maxbytes,4)*100 "maxuse_%"
FROM
(SELECT tablespace_name,SUM(bytes) free FROM DBA_FREE_SPACE
GROUP BY tablespace_name
) a,
(SELECT tablespace_name,sum(case autoextensible when 'YES' then maxbytes else bytes end) maxbytes,SUM(bytes) total FROM DBA_DATA_FILES
GROUP BY tablespace_name
) b
WHERE a.tablespace_name=b.tablespace_name
order by "maxuse_%" desc;
8 rows selected.
1a. (In the original article: run the following PL/SQL as SCOTT to simulate a bind-variable SQL workload.) Here swingbench is used to generate the load instead, so the block below is kept only as a commented-out reference.
##var b1 number;
##declare
##v_num number;
## begin
## for i in 1..10000 loop
## :b1 := i;
## select c1 into v_num from t1 where c1 = :b1;
## end loop;
##end;
##/
1b. Populate the STS from the cursor cache with the statements whose parsing schema is SOE
DECLARE
c_sqlarea_cursor DBMS_SQLTUNE.SQLSET_CURSOR;
BEGIN
OPEN c_sqlarea_cursor FOR SELECT VALUE(p) FROM TABLE(DBMS_SQLTUNE.SELECT_CURSOR_CACHE('parsing_schema_name = ''SOE''', NULL, NULL, NULL, NULL, 1, NULL,'ALL')) p;
DBMS_SQLTUNE.LOAD_SQLSET (sqlset_name => 'MYSIMPLESTSUSINGAPI', populate_cursor => c_sqlarea_cursor);
END;
/
This load usually takes quite a while, so it is normally run in the background.
Here we can see that the number of loaded SQL statements has grown considerably.
1c. Check how many SQL statements were captured into the STS
COLUMN NAME FORMAT a20
COLUMN COUNT FORMAT 99999
COLUMN DESCRIPTION FORMAT a30
SELECT NAME, STATEMENT_COUNT AS "SQLCNT", DESCRIPTION FROM USER_SQLSET;
Results:
NAME SQLCNT DESCRIPTION
-------------------- ---------- ------------------------------
MYSIMPLESTSUSINGAPI 12 My Simple STS Using the API
1d. 顯示 STS 的內容
COLUMN SQL_TEXT FORMAT a30
COLUMN SCH FORMAT a3
COLUMN ELAPSED FORMAT 999999999
SELECT SQL_ID, PARSING_SCHEMA_NAME AS "SCOTT", SQL_TEXT, ELAPSED_TIME AS "ELAPSED", BUFFER_GETS FROM TABLE( DBMS_SQLTUNE.SELECT_SQLSET( 'MYSIMPLESTSUSINGAPI' ) );
Results: (partial)
SQL_ID SCOTT SQL_TEXT ELAPSED BUFFER_GETS
------------- ------------------------------ ------------------------------ ---------- -----------
0af4p26041xkv SCOTT SELECT C1 FROM T1 WHERE C1 = : 169909252 18185689
On the source database, pack the SQL Tuning Set into a staging table, then exp/imp it to the new database:
exec dbms_sqltune.create_stgtab_sqlset(table_name => 'STS_STBTAB' ,schema_name => 'DBMGR');
exec dbms_sqltune.pack_stgtab_sqlset(sqlset_name =>'MYSIMPLESTSUSINGAPI' ,sqlset_owner =>'SYS' ,staging_table_name =>'STS_STBTAB' ,staging_schema_owner => 'DBMGR' );
select count(*) from DBMGR.STS_STBTAB;
exp dbmgr/db1234DBA tables=STS_STBTAB file=/home/oracle/xtts/bak/exp_SQLSET_TAB.dmp log=/home/oracle/xtts/bak/exp_SQLSET_TAB.log FEEDBACK=1000 BUFFER=5000000
####
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Export done in ZHS16GBK character set and UTF8 NCHAR character set
About to export specified tables via Conventional Path ...
. . exporting table STS_STBTAB
61 rows exported
. . exporting table STS_STBTAB_CBINDS
0 rows exported
. . exporting table STS_STBTAB_CPLANS
260 rows exported
#####
2. Set up the target system
### (In the original article the capture source doubled as the target test system; in this sample the packed STS is imported into the separate 11.2.0.4 target below.)
imp dbmgr/dbmgr fromuser=dbmgr touser=sys file=/home/oracle/xtts/bak/exp_SQLSET_TAB.dmp feedback=1000 log=/home/oracle/xtts/bak/imp_SQLSET_TAB.log BUFFER=5000000
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Tes
Export file created by EXPORT:V10.02.01 via conventional path
import done in ZHS16GBK character set and AL16UTF16 NCHAR character set
export server uses UTF8 NCHAR character set (possible ncharset conversion)
. importing DBMGR's objects into DBMGR
. . importing table "STS_STBTAB"
61 rows imported
. . importing table "STS_STBTAB_CBINDS"
0 rows imported
. . importing table "STS_STBTAB_CPLANS"
260 rows imported
exec dbms_sqltune.unpack_stgtab_sqlset(sqlset_name => 'MYSIMPLESTSUSINGAPI',sqlset_owner => 'SYS' ,replace => True,staging_table_name =>'STS_STBTAB' ,staging_schema_owner => 'DBMGR');
conn / as sysdba
SELECT NAME, STATEMENT_COUNT AS "SQLCNT", DESCRIPTION FROM USER_SQLSET;
MYSIMPLESTSUSINGAPI
3. Create the SPA task
VARIABLE t_name VARCHAR2(100);
EXEC :t_name := DBMS_SQLPA.CREATE_ANALYSIS_TASK(sqlset_name => 'MYSIMPLESTSUSINGAPI', task_name => 'MYSPATASKUSINGAPI');
print t_name
Results:
T_NAME
-----------------
MYSPATASKUSINGAPI
4. Build and execute the before-change SPA trial: the execution type is CONVERT SQLSET, which converts the statistics already captured in the 10g STS into the before-change trial instead of re-executing the SQL.
EXEC DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(task_name => 'MYSPATASKUSINGAPI', execution_type => 'CONVERT SQLSET', execution_name => 'MY_BEFORE_CHANGE');
### 5. Make the system change
### CREATE INDEX t1_idx ON t1 (c1);   -- not applicable here: the change in this sample is the 11.2.0.4 upgrade itself
6. Build and execute the after-change SPA trial on 11.2.0.4: the execution type is TEST EXECUTE, which builds the trial by actually running the SQL.
EXEC DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(task_name => 'MYSPATASKUSINGAPI', execution_type => 'TEST EXECUTE', execution_name => 'MY_AFTER_CHANGE');
7. Compare and analyze the before and after performance
EXEC DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(task_name => 'MYSPATASKUSINGAPI', execution_type => 'COMPARE PERFORMANCE', execution_name => 'MY_EXEC_COMPARE_CPU', execution_params => dbms_advisor.arglist('comparison_metric', 'elapsed_time'));
EXEC DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(task_name => 'MYSPATASKUSINGAPI', execution_type => 'COMPARE PERFORMANCE', execution_name => 'MY_EXEC_COMPARE_BF', execution_params => dbms_advisor.arglist('comparison_metric', 'BUFFER_GETS'));
-- Generate the report for the elapsed-time comparison (execution MY_EXEC_COMPARE_CPU)
set long 100000000 longchunksize 100000000 linesize 200 head off feedback off echo off TRIMSPOOL ON TRIM ON
VAR rep CLOB;
EXEC :rep := DBMS_SQLPA.REPORT_ANALYSIS_TASK('MYSPATASKUSINGAPI', 'html', 'typical', 'all',execution_name=>'MY_EXEC_COMPARE_CPU');
##EXEC :rep := DBMS_SQLPA.REPORT_ANALYSIS_TASK('MYSPATASKUSINGAPI', 'html', 'typical', 'all');
##SPOOL C:\mydir\SPA_detailed.html
spool /tmp/dba/cpu.html
PRINT :rep
SPOOL off
-- Generate the report for the buffer-gets comparison (execution MY_EXEC_COMPARE_BF)
set long 100000000 longchunksize 100000000 linesize 200 head off feedback off echo off TRIMSPOOL ON TRIM ON
VAR rep CLOB;
EXEC :rep := DBMS_SQLPA.REPORT_ANALYSIS_TASK('MYSPATASKUSINGAPI', 'html', 'typical', 'all',execution_name=>'MY_EXEC_COMPARE_BF');
spool /tmp/dba/bf.html
PRINT :rep
SPOOL off
##### sample 2
1 恢復平臺 搭建10g性能測試環境 在恢復平臺恢復出10g的生產庫,做爲性能測試環境,參數保持和生產庫一致
2 11g性能測試庫 搭建11g性能測試環境 在linux主機進行跨平臺遷移,搭建11g性能測試環境
3 生產環境 抓取sqlset "在生產環境執行附件sql,對生產性能影響不大。
抓取持續一週。若是有異常,能夠當即停掉。"
4 生產環境 "第三步完成後,
建立中間表,將sqlset打包到中間表" "exec dbms_sqltune.create_stgtab_sqlset(table_name => 'STS_STBTAB' ,schema_name => 'DBMGR');
exec dbms_sqltune.pack_stgtab_sqlset(sqlset_name =>'STS_SQLSET' ,sqlset_owner =>'DBMGR' ,staging_table_name =>'STS_STBTAB' ,staging_schema_owner => 'DBMGR' );" 792485
5 生產環境 導出sqlset "expdp導出DBMGR.STS_STBTAB
select count(*) from DBMGR.STS_STBTAB;
exp dbmgr/db1234DBA tables=STS_STBTAB file=/db/cps/archivelog/exp_SQLSET_TAB.dmp log=/db/cps/archivelog/exp_SQLSET_TAB.log FEEDBACK=1000 BUFFER=5000000
" 若是cow庫有sqlset,則直接在cow庫按如下拆分步驟操做,sqlset完後拆分後再導入已拆分的sqlset到11g性能測試庫。
將sqlset導入11g性能測試庫
6 10g性能測試庫 導入到11g性能測試庫 impdp導入DBMGR.STS_STBTAB
########-------------將優化集打包到stgtab表裏面 中轉表過濾 ref http://ju.outofmemory.cn/entry/77139
方法1:
轉換成中轉表以後,咱們能夠再作一次去除重複的操做。固然,你也能夠根據module來刪除一些沒必要要的遊標。
-
delete from SPA.SQLSET_TAB a where rowid !=(select max(rowid) from SQLSET_TAB b where a.FORCE_MATCHING_SIGNATURE=b.FORCE_MATCHING_SIGNATURE and a.FORCE_MATCHING_SIGNATURE<>0);
-
-
delete from SPA.SQLSET_TAB where MODULE='PL/SQL Developer';
方法2:
create table sts_b as select distinct(to_char(s.force_matching_signature)) b from DBMGR.STS_STBTAB_08 s;
create index sts_b_1_force on STS_STBTAB_08(to_char(force_matching_signature));
DECLARE
T VARCHAR2(50);
cursor STSCUR IS select b from sts_b;
STSCUR_1 STS_STBTAB_08%ROWTYPE;
BEGIN
OPEN STSCUR;
LOOP
fetch STSCUR into T;
EXIT WHEN STSCUR%NOTFOUND;
-- DBMS_OUTPUT.PUT_LINE(T);
select * into STSCUR_1 from ( (select * from STS_STBTAB_08 st where (to_char(st.force_matching_signature)) =T ) order by cpu_time desc) where rownum < 2;
delete STS_STBTAB_08 where (to_char(force_matching_signature)) =T;
insert into STS_STBTAB_08 values STSCUR_1;
commit;
-- DBMS_OUTPUT.PUT_LINE(STSCUR_1.SQL_ID);
END LOOP;
CLOSE STSCUR;
END;
7 10g性能測試庫 sqlset解包 exec dbms_sqltune.unpack_stgtab_sqlset(sqlset_name => 'STS_SQLSET',sqlset_owner => 'DBMGR' ,replace => True,staging_table_name =>'STS_STBTAB' ,staging_schema_owner => 'DBMGR');
將sqlset導入10g測試庫
8 10g測試環境 受權 ##grant all on spa_sqlpa to dbmgr;
grant all on dbms_sqlpa to dbmgr;
9 10g測試環境 導出導入 "impdp導入DBMGR.STS_STBTAB
imp dbmgr/dbmgr fromuser=dbmgr touser=dbmgr file=/oraclelv/exp_SQLSET_TAB.dmp feedback=1000 log=/oraclelv/imp_SQLSET_TAB.log BUFFER=5000000
"
10 10g測試環境 解包sqlset exec dbms_sqltune.unpack_stgtab_sqlset(sqlset_name => 'STS_SQLSET',sqlset_owner => 'DBMGR' ,replace => True,staging_table_name =>'STS_STBTAB' ,staging_schema_owner => 'DBMGR');
11 11g性能測試庫 建立DBLINK "create public database link to_10g connect to dbmgr identified by xxxxx using 'xxxxxxx'
create public database link to_10g connect to dbmgr identified by db1234DBA
using ' (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST =25.10.0.199)(PORT = 1539)) (CONNECT_DATA =(sid = db)))';" 建立到10g測試庫的dblink
第一次執行spa回放,得到10g性能測試環境的數據
12 11性能測試庫 建立task "declare
mytask varchar2(100);
begin
mytask := dbms_sqlpa.create_analysis_task(sqlset_name => 'STS_SQLSET',sqlset_owner => 'DBMGR',task_name => 'TASK_10G');
END;
/
"
13 11g性能測試庫 生成第一次執行task的腳本 (經過db_link 方式)
"vi spa_10g.sh
sqlplus -S /nolog <<SQLEnd
conn / as sysdba
exec DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(task_name=>'TASK_10G',execution_type=>'test execute',execution_name=>'spa10g',execution_params =>dbms_advisor.argList('DATABASE_LINK','TO_10G','EXECUTE_COUNT',5) ,execution_desc => 'before_change');
exit
SQLEnd" 能夠看到SPA_DIR下有BEF_TASK_SQLSET_NO_i.sh腳本
14 11g性能測試庫 "執行task,
執行第一次spa回放" "nohup sh spa_10g.sh > spa_10g.log &
" 發起目標庫執行第一次spa回放
15 11g性能測試庫 查詢回放進度 "SELECT owner,task_name,execution_name,a.execution_type,execution_start,execution_last_modified,execution_end,status,b.SOFAR,b.START_TIME,b.LAST_UPDATE_TIME
from dba_advisor_executions a, v$advisor_progress b where a.execution_name like 'spa10g%';" 計算 792485 筆數據須要多久完成
第二次執行spa回放,得到11g性能測試環境的數據
16 11g性能測試庫 建立task "declare
mytask varchar2(100);
begin
mytask := dbms_sqlpa.create_analysis_task(sqlset_name => 'STS_SQLSET',sqlset_owner => 'DBMGR',task_name => 'TASK_11G');
END;
/
"
17 11g性能測試庫 生成第二次執行task的腳本 "
vi spa_11g.sh
sqlplus -S /nolog <<SQLEnd
conn / as sysdba
exec DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(task_name=>'TASK_11G',execution_type=>'test execute',execution_name=>'spa11g_1',execution_params =>dbms_advisor.argList('EXECUTE_COUNT',5) ,execution_desc => 'after_change');
exit
SQLEnd" 能夠看到SPA_DIR下有AFT_TASK_SQLSET_NO_i.sh腳本
18 11g性能測試庫 "執行task,
執行第二次spa回放" "nohup sh spa_11g.sh > spa_11g.log &
" 發起目標庫執行第二次spa回放
19 11g性能測試庫 查詢回放進度
"SELECT owner,task_name,execution_name,a.execution_type,execution_start,execution_last_modified,execution_end,status,b.SOFAR,b.START_TIME,b.LAST_UPDATE_TIME
from dba_advisor_executions a, v$advisor_progress b where a.execution_name like 'spa11g%';"
分析回放結果
20 11g性能測試庫 取出buffer gets變大的sql進行分析
21 11g性能測試庫 plan改變的sql
附錄:
3.
---------------------------------------------------
--Step1: 建立名稱爲STS_SQLSET的SQL_SET.
---------------------------------------------------
BEGIN
DBMS_SQLTUNE.DROP_SQLSET(
sqlset_name => 'STS_SQLSET'
);
END;
/
BEGIN
DBMS_SQLTUNE.CREATE_SQLSET(SQLSET_NAME => 'STS_SQLSET',
DESCRIPTION => 'COMPLETE APPLICATION WORKLOAD',
SQLSET_OWNER =>'DBMGR');
END;
/
---------------------------------------------------
--Step2: 初始加載當前數據庫中的SQL.
---------------------------------------------------
DECLARE
STSCUR DBMS_SQLTUNE.SQLSET_CURSOR;
BEGIN
OPEN STSCUR FOR
SELECT VALUE(P)
FROM TABLE(DBMS_SQLTUNE.SELECT_CURSOR_CACHE('PARSING_SCHEMA_NAME NOT IN (''SYS'', ''SYSTEM'', ''OUTLN'', ''DBSNMP'', ''WMSYS'', ''CTXSYS'', ''XDB'', ''MDSYS'',
''ORDPLUGINS'', ''ORDSYS'', ''OLAPSYS'', ''HR'', ''OE'', ''SCOTT'', ''QS_CB'', ''QS_CBADM'', ''QS_ES'', ''QS_OS'', ''QS_CS'', ''QS'', ''QS_ADM'', ''QS_WS'', ''DIP'', ''TSMSYS'', ''EXFSYS'',
''LBACSYS'', ''TRACESVR'', ''AURORA$JIS$UTILITY$'', ''OSE$HTTP$ADMIN'', ''DBMGR'', ''OVSEE'', ''DBMONOPR'')
AND PLAN_HASH_VALUE <> 0 AND UPPER(SQL_TEXT) NOT LIKE ''%TABLE_NAME%''',
NULL, NULL, NULL, NULL, 1, NULL,
'ALL')) P;
-- POPULATE THE SQLSET
DBMS_SQLTUNE.LOAD_SQLSET(SQLSET_NAME => 'STS_SQLSET',
POPULATE_CURSOR => STSCUR,
COMMIT_ROWS => 100,
SQLSET_OWNER => 'DBMGR');
CLOSE STSCUR;
COMMIT;
EXCEPTION
WHEN OTHERS THEN
RAISE;
END;
/
---------------------------------------------------
--Step3: 增量抓取數據庫中的SQL, 會連續抓取7天,每小時抓取一次,Sessions一直持續7天. 這一步用shell腳本在後臺執行。
BEGIN
DBMS_SQLTUNE.CAPTURE_CURSOR_CACHE_SQLSET(SQLSET_NAME => 'STS_SQLSET',
TIME_LIMIT => 345600,
REPEAT_INTERVAL => 3600,
CAPTURE_OPTION => 'MERGE',
CAPTURE_MODE => DBMS_SQLTUNE.MODE_ACCUMULATE_STATS,
BASIC_FILTER => 'PARSING_SCHEMA_NAME NOT IN (''SYS'', ''SYSTEM'', ''OUTLN'', ''DBSNMP'', ''WMSYS'', ''CTXSYS'', ''XDB'', ''MDSYS'',
''ORDPLUGINS'', ''ORDSYS'', ''OLAPSYS'', ''HR'', ''OE'', ''SCOTT'', ''QS_CB'', ''QS_CBADM'', ''QS_ES'', ''QS_OS'', ''QS_CS'', ''QS'', ''QS_ADM'', ''QS_WS'', ''DIP'', ''TSMSYS'', ''EXFSYS'',''LBACSYS'', ''TRACESVR'', ''AURORA$JIS$UTILITY$'', ''OSE$HTTP$ADMIN'', ''DBMGR'', ''OVSEE'', ''DBMONOPR'')
AND PLAN_HASH_VALUE <> 0 AND UPPER(SQL_TEXT) NOT LIKE ''%TABLE_NAME%''',
SQLSET_OWNER => 'DBMGR');
END;
/
20.
Select *
From (Select b.*, dbms_lob.substr(st.sql_text, 3000) sql_text
From SYS.Wrh$_Sqltext st,
(Select task_name,
sql_id,
executions,
detal_buffer_gets,
bf_buffer_gets,
af_buffer_gets,
bf_plan_hash_value,
bf_rows_processed,
af_plan_hash_value,
af_ROWS_PROCESSED
From (Select task_name,
sql_id,
bf_executions executions,
round(af_buffer_gets / af_EXECUTIONS) -
round(bf_buffer_gets / bf_executions) detal_buffer_gets,
round(bf_buffer_gets / bf_executions) bf_buffer_gets,
round(af_buffer_gets / af_EXECUTIONS) af_buffer_gets,
bf_plan_hash_value,
bf_rows_processed / bf_executions bf_rows_processed,
af_plan_hash_value,
af_ROWS_PROCESSED / af_EXECUTIONS af_ROWS_PROCESSED
From (Select bf.task_name,
bf.sql_id,
bf.executions bf_executions,
bf.plan_hash_value bf_plan_hash_value,
bf.buffer_gets bf_buffer_gets,
bf.rows_processed bf_rows_processed,
af.plan_hash_value af_plan_hash_value,
af.buffer_gets af_buffer_gets,
af.executions af_executions,
af.rows_processed af_rows_processed
From dba_advisor_sqlstats af,
dba_advisor_sqlstats bf
Where af.execution_name='SPA11G'
And af.task_name like '%TASK_11G%'
And bf.task_name like '%TASK_10G%'
And bf.execution_name='SPA10G'
And bf.sql_id = af.sql_id))
Where detal_buffer_gets > 0
and af_plan_hash_value ! = bf_plan_hash_value) b
Where st.sql_id = b.sql_id
Order By detal_buffer_gets Desc)
Where sql_text Not Like '%Analyze(%'
And sql_text Not Like '%SELECT /* DS_SVC */%'
And sql_text Not Like '%/* OPT_DYN_SAMP */%';
21.
Select st.sql_id,
sst.executions,
dbms_lob.substr(st.sql_text, 3000) sql_text
From sys.wrh$_sqltext st,
dba_sqlset_statements sst,
(Select Distinct sql_id
From (Select sql_id,
operation,
options,
object_name,
object_alias,
object_type,
Id,
parent_id,
depth
From dba_advisor_sqlplans af
Where af.task_name = 'TASK_11G' And af.execution_name = 'SPA11G'
Minus
Select sql_id,
operation,
options,
object_name,
object_alias,
object_type,
Id,
parent_id,
depth
From dba_advisor_sqlplans bf
Where bf.task_name = 'TASK_10G' And bf.execution_name = 'SPA10G'
)) cp
Where st.sql_id = cp.sql_id
And sst.sql_id = cp.sql_id
And st.sql_text Not Like '%Analyze(%'
And st.sql_text Not Like '%SELECT /* DS_SVC */%'
And st.sql_text Not Like '%/* OPT_DYN_SAMP */%'
And sst.sqlset_name Like 'STS_SQLSETNO%'
;
###refer 1
http://www.cnblogs.com/jyzhao/p/9210517.html
Production side: Windows 2008 + Oracle 10.2.0.5
Test side: RHEL 6.5 + Oracle 11.2.0.4
Requirement: because this crosses a major Oracle release, with many optimizer and feature changes, an SPA test is needed to compare before-and-after performance.
Background: this writeup is based on DBA Travel's SPA reference document (thanks to Travel), adapted to an actual customer requirement. To make the exercise realistic, swingbench was run on the production side for a while to simulate real business load.
1. SPA test flow
To minimize the performance impact on the production database, this SPA test only uses a SQL Tuning Set converted from the SQL data in the AWR repository for the overall SQL performance test.
The test breaks down into the following steps.
On the production side:
- Environment preparation: create a dedicated SPA test user
- Data collection: a) convert the SQL in AWR into a SQL Tuning Set; b) optionally extract SQL from an existing SQL Tuning Set
- Data export: pack the converted SQL Tuning Set, export it, and transfer it to the test server
On the test side:
- Environment preparation: create a dedicated SPA test user
- Test preparation: import the SQL Tuning Set staging table, unpack it, and create the SPA analysis task
- Before-change performance: convert the SQL Tuning Set into the 10g performance trial
- After-change performance: execute the SQL in the SQL Tuning Set on the 11g test database to generate the 11g trial
- Comparison: run the comparison tasks by elapsed time, CPU time, and buffer gets
- Reports: extract the comparison reports, taking the All, Unsupported, and Error reports for each metric
Summary:
- Summary report: analyze the reports, tune the regressed SQL, and write up the SPA test report
2.SPA操做流程
2.1 本文使用的命名規劃
類型 規劃
SQLSET ORCL_SQLSET_201806 Analysis Task SPA_TASK_201806 STGTAB ORCL_STSTAB_201806 Dmpfile ORCL_STSTAB_201806.dmp
2.2 生產端:環境準備
conn / as sysdba
CREATE USER SPA IDENTIFIED BY SPA DEFAULT TABLESPACE SYSAUX; GRANT DBA TO SPA; GRANT ADVISOR TO SPA; GRANT SELECT ANY DICTIONARY TO SPA; GRANT ADMINISTER SQL TUNING SET TO SPA;
2.3 Production side: data collection
1). Get the boundary snapshot IDs from AWR
SET LINES 188 PAGES 1000 COL SNAP_TIME FOR A22 COL MIN_ID NEW_VALUE MINID COL MAX_ID NEW_VALUE MAXID SELECT MIN(SNAP_ID) MIN_ID, MAX(SNAP_ID) MAX_ID FROM DBA_HIST_SNAPSHOT WHERE END_INTERVAL_TIME > trunc(sysdate)-10 ORDER BY 1;
2). Create the SQL set
--connect as the SPA user
conn SPA/SPA
--if a SQL set with this name already exists, drop it first
EXEC DBMS_SQLTUNE.DROP_SQLSET (SQLSET_NAME => 'ORCL_SQLSET_201806', SQLSET_OWNER => 'SPA');
--create the SQL set ORCL_SQLSET_201806
EXEC DBMS_SQLTUNE.CREATE_SQLSET ( -
  SQLSET_NAME => 'ORCL_SQLSET_201806', -
  DESCRIPTION => 'SQL Set Create at : '||TO_CHAR(SYSDATE, 'YYYY-MM-DD HH24:MI:SS'), -
  SQLSET_OWNER => 'SPA');
3). Convert the SQL data held in AWR and load the statements into the SQL set
DECLARE
  SQLSET_CUR DBMS_SQLTUNE.SQLSET_CURSOR;
BEGIN
  OPEN SQLSET_CUR FOR
    SELECT VALUE(P)
      FROM TABLE(DBMS_SQLTUNE.SELECT_WORKLOAD_REPOSITORY(
                   16, 24,
                   'PARSING_SCHEMA_NAME NOT IN (''SYS'', ''SYSTEM'')',
                   NULL, NULL, NULL, NULL, 1, NULL, 'ALL')) P;
  DBMS_SQLTUNE.LOAD_SQLSET(
    SQLSET_NAME     => 'ORCL_SQLSET_201806',
    SQLSET_OWNER    => 'SPA',
    POPULATE_CURSOR => SQLSET_CUR,
    LOAD_OPTION     => 'MERGE',
    UPDATE_OPTION   => 'ACCUMULATE');
  CLOSE SQLSET_CUR;
END;
/
4). Pack the SQL set
DROP TABLE SPA.JYZHAO_SQLSETTAB_20180106;
EXEC DBMS_SQLTUNE.CREATE_STGTAB_SQLSET ('ORCL_STSTAB_201806', 'SPA', 'SYSAUX'); EXEC DBMS_SQLTUNE.PACK_STGTAB_SQLSET ( - SQLSET_NAME => 'ORCL_SQLSET_201806', - SQLSET_OWNER => 'SPA', - STAGING_TABLE_NAME => 'ORCL_STSTAB_201806', - STAGING_SCHEMA_OWNER => 'SPA');
2.4 Production side: data export
1). At the OS level, export the packed SQL set data
cat > ./export_sqlset_201806.par <<EOF
USERID='SPA/SPA'
FILE=ORCL_STSTAB_201806.dmp
LOG=exp_spa_sqlset_201806.log
TABLES=ORCL_STSTAB_201806
DIRECT=N
BUFFER=10240000
STATISTICS=NONE
EOF
Note: the DIRECT=Y parameter was changed to DIRECT=N after hitting a problem; N is also the default.
set NLS_LANG=AMERICAN_AMERICA.US7ASCII
exp PARFILE=export_sqlset_201806.par
Note: NLS_LANG is an Oracle environment variable; set the character set consistently with the database character set to avoid bad conversions.
2). Transfer the exported dump file to the test server
Copy ORCL_STSTAB_201806.dmp to /orabak/spa on the target server.
2.5 Test side: environment preparation
conn / as sysdba
CREATE USER SPA IDENTIFIED BY SPA DEFAULT TABLESPACE SYSAUX; GRANT DBA TO SPA; GRANT ADVISOR TO SPA; GRANT SELECT ANY DICTIONARY TO SPA; GRANT ADMINISTER SQL TUNING SET TO SPA;
2.6 Test side: test preparation
Before running the SPA test, prepare the environment: import the SQL set from production, unpack it, and create the SPA analysis task.
1). At the OS level, run the import to load the SQL set staging table
cat > ./import_sqlset_201806.par <<EOF
USERID='SPA/SPA'
FILE=ORCL_STSTAB_201806.dmp
LOG=imp_spa_sqlset_201806.log
FULL=Y
EOF
export NLS_LANG=AMERICAN_AMERICA.US7ASCII
imp PARFILE=import_sqlset_201806.par
2). Unpack the SQL set
conn SPA/SPA
EXEC DBMS_SQLTUNE.UNPACK_STGTAB_SQLSET (-
SQLSET_NAME => 'ORCL_SQLSET_201806', - SQLSET_OWNER => 'SPA', - REPLACE => TRUE, - STAGING_TABLE_NAME => 'ORCL_STSTAB_201806', - STAGING_SCHEMA_OWNER => 'SPA');
3). Create the SPA analysis task
VARIABLE SPA_TASK VARCHAR2(64); EXEC :SPA_TASK := DBMS_SQLPA.CREATE_ANALYSIS_TASK( - TASK_NAME => 'SPA_TASK_201806', - DESCRIPTION => 'SPA Analysis task at : '||TO_CHAR(SYSDATE, 'YYYY-MM-DD HH24:MI:SS'), - SQLSET_NAME => 'ORCL_SQLSET_201806', - SQLSET_OWNER => 'SPA');
2.7 Test side: before-change performance
On the test server, the execution statistics already stored in the SQL Tuning Set can be converted directly into the 10g trial, i.e. how every statement performed on the 10g database.
EXEC DBMS_SQLPA.EXECUTE_ANALYSIS_TASK( -
TASK_NAME => 'SPA_TASK_201806', - EXECUTION_NAME => 'EXEC_10G_201806', - EXECUTION_TYPE => 'CONVERT SQLSET', - EXECUTION_DESC => 'Convert 10g SQLSET for SPA Task at : '||TO_CHAR(SYSDATE, 'YYYY-MM-DD HH24:MI:SS'));
2.8 Test side: after-change performance
On the test server (running 11g), the SQL in the SQL Tuning Set is test-executed against the local 11g database to measure how every statement performs there, producing the 11g trial.
vi spa2.sh
echo "WARNING: SPA2 Start @`date`" sqlplus SPA/SPA << EOF! EXEC DBMS_SQLPA.EXECUTE_ANALYSIS_TASK( - TASK_NAME => 'SPA_TASK_201806', - EXECUTION_NAME => 'EXEC_11G_201806', - EXECUTION_TYPE => 'TEST EXECUTE', - EXECUTION_DESC => 'Execute SQL in 11g for SPA Task at : '||TO_CHAR(SYSDATE, 'YYYY-MM-DD HH24:MI:SS')); exit EOF! echo "WARNING:SPA2 OK @`date`" nohup sh spa2.sh &
2.9 Test side: performance comparison
With both trials available, their SQL performance can be compared along several dimensions; the main ones are elapsed time, CPU time, and buffer gets (logical reads).
1). Compare SQL elapsed time between the two trials
conn SPA/SPA
EXEC DBMS_SQLPA.EXECUTE_ANALYSIS_TASK( -
TASK_NAME => 'SPA_TASK_201806', - EXECUTION_NAME => 'COMPARE_ET_201806', - EXECUTION_TYPE => 'COMPARE PERFORMANCE', - EXECUTION_PARAMS => DBMS_ADVISOR.ARGLIST( - 'COMPARISON_METRIC', 'ELAPSED_TIME', - 'EXECUTE_FULLDML', 'TRUE', - 'EXECUTION_NAME1','EXEC_10G_201806', - 'EXECUTION_NAME2','EXEC_11G_201806'), - EXECUTION_DESC => 'Compare SQLs between 10g and 11g at :'||TO_CHAR(SYSDATE, 'YYYY-MM-DD HH24:MI:SS'));
2). Compare SQL CPU time between the two trials
EXEC DBMS_SQLPA.EXECUTE_ANALYSIS_TASK( -
TASK_NAME => 'SPA_TASK_201806', - EXECUTION_NAME => 'COMPARE_CT_201806', - EXECUTION_TYPE => 'COMPARE PERFORMANCE', - EXECUTION_PARAMS => DBMS_ADVISOR.ARGLIST( - 'COMPARISON_METRIC', 'CPU_TIME', - 'EXECUTION_NAME1','EXEC_10G_201806', - 'EXECUTION_NAME2','EXEC_11G_201806'), - EXECUTION_DESC => 'Compare SQLs between 10g and 11g at :'||TO_CHAR(SYSDATE, 'YYYY-MM-DD HH24:MI:SS'));
3). Compare SQL buffer gets between the two trials
EXEC DBMS_SQLPA.EXECUTE_ANALYSIS_TASK( -
TASK_NAME => 'SPA_TASK_201806', - EXECUTION_NAME => 'COMPARE_BG_201806', - EXECUTION_TYPE => 'COMPARE PERFORMANCE', - EXECUTION_PARAMS => DBMS_ADVISOR.ARGLIST( - 'COMPARISON_METRIC', 'BUFFER_GETS', - 'EXECUTION_NAME1','EXEC_10G_201806', - 'EXECUTION_NAME2','EXEC_11G_201806'), - EXECUTION_DESC => 'Compare SQLs between 10g and 11g at :'||TO_CHAR(SYSDATE, 'YYYY-MM-DD HH24:MI:SS'));
2.10 Test side: reports
After the comparison tasks complete, the corresponding reports can be extracted; the main ones are the full report, the error report, and the unsupported-SQL report.
a) Full report by elapsed time
conn SPA/SPA
ALTER SESSION SET EVENTS='31156 TRACE NAME CONTEXT FOREVER, LEVEL 0X400'; SET LINES 1111 PAGES 50000 LONG 1999999999 TRIM ON TRIMS ON SERVEROUTPUT ON SIZE UNLIMITED SPOOL elapsed_all.html SELECT XMLTYPE(DBMS_SQLPA.REPORT_ANALYSIS_TASK('SPA_TASK_201806','HTML','ALL','ALL',NULL,1000,'COMPARE_ET_201806')).GETCLOBVAL(0,0) FROM DUAL; spool off
b) Regression report by elapsed time
ALTER SESSION SET EVENTS='31156 TRACE NAME CONTEXT FOREVER, LEVEL 0X400'; SET LINES 1111 PAGES 50000 LONG 1999999999 TRIM ON TRIMS ON SERVEROUTPUT ON SIZE UNLIMITED SPOOL elapsed_regressed.html SELECT XMLTYPE(DBMS_SQLPA.REPORT_ANALYSIS_TASK('SPA_TASK_201806','HTML','REGRESSED','ALL',NULL,1000,'COMPARE_ET_201806')).GETCLOBVAL(0,0) FROM DUAL; spool off
c) Full report by buffer gets
ALTER SESSION SET EVENTS='31156 TRACE NAME CONTEXT FOREVER, LEVEL 0X400'; SET LINES 1111 PAGES 50000 LONG 1999999999 TRIM ON TRIMS ON SERVEROUTPUT ON SIZE UNLIMITED SPOOL buffer_all.html SELECT XMLTYPE(DBMS_SQLPA.REPORT_ANALYSIS_TASK('SPA_TASK_201806','HTML','ALL','ALL',NULL,1000,'COMPARE_BG_201806')).GETCLOBVAL(0,0) FROM DUAL; spool off
d) Regression report by buffer gets
ALTER SESSION SET EVENTS='31156 TRACE NAME CONTEXT FOREVER, LEVEL 0X400'; SET LINES 1111 PAGES 50000 LONG 1999999999 TRIM ON TRIMS ON SERVEROUTPUT ON SIZE UNLIMITED SPOOL buffer_regressed.html SELECT XMLTYPE(DBMS_SQLPA.REPORT_ANALYSIS_TASK('SPA_TASK_201806','HTML','REGRESSED','ALL',NULL,1000,'COMPARE_BG_201806')).GETCLOBVAL(0,0) FROM DUAL; spool off
e) Error report
ALTER SESSION SET EVENTS='31156 TRACE NAME CONTEXT FOREVER, LEVEL 0X400'; SET LINES 1111 PAGES 50000 LONG 1999999999 TRIM ON TRIMS ON SERVEROUTPUT ON SIZE UNLIMITED SPOOL error.html SELECT XMLTYPE(DBMS_SQLPA.REPORT_ANALYSIS_TASK('SPA_TASK_201806','HTML','ERRORS','ALL',NULL,1000,'COMPARE_ET_201806')).GETCLOBVAL(0,0) FROM DUAL; spool off
f) Unsupported-SQL report
ALTER SESSION SET EVENTS='31156 TRACE NAME CONTEXT FOREVER, LEVEL 0X400'; SET LINES 1111 PAGES 50000 LONG 1999999999 TRIM ON TRIMS ON SERVEROUTPUT ON SIZE UNLIMITED SPOOL unsupported.html SELECT XMLTYPE(DBMS_SQLPA.REPORT_ANALYSIS_TASK('SPA_TASK_201806','HTML','UNSUPPORTED','ALL',NULL,1000,'COMPARE_ET_201806')).GETCLOBVAL(0,0) FROM DUAL; spool off
g) Changed-plans report
ALTER SESSION SET EVENTS='31156 TRACE NAME CONTEXT FOREVER, LEVEL 0X400'; SET LINES 1111 PAGES 50000 LONG 1999999999 TRIM ON TRIMS ON SERVEROUTPUT ON SIZE UNLIMITED SPOOL changed_plans.html SELECT XMLTYPE(DBMS_SQLPA.REPORT_ANALYSIS_TASK('SPA_TASK_201806','HTML','CHANGED_PLANS','ALL',NULL,1000,'COMPARE_ET_201806')).GETCLOBVAL(0,0) FROM DUAL; spool off
3. Cleaning up the SPA environment
3.1 List SQL sets
conn SPA/SPA
select owner,name,STATEMENT_COUNT from dba_sqlset;
3.2 List analysis tasks
select owner,task_id,task_name,created,LAST_MODIFIED,STATUS from DBA_ADVISOR_TASKS where task_name like upper('%&task_name%') order by 2; SPA_TASK_201806
3.3 Drop the analysis task
exec dbms_sqlpa.DROP_ANALYSIS_TASK('SPA_TASK_201806');
3.4 Drop the SQL set
exec dbms_sqltune.DROP_SQLSET('ORCL_SQLSET_201806');
If the drop raises ORA-13757 because the STS is marked active, the following SQL can clear the reference so the drop can be retried:
delete from wri$_sqlset_references where sqlset_id in (select id from wri$_sqlset_definitions where name in ('ORCL_SQLSET_201806','ORCL_SQLSET_201806')); commit;
3.5 Drop the user
Drop the SPA user (on both production and test):
drop user spa cascade;
####refer
http://www.lunar2013.com/2015/05/spasql%E6%80%A7%E8%83%BD%E5%88%86%E6%9E%90%E5%99%A8%E7%9A%84%E4%BD%BF%E7%94%A8-1-%E6%94%B6%E9%9B%86%E5%92%8C%E8%BF%81%E7%A7%BBsql-tuning-set.html
### section 1
SPA (SQL Performance Analyzer) is a feature introduced in 11g that predicts the performance impact a planned change will have on a SQL workload.
SPA is usually recommended in cases such as:
1. OS version changes
2. hardware changes
3. database version upgrades
4. implementing certain tuning recommendations
5. gathering statistics
6. changing database parameters
and so on.

The main SPA steps are:
1. Capture the SQL workload on the production system and build a SQL Tuning Set.
2. Create a staging table, pack the SQL Tuning Set into it, export the table, and transfer it to the test database.
3. Import the staging table and unpack its data back into a SQL Tuning Set.
4. Create the SPA task, generate the 10g trial first, and then generate the 11g trial on 11g.
5. Run the comparison task and generate the SPA report.
6. Analyze the regressed SQL statements.

The example here upgrades a database from 10.2.0.1 to 11.2.0.4.
1. Create the SPA user on the source database:
create user LUNAR identified by LUNAR;
grant connect,resource,dba to LUNAR;
10:38:37 lunar@LUNAR>select username,default_tablespace,temporary_tablespace
10:41:41 2 from dba_users
10:41:41 3 where username in ('LUNAR','SPA')
10:41:41 4 order by 1,2;
USERNAME DEFAULT_TABLESPACE TEMPORARY_TABLESPACE
------------------------------ ------------------------------ ------------------------------
LUNAR USERS TEMP
Elapsed: 00:00:00.27
10:41:41 lunar@LUNAR>
2. Check that SYSAUX has enough free space
09:26:38 sys@LUNAR>@ts
Name TS Type All Size Max Size Free Size Max Free Pct. Free Max Free%
------------------------------ ------------ ---------- ---------- ---------- ---------- --------- ---------
UNDOTBS1 UNDO 148,433 221,521 19,467 92,555 13 42
LUNAR_IDX PERMANENT 352,256 352,256 84,272 84,272 24 24
LUNAR_DAT PERMANENT 1,048,576 1,048,576 258,728 258,728 25 25
LUNAR_TESTS PERMANENT 251,904 251,904 139,424 139,424 55 55
LUNAR_TESTS_IDX PERMANENT 329,728 329,728 196,351 196,351 60 60
USERS PERMANENT 4,096 32,768 2,582 31,254 63 95
SYSAUX PERMANENT 4,096 32,768 2,786 31,458 68 96
SYSTEM PERMANENT 4,096 32,768 2,882 31,554 70 96
8 rows selected.
Elapsed: 00:00:00.07
09:26:40 sys@LUNAR>
3. Create the SQL Tuning Set:
conn LUNAR/LUNAR
10:33:30 lunar@LUNAR>exec dbms_sqltune.create_sqlset('Lunar_11201STS_LUNAR');
PL/SQL procedure successfully completed.
Elapsed: 00:00:00.11
10:34:25 lunar@LUNAR>
4. Load SQL statements into the SQL Tuning Set
1) Load from AWR snapshots
11:31:55 lunar@LUNAR>select INSTANCE_NUMBER ,min(snap_id),max(snap_id) from dba_hist_snapshot group by INSTANCE_NUMBER;
INSTANCE_NUMBER MIN(SNAP_ID) MAX(SNAP_ID)
--------------- ------------ ------------
1 19355 19555
Elapsed: 00:00:00.01
11:32:12 lunar@LUNAR>
b) Load all statements between the two snapshots (this step took about 4 minutes)
11:33:12 lunar@LUNAR>declare
11:33:14 2 own VARCHAR2(30) := 'LUNAR';
11:33:14 3 bid NUMBER := '&begin_snap';
11:33:14 4 eid NUMBER := '&end_snap';
11:33:14 5 stsname VARCHAR2(30) :='Lunar_11201STS_LUNAR';
11:33:14 6 sts_cur dbms_sqltune.sqlset_cursor;
11:33:14 7 begin
11:33:14 8 open sts_cur for
11:33:14 9 select value(P) from table(dbms_sqltune.select_workload_repository(bid,eid, null, null, null, null, null, 1, null, 'ALL')) P;
11:33:14 10 dbms_sqltune.load_sqlset(sqlset_name => stsname,populate_cursor => sts_cur,load_option => 'MERGE');
11:33:14 11 end;
11:33:14 12 /
Enter value for begin_snap: 19355
old 3: bid NUMBER := '&begin_snap';
new 3: bid NUMBER := '19355';
Enter value for end_snap: 19555
old 4: eid NUMBER := '&end_snap';
new 4: eid NUMBER := '19555';
PL/SQL procedure successfully completed.
Elapsed: 00:03:07.05
11:36:29 lunar@LUNAR>
c) Verify the SQL Tuning Set that was created
10:52:58 lunar@LUNAR>select NAME,OWNER,CREATED,STATEMENT_COUNT, LAST_MODIFIED FROM DBA_SQLSET;
NAME OWNER CREATED STATEMENT_COUNT LAST_MODIFIED
------------------------------ ------------------------------ ------------------- --------------- -------------------
Lunar_11201STS_LUNAR LUNAR 2015-04-18 10:34:25 921 2015-04-18 10:38:27
Elapsed: 00:00:00.06
10:53:03 lunar@LUNAR>
2) If needed, specific SQL statements (identified by sql_id and plan_hash_value) can also be loaded from the AWR snapshots
12:06:31 lunar@LUNAR>SELECT sql_id, substr(sql_text, 1, 50) sql
12:06:32 2 FROM TABLE( DBMS_SQLTUNE.select_sqlset ('Lunar_11201STS_LUNAR'))
12:06:32 3 where sql_id in ('34xbj7bv7suyk','gxsfh4gm276d3');
SQL_ID SQL
------------- --------------------------------------------------
34xbj7bv7suyk UPDATE "LUNAR_PRD".MDRT_1472A$ set info= :1 where ro
gxsfh4gm276d3 update LUNARINFO t set TIME=:1, LUNARMARK=:2, LO
Elapsed: 00:00:01.14
12:06:34 lunar@LUNAR>
3) Load from the current cursor cache
DECLARE
cur sys_refcursor;
BEGIN
OPEN cur FOR
SELECT value(P)
FROM TABLE(dbms_sqltune.select_cursor_cache('parsing_schema_name <> ''SYS''',NULL,NULL,NULL,NULL,1,NULL,'ALL')) p;
dbms_sqltune.load_sqlset('Lunar_11201STS_LUNAR', cur);
CLOSE cur;
END;
/
The load above usually takes quite a while, so it is normally pushed into the background.
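One way to keep the long-running load off the interactive session, while staying entirely inside the database, is to wrap it in a one-off DBMS_SCHEDULER job; a minimal sketch, with the job name chosen only for illustration:

BEGIN
  -- Hypothetical one-off background job that loads the cursor cache into the STS.
  DBMS_SCHEDULER.CREATE_JOB(
    job_name   => 'LOAD_STS_BG',
    job_type   => 'PLSQL_BLOCK',
    job_action => q'[DECLARE
                       cur DBMS_SQLTUNE.SQLSET_CURSOR;
                     BEGIN
                       OPEN cur FOR
                         SELECT VALUE(p)
                         FROM   TABLE(DBMS_SQLTUNE.SELECT_CURSOR_CACHE(
                                  'parsing_schema_name <> ''SYS''',
                                  NULL, NULL, NULL, NULL, 1, NULL, 'ALL')) p;
                       DBMS_SQLTUNE.LOAD_SQLSET('Lunar_11201STS_LUNAR', cur);
                       CLOSE cur;
                     END;]',
    enabled    => TRUE,      -- no schedule: run once, immediately
    auto_drop  => TRUE);
END;
/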
Here we can see that the number of loaded SQL statements has grown considerably:
12:55:02 sys@LUNAR>select NAME,OWNER,CREATED,STATEMENT_COUNT FROM DBA_SQLSET;
NAME OWNER CREATED STATEMENT_COUNT
------------------------------ ------------------------------ ------------------- ---------------
Lunar_11201STS_LUNAR LUNAR 2015-04-18 11:31:55 41928
12:57:11 sys@LUNAR>
Once all of the above is done, we can migrate this SQL Tuning Set to the new environment for analysis. The detailed procedure is as follows:
1. Create the SPA user on the new database
create user LUNAR identified by LUNAR;
grant connect,resource,dba to LUNAR;
2. Check that there is enough free space in SYSAUX
3. On the source database, create the staging table for packing the SQL Tuning Set, then exp/imp it to the new database
[oracle@lunardb tmp]$ ss
SQL*Plus: Release 11.2.0.1.0 Production on Sat Apr 18 23:22:26 2015
Copyright (c) 1982, 2009, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
23:22:26 sys@GPS>conn LUNAR/LUNAR
Connected.
23:22:28 lunar@GPS>BEGIN
23:22:33 2 DBMS_SQLTUNE.create_stgtab_sqlset(table_name => 'SQLSET_TAB_LUNAR',
23:22:34 3 schema_name => 'LUNAR',
23:22:34 4 tablespace_name => 'USERS');
23:22:34 5 END;
23:22:34 6 /
PL/SQL procedure successfully completed.
Elapsed: 00:00:01.32
23:22:36 lunar@GPS>
Pack the SQL Tuning Set into the staging table, then exp/imp it to the new database:
conn LUNAR/LUNAR
BEGIN
DBMS_SQLTUNE.pack_stgtab_sqlset(sqlset_name => 'Lunar_11201STS_GPS',
sqlset_owner => 'LUNAR',
staging_table_name => 'SQLSET_TAB_LUNAR',
staging_schema_owner => 'LUNAR');
END;
/
While it runs, we can monitor the progress:
[oracle@lunardb tmp]$ ss
SQL*Plus: Release 11.2.0.1.0 Production on Sat Apr 18 23:26:18 2015
Copyright (c) 1982, 2009, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
23:26:18 sys@GPS>select count(*) from LUNAR.SQLSET_TAB_LUNAR;
COUNT(*)
----------
496641
Elapsed: 00:00:00.57
23:28:04 sys@GPS>
exp LUNAR/LUNAR tables=SQLSET_TAB_LUNAR file=/u01/oradata/tmp/exp_SQLSET_TAB_LUNAR.dmp log=/u01/oradata/tmp/exp_SQLSET_TAB_LUNAR.log FEEDBACK=1000 BUFFER=5000000
4. On the new database, import the SQL Tuning Set staging table (LUNAR.SQLSET_TAB_LUNAR)
imp LUNAR/LUNAR fromuser=LUNAR touser=LUNAR file=/u01/oradata/tmp/exp_SQLSET_TAB_LUNAR.dmp feedback=1000 log=/u01/oradata/tmp/imp_SQLSET_TAB_LUNAR.log BUFFER=5000000
### section 2
1. Check how many SQL statements are currently in the staging table:
09:52:42 LUNAR@ lunardb> select count(*) from LUNAR.SQLSET_TAB_LUNAR;
COUNT(*)
----------
496641
Elapsed: 00:00:00.24
09:53:13 LUNAR@ lunardb>
Delete some statements we do not need:
LUNAR@ lunardb> delete from LUNAR.SQLSET_TAB_LUNAR
where (PARSING_SCHEMA_NAME in ('LUNAR', 'GGUSR','EXFSYS','SYS') )
or ( module in ('PL/SQL Developer','SQL*Plus','sqlplus.exe','plsqldev.exe','DBMS_SCHEDULER') );
701 rows deleted.
Elapsed: 00:00:00.96
10:07:34 LUNAR@ lunardb> commit;
Commit complete.
Elapsed: 00:00:00.00
10:07:38 LUNAR@ lunardb>
2. Create the new SQL Tuning Set (Lunar_11204STS_LUNAR) on the new database
create user LUNARSPA identified by LUNARSPA;
grant connect,resource,dba to LUNARSPA;
10:24:41 LUNAR@ lunardb> select username,default_tablespace,temporary_tablespace
10:24:41 2 from dba_users
10:24:41 3 where username in ('LUNAR','LUNARSPA')
10:24:41 4 order by 1,2;
USERNAME DEFAULT_TABLESPACE TEMPORARY_TABLESPACE
------------------------------ ------------------------------ ------------------------------
LUNAR USERS TEMP
LUNARSPA USERS TEMP
Elapsed: 00:00:00.03
10:24:42 LUNAR@ lunardb>
(2) As the LUNARSPA user, create the STS Lunar_11204STS_LUNAR
10:24:42 LUNAR@ lunardb> conn LUNARSPA/LUNARSPA
Connected.
10:25:33 LUNARSPA@ lunardb> exec DBMS_SQLTUNE.create_sqlset(sqlset_name => 'Lunar_11204STS_LUNAR');
PL/SQL procedure successfully completed.
Elapsed: 00:00:00.05
10:25:40 LUNARSPA@ lunardb>
(3) As the LUNARSPA user, remap the staged STS LUNAR.Lunar_11201STS_LUNAR to LUNARSPA.Lunar_11204STS_LUNAR
10:28:24 LUNARSPA@ lunardb> select NAME,OWNER,CREATED,STATEMENT_COUNT FROM DBA_SQLSET;
NAME OWNER CREATED STATEMENT_COUNT
------------------------------ ------------------------------ ------------------- ---------------
Lunar_11204STS_LUNAR LUNARSPA 2015-04-19 10:25:40 0
Elapsed: 00:00:00.00
10:40:44 LUNARSPA@ lunardb> exec dbms_sqltune.remap_stgtab_sqlset(old_sqlset_name =>'Lunar_11201STS_LUNAR',old_sqlset_owner => 'LUNAR', new_sqlset_name => 'Lunar_11204STS_LUNAR',new_sqlset_owner => 'LUNARSPA', staging_table_name => 'SQLSET_TAB_LUNAR',staging_schema_owner => 'LUNAR');
PL/SQL procedure successfully completed.
Elapsed: 00:00:09.39
10:41:06 LUNARSPA@ lunardb>
Then, as the LUNARSPA user, unpack the staging table into the STS:
BEGIN
DBMS_SQLTUNE.unpack_stgtab_sqlset(
sqlset_name => 'Lunar_11201STS_LUNAR',
sqlset_owner => 'SPA',
replace => TRUE,
staging_table_name => 'SQLSET_TAB',
staging_schema_owner => 'SPA');
END;
/
11:21:16 LUNARSPA@ lunardb> select NAME,OWNER,CREATED,STATEMENT_COUNT FROM DBA_SQLSET;
NAME OWNER CREATED STATEMENT_COUNT
------------------------------ ------------------------------ ------------------- ---------------
Lunar_11204STS_LUNAR LUNARSPA 2015-04-19 11:19:04 6005
Elapsed: 00:00:00.01
11:21:19 LUNARSPA@ lunardb>
At this point the SPA data on the new database is ready, and we can start generating the SPA reports.
The typical steps for producing the reports are as follows:
1) Create the SPA task
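The transcript below uses the SQL*Plus bind variables :sname and :tname without showing their declarations; presumably they were declared beforehand along these lines:

VARIABLE sname VARCHAR2(64)
VARIABLE tname VARCHAR2(64)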
11:33:10 LUNARSPA@ lunardb> exec :sname := 'Lunar_11204STS_GPS';
exec :tname := 'SPA_LUNARTEST1';
PL/SQL procedure successfully completed.
Elapsed: 00:00:00.01
11:33:10 LUNARSPA@ lunardb>
PL/SQL procedure successfully completed.
Elapsed: 00:00:00.00
11:33:10 LUNARSPA@ lunardb> exec :tname := DBMS_SQLPA.CREATE_ANALYSIS_TASK(sqlset_name => :sname, task_name => :tname);
PL/SQL procedure successfully completed.
Elapsed: 00:00:00.19
11:33:10 LUNARSPA@ lunardb>
2) Generate the 11.2.0.1 SPA trial by converting the STS (CONVERT SQLSET)
begin
DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(
task_name => 'SPA_LUNARTEST1',
execution_type => 'CONVERT SQLSET',
execution_name => 'CONVERT_11204G');
end;
/
3) Test-execute on 11.2.0.4 to generate the SPA trial from real performance data
begin
DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(
task_name => 'SPA_LUNARTEST1',
execution_type => 'TEST EXECUTE',
execution_name => 'EXEC_11204G');
end;
/
5) Run the comparison tasks (typically on metrics such as Elapsed Time, CPU Time and Buffer Gets)
begin
DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(
task_name => 'SPA_LUNARTEST1',
execution_type => 'COMPARE PERFORMANCE',
execution_name => 'Compare_elapsed_time',
execution_params => dbms_advisor.arglist('execution_name1', 'CONVERT_11204G', 'execution_name2', 'EXEC_11204G', 'comparison_metric', 'elapsed_time') );
end;
/
begin
DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(
task_name => 'SPA_LUNARTEST1',
execution_type => 'COMPARE PERFORMANCE',
execution_name => 'Compare_CPU_time',
execution_params => dbms_advisor.arglist('execution_name1', 'CONVERT_11204G', 'execution_name2', 'EXEC_11204G', 'comparison_metric', 'CPU_TIME') );
end;
/
begin
DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(
task_name => 'SPA_LUNARTEST1',
execution_type => 'COMPARE PERFORMANCE',
execution_name => 'Compare_BUFFER_GETS_time',
execution_params => dbms_advisor.arglist('execution_name1', 'CONVERT_11204G', 'execution_name2', 'EXEC_11204G', 'comparison_metric', 'BUFFER_GETS') );
end;
/
6) Generate the SPA reports
spool spa_report_elapsed_time.html
SELECT dbms_sqlpa.report_analysis_task('SPA_LUNARTEST1', 'HTML', 'ALL','ALL', execution_name=>'Compare_elapsed_time',top_sql=>500) FROM dual;
spool off;
spool spa_report_CPU_time.html
SELECT dbms_sqlpa.report_analysis_task('SPA_LUNARTEST1', 'HTML', 'ALL','ALL', execution_name=>'Compare_CPU_time',top_sql=>500) FROM dual;
spool off;
spool spa_report_buffer_time.html
SELECT dbms_sqlpa.report_analysis_task('SPA_LUNARTEST1','HTML','ALL','ALL', execution_name=>'Compare_BUFFER_GETS_time',top_sql=>500) FROM dual;
spool off;
spool spa_report_elapsed_time_regressed.html
SELECT dbms_sqlpa.report_analysis_task('SPA_LUNARTEST1', 'HTML', 'REGRESSED','ALL', execution_name=>'Compare_elapsed_time',top_sql=>500) FROM dual;
spool off;
spool spa_report_CPU_time_regressed.html
SELECT dbms_sqlpa.report_analysis_task('SPA_LUNARTEST1', 'HTML', 'REGRESSED','ALL', execution_name=>'Compare_CPU_time',top_sql=>500) FROM dual;
spool off;
spool spa_report_buffer_time_regressed.html
SELECT dbms_sqlpa.report_analysis_task('SPA_LUNARTEST1','HTML','REGRESSED','ALL', execution_name=>'Compare_BUFFER_GETS_time',top_sql=>500) FROM dual;
spool off;
spool spa_report_errors.html
SELECT dbms_sqlpa.report_analysis_task('SPA_LUNARTEST1', 'HTML', 'errors','summary') FROM dual;
spool off;
spool spa_report_unsupport.html
SELECT dbms_sqlpa.report_analysis_task('SPA_LUNARTEST1', 'HTML', 'unsupported','all') FROM dual;
spool off;
The generated reports typically look like this:
-rwxrwxrwx 1 oracle oracle 1850 Apr 19 21:33 report_spa.sh
-rw-rw-r-- 1 oracle oracle 8498134 Apr 19 21:37 spa_report_elapsed_time.html
-rw-rw-r-- 1 oracle oracle 8954773 Apr 19 21:41 spa_report_CPU_time.html
-rw-rw-r-- 1 oracle oracle 7941640 Apr 19 21:44 spa_report_buffer_time.html
-rw-rw-r-- 1 oracle oracle 38933 Apr 19 21:44 spa_report_elapsed_time_regressed.html
-rw-rw-r-- 1 oracle oracle 61982 Apr 19 21:44 spa_report_CPU_time_regressed.html
-rw-rw-r-- 1 oracle oracle 28886 Apr 19 21:44 spa_report_buffer_time_regressed.html
-rw-rw-r-- 1 oracle oracle 15537 Apr 19 21:44 spa_report_errors.html
-rw-rw-r-- 1 oracle oracle 58703 Apr 19 21:44 spa_report_unsupport.html
-rw-rw-r-- 1 oracle oracle 18608938 Apr 19 21:44 report_spa.log
[oracle@lunardb tmp]$
### ref 2
https://www.databasejournal.com/img/2008/02/jsc_Oracle_11g_SQL_Plan_Management_Listing2.html#List0201
/*
|| Oracle 11g SQL Plan Management Listing 2
||
|| Demonstrates Oracle 11g SQL Plan Management (SPM) advanced techniques,
|| including:
|| - Capturing SQL Plan Baselines via manual methods with DBMS_SPM
|| - Transferring captured SQL Plan Baselines between Oracle 10g and 11g databases
|| to "pre-seed" the SQL Management Baseline (SMB) with the most optimal execution
|| plans before an upgrade of an Oracle 10g database to Oracle 11g
|| - Transferring captured SQL Plan Baselines between test and production environments
|| to "pre-seed" the SQL Management Baseline (SMB) with the most typical execution
|| plans prior to deployment of a brand-new application
|| - Dropping existing SQL Plan Baselines from the SMB via manual methods
||
|| Author: Jim Czuprynski
||
|| Usage Notes:
|| These examples are provided to demonstrate various features of Oracle 11g
|| SQL Plan Management features, and they should be carefully proofread
|| before executing them against any existing Oracle database(s) to avoid
|| potential damage!
*/
/*
|| Listing 2.1:
|| Create and prepare to populate a SQL Tuning Set (STS)
|| for selected SQL statements. Note that this STS will capture
|| all SQL statements which are executed by the LDGN user account
|| within a 5-minute period, and Oracle will check every 5 seconds
|| for any new statements
*/
BEGIN
DBMS_SQLTUNE.DROP_SQLSET(
sqlset_name => 'STS_SPM_200'
);
END;
/
@SPM_2_1.sql;
BEGIN
DBMS_SQLTUNE.CREATE_SQLSET(
sqlset_name => 'STS_SPM_200'
);
DBMS_SQLTUNE.CAPTURE_CURSOR_CACHE_SQLSET(
sqlset_name => 'STS_SPM_200'
,basic_filter=> q'#sql_text LIKE '%SPM_2_1%' AND parsing_schema_name = 'LDGN'#'
,time_limit => 300
,repeat_interval => 5
);
END;
/
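Not part of the original listing, but a quick sanity check of what the capture picked up could look like this (assuming the STS was created by the connected user):

SELECT sql_id, SUBSTR(sql_text, 1, 50) AS sql_text
FROM   TABLE(DBMS_SQLTUNE.SELECT_SQLSET('STS_SPM_200'));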
/*
|| Listing 2.2:
|| "Packing up" and exporting the Oracle 10gR2 SQL Tuning Set prior to
|| its transport to Oracle 11g
*/
-----
-- Create a staging table to hold the SQL Tuning Set statements just created,
-- and then "pack up" (i.e. populate) the staging table
-----
DROP TABLE ldgn.sts_staging PURGE;
BEGIN
DBMS_SQLTUNE.CREATE_STGTAB_SQLSET(
table_name => 'STS_STAGING'
,schema_name => 'LDGN'
,tablespace_name => 'USERS'
);
DBMS_SQLTUNE.PACK_STGTAB_SQLSET(
sqlset_name => 'STS_SPM_200'
,sqlset_owner => 'SYS'
,staging_table_name => 'STS_STAGING'
,staging_schema_owner => 'LDGN'
);
END;
/
-----
-- Invoke DataPump Export to export the table that contains the staged
-- SQL Tuning Set statements
-----
rm -f /u01/app/oracle/product/10.2.0/db_1/rdbms/log/*.log
rm -f /u01/app/oracle/product/10.2.0/db_1/rdbms/log/*.dmp
expdp system/oracle PARFILE=DumpStagingTable.dpectl
#####
# Contents of DumpStagingTable.dpectl parameter file:
#####
JOB_NAME=DumpStagingTable
DIRECTORY=DATA_PUMP_DIR
DUMPFILE=LDGN_STS_Staging.dmp
SCHEMAS=LDGN
>>> Results:
Export: Release 10.2.0.1.0 - Production on Monday, 18 February, 2008 19:03:57
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
Starting "SYSTEM"."DUMPSTAGINGTABLE": system/******** PARFILE=DumpStagingTable.dpectl
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 576 KB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
. . exported "LDGN"."STS_STAGING" 22.67 KB 8 rows
. . exported "LDGN"."STS_STAGING_CPLANS" 35.35 KB 25 rows
. . exported "LDGN"."STS_STAGING_CBINDS" 9.476 KB 0 rows
. . exported "LDGN"."PLAN_TABLE" 0 KB 0 rows
Master table "SYSTEM"."DUMPSTAGINGTABLE" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.DUMPSTAGINGTABLE is:
/u01/app/oracle/product/10.2.0/db_1/rdbms/log/LDGN_STS_Staging.dmp
Job "SYSTEM"."DUMPSTAGINGTABLE" successfully completed at 19:05:21
/*
|| Listing 2.3:
|| Transporting, importing, and "unpacking" the staged Oracle 10gR2 SQL Tuning
|| Set on the target Oracle 11g database
*/
-----
-- Invoke DataPump Import to import the table that contains the staged
-- SQL Tuning Set statements. Note that the default action of SKIPping
-- a table if it already exists has been overridden by supplying a value
-- of REPLACE for parameter TABLE_EXISTS_ACTION.
-----
impdp system/oracle PARFILE=LoadStagingTable.dpictl
#####
# Contents of LoadStagingTable.dpictl parameter file:
#####
JOB_NAME=LoadStagingTable
DIRECTORY=DATA_PUMP_DIR
DUMPFILE=LDGN_STS_Staging.dmp
TABLE_EXISTS_ACTION=REPLACE
>>> Results of DataPump Import operation:
Import: Release 11.1.0.6.0 - Production on Monday, 18 February, 2008 19:09:29
Copyright (c) 2003, 2007, Oracle. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "SYSTEM"."LOADSTAGINGTABLE" successfully loaded/unloaded
Starting "SYSTEM"."LOADSTAGINGTABLE": system/******** PARFILE=LoadStagingTable.dpictl
Processing object type SCHEMA_EXPORT/USER
ORA-31684: Object type USER:"LDGN" already exists
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . imported "LDGN"."STS_STAGING" 22.67 KB 8 rows
. . imported "LDGN"."STS_STAGING_CPLANS" 35.35 KB 25 rows
. . imported "LDGN"."STS_STAGING_CBINDS" 9.476 KB 0 rows
. . imported "LDGN"."PLAN_TABLE" 0 KB 0 rows
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Job "SYSTEM"."LOADSTAGINGTABLE" completed with 1 error(s) at 19:11:07
-----
-- Accept the SQL Tuning Set statements from the imported staging table
-- into the Oracle 11gR1 database
-----
BEGIN
DBMS_SQLTUNE.DROP_SQLSET(
sqlset_name => 'STS_SPM_200'
);
END;
/
BEGIN
DBMS_SQLTUNE.UNPACK_STGTAB_SQLSET(
sqlset_name => 'STS_SPM_200'
,sqlset_owner => 'SYS'
,replace => TRUE
,staging_table_name => 'STS_STAGING'
,staging_schema_owner => 'LDGN'
);
END;
/
-----
-- Listing 2.4:
-- Prove that the SQL Plan Baselines loaded into the SMB via manual methods are
-- actually being utilized by executing EXPLAIN PLANs against each statement from the
-- target Oracle 11g database
-----
SQL> EXPLAIN PLAN FOR
2 SELECT /*SPM_2_1.1*/
3 CTY.country_total_id
4 ,PR.promo_total_id
5 ,COUNT(S.amount_sold)
6 ,SUM(S.amount_sold)
7 ,SUM(S.quantity_sold)
8 FROM
9 sh.sales S
10 ,sh.customers C
11 ,sh.countries CTY
12 ,sh.promotions PR
13 WHERE S.cust_id = C.cust_id
14 AND C.country_id = CTY.country_id
15 AND S.promo_id = PR.promo_id
16 GROUP BY
17 CTY.country_total_id
18 ,PR.promo_total_id
19 ;
Explained.
SQL> SELECT *
2 FROM TABLE(DBMS_XPLAN.DISPLAY('PLAN_TABLE',NULL,'+NOTE'))
3 ;
Plan hash value: 491136032
--------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time | Pstart| Pstop |
--------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 44 | | 2325 (5)| 00:00:28 | | |
| 1 | HASH GROUP BY | | 1 | 44 | | 2325 (5)| 00:00:28 | | |
|* 2 | HASH JOIN | | 918K| 38M| | 2270 (3)| 00:00:28 | | |
| 3 | TABLE ACCESS FULL | PROMOTIONS | 503 | 3521 | | 17 (0)| 00:00:01 | | |
|* 4 | HASH JOIN | | 918K| 32M| | 2246 (2)| 00:00:27 | | |
| 5 | TABLE ACCESS FULL | COUNTRIES | 23 | 230 | | 3 (0)| 00:00:01 | | |
|* 6 | HASH JOIN | | 918K| 23M| 1200K| 2236 (2)| 00:00:27 | | |
| 7 | TABLE ACCESS FULL | CUSTOMERS | 55500 | 541K| | 406 (1)| 00:00:05 | | |
| 8 | PARTITION RANGE ALL| | 918K| 14M| | 498 (4)| 00:00:06 | 1 | 28 |
| 9 | TABLE ACCESS FULL | SALES | 918K| 14M| | 498 (4)| 00:00:06 | 1 | 28 |
--------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - access("S"."PROMO_ID"="PR"."PROMO_ID")
4 - access("C"."COUNTRY_ID"="CTY"."COUNTRY_ID")
6 - access("S"."CUST_ID"="C"."CUST_ID")
Note
-----
- SQL plan baseline "SYS_SQL_PLAN_587c0594825d2e47" used for this statement
27 rows selected.
SQL>
SQL> EXPLAIN PLAN FOR
2 SELECT /*SPM_2_1.2*/
3 CTY.country_id
4 ,CTY.country_subregion_id
5 ,CTY.country_region_id
6 ,CTY.country_total_id
7 ,PR.promo_total_id
8 ,COUNT(S.amount_sold)
9 ,SUM(S.amount_sold)
10 ,SUM(S.quantity_sold)
11 FROM
12 sh.sales S
13 ,sh.customers C
14 ,sh.countries CTY
15 ,sh.promotions PR
16 WHERE S.cust_id = C.cust_id
17 AND C.country_id = CTY.country_id
18 AND S.promo_id = PR.promo_id
19 GROUP BY
20 CTY.country_id
21 ,CTY.country_subregion_id
22 ,CTY.country_region_id
23 ,CTY.country_total_id
24 ,PR.promo_total_id
25 ;
Explained.
SQL> SELECT *
2 FROM TABLE(DBMS_XPLAN.DISPLAY('PLAN_TABLE',NULL,'+NOTE'))
3 ;
Plan hash value: 491136032
--------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time | Pstart| Pstop |
--------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 228 | 12312 | | 2325 (5)| 00:00:28 | | |
| 1 | HASH GROUP BY | | 228 | 12312 | | 2325 (5)| 00:00:28 | | |
|* 2 | HASH JOIN | | 918K| 47M| | 2270 (3)| 00:00:28 | | |
| 3 | TABLE ACCESS FULL | PROMOTIONS | 503 | 3521 | | 17 (0)| 00:00:01 | | |
|* 4 | HASH JOIN | | 918K| 41M| | 2246 (2)| 00:00:27 | | |
| 5 | TABLE ACCESS FULL | COUNTRIES | 23 | 460 | | 3 (0)| 00:00:01 | | |
|* 6 | HASH JOIN | | 918K| 23M| 1200K| 2236 (2)| 00:00:27 | | |
| 7 | TABLE ACCESS FULL | CUSTOMERS | 55500 | 541K| | 406 (1)| 00:00:05 | | |
| 8 | PARTITION RANGE ALL| | 918K| 14M| | 498 (4)| 00:00:06 | 1 | 28 |
| 9 | TABLE ACCESS FULL | SALES | 918K| 14M| | 498 (4)| 00:00:06 | 1 | 28 |
--------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - access("S"."PROMO_ID"="PR"."PROMO_ID")
4 - access("C"."COUNTRY_ID"="CTY"."COUNTRY_ID")
6 - access("S"."CUST_ID"="C"."CUST_ID")
Note
-----
- SQL plan baseline "SYS_SQL_PLAN_54f64750825d2e47" used for this statement
27 rows selected.
SQL>
SQL> EXPLAIN PLAN FOR
2 SELECT /*SPM_2_1.3*/
3 CTY.country_total_id
4 ,P.prod_id
5 ,P.prod_subcategory_id
6 ,P.prod_category_id
7 ,P.prod_total_id
8 ,CH.channel_id
9 ,CH.channel_class_id
10 ,CH.channel_total_id
11 ,PR.promo_total_id
12 ,COUNT(S.amount_sold)
13 ,SUM(S.amount_sold)
14 ,SUM(S.quantity_sold)
15 FROM
16 sh.sales S
17 ,sh.customers C
18 ,sh.countries CTY
19 ,sh.products P
20 ,sh.channels CH
21 ,sh.promotions PR
22 WHERE S.cust_id = C.cust_id
23 AND C.country_id = CTY.country_id
24 AND S.prod_id = P.prod_id
25 AND S.channel_id = CH.channel_id
26 AND S.promo_id = PR.promo_id
27 GROUP BY
28 CTY.country_total_id
29 ,P.prod_id
30 ,P.prod_subcategory_id
31 ,P.prod_category_id
32 ,P.prod_total_id
33 ,CH.channel_id
34 ,CH.channel_class_id
35 ,CH.channel_total_id
36 ,PR.promo_total_id
37 ;
Explained.
SQL> SELECT *
2 FROM TABLE(DBMS_XPLAN.DISPLAY('PLAN_TABLE',NULL,'+NOTE'))
3 ;
Plan hash value: 2634317694
----------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time | Pstart| Pstop |
----------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 5940 | 435K| | 8393 (2)| 00:01:41 | | |
| 1 | HASH GROUP BY | | 5940 | 435K| 74M| 8393 (2)| 00:01:41 | | |
|* 2 | HASH JOIN | | 918K| 65M| | 2593 (3)| 00:00:32 | | |
| 3 | TABLE ACCESS FULL | PROMOTIONS | 503 | 3521 | | 17 (0)| 00:00:01 | | |
|* 4 | HASH JOIN | | 918K| 59M| | 2569 (3)| 00:00:31 | | |
| 5 | TABLE ACCESS FULL | PRODUCTS | 72 | 1080 | | 3 (0)| 00:00:01 | | |
|* 6 | HASH JOIN | | 918K| 46M| | 2560 (2)| 00:00:31 | | |
| 7 | TABLE ACCESS FULL | COUNTRIES | 23 | 230 | | 3 (0)| 00:00:01 | | |
|* 8 | HASH JOIN | | 918K| 37M| | 2550 (2)| 00:00:31 | | |
| 9 | TABLE ACCESS FULL | CHANNELS | 5 | 45 | | 3 (0)| 00:00:01 | | |
|* 10 | HASH JOIN | | 918K| 29M| 1200K| 2541 (2)| 00:00:31 | | |
| 11 | TABLE ACCESS FULL | CUSTOMERS | 55500 | 541K| | 406 (1)| 00:00:05 | | |
| 12 | PARTITION RANGE ALL| | 918K| 21M| | 498 (4)| 00:00:06 | 1 | 28 |
| 13 | TABLE ACCESS FULL | SALES | 918K| 21M| | 498 (4)| 00:00:06 | 1 | 28 |
----------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - access("S"."PROMO_ID"="PR"."PROMO_ID")
4 - access("S"."PROD_ID"="P"."PROD_ID")
6 - access("C"."COUNTRY_ID"="CTY"."COUNTRY_ID")
8 - access("S"."CHANNEL_ID"="CH"."CHANNEL_ID")
10 - access("S"."CUST_ID"="C"."CUST_ID")
Note
-----
- SQL plan baseline "SYS_SQL_PLAN_8ec1a5862d9d97db" used for this statement
33 rows selected.
SQL>
SQL> EXPLAIN PLAN FOR
2 SELECT /*SPM_2_1.4*/
3 CTY.country_total_id
4 ,P.prod_category_id
5 ,P.prod_total_id
6 ,CH.channel_id
7 ,CH.channel_class_id
8 ,CH.channel_total_id
9 ,PR.promo_total_id
10 ,COUNT(S.amount_sold)
11 ,SUM(S.amount_sold)
12 ,SUM(S.quantity_sold)
13 FROM
14 sh.sales S
15 ,sh.customers C
16 ,sh.countries CTY
17 ,sh.products P
18 ,sh.channels CH
19 ,sh.promotions PR
20 WHERE S.cust_id = C.cust_id
21 AND C.country_id = CTY.country_id
22 AND S.prod_id = P.prod_id
23 AND S.channel_id = CH.channel_id
24 AND S.promo_id = PR.promo_id
25 GROUP BY
26 CTY.country_total_id
27 ,P.prod_category_id
28 ,P.prod_total_id
29 ,CH.channel_id
30 ,CH.channel_class_id
31 ,CH.channel_total_id
32 ,PR.promo_total_id
33 ;
Explained.
SQL> SELECT *
2 FROM TABLE(DBMS_XPLAN.DISPLAY('PLAN_TABLE',NULL,'+NOTE'))
3 ;
Plan hash value: 2634317694
----------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time | Pstart| Pstop |
----------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 8 | 568 | | 2648 (5)| 00:00:32 | | |
| 1 | HASH GROUP BY | | 8 | 568 | | 2648 (5)| 00:00:32 | | |
|* 2 | HASH JOIN | | 918K| 62M| | 2593 (3)| 00:00:32 | | |
| 3 | TABLE ACCESS FULL | PROMOTIONS | 503 | 3521 | | 17 (0)| 00:00:01 | | |
|* 4 | HASH JOIN | | 918K| 56M| | 2569 (3)| 00:00:31 | | |
| 5 | TABLE ACCESS FULL | PRODUCTS | 72 | 792 | | 3 (0)| 00:00:01 | | |
|* 6 | HASH JOIN | | 918K| 46M| | 2560 (2)| 00:00:31 | | |
| 7 | TABLE ACCESS FULL | COUNTRIES | 23 | 230 | | 3 (0)| 00:00:01 | | |
|* 8 | HASH JOIN | | 918K| 37M| | 2550 (2)| 00:00:31 | | |
| 9 | TABLE ACCESS FULL | CHANNELS | 5 | 45 | | 3 (0)| 00:00:01 | | |
|* 10 | HASH JOIN | | 918K| 29M| 1200K| 2541 (2)| 00:00:31 | | |
| 11 | TABLE ACCESS FULL | CUSTOMERS | 55500 | 541K| | 406 (1)| 00:00:05 | | |
| 12 | PARTITION RANGE ALL| | 918K| 21M| | 498 (4)| 00:00:06 | 1 | 28 |
| 13 | TABLE ACCESS FULL | SALES | 918K| 21M| | 498 (4)| 00:00:06 | 1 | 28 |
----------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - access("S"."PROMO_ID"="PR"."PROMO_ID")
4 - access("S"."PROD_ID"="P"."PROD_ID")
6 - access("C"."COUNTRY_ID"="CTY"."COUNTRY_ID")
8 - access("S"."CHANNEL_ID"="CH"."CHANNEL_ID")
10 - access("S"."CUST_ID"="C"."CUST_ID")
Note
-----
- SQL plan baseline "SYS_SQL_PLAN_96f761da2d9d97db" used for this statement
33 rows selected.
SQL>
SQL> EXPLAIN PLAN FOR
2 SELECT /*SPM_2_1.5*/
3 CTY.country_id
4 ,CTY.country_subregion_id
5 ,CTY.country_region_id
6 ,CTY.country_total_id
7 ,P.prod_id
8 ,P.prod_subcategory_id
9 ,P.prod_category_id
10 ,P.prod_total_id
11 ,CH.channel_id
12 ,CH.channel_class_id
13 ,CH.channel_total_id
14 ,PR.promo_total_id
15 ,COUNT(S.amount_sold)
16 ,SUM(S.amount_sold)
17 ,SUM(S.quantity_sold)
18 FROM
19 sh.sales S
20 ,sh.customers C
21 ,sh.countries CTY
22 ,sh.products P
23 ,sh.channels CH
24 ,sh.promotions PR
25 WHERE S.cust_id = C.cust_id
26 AND C.country_id = CTY.country_id
27 AND S.prod_id = P.prod_id
28 AND S.channel_id = CH.channel_id
29 AND S.promo_id = PR.promo_id
30 GROUP BY
31 CTY.country_id
32 ,CTY.country_subregion_id
33 ,CTY.country_region_id
34 ,CTY.country_total_id
35 ,P.prod_id
36 ,P.prod_subcategory_id
37 ,P.prod_category_id
38 ,P.prod_total_id
39 ,CH.channel_id
40 ,CH.channel_class_id
41 ,CH.channel_total_id
42 ,PR.promo_total_id
43 ;
Explained.
SQL> SELECT *
2 FROM TABLE(DBMS_XPLAN.DISPLAY('PLAN_TABLE',NULL,'+NOTE'))
3 ;
Plan hash value: 2634317694
----------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time | Pstart| Pstop |
----------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 918K| 74M| | 20707 (1)| 00:04:09 | | |
| 1 | HASH GROUP BY | | 918K| 74M| 168M| 20707 (1)| 00:04:09 | | |
|* 2 | HASH JOIN | | 918K| 74M| | 2593 (3)| 00:00:32 | | |
| 3 | TABLE ACCESS FULL | PROMOTIONS | 503 | 3521 | | 17 (0)| 00:00:01 | | |
|* 4 | HASH JOIN | | 918K| 68M| | 2569 (3)| 00:00:31 | | |
| 5 | TABLE ACCESS FULL | PRODUCTS | 72 | 1080 | | 3 (0)| 00:00:01 | | |
|* 6 | HASH JOIN | | 918K| 55M| | 2560 (2)| 00:00:31 | | |
| 7 | TABLE ACCESS FULL | COUNTRIES | 23 | 460 | | 3 (0)| 00:00:01 | | |
|* 8 | HASH JOIN | | 918K| 37M| | 2550 (2)| 00:00:31 | | |
| 9 | TABLE ACCESS FULL | CHANNELS | 5 | 45 | | 3 (0)| 00:00:01 | | |
|* 10 | HASH JOIN | | 918K| 29M| 1200K| 2541 (2)| 00:00:31 | | |
| 11 | TABLE ACCESS FULL | CUSTOMERS | 55500 | 541K| | 406 (1)| 00:00:05 | | |
| 12 | PARTITION RANGE ALL| | 918K| 21M| | 498 (4)| 00:00:06 | 1 | 28 |
| 13 | TABLE ACCESS FULL | SALES | 918K| 21M| | 498 (4)| 00:00:06 | 1 | 28 |
----------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - access("S"."PROMO_ID"="PR"."PROMO_ID")
4 - access("S"."PROD_ID"="P"."PROD_ID")
6 - access("C"."COUNTRY_ID"="CTY"."COUNTRY_ID")
8 - access("S"."CHANNEL_ID"="CH"."CHANNEL_ID")
10 - access("S"."CUST_ID"="C"."CUST_ID")
Note
-----
- SQL plan baseline "SYS_SQL_PLAN_816fca3a2d9d97db" used for this statement
33 rows selected.
/*
|| Listing 2.5:
|| Prepare to deploy a simulated new application to the current Oracle 11g database.
|| Note that all SQL Plan Baselines that are currently tagged as SPM_2 statements
|| will first be purged from the SMB.
*/
-----
-- Clear all SQL Plan Baselines whose SQL text contains the tag "SPM_2"
-----
SET SERVEROUTPUT ON
VARIABLE nRtnCode NUMBER;
BEGIN
:nRtnCode := 0;
FOR r_SPMB IN (
SELECT sql_handle, plan_name
FROM dba_sql_plan_baselines
WHERE sql_text LIKE '%SPM_2%'
)
LOOP
:nRtnCode :=
DBMS_SPM.DROP_SQL_PLAN_BASELINE(r_SPMB.sql_handle, r_SPMB.plan_name);
DBMS_OUTPUT.PUT_LINE('Drop of SPBs for Handle ' || r_SPMB.sql_handle
|| ' and Plan ' || r_SPMB.plan_name
|| ' completed: RC = ' || :nRtnCode);
END LOOP;
EXCEPTION
WHEN OTHERS THEN
DBMS_OUTPUT.PUT_LINE('Fatal error during cleanup of SQL Plan Baselines!');
ROLLBACK;
END;
/
-----
-- Run DDL commands to create the Sales Force Administration (SFA) schema
-- and all related objects
-----
@SFA_Setup.sql;
/*
|| Listing 2.6:
|| Generate a SQL workload against the new application objects using six
|| queries tagged with a comment of SPM_2_2, and then capture the SQL Plan
|| Baselines into the SMB using DBMS_SPM.LOAD_PLANS_FROM CURSOR_CACHE
*/
ALTER SYSTEM FLUSH SHARED_POOL;
ALTER SYSTEM FLUSH BUFFER_CACHE;
@SPM_2_2.sql;
SET SERVEROUTPUT ON
VARIABLE plans_cached NUMBER;
BEGIN
:plans_cached :=
DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(
attribute_name => 'SQL_TEXT'
,attribute_value => '%SPM_2_2%'
,fixed => 'NO'
,enabled => 'YES'
);
DBMS_OUTPUT.PUT_LINE('>>> ' || :plans_cached || ' SQL statement(s) loaded from the cursor cache.');
END;
/
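A quick way to confirm what was just loaded (again, not part of the original listing) is to query DBA_SQL_PLAN_BASELINES for the tagged statements:

SELECT sql_handle, plan_name, origin, enabled, accepted
FROM   dba_sql_plan_baselines
WHERE  sql_text LIKE '%SPM_2_2%';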
/*
|| Listing 2.7:
|| Staging, Packing, and Exporting SQL Plan Baselines
*/
-----
-- Create a SQL Plan Baseline staging table and then "pack" those SQL
-- Plan Baselines into a staging table
-----
BEGIN
DBMS_SPM.CREATE_STGTAB_BASELINE (
table_name => 'SPM_STAGING'
,table_owner => 'SFA'
,tablespace_name => 'EXAMPLE'
);
END;
/
SET SERVEROUTPUT ON
VARIABLE plans_staged NUMBER;
BEGIN
:plans_staged :=
DBMS_SPM.PACK_STGTAB_BASELINE (
table_name => 'SPM_STAGING'
,table_owner => 'SFA'
,creator => 'SYS'
);
DBMS_OUTPUT.PUT_LINE('Total SQL Plan Baselines Staged: ' || :plans_staged);
END;
/
-----
-- Export SPM staging table via DataPump Export
-----
rm -f /u01/app/oracle/admin/orcl/dpdump/*.log
rm -f /u01/app/oracle/admin/orcl/dpdump/*.dmp
expdp system/oracle PARFILE=DumpStagedSPMs.dpectl
#####
# Contents of DumpStagedSPMs.dpectl parameter file:
#####
JOB_NAME=DumpStagedSPMs
DIRECTORY=DATA_PUMP_DIR
DUMPFILE=SFA_SPM_Staging.dmp
TABLES=SFA.SPM_STAGING
>>> Results:
Export: Release 11.1.0.6.0 - Production on Tuesday, 19 February, 2008 9:28:34
Copyright (c) 2003, 2007, Oracle. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SYSTEM"."DUMPSTAGEDSPMS": system/******** PARFILE=DumpStagedSPMs.dpectl
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 192 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
. . exported "SFA"."SPM_STAGING" 46.49 KB 6 rows
Master table "SYSTEM"."DUMPSTAGEDSPMS" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.DUMPSTAGEDSPMS is:
/u01/app/oracle/admin/orcl/dpdump/SFA_SPM_Staging.dmp
Job "SYSTEM"."DUMPSTAGEDSPMS" successfully completed at 09:29:50
/*
|| Listing 2.8:
|| Importing and "unpacking" the staged Oracle 11g SQL Plan Baselines into
|| the target Oracle 11g database
*/
-----
-- Invoke DataPump Import to import the table that contains the staged
-- SQL Tuning Set statements. Note that the default action of SKIPping
-- a table if it already exists has been overridden by supplying a value
-- of REPLACE for parameter TABLE_EXISTS_ACTION.
-----
impdp system/oracle PARFILE=LoadStagedSPMs.dpictl
#####
# Contents of LoadStagedSPMs.dpictl parameter file:
#####
JOB_NAME=LoadStagedSPMs
DIRECTORY=DATA_PUMP_DIR
DUMPFILE=SFA_SPM_Staging.dmp
TABLE_EXISTS_ACTION=REPLACE
>>> Results of DataPump Import operation:
Import: Release 11.1.0.6.0 - Production on Tuesday, 19 February, 2008 9:31:41
Copyright (c) 2003, 2007, Oracle. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "SYSTEM"."LOADSTAGEDSPMS" successfully loaded/unloaded
Starting "SYSTEM"."LOADSTAGEDSPMS": system/******** PARFILE=LoadStagedSPMs.dpictl
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "SFA"."SPM_STAGING" 46.49 KB 6 rows
Job "SYSTEM"."LOADSTAGEDSPMS" successfully completed at 09:31:52
-----
-- Clear all SQL Plan Baselines whose SQL text contains the tag "SPM_2"
-----
SET SERVEROUTPUT ON
VARIABLE nRtnCode NUMBER;
BEGIN
:nRtnCode := 0;
FOR r_SPMB IN (
SELECT sql_handle, plan_name
FROM dba_sql_plan_baselines
WHERE sql_text LIKE '%SPM_2%'
)
LOOP
:nRtnCode :=
DBMS_SPM.DROP_SQL_PLAN_BASELINE(r_SPMB.sql_handle, r_SPMB.plan_name);
DBMS_OUTPUT.PUT_LINE('Drop of SPBs for Handle ' || r_SPMB.sql_handle
|| ' and Plan ' || r_SPMB.plan_name
|| ' completed: RC = ' || :nRtnCode);
END LOOP;
EXCEPTION
WHEN OTHERS THEN
DBMS_OUTPUT.PUT_LINE('Fatal error during cleanup of SQL Plan Baselines!');
ROLLBACK;
END;
/
-----
-- Now, "unpack" the SQL Plan Baselines staging table directly into the SMB
-----
SET SERVEROUTPUT ON
VARIABLE plans_loaded NUMBER;
BEGIN
:plans_loaded :=
DBMS_SPM.UNPACK_STGTAB_BASELINE (
table_name => 'SPM_STAGING'
,table_owner => 'SFA'
,creator => 'SYS'
);
DBMS_OUTPUT.PUT_LINE('Total SQL Plan Baselines Loaded: ' || :plans_loaded);
END;
/
/*
|| Listing 2.9:
|| Show the current contents of the SQL Management Base
*/
Tue Feb 19 page 1
Current SQL Plan Baselines
(From DBA_SQL_PLAN_BASELINES)
SQL Plan CBO Ena- Auto Created Last
Creator Handle Name SQL Text Origin Cost bled Acpt Fixd Purg On Executed
-------- -------- -------- ------------------------- ------------ -------- ---- ---- ---- ---- ----------- -----------
LDGN 68516a84 07e0351f SELECT /*SPM_1.1*/ AUTO-CAPTURE 757 YES YES NO YES 2008-01-20 2008-01-20
S.cust_id 10:47:14 10:47:31
,C.cust_last_name
,S.prod_id
,P.pro
LDGN 68516a84 ddc1fcd0 SELECT /*SPM_1.1*/ AUTO-CAPTURE 2388 YES NO NO YES 2008-01-20
S.cust_id 11:04:03
,C.cust_last_name
,S.prod_id
,P.pro
SYS 0047dfb5 e86f00e7 SELECT /*SPM_2_2.5*/ MANUAL-LOAD 13 YES YES NO YES 2008-02-19
SR.abbr 09:32:42
,SD.abbr
,SUM(SH.quantity_sold)
SYS 1e72d0bd dd777d18 SELECT /*SPM_2_2.4*/ MANUAL-LOAD 13 YES YES NO YES 2008-02-19
rgn_abbr 09:32:42
,dst_abbr
,ter_abbr
,cust_id
SYS 7f161ead bb24e20c SELECT /*SPM_2.2.2*/ MANUAL-LOAD 415 YES YES NO YES 2008-02-19
SR.abbr, 09:32:42
SD.abbr,
SZ.geo_id,
SYS 831c508c 3519879f SELECT /*SPM_2_2.3*/ MANUAL-LOAD 71 YES YES NO YES 2008-02-19
SR.abbr 09:32:42
,SD.abbr
,SZ.geo_id
,C.cust_id
SYS 9c7bbbfb 9d1c7b8e SELECT /*SPM_2.2.1*/ MANUAL-LOAD 921 YES YES NO YES 2008-02-19
C.cust_state_provinc 09:32:42
e
,SUM(sh.quantity_sold
)
SYS f6743c1d b197d40d SELECT /*SPM_2_2.6*/ MANUAL-LOAD 60 YES YES NO YES 2008-02-19
SR.abbr 09:32:42
,SUM(SH.quantity_sold)
,AVG(SH.quant
8 rows selected.
### ref 3
https://blog.yannickjaquier.com/oracle/sql-performance-analyzer.html
SQL Performance Analyzer
Preamble
How do you test and predict the impact of a system change on your application? That is a difficult question for a developer or DBA, and short of comparing SQL statement by SQL statement there is no dedicated tool for it. With 11g Oracle released a new tool called SQL Performance Analyzer.
Before going further it is worth mentioning that SQL Performance Analyzer (SPA) is included in Oracle Real Application Testing (RAT), a paid Enterprise Edition option.
By system changes Oracle means, non-exhaustively:
- Database upgrades.
- Tuning changes.
- Schema changes.
- Statistics gathering.
- Initialization parameter changes.
- OS or hardware changes.
Database upgrade is exactly what we will test in this blog post, by simulating execution of a SQL statement under the 9iR2 optimizer and then under 11gR2. Note that the same strategy can be applied to evaluate any initialization parameter or statistics change (for example with pending statistics).
The test database for this blog post is Oracle Enterprise Edition 11.2.0.2.0 running on Red Hat Enterprise Linux Server release 5.6 (Tikanga).
SQL Performance Analyzer testing
Just to show one limitation of SPA (but not of SQL Tuning Sets), I'm choosing a sql_id that has multiple plans, using the following query:
SQL> SELECT * FROM (SELECT sql_id,COUNT(DISTINCT plan_hash_value) FROM v$sql a
WHERE EXISTS (SELECT sql_id, COUNT(*) FROM v$sql b WHERE a.sql_id=b.sql_id GROUP BY sql_id HAVING COUNT(*)>1)
GROUP BY sql_id ORDER BY 2 DESC) WHERE rownum<=10;
SQL_ID COUNT(DISTINCTPLAN_HASH_VALUE)
------------- ------------------------------
94rn6s4ba24wn 5
9j8p0n3104sdg 4
gv9varx8zfkq4 4
9wbvj5pud8t2f 4
20pm94kcsc31s 3
afrmyr507wu03 3
0tnssv00b0nyr 2
1ds1kuqzkr7kn 2
18hzyzu9945g4 2
1290sa2814wt2 2
10 rows selected.
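The creation of STS01 itself is not shown in this excerpt; presumably a call along these lines was issued first (the description matches what DBA_SQLSET reports a little further down):

EXEC DBMS_SQLTUNE.CREATE_SQLSET(sqlset_name => 'STS01', description => 'STS for sql_id 94rn6s4ba24wn');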
Let's pick the first one and load its five (sql_id, plan_hash_value) pairs into the STS:
DECLARE
cursor1 DBMS_SQLTUNE.SQLSET_CURSOR;
BEGIN
OPEN cursor1 FOR SELECT VALUE(p)
FROM TABLE(DBMS_SQLTUNE.SELECT_CURSOR_CACHE('sql_id = ''94rn6s4ba24wn''')) p;
DBMS_SQLTUNE.LOAD_SQLSET(sqlset_name => 'STS01', populate_cursor => cursor1);
END;
/
PL/SQL procedure successfully completed.
We can easily see that one SQL statement has been added to our STS, even though this sql_id has five distinct execution plans:
SQL> SET lines 200
SQL> col description FOR a30
SQL> SELECT * FROM dba_sqlset;
ID NAME OWNER DESCRIPTION CREATED LAST_MODI STATEMENT_COUNT
---------- ------------------------------ ------------------------------ ------------------------------ --------- --------- ---------------
1 STS01 SYS STS for sql_id 94rn6s4ba24wn 17-NOV-11 17-NOV-11 1
Now let’s create a SPA task and associate the STS with it:
DECLARE
task_name VARCHAR2(64);
sts_task VARCHAR2(64);
BEGIN
task_name := 'Task01';
sts_task:=DBMS_SQLPA.CREATE_ANALYSIS_TASK(sqlset_name => 'STS01', task_name => task_name, description => 'Task for sql_id 94rn6s4ba24wn');
END;
/
PL/SQL procedure successfully completed.
When executing the task you have to decide, with the execution_type parameter, which kind of execution you want to perform. A standard SPA exercise is made of the following steps:
- Execute the task in TEST EXECUTE mode and generate a before-change task report.
- Change whatever you want on your database (upgrade, optimizer parameters, statistics, ...), execute the task again in TEST EXECUTE mode and generate an after-change task report.
- Execute the task in COMPARE PERFORMANCE mode and generate a compare-performance task report.
Just to show one limitation of SPA, I'll first use the CONVERT SQLSET mode and generate the report:
SQL> EXEC DBMS_SQLPA.RESET_ANALYSIS_TASK('Task01');
PL/SQL procedure successfully completed.
SQL> EXEC DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(task_name => 'Task01', execution_type => 'CONVERT SQLSET', execution_name => 'convert_sqlset');
PL/SQL procedure successfully completed.
You can check that the task completed successfully with:
SQL> SELECT execution_name, execution_type, TO_CHAR(execution_start,'dd-mon-yyyy hh24:mi:ss') AS execution_start,
TO_CHAR(execution_end,'dd-mon-yyyy hh24:mi:ss') AS execution_end, status
FROM dba_advisor_executions
WHERE task_name='Task01';
EXECUTION_NAME EXECUTION_TYPE EXECUTION_START EXECUTION_END STATUS
-------------------- ------------------------------ ----------------------------- ----------------------------- -----------
convert_sqlset CONVERT SQLSET 29-nov-2011 15:01:26 29-nov-2011 15:01:27 COMPLETED
Generate the report with a SQL statement like this:
SQL> SET LONG 999999 longchunksize 100000 linesize 200 head off feedback off echo off
SQL> spool task01_convert_sqlset.html
SQL> SELECT DBMS_SQLPA.REPORT_ANALYSIS_TASK('Task01', 'HTML', 'ALL', 'ALL') FROM dual;
SQL> spool off
We can see in the task01_convert_sqlset.html result file that with this particular execution type all of the different plans are displayed, whereas with a COMPARE PERFORMANCE objective SPA picks only one plan (the one it parses with the information currently available). In any case, comparing every plan of the same sql_id would produce very complex reports that would probably not be usable.
The EXPLAIN PLAN execution type does not add much value either, as it only generates the explain plan for every SQL statement of the STS; those plans are also displayed in TEST EXECUTE reports.
For my test case, to simulate a database upgrade from 9iR2 to 11gR2, I first set the optimizer features to the 9iR2 optimizer, execute the task in TEST EXECUTE mode and generate a report; I then set the optimizer parameter back to its default value, execute the task again in TEST EXECUTE mode and generate a report; finally I execute the task in COMPARE PERFORMANCE mode and generate the comparison report (the most interesting one).
We set the SQL*Plus environment for report generation, check the optimizer value before changing it, and reset the task before starting:
SQL> SET LONG 999999 longchunksize 100000 linesize 200 head off feedback off echo off
SQL> show parameter optimizer_features_enable
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
optimizer_features_enable string 11.2.0.2
SQL> EXEC DBMS_SQLPA.RESET_ANALYSIS_TASK('Task01');
PL/SQL procedure successfully completed.
We set the optimizer to 9.2.0 to simulate the database upgrade situation and execute the task:
SQL> ALTER SESSION SET optimizer_features_enable='9.2.0';
Session altered.
SQL> EXEC DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(task_name => 'Task01', execution_type => 'TEST EXECUTE', execution_name => 'before_change');
PL/SQL procedure successfully completed.
SQL> spool task01_before_change.html
SQL> SELECT DBMS_SQLPA.REPORT_ANALYSIS_TASK('Task01', 'HTML', 'ALL', 'ALL') FROM dual;
SQL> spool off
We set the optimizer back to its default value and execute the task again (note that the plan chosen is one of the five initial plans associated with the query):
SQL> ALTER SESSION SET optimizer_features_enable='11.2.0.2';
Session altered.
SQL> EXEC DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(task_name => 'Task01', execution_type => 'TEST EXECUTE', execution_name => 'after_change');
PL/SQL procedure successfully completed.
SQL> spool task01_after_change.html
SQL> SELECT DBMS_SQLPA.REPORT_ANALYSIS_TASK('Task01', 'HTML', 'ALL', 'ALL') FROM dual;
SQL> spool off
We can now execute the task a third time and generate the compare-performance report based on the two previous TEST EXECUTE runs:
SQL> EXEC DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(task_name => 'Task01', execution_type => 'COMPARE PERFORMANCE', execution_name => 'compare_performance');
PL/SQL procedure successfully completed.
SQL> spool task01_compare_performance.html
SQL> SELECT DBMS_SQLPA.REPORT_ANALYSIS_TASK('Task01', 'HTML', 'ALL', 'ALL') FROM dual;
SQL> spool off
SQL Performance Analyzer result
First let's check that everything executed successfully:
SQL> SELECT execution_name, execution_type, TO_CHAR(execution_start,'dd-mon-yyyy hh24:mi:ss') AS execution_start,
TO_CHAR(execution_end,'dd-mon-yyyy hh24:mi:ss') AS execution_end, advisor_name, status
FROM dba_advisor_executions
WHERE task_name='Task01';
EXECUTION_NAME EXECUTION_TYPE EXECUTION_START EXECUTION_END STATUS
------------------------------ ------------------------------ ----------------------------- ----------------------------- -----------
after_change TEST EXECUTE 18-nov-2011 16:18:01 18-nov-2011 16:18:18 COMPLETED
before_change TEST EXECUTE 18-nov-2011 16:16:39 18-nov-2011 16:17:11 COMPLETED
compare_performance COMPARE PERFORMANCE 18-nov-2011 16:18:54 18-nov-2011 16:18:57 COMPLETED
SQL> SELECT last_execution,execution_type,TO_CHAR(execution_start,'dd-mon-yyyy hh24:mi:ss') AS execution_start,
TO_CHAR(execution_end,'dd-mon-yyyy hh24:mi:ss') AS execution_end,status
FROM dba_advisor_tasks
WHERE task_name='Task01';
LAST_EXECUTION EXECUTION_TYPE EXECUTION_START EXECUTION_END STATUS
------------------------------ ------------------------------ ----------------------------- ----------------------------- -----------
compare_performance COMPARE PERFORMANCE 18-nov-2011 16:18:54 18-nov-2011 16:18:57 COMPLETED
We can get a quick overview of the plan comparison with:
SQL> col EXECUTION_NAME FOR a15
SQL> SELECT execution_name, plan_hash_value, parse_time, elapsed_time, cpu_time,user_io_time,buffer_gets,disk_reads,direct_writes,
physical_read_bytes,physical_write_bytes,rows_processed
FROM dba_advisor_sqlstats
WHERE task_name='Task01';
EXECUTION_NAME PLAN_HASH_VALUE PARSE_TIME ELAPSED_TIME CPU_TIME USER_IO_TIME BUFFER_GETS DISK_READS DIRECT_WRITES PHYSICAL_READ_BYTES PHYSICAL_WRITE_BYTES ROWS_PROCESSED
--------------- --------------- ---------- ------------ ---------- ------------ ----------- ---------- ------------- ------------------- -------------------- --------------
before_change 1328242299 40664 8630688 1831720 808827 135782 117208 0 960167936 0 60
after_change 2949292326 167884 1808470 988850 340845 57450 38114 0 312229888 0 60
We can also generate a text version of the two execution plans, but again the HTML version is much more readable for a normal human being:
SQL> col PLAN FOR a140
SQL> SET pages 500
SQL> SELECT p.plan_id, RPAD('(' || p.ID || ' ' || NVL(p.parent_id,'0') || ')',8) || '|' ||
RPAD(LPAD (' ', 2*p.DEPTH) || p.operation || ' ' || p.options,40,'.') ||
NVL2(p.object_owner||p.object_name, '(' || p.object_owner|| '.' || p.object_name || ') ', '') ||
'Cost:' || p.COST || ' ' || NVL2(p.bytes||p.CARDINALITY,'(' || p.bytes || ' bytes, ' || p.CARDINALITY || ' rows)','') || ' ' ||
NVL2(p.partition_id || p.partition_start || p.partition_stop,'PId:' || p.partition_id || ' PStart:' ||
p.partition_start || ' PStop:' || p.partition_stop,'') ||
'io cost=' || p.io_cost || ',cpu_cost=' || p.cpu_cost AS PLAN
FROM dba_advisor_sqlplans p
WHERE task_name='Task01'
ORDER BY p.plan_id, p.id, p.parent_id;
PLAN_ID PLAN
---------- --------------------------------------------------------------------------------------------------------------------------------------------
89713 (0 0) |SELECT STATEMENT .......................COST:11331 (207480 bytes, 1064 ROWS) io COST=11331,cpu_cost=
89713 (1 0) | SORT GROUP BY.........................COST:11331 (207480 bytes, 1064 ROWS) io COST=11331,cpu_cost=
89713 (2 1) | FILTER .............................COST: io COST=,cpu_cost=
89713 (3 2) | HASH JOIN ........................COST:11294 (207480 bytes, 1064 ROWS) io COST=11294,cpu_cost=
89713 (4 3) | TABLE ACCESS BY INDEX ROWID.....(GSNX.OM_SHIPMENT_LINE) COST:3 (130 bytes, 5 ROWS) io COST=3,cpu_cost=
89713 (5 4) | NESTED LOOPS .................COST:905 (166423 bytes, 1021 ROWS) io COST=905,cpu_cost=
89713 (6 5) | HASH JOIN ..................COST:275 (28770 bytes, 210 ROWS) io COST=275,cpu_cost=
89713 (7 6) | TABLE ACCESS FULL.........(GSNX.CORE_PARTY) COST:3 (4932 bytes, 274 ROWS) io COST=3,cpu_cost=
89713 (8 6) | HASH JOIN ................COST:271 (24990 bytes, 210 ROWS) io COST=271,cpu_cost=
89713 (9 8) | TABLE ACCESS FULL.......(GSNX.CORE_PARTY) COST:3 (4932 bytes, 274 ROWS) io COST=3,cpu_cost=
89713 (10 8) | TABLE ACCESS FULL.......(GSNX.OM_SHIPMENT) COST:267 (21210 bytes, 210 ROWS) io COST=267,cpu_cost=
89713 (11 5) | INDEX RANGE SCAN............(GSNX.OM_SHIPMENT_LINE_N1) COST:2 ( bytes, 6 ROWS) io COST=2,cpu_cost=
89713 (12 3) | VIEW ...........................(SYS.VW_NSO_1) COST:10385 (637184 bytes, 19912 ROWS) io COST=10385,cpu_cost=
89713 (13 12) | SORT UNIQUE...................COST:10385 (423284 bytes, 19912 ROWS) io COST=8900,cpu_cost=
89713 (14 13) | UNION-ALL ..................COST: io COST=,cpu_cost=
89713 (15 14) | SORT UNIQUE...............COST:8900 (190 bytes, 2 ROWS) io COST=8900,cpu_cost=
89713 (16 15) | FILTER .................COST: io COST=,cpu_cost=
89713 (17 16) | SORT GROUP BY.........COST:8900 (190 bytes, 2 ROWS) io COST=8900,cpu_cost=
89713 (18 17) | NESTED LOOPS .......COST:8892 (2755 bytes, 29 ROWS) io COST=8892,cpu_cost=
89713 (19 18) | HASH JOIN ........COST:8842 (1975 bytes, 25 ROWS) io COST=8842,cpu_cost=
89713 (20 19) | TABLE ACCESS FUL(GSNX.MFG_WIP) COST:8808 (166191 bytes, 5361 ROWS) io COST=8808,cpu_cost=
89713 (21 19) | INDEX FAST FULL (GSNX.OM_SHIPMENT_N2) COST:27 (1008432 bytes, 21009 ROWS) io COST=27,cpu_cost=
89713 (22 18) | INDEX RANGE SCAN..(GSNX.MFG_WIP_LOT_QTY_N1) COST:2 (16 bytes, 1 ROWS) io COST=2,cpu_cost=
89713 (23 16) | SORT AGGREGATE........COST: (9 bytes, 1 ROWS) io COST=,cpu_cost=
89713 (24 23) | INDEX RANGE SCAN....(GSNX.OM_SHIPMENT_LINE_N1) COST:3 (360 bytes, 40 ROWS) io COST=3,cpu_cost=
89713 (25 14) | MINUS ....................COST: io COST=,cpu_cost=
89713 (26 25) | SORT UNIQUE.............COST: (219010 bytes, 19910 ROWS) io COST=,cpu_cost=
89713 (27 26) | INDEX FAST FULL SCAN..(GSNX.OM_SHIPMENT_UK1) COST:19 (231099 bytes, 21009 ROWS) io COST=19,cpu_cost=
89713 (28 25) | SORT UNIQUE.............COST: (204084 bytes, 22676 ROWS) io COST=,cpu_cost=
89713 (29 28) | INDEX FAST FULL SCAN..(GSNX.MFG_WIP_N5) COST:1296 (518760 bytes, 57640 ROWS) io COST=1296,cpu_cost=
89713 (30 2) | FILTER ...........................COST: io COST=,cpu_cost=
89713 (31 30) | TABLE ACCESS BY INDEX ROWID.....(GSNX.MFG_WIP) COST:4 (19 bytes, 1 ROWS) io COST=4,cpu_cost=
89713 (32 31) | INDEX RANGE SCAN..............(GSNX.MFG_WIP_N5) COST:3 ( bytes, 1 ROWS) io COST=3,cpu_cost=
89714 (0 0) |SELECT STATEMENT .......................COST:19324 (252720 bytes, 1296 ROWS) io COST=19260,cpu_cost=663547063
89714 (1 0) | SORT GROUP BY.........................COST:19324 (252720 bytes, 1296 ROWS) io COST=19260,cpu_cost=663547063
89714 (2 1) | FILTER .............................COST: io COST=,cpu_cost=
89714 (3 2) | HASH JOIN ........................COST:16730 (252720 bytes, 1296 ROWS) io COST=16668,cpu_cost=633246741
89714 (4 3) | NESTED LOOPS ...................COST: io COST=,cpu_cost=
89714 (5 4) | NESTED LOOPS .................COST:1248 (166423 bytes, 1021 ROWS) io COST=1240,cpu_cost=81128743
89714 (6 5) | HASH JOIN ..................COST:617 (28770 bytes, 210 ROWS) io COST=610,cpu_cost=74949007
89714 (7 6) | VIEW .....................(GSNX.INDEX$_join$_004) COST:3 (4932 bytes, 274 ROWS) io COST=2,cpu_cost=5340263
89714 (8 7) | HASH JOIN ..............COST: io COST=,cpu_cost=
89714 (9 8) | INDEX FAST FULL SCAN..(GSNX.CORE_PARTY_PK) COST:1 (4932 bytes, 274 ROWS) io COST=1,cpu_cost=77402
89714 (10 8) | INDEX FAST FULL SCAN..(GSNX.CORE_PARTY_UK2) COST:1 (4932 bytes, 274 ROWS) io COST=1,cpu_cost=77402
89714 (11 6) | HASH JOIN ................COST:614 (24990 bytes, 210 ROWS) io COST=608,cpu_cost=64398724
89714 (12 11) | VIEW ...................(GSNX.INDEX$_join$_003) COST:3 (4932 bytes, 274 ROWS) io COST=2,cpu_cost=5340263
89714 (13 12) | HASH JOIN ............COST: io COST=,cpu_cost=
89714 (14 13) | INDEX FAST FULL SCAN(GSNX.CORE_PARTY_PK) COST:1 (4932 bytes, 274 ROWS) io COST=1,cpu_cost=77402
89714 (15 13) | INDEX FAST FULL SCAN(GSNX.CORE_PARTY_UK2) COST:1 (4932 bytes, 274 ROWS) io COST=1,cpu_cost=77402
89714 (16 11) | TABLE ACCESS FULL.......(GSNX.OM_SHIPMENT) COST:611 (21210 bytes, 210 ROWS) io COST=606,cpu_cost=53848440
89714 (17 5) | INDEX RANGE SCAN............(GSNX.OM_SHIPMENT_LINE_N1) COST:2 ( bytes, 6 ROWS) io COST=2,cpu_cost=16293
89714 (18 4) | TABLE ACCESS BY INDEX ROWID...(GSNX.OM_SHIPMENT_LINE) COST:3 (130 bytes, 5 ROWS) io COST=3,cpu_cost=29445
89714 (19 3) | VIEW ...........................(SYS.VW_NSO_1) COST:15481 (808672 bytes, 25271 ROWS) io COST=15428,cpu_cost=544289828
89714 (20 19) | HASH UNIQUE...................COST:15481 (685783 bytes, 25271 ROWS) io COST=12215,cpu_cost=193587837
89714 (21 20) | UNION-ALL ..................COST: io COST=,cpu_cost=
89714 (22 21) | HASH UNIQUE...............COST:12234 (262689 bytes, 5361 ROWS) io COST=12215,cpu_cost=193587837
89714 (23 22) | FILTER .................COST: io COST=,cpu_cost=
89714 (24 23) | HASH JOIN ............COST:12230 (262689 bytes, 5361 ROWS) io COST=12212,cpu_cost=180277196
89714 (25 24) | TABLE ACCESS BY INDE(GSNX.MFG_WIP) COST:12169 (128664 bytes, 5361 ROWS) io COST=12152,cpu_cost=167815964
89714 (26 25) | INDEX SKIP SCAN...(GSNX.MFG_WIP_N2) COST:647 ( bytes, 57640 ROWS) io COST=645,cpu_cost=16123961
89714 (27 24) | INDEX FAST FULL SCAN(GSNX.OM_SHIPMENT_N2) COST:60 (525225 bytes, 21009 ROWS) io COST=60,cpu_cost=4408262
89714 (28 23) | SORT AGGREGATE........COST: (9 bytes, 1 ROWS) io COST=,cpu_cost=
89714 (29 28) | INDEX RANGE SCAN....(GSNX.OM_SHIPMENT_LINE_N1) COST:3 (54 bytes, 6 ROWS) io COST=3,cpu_cost=22564
89714 (30 23) | SORT AGGREGATE........COST: (10 bytes, 1 ROWS) io COST=,cpu_cost=
89714 (31 30) | INDEX RANGE SCAN....(GSNX.MFG_WIP_LOT_QTY_N1) COST:3 (10 bytes, 1 ROWS) io COST=3,cpu_cost=21764
89714 (32 21) | MINUS ....................COST: io COST=,cpu_cost=
89714 (33 32) | SORT UNIQUE.............COST: (219010 bytes, 19910 ROWS) io COST=,cpu_cost=
89714 (34 33) | INDEX FAST FULL SCAN..(GSNX.OM_SHIPMENT_UK1) COST:42 (231099 bytes, 21009 ROWS) io COST=42,cpu_cost=3824304
89714 (35 32) | SORT UNIQUE.............COST: (204084 bytes, 22676 ROWS) io COST=,cpu_cost=
89714 (36 35) | INDEX FAST FULL SCAN..(GSNX.MFG_WIP_N5) COST:2972 (518760 bytes, 57640 ROWS) io COST=2946,cpu_cost=267746021
89714 (37 2) | FILTER ...........................COST: io COST=,cpu_cost=
89714 (38 37) | TABLE ACCESS BY INDEX ROWID.....(GSNX.MFG_WIP) COST:4 (19 bytes, 1 ROWS) io COST=4,cpu_cost=29946
89714 (39 38) | INDEX RANGE SCAN..............(GSNX.MFG_WIP_N5) COST:3 ( bytes, 1 ROWS) io COST=3,cpu_cost=21614
73 rows selected.
The view below shows the potential improvement (or regression; in the HTML report the colors are self-explanatory), 79% in our case, consistent with the drop in elapsed time from roughly 8.6 to 1.8 seconds seen in DBA_ADVISOR_SQLSTATS above. In other words, moving this database from 9iR2 to 11gR2 would give this particular query a huge gain with no effort at all. Obviously this single query is not representative of anything, and you would need to add many more statements to your STS to be in a good position before an upgrade. Of course RAT and workload capture could be a great help for such a task:
SQL> col message FOR a80
SQL> col FINDING_NAME FOR a30
SQL> col EXECUTION_NAME FOR a20
SQL> SELECT execution_name,finding_name,TYPE,impact,message FROM dba_advisor_findings WHERE task_name='Task01';
EXECUTION_NAME FINDING_NAME TYPE IMPACT MESSAGE
-------------------- ------------------------------ ----------- ---------- --------------------------------------------------------------------------------
compare_performance normal, SUCCESSFUL completion INFORMATION 0 The structure OF the SQL PLAN IN execution 'before_change' IS different than its
corresponding PLAN which IS stored IN the SQL Tuning SET.
compare_performance normal, SUCCESSFUL completion SYMPTOM 0 The structure OF the SQL execution PLAN has changed.
compare_performance normal, SUCCESSFUL completion INFORMATION 79.0460506 The performance OF this SQL has improved.
Finally, all the reports were generated.
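The report screenshots aside, a comparable report can be regenerated at any time with DBMS_SQLPA.REPORT_ANALYSIS_TASK; a minimal sketch, assuming the task name 'Task01' used in the query above and spooling the HTML output to /tmp:
SET LONG 1000000 LONGCHUNKSIZE 1000000 PAGESIZE 0 LINESIZE 1000 TRIMSPOOL ON
SPOOL /tmp/task01_compare_report.html
-- Generate the full report for the last (compare) execution of the task
SELECT DBMS_SQLPA.report_analysis_task(task_name => 'Task01', type => 'HTML', level => 'ALL')
FROM dual;
SPOOL OFF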
References
- Using SQL Performance Analyzer to Test SQL Performance Impact of 9i to 10gR2 Upgrade [ID 562899.1]
- SQL PERFORMANCE ANALYZER EXAMPLE [ID 455889.1]
##############ref 4
http://czmmiao.iteye.com/blog/1914603
The execution_type parameter of the EXECUTE_ANALYSIS_TASK procedure can take one of the following three values:
TEST_EXECUTE: Executes all SQL statements in the captured SQL workload. The database only executes the query portion of DML statements, in order to avoid adversely impacting user data or the database itself. The database generates both execution plans and execution statistics (for example, disk reads and buffer gets).
COMPARE_PERFORMANCE: Compares performance between two previous executions of the analysis task.
EXPLAIN PLAN: Generates SQL execution plans only, without actually executing the statements.
The EXECUTE_ANALYSIS_TASK procedure processes DML statements (only their query portion, as noted above) and ignores any DDL statements, to avoid unduly affecting the test data.
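For reference, the "before" run mentioned below uses the TEST_EXECUTE type; a minimal sketch, assuming the :v_task bind variable already holds the name of the SPA task created earlier in the walkthrough:
BEGIN
  -- Test-execute the captured workload on the unchanged system and store the
  -- results under the execution name 'before_change'.
  DBMS_SQLPA.execute_analysis_task(
    task_name      => :v_task,
    execution_type => 'test execute',
    execution_name => 'before_change');
END;
/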
Now that we have the "before" performance information, we need to make a change so we can test the "after" performance. For this example we will simply add an index to the test table on the OBJECT_ID column. In a new SQL*Plus session create the index using the following statements.
CONN spa_test_user/spa_test_user
CREATE INDEX my_objects_index_01 ON my_objects(object_id);
EXEC DBMS_STATS.gather_table_stats(USER, 'MY_OBJECTS', cascade => TRUE);
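As a quick sanity check (not part of the original walkthrough), you can confirm that the optimizer now sees the new index; a minimal sketch against the MY_OBJECTS table, with an arbitrary OBJECT_ID value:
-- Explain a representative query and display the plan; the new index
-- my_objects_index_01 should appear as an access path.
EXPLAIN PLAN FOR
SELECT COUNT(*) FROM my_objects WHERE object_id = 100;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);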
Now, we can return to our original session and test the performance after the database change. Once again use the EXECUTE_ANALYSIS_TASK procedure, naming the analysis task "after_change".
BEGIN
DBMS_SQLPA.execute_analysis_task(
task_name => :v_task,
execution_type => 'test execute',
execution_name => 'after_change');
END;
/
Once the before and after analysis tasks are complete, we must run a comparison analysis task. The following code explicitly names the analysis tasks to compare using name-value pairs in the EXECUTION_PARAMS parameter. If this is omitted, the latest two analysis runs are compared.
BEGIN
DBMS_SQLPA.execute_analysis_task(
task_name => :v_task,
execution_type => 'compare performance',
execution_params => dbms_advisor.arglist(
'execution_name1',
'before_change',
'execution_name2',
'after_change')
);
END;
/
With this final analysis run complete, we can check out the comparison report using the REPORT_ANALYSIS_TASK function. The function returns a CLOB containing the report in 'TEXT', 'XML' or 'HTML' format. Its usage is shown below.
Note. Oracle 11gR2 also includes an 'ACTIVE' format that looks more like the Enterprise Manager output.
SET PAGESIZE 0
SET LINESIZE 1000
SET LONG 1000000
SET LONGCHUNKSIZE 1000000
SET TRIMSPOOL ON
SET TRIM ON
SPOOL /tmp/execute_comparison_report.htm
SELECT DBMS_SQLPA.report_analysis_task(:v_task, 'HTML', 'ALL')
FROM dual;
SPOOL OFF
The same approach works for each of the available report formats: TEXT, HTML, XML and ACTIVE.
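For instance, the ACTIVE format (11gR2 onwards) only requires changing the format argument; a minimal sketch reusing the :v_task bind variable and the spool settings shown above:
SPOOL /tmp/execute_comparison_report_active.html
-- Same call as before, but requesting the interactive ACTIVE report format
SELECT DBMS_SQLPA.report_analysis_task(:v_task, 'ACTIVE', 'ALL')
FROM dual;
SPOOL OFF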
#####
I'm more the command line type of person. Once I've understood what's going on behind the curtains I certainly switch to the GUI-click-click tools. But in the case of Real Application Testing - even though the support via the OEM GUI is excellent - sometimes I prefer to run my procedures from the command line and check my reports in the browser.
Recently Thomas, a colleague from Oracle ACS Support, and I were asking ourselves about the different comparison metrics for SQL Performance Analyzer reporting. We scanned the documentation but found only examples, not a complete list. Then we asked a colleague, but thanks to OEM we got an incomplete list as well.
Finally Thomas dug it out - the metrics are stored in the dictionary, in the view V$SQLPA_METRIC:
SQL> SELECT metric_name FROM v$sqlpa_metric;
METRIC_NAME
-------------------------
PARSE_TIME
ELAPSED_TIME
CPU_TIME
USER_IO_TIME
BUFFER_GETS
DISK_READS
DIRECT_WRITES
OPTIMIZER_COST
IO_INTERCONNECT_BYTES
9 rows selected.
What do you do with these metrics now?
You can use them like this:
set timing on
begin
dbms_sqlpa.execute_analysis_task(
task_name=>'SPA_TASK_MR07PLP_11107_12102',
execution_name=>'Compare workload Elapsed',
execution_type=>'compare performance',
execution_params=>dbms_advisor.arglist(
'comparison_metric','elapsed_time',
'execution_name1','EXEC_SPA_TASK_MR07PLP_11107',
'execution_name2','TEST 11107 workload'),
execution_desc=>'Compare 11107 Workload on 12102 Elapsed');
end;
/
You can replace elapsed_time in my example with any of the comparison metrics listed in v$sqlpa_metric, for example as shown below.
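A minimal sketch of the same comparison driven by buffer gets instead of elapsed time (task and execution names are reused from the example above):
begin
  dbms_sqlpa.execute_analysis_task(
    task_name       => 'SPA_TASK_MR07PLP_11107_12102',
    execution_name  => 'Compare workload Buffer Gets',
    execution_type  => 'compare performance',
    execution_params=> dbms_advisor.arglist(
                         'comparison_metric','buffer_gets',
                         'execution_name1','EXEC_SPA_TASK_MR07PLP_11107',
                         'execution_name2','TEST 11107 workload'),
    execution_desc  => 'Compare 11107 Workload on 12102 Buffer Gets');
end;
/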
--Mike
##############ref 5
How to Load Queries into a SQL Tuning Set (STS)
SOLUTION
NOTE: The example provided below works successfully on 11g but may not work in 10g. In that case, list the parameter values positionally without parameter names, similar to the following:
select value(p) from table(dbms_sqltune.select_cursor_cache('sql_id =''fgtq4z4vb0xx5''',NULL,NULL,NULL,NULL,1,NULL,'ALL')) p;
Create a SQL Tuning Set:
EXEC dbms_sqltune.create_sqlset('mysts');
Load SQL into the STS
1. From Cursor Cache
1) To load a query with a specific sql_id
DECLARE
cur sys_refcursor;
BEGIN
open cur for
select value(p) from table(dbms_sqltune.select_cursor_cache('sql_id = ''fgtq4z4vb0xx5''')) p;
dbms_sqltune.load_sqlset('mysts', cur);
close cur;
END;
/
2) To load queries with a specific query string and more than 1,000 buffer_gets
DECLARE
cur sys_refcursor;
BEGIN
open cur for
select value(p) from table(dbms_sqltune.select_cursor_cache('sql_text like ''%querystring%'' and buffer_gets > 1000')) p;
dbms_sqltune.load_sqlset('mysts', cur);
close cur;
END;
/
2. From AWR Snapshots
1) Find the two snapshots you want
select snap_id, begin_interval_time, end_interval_time from dba_hist_snapshot order by 1;
2) To load all the queries between two snapshots
DECLARE
cur sys_refcursor;
BEGIN
open cur for
select value(p) from table(dbms_sqltune.select_workload_repository(begin_snap => 2245, end_snap => 2248)) p;
dbms_sqltune.load_sqlset('mysts', cur);
close cur;
END;
/
3) To load a query with a specific sql_id and plan_hash_value
DECLARE
cur sys_refcursor;
BEGIN
open cur for
select value(p) from table(dbms_sqltune.select_workload_repository(begin_snap => 2245, end_snap => 2248, basic_filter => 'sql_id = ''fgtq4z4vb0xx5'' and plan_hash_value = 431456802')) p;
dbms_sqltune.load_sqlset('mysts', cur);
close cur;
END;
/
NOTE: "basic_filter" is the SQL predicate to filter the SQL from the cursor cache defined on attributes of the SQLSET_ROW. If basic_filter is not set by the caller, the subprogram captures only statements of the type CREATE TABLE, INSERT, SELECT, UPDATE, DELETE, and MERGE.
CREATE TYPE sqlset_row AS object (
sql_id VARCHAR(13),
force_matching_signature NUMBER,
sql_text CLOB,
object_list sql_objects,
bind_data RAW(2000),
parsing_schema_name VARCHAR2(30),
module VARCHAR2(48),
action VARCHAR2(32),
elapsed_time NUMBER,
cpu_time NUMBER,
buffer_gets NUMBER,
disk_reads NUMBER,
direct_writes NUMBER,
rows_processed NUMBER,
fetches NUMBER,
executions NUMBER,
end_of_fetch_count NUMBER,
optimizer_cost NUMBER,
optimizer_env RAW(2000),
priority NUMBER,
command_type NUMBER,
first_load_time VARCHAR2(19),
stat_period NUMBER,
active_stat_period NUMBER,
other CLOB,
plan_hash_value NUMBER,
sql_plan sql_plan_table_type,
bind_list sql_binds) ;
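Since basic_filter can reference any of the SQLSET_ROW attributes above, several of them can be combined in one predicate; a minimal sketch (the schema name and thresholds are arbitrary illustrations):
DECLARE
  cur sys_refcursor;
BEGIN
  open cur for
    -- Filter on several SQLSET_ROW attributes at once: schema, elapsed time and executions
    select value(p)
    from table(dbms_sqltune.select_cursor_cache(
                 'parsing_schema_name = ''HR'' and elapsed_time > 1000000 and executions > 10')) p;
  dbms_sqltune.load_sqlset('mysts', cur);
  close cur;
END;
/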
3. From an AWR Baseline
1) Find the baseline you want to load
select baseline_name, start_snap_id, end_snap_id from dba_hist_baseline;
2) Load queries from the baseline
DECLARE
cur sys_refcursor;
BEGIN
open cur for
select value(p) from table(dbms_sqltune.select_workload_repository('MY_BASELINE')) p;
dbms_sqltune.load_sqlset('mysts', cur);
close cur;
END;
/
4. From another SQL Tuning Set
1) Find the SQL Tuning Set you want to load
select name, owner, statement_count from dba_sqlset;
2) Load queries from the SQL Tuning Set
DECLARE
cur sys_refcursor;
BEGIN
open cur for
select value(p) from table(dbms_sqltune.select_sqlset(sqlset_name => 'HR_STS', sqlset_owner => 'HR', basic_filter => 'sql_text like ''%querystring%''')) p;
dbms_sqltune.load_sqlset('mysts', cur);
close cur;
END;
/
5. From 10046 trace files (11g+)
1) Loading into a SQL Tuning Set in the same database that it originated from
i. Create a directory object for the directory where the trace files are.
create directory my_dir as '/home/oracle/trace';
ii. Load the queries
DECLARE
cur sys_refcursor;
BEGIN
open cur for
select value(p) from table(dbms_sqltune.select_sql_trace(directory=>'MY_DIR', file_name=>'%.trc')) p;
dbms_sqltune.load_sqlset('mysts', cur);
close cur;
END;
/
2) Loading into a SQL Tuning Set in a different database
i. Create a mapping table from the database where the trace files were captured.
create table mapping as
select object_id id, owner, substr(object_name, 1, 30) name
from dba_objects
union all
select user_id id, username owner, null name
from dba_users;
ii. Copy the trace files into a directory on the target server and create a directory object for that directory, then import the mapping table into the target database (one way of doing this is sketched after step iii below).
create directory my_dir as '/home/oracle/trace';
iii. Specify the mapping table when loading the queries.
DECLARE
cur sys_refcursor;
BEGIN
open cur for
select value(p) from table(dbms_sqltune.select_sql_trace(directory=>'MY_DIR', file_name=>'%.trc', mapping_table_name=> 'MAPPING', mapping_table_owner=> 'HR')) p;
dbms_sqltune.load_sqlset('mysts', cur);
close cur;
END;
/
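A minimal sketch of one way to import the mapping table mentioned in step ii (the database link name source_db is purely illustrative; a Data Pump export/import of the MAPPING table would work equally well):
-- On the target database: pull the mapping table from the source over a database link
-- (the link name "source_db" is an assumption for illustration).
create table mapping as
  select * from mapping@source_db;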
-----spa
ORA-13757: "SQL Tuning Set" "OCMHU_STS" owned by user "SYS" is active.
- Workaround:
http://blog.itpub.net/14359/viewspace-1253599/
STEP 1:
select owner,description, created,last_modified,TASK_NAME
from DBA_ADVISOR_TASKS where owner='DBMGR';
STEP 2:
(7) You can drop the SQL tuning set by issuing the following command:
SQL>BEGIN
DBMS_SQLTUNE.DROP_SQLSET(
sqlset_name => 'STS_SQLSET'
);
END;
/
PL/SQL procedure successfully completed.
STEP 3:
(5) Check the number of dependent tasks using the query below.
SQL> SELECT count(*)
FROM wri$_sqlset_definitions d, wri$_sqlset_references r
WHERE d.name = 'STS_SQLSET'
AND r.sqlset_id = d.id;
COUNT(*)
----------
0
In the normal case the output of this query should be "0". Here it is not, because the SQL tuning set is still referenced by advisory task "SPA01", which has not been deleted from the database.
(6) You need to manually edit the table and remove the entry that contains the information about the SQL tuning task:
conn / as sysdba
delete from wri$_sqlset_references
where sqlset_id in (select id
from wri$_sqlset_definitions
where name in ('SPA01'));
commit;
This command updates the Oracle dictionary table and removes the dependency from the SQL tuning set.
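A less intrusive alternative, assuming the blocking reference really is the SQL Performance Analyzer task 'SPA01', is to drop that task itself, which removes the reference without editing the WRI$ tables:
-- Dropping the SPA task removes its entry from wri$_sqlset_references,
-- but also deletes all results stored for that task.
BEGIN
  DBMS_SQLPA.DROP_ANALYSIS_TASK(task_name => 'SPA01');
END;
/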
STEP 4:
(3) To check all the dependent advisory tasks attached to the SQL tuning set, issue the following commands:
SQL> SET WRAP OFF
SQL> SET LINE 140
SQL> COL NAME FOR A15
SQL> COL DESCRIPTION FOR A50 WRAPPED
SQL> select description, created, owner from DBA_SQLSET_REFERENCES where sqlset_name ='STS_SQLSET';
DESCRIPTION                                        CREATED             OWNER
-------------------------------------------------- ------------------- ----------
created by: SQL Performance Analyzer - task: SPA01 2014-08-05 16:22:27 SYS
From the above output we can see that task "SPA01" depends on SQL tuning set "OCMHU_STS". So, if you want to drop SQL tuning set "OCMHU_STS", you need to drop task "SPA01".
NOTE: Think before dropping a SQL tuning task; if you drop it, all the information related to SQL profiles, statistics and indexes tied to this advisory task will be deleted.
(4) To check the details of SQL Performance Analyzer task "SPA01", you can issue the following commands:
a. Normal output:
SQL> SET WRAP OFF
SQL> SET LINE 140
SQL> COL NAME FOR A15
SQL> COL OWNER FOR A10
SQL> COL DESCRIPTION FOR A50 WRAPPED
SQL> select owner,description, created,last_modified from DBA_ADVISOR_TASKS where task_name = 'TASK_10G';
OWNER      DESCRIPTION                                        CREATED             LAST_MODIFIED
---------- -------------------------------------------------- ------------------- -------------------
SYS                                                           2014-08-05 16:22:26 2014-08-05 17:25:12
If you get the output shown above, you can solve the issue by dropping the above-mentioned SQL advisory task. The command for dropping a SQL advisory task is:
SQL> execute dbms_sqltune.drop_tuning_task('TASK_10G');
PL/SQL procedure successfully completed.
b. Issue-based output: