Phoenix Secondary Indexes in Practice

1. Reference blog posts

Basic Phoenix installation, usage, and feature overview:

https://www.cnblogs.com/kekukekro/p/6339587.html

Detailed comparison and testing of Phoenix global and local indexes:

https://blog.csdn.net/dante_003/article/details/76439021

Detailed usage of Phoenix indexes:

http://www.cnblogs.com/haoxinyue/p/6724365.html

Consistency, transactions, and index tuning:

http://www.cnblogs.com/haoxinyue/p/6747948.html

Phoenix index usage, forcing an index, viewing the execution plan:

https://blog.csdn.net/liyong1115/article/details/70332102

 

 

 

 

 

2. Phoenix requires a Python 2.7 environment. If a component is missing after installation, install it with the command below.

yum install python-argparse

scan "logs:rad",{LIMIT=>15}

 

3. Install location (for installation steps, see my earlier blog posts)

192.168.180.225
cd /usr/local/apps/phoenix/bin/
./sqlline.py 192.168.180.228:2181

 

 

4. Important notes:

1. When creating tables from the command line, table and column names that are not the default upper case must be wrapped in double quotes ("").
2. Table names should be upper case; otherwise various components hit bugs with lowercase names.
3. The original HBase table name is typically of the form logs:rad, where the part before the colon is the namespace. Phoenix does not allow a colon in a table name. Phoenix does have its own namespace concept, but it must be enabled on both the client and the server before it can be used.
4. Local and global indexes can both be built asynchronously, and the procedure is the same.
5. Tables are either mutable or immutable (immutable meaning rows can only be added, never modified); when the table type is switched, its global and local indexes become mutable or immutable along with it.
6. Global indexes suit read-heavy, write-light workloads; local indexes suit write-heavy, read-light workloads. Immutable indexes are better optimized and relatively faster.
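Notes 1 and 2 both come down to the standard SQL identifier rule that Phoenix follows: unquoted names are folded to upper case, while double-quoted names keep their exact case. A minimal sketch of that rule (illustration only, not Phoenix code):

```python
def normalize_identifier(name: str) -> str:
    """Mimic SQL identifier handling as Phoenix applies it: double-quoted
    names keep their exact case; unquoted names are folded to upper case."""
    if len(name) >= 2 and name.startswith('"') and name.endswith('"'):
        return name[1:-1]   # quoted: case preserved
    return name.upper()     # unquoted: upper-cased

# Why a lowercase HBase qualifier must be quoted in Phoenix SQL:
print(normalize_identifier('deviceIp'))    # DEVICEIP -> does not match the HBase qualifier
print(normalize_identifier('"deviceIp"'))  # deviceIp -> matches the HBase qualifier exactly
```

This is why `select deviceIp ...` fails against a column stored as "deviceIp", and why lowercase table names confuse tools that assume the upper-cased form.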

 

 

5. ========= Pitfall log ======================================================================

 

5.1. Create the original table
CREATE TABLE "logsrad" (
id VARCHAR NOT NULL PRIMARY KEY ,
"info"."receiveTime" VARCHAR ,
"info"."sourceIp" VARCHAR ,
"info"."destinationIp" VARCHAR ,
"info"."destinationPort" VARCHAR ,
"info"."natIp" VARCHAR ,
"info"."deviceIp" VARCHAR ,
"info"."alarmLevel" VARCHAR ,
"info"."startTime" VARCHAR ,
"info"."endTime" VARCHAR ,
"info"."interfaceIp" VARCHAR ,
"info"."protocol" VARCHAR ,
"info"."natType" VARCHAR ,
"info"."messageBytes" VARCHAR
);

 

5.2. Mutable vs. immutable tables


If you already have a table and want to turn its immutable indexes into mutable ones, use:
alter table "logsrad" set IMMUTABLE_ROWS = false;

To switch it to an immutable table:
alter table "logsrad" set IMMUTABLE_ROWS = true;
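IMMUTABLE_ROWS declares a contract rather than enforcing one: rows in the table are written once and never updated, which lets Phoenix use a cheaper index-maintenance path. Phoenix does not verify the contract, so updating rows in an "immutable" table silently corrupts its indexes. A toy model of the contract (illustration only, not Phoenix internals):

```python
class ImmutableTable:
    """Toy model of IMMUTABLE_ROWS = true: rows may be inserted once,
    but overwriting an existing row key is rejected."""
    def __init__(self):
        self.rows = {}

    def upsert(self, key, value):
        if key in self.rows:
            raise ValueError("immutable table: row %r already exists" % (key,))
        self.rows[key] = value

t = ImmutableTable()
t.upsert("row1", {"sourceIp": "46.234.125.89"})
# A second t.upsert("row1", ...) raises: rows can only be added, never changed.
```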

#CREATE LOCAL INDEX MYINDEX ON "logsrad"("destinationIp");

 

5.3. Example: creating an index asynchronously

# First, create the index definition in Phoenix:
create index car_index_datehphm on "car"("f1"."date","f1"."hphm") include ("f1"."coorid","f1"."cx","f1"."ys") async;
# The CREATE INDEX statement above adds ASYNC, so the index is built asynchronously. f1 is the original HBase column family name (this table was mapped from an existing HBase table), so "f1"."date" denotes a single column. INCLUDE is explained later.

# Then launch the MapReduce job that builds the index in bulk:
${HBASE_HOME}/bin/hbase org.apache.phoenix.mapreduce.index.IndexTool \
--data-table "car" --index-table CAR_INDEX_DATEHPHM \
--output-path ASYNC_IDX_HFILES
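What INCLUDE buys can be sketched with a toy model: a global index is itself a table keyed by the indexed columns, and INCLUDE copies extra ("covered") columns into it, so a query that touches only indexed plus included columns is answered from the index alone, never reading the data table. The sketch below is purely illustrative, not Phoenix internals; the sample rows are made up:

```python
# Toy covered index: key = indexed columns, payload = included columns.
data_table = {
    "row1": {"date": "2018-01-01", "hphm": "A123", "coorid": "c1", "cx": "x", "ys": "blue", "other": "o"},
    "row2": {"date": "2018-01-02", "hphm": "B456", "coorid": "c2", "cx": "y", "ys": "red",  "other": "p"},
}

INDEXED  = ("date", "hphm")
INCLUDED = ("coorid", "cx", "ys")

# Build the index table (conceptually what the async IndexTool MR job does in bulk).
index_table = {}
for rowkey, row in data_table.items():
    key = tuple(row[c] for c in INDEXED)
    index_table[key] = {c: row[c] for c in INCLUDED}

# A query on (date, hphm) selecting only included columns never reads data_table:
hit = index_table[("2018-01-02", "B456")]
print(hit["ys"])  # red -- served entirely from the index (a "covered" query)
```

A column that is not in INCLUDE (like "other" above) is absent from the index, so selecting it forces Phoenix back to the data table.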

 

5.4. Local index

CREATE LOCAL INDEX INDEX_LOGSRAD_DESIP ON "logsrad"("info"."destinationIp") async;

cd /opt/cloudera/parcels/CDH-5.11.1-1.cdh5.11.1.p0.4/lib/hbase/bin/

# Run the MR job that builds the index asynchronously:
hbase org.apache.phoenix.mapreduce.index.IndexTool --data-table "logsrad" --index-table INDEX_LOGSRAD_DESIP --output-path ASYNC_IDX_HFILES


DROP INDEX MYINDEX ON "logsrad" ;

DROP INDEX INDEX_LOGSRAD_DESIP ON "logsrad" ;


count 'INDEX_LOGSRAD_DESIP'


5.5. Global index
CREATE INDEX INDEX_LOGSRAD_SOURCEIP ON "logsrad"("info"."sourceIp" DESC) include("info"."deviceIp","info"."natType") async;

cd /opt/cloudera/parcels/CDH-5.11.1-1.cdh5.11.1.p0.4/lib/hbase/bin/

# Run the MR job that builds the index asynchronously:


hbase org.apache.phoenix.mapreduce.index.IndexTool --data-table "logsrad" --index-table INDEX_LOGSRAD_SOURCEIP --output-path ASYNC_IDX_HFILES


===================================================================================================================

 

 


6. ======== Success case: (data-record) immutable table, immutable indexes ========================================================================================================
The earlier error was probably because the org.apache.phoenix.mapreduce.index.IndexTool utility upper-cases table names by default, while our table name was lowercase, so the two never matched.
(The steps below confirm it: table names really should be upper case, otherwise the tool cannot tell which table an index belongs to and fails.)

Note 2 again: table names must be upper case, or various components hit bugs with lowercase names.

 

 

6.1. -------- Rename the original HBase table ------------------------------------------------------------
See my other blog posts, or search online.

 

6.2. Create the table in Phoenix (the name must match the HBase table name)

Note 1: when creating tables from the command line, table and column names that are not the default upper case must be wrapped in double quotes ("").
Note 3: the original HBase table name is typically of the form logs:rad, where the part before the colon is the namespace. Phoenix does not allow a colon in a table name; its own namespace support must be enabled on both client and server before it can be used.

CREATE TABLE LOGSRADL (
id VARCHAR NOT NULL PRIMARY KEY ,
"info"."receiveTime" VARCHAR ,
"info"."sourceIp" VARCHAR ,
"info"."destinationIp" VARCHAR ,
"info"."destinationPort" VARCHAR ,
"info"."natIp" VARCHAR ,
"info"."deviceIp" VARCHAR ,
"info"."alarmLevel" VARCHAR ,
"info"."startTime" VARCHAR ,
"info"."endTime" VARCHAR ,
"info"."interfaceIp" VARCHAR ,
"info"."protocol" VARCHAR ,
"info"."natType" VARCHAR ,
"info"."messageBytes" VARCHAR
);

 

6.3. ------- Global index ------------------------------------------------------------------------
Switch the table to immutable:
alter table LOGSRADL set IMMUTABLE_ROWS = true;
Global index:
CREATE INDEX INDEX_LOGSRADL_SOURCEIP ON LOGSRADL("info"."sourceIp" DESC) include("info"."deviceIp","info"."natType") async;

cd /opt/cloudera/parcels/CDH-5.11.1-1.cdh5.11.1.p0.4/lib/hbase/bin/
hbase org.apache.phoenix.mapreduce.index.IndexTool --data-table LOGSRADL --index-table INDEX_LOGSRADL_SOURCEIP --output-path ASYNC_IDX_HFILES

Test the index:

select * from LOGSRADL limit 10;
select "info"."deviceIp","info"."natType" from LOGSRADL limit 10;

select "info"."deviceIp","info"."natType" from LOGSRADL where "info"."sourceIp"='46.234.125.89' limit 10;

scan "LOGSRADL",{LIMIT=>15}

#DROP INDEX INDEX_LOGSRADL_SOURCEIP ON LOGSRADL;

 

6.4. ------- Local index -------------------------------------------------------------
Local index:
CREATE LOCAL INDEX INDEX_LOGSRADL_DESIP ON LOGSRADL("info"."destinationIp") async;

 

cd /opt/cloudera/parcels/CDH-5.11.1-1.cdh5.11.1.p0.4/lib/hbase/bin/
hbase org.apache.phoenix.mapreduce.index.IndexTool --data-table LOGSRADL --index-table INDEX_LOGSRADL_DESIP --output-path ASYNC_IDX_HFILES

Test the index:


select "info"."deviceIp","info"."natType" from LOGSRADL where "info"."destinationIp"='210.29.144.128' limit 10;

select * from LOGSRADL where "info"."destinationIp"='210.29.144.128' limit 10;
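Note 6's trade-off (global indexes for read-heavy workloads, local indexes for write-heavy ones) comes down to where the index rows live: a global index is a separate table, so every data write also triggers a write to the index table, possibly on another region server; a local index is stored alongside the data in the same region. A very rough cost model, purely illustrative (real costs depend on batching, region layout, and so on):

```python
# Toy write-cost comparison: writes per upserted row, by index kind.
def write_cost(num_rows, index_kind):
    """Rough model: a global index doubles the writes per row (data row +
    remote index row); a local index is maintained inside the same region."""
    writes_per_row = {"none": 1, "local": 1, "global": 2}[index_kind]
    return num_rows * writes_per_row

print(write_cost(1000, "global"))  # 2000 -- each write also updates the index table
print(write_cost(1000, "local"))   # 1000 -- index update stays in the data region
```

On the read side the trade flips: a global index answers covered queries from one contiguous table, while a local index read must consult every region.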

 

7. ======== Success case: (data-record) mutable table, mutable indexes ========================================================================================================

 

7.1. First, change the HBase configuration

From the official Phoenix documentation:

You will need to add the following parameters to hbase-site.xml on each region server:

<property>
  <name>hbase.regionserver.wal.codec</name>
  <value>org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec</value>
</property>

The above property enables custom WAL edits to be written, ensuring proper writing/replay of the index updates. This codec supports the usual host of WALEdit options, most notably WALEdit compression.

<property>
  <name>hbase.region.server.rpc.scheduler.factory.class</name>
  <value>org.apache.hadoop.hbase.ipc.PhoenixRpcSchedulerFactory</value>
  <description>Factory to create the Phoenix RPC Scheduler that uses separate queues for index and metadata updates</description>
</property>
<property>
  <name>hbase.rpc.controllerfactory.class</name>
  <value>org.apache.hadoop.hbase.ipc.controller.ServerRpcControllerFactory</value>
  <description>Factory to create the Phoenix RPC Scheduler that uses separate queues for index and metadata updates</description>
</property>

The above properties prevent deadlocks from occurring during index maintenance for global indexes (HBase 0.98.4+ and Phoenix 4.3.1+ only) by ensuring index updates are processed with a higher priority than data updates. It also prevents deadlocks by ensuring metadata rpc calls are processed with a higher priority than data rpc calls.

Settings in Cloudera Manager:

 

 


Switch the table to mutable (per note 5: tables are mutable or immutable, immutable meaning rows can only be added, never modified; when the table type is switched, its global and local indexes follow):
alter table LOGSRADL set IMMUTABLE_ROWS = false;
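With IMMUTABLE_ROWS = false, every UPSERT must also keep the index consistent: the stale index row is deleted and a new one is written, which is why the server-side WAL codec and RPC settings above are required. A toy sketch of that delete-then-insert maintenance (illustration only, not Phoenix internals):

```python
data_table  = {}
index_table = {}   # maps indexed value -> data-table row key

def upsert(rowkey, source_ip):
    """Mutable-table write path: keep the index in sync on every update."""
    old = data_table.get(rowkey)
    if old is not None:
        del index_table[old]          # delete the stale index row
    data_table[rowkey] = source_ip
    index_table[source_ip] = rowkey   # write the new index row

upsert("row1", "46.234.125.89")
upsert("row1", "10.0.0.1")            # update: the index follows the data
print(index_table)  # {'10.0.0.1': 'row1'} -- the old entry is gone
```

An immutable table skips the lookup-and-delete step entirely, which is the source of the performance edge noted earlier.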

 

7.2. ------- Global index ------------------------------------------------------------------------

 

Global index:
Index creation is the same as for the immutable index above.

Test the index:

select * from LOGSRADL limit 10;
select "info"."deviceIp","info"."natType" from LOGSRADL limit 10;

select "info"."deviceIp","info"."natType" from LOGSRADL where "info"."sourceIp"='46.234.125.89' limit 10;

scan "LOGSRADL",{LIMIT=>15}

#DROP INDEX INDEX_LOGSRADL_SOURCEIP ON LOGSRADL;

select "info"."sourceIp",count(*) from LOGSRADL group by "info"."sourceIp";

select "info"."deviceIp",count(*) from LOGSRADL group by "info"."deviceIp";

 

7.3. ------- Local index -------------------------------------------------------------
Local index:
Index creation is the same as for the immutable index above.

 

Test the index:


select "info"."deviceIp","info"."natType" from LOGSRADL where "info"."destinationIp"='210.29.144.128' limit 10;

select * from LOGSRADL where "info"."destinationIp"='210.29.144.128' limit 10;

 

select count("info"."natType") from LOGSRADL where "info"."destinationIp"='210.29.144.128' group by "info"."deviceIp";

select "info"."destinationIp",count(*) from LOGSRADL group by "info"."destinationIp";
