Installing and Using Phoenix on CDH 5.14.2

Tags (space-separated): big data platform construction, html


  • Part One: Installing and configuring Phoenix
  • Part Two: Basic Phoenix operations
  • Part Three: Bulk-loading data into HBase with Phoenix
  • Part Four: Exporting data from HBase to HDFS with Phoenix

Part One: Installing and configuring Phoenix

1.0 Introduction to Phoenix

Phoenix, whose name translates into Chinese as 鳳凰 (the mythical firebird), began as an open-source project at Salesforce. Salesforce's core business is CRM software, and enterprise software of that kind is heavy on database work, so it is not surprising that a database middleware grew out of it. Phoenix later became a top-level project of the Apache Software Foundation.

So what exactly is Phoenix? In essence, it is an open-source SQL engine, written in Java, that operates on HBase through the JDBC API.

1.1 Download the parcel packages required by CDH

Download URL:
  http://archive.cloudera.com/cloudera-labs/phoenix/parcels/latest/

  CLABS_PHOENIX-4.7.0-1.clabs_phoenix1.3.0.p0.000-el7.parcel

  CLABS_PHOENIX-4.7.0-1.clabs_phoenix1.3.0.p0.000-el7.parcel.sha1

  manifest.json
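
If you prefer to fetch the files from the command line, something along these lines should work (wget is assumed to be installed; the latest/ path simply mirrors the download URL above):

wget http://archive.cloudera.com/cloudera-labs/phoenix/parcels/latest/CLABS_PHOENIX-4.7.0-1.clabs_phoenix1.3.0.p0.000-el7.parcel
wget http://archive.cloudera.com/cloudera-labs/phoenix/parcels/latest/CLABS_PHOENIX-4.7.0-1.clabs_phoenix1.3.0.p0.000-el7.parcel.sha1
wget http://archive.cloudera.com/cloudera-labs/phoenix/parcels/latest/manifest.json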

1.2 Configure the httpd service

yum install -y httpd* 

service httpd start 

chkconfig httpd on 
mkdir -p /var/www/html/phoenix
mv CLABS_PHOENIX-4.7.0-1.clabs_phoenix1.3.0.p0.000-el7.parcel* /var/www/html/phoenix/

mv manifest.json /var/www/html/phoenix/

cd /var/www/html/phoenix/

# Cloudera Manager looks for a .sha checksum file next to the parcel, so rename the .sha1 file
mv CLABS_PHOENIX-4.7.0-1.clabs_phoenix1.3.0.p0.000-el7.parcel.sha1 CLABS_PHOENIX-4.7.0-1.clabs_phoenix1.3.0.p0.000-el7.parcel.sha
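
As an optional sanity check, the value stored in the .sha file should match the parcel's actual SHA-1 hash, and the repository should be reachable over httpd. A sketch, assuming the check is run on the httpd host itself and that directory indexing is enabled (the CentOS default):

sha1sum CLABS_PHOENIX-4.7.0-1.clabs_phoenix1.3.0.p0.000-el7.parcel
cat CLABS_PHOENIX-4.7.0-1.clabs_phoenix1.3.0.p0.000-el7.parcel.sha
curl -s http://localhost/phoenix/

The first two outputs should show the same hash, and the curl call should return Apache's listing of the three files.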

1.3 Configure Phoenix in CDH 5.14.2
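
This step is done in Cloudera Manager: open the Parcels page, edit the parcel settings, and add the local repository built above under Remote Parcel Repository URLs; the parcel can then be downloaded, distributed, and activated. Assuming the httpd service from step 1.2 runs on node-01.flyfish, the URL to add would look roughly like:

http://node-01.flyfish/phoenix/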

1.4 Deploy the client configuration for the HBase service and restart it (Cloudera Manager flags both actions as required once the Phoenix parcel has been activated)

1.5 Connecting with Phoenix

cd /opt/cloudera/parcels/CLABS_PHOENIX/bin

Log in to HBase using Phoenix:
./phoenix-sqlline.py

The ZooKeeper quorum has to be specified:

./phoenix-sqlline.py node-01.flyfish:2181:/hbase

!table
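
Besides the table listing, a few standard sqlline meta-commands are worth knowing; they belong to sqlline itself, so they should behave the same in this shell:

!tables
!describe SYSTEM.CATALOG
!quit

!tables lists all tables, !describe shows the columns of a table (SYSTEM.CATALOG is used here only because Phoenix creates it automatically on the first connection), and !quit leaves the shell. For Java clients, the same connection is expressed as the JDBC URL jdbc:phoenix:node-01.flyfish:2181:/hbase.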

Part Two: Basic Phoenix operations

2.1 Create a table with Phoenix

create table hbase_test
(
s1 varchar not null primary key,
s2 varchar,
s3 varchar,
s4 varchar
);

Log in through the HBase shell:

hbase shell
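
Inside the HBase shell, the table Phoenix just created can be inspected; note that Phoenix upper-cases unquoted identifiers, so the underlying HBase table is named HBASE_TEST. A sketch of the kind of commands the screenshots presumably show:

list
describe 'HBASE_TEST'
scan 'HBASE_TEST'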

upsert into hbase_test values('1','testname','testname1','testname2');

upsert into hbase_test values('2','tom','jack','harry');
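
To verify the two rows from Phoenix itself:

select * from hbase_test;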

Delete a row (deletion is by row key):
delete from hbase_test where s1='1';

upsert into hbase_test values('1','hadoop','hive','zookeeper');

upsert into hbase_test values('2','oozie','hue','spark');

Now test updating data. Note that Phoenix has no UPDATE statement; UPSERT is used instead. Likewise, inserting multiple rows takes one UPSERT per row; there is no way to list all of the rows after a single VALUES clause. (A column-list form of UPSERT is sketched after the example below.)

upsert into hbase_test values('1','zhangyy','hive','zookeeper');
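
UPSERT can also take an explicit column list, which updates only the named columns of an existing row and leaves the rest untouched. A small sketch against the table above (the value is made up for illustration):

upsert into hbase_test (s1, s2) values('2','flume');

Here only s2 of row '2' changes; s3 and s4 keep their previous values.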

Part Three: Bulk-loading data into HBase with Phoenix

3.1 Prepare the test file

Prepare the test file to be imported:
ls -ld ithbase.csv

head -n 1 ithbase.csv
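
The bulk-load tool expects one comma-separated line per row, with fields in the same order as the columns of the target table defined in 3.2. Purely as a made-up illustration of the layout (not the actual test data):

1,AAAAAAAABAAAAAAA,1997-10-27,1999-10-27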

Upload it to HDFS:
su - hdfs

hdfs dfs -mkdir /flyfish

hdfs dfs -put ithbase.csv /flyfish

hdfs dfs -ls /flyfish

3.2 Create the table through Phoenix

create table ithbase
(
i_item_sk varchar not null primary key,
i_item_id varchar,
i_rec_start_varchar varchar,
i_rec_end_date varchar
);

Run the bulk-load command to import the data:

HADOOP_CLASSPATH=/opt/cloudera/parcels/CDH/lib/hbase/hbase-protocol-1.2.0-cdh5.12.1.jar:/opt/cloudera/parcels/CDH/lib/hbase/conf hadoop jar /opt/cloudera/parcels/CLABS_PHOENIX/lib/phoenix/phoenix-4.7.0-clabs-phoenix1.3.0-client.jar org.apache.phoenix.mapreduce.CsvBulkLoadTool -t ithbase -i /flyfish/ithbase.csv
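
CsvBulkLoadTool has a few more options that are often useful, for example -z to point the job at a specific ZooKeeper quorum and -d to change the field delimiter. A sketch reusing the same classpath and jar as above (node-01.flyfish:2181 is assumed to be the quorum here):

HADOOP_CLASSPATH=/opt/cloudera/parcels/CDH/lib/hbase/hbase-protocol-1.2.0-cdh5.12.1.jar:/opt/cloudera/parcels/CDH/lib/hbase/conf hadoop jar /opt/cloudera/parcels/CLABS_PHOENIX/lib/phoenix/phoenix-4.7.0-clabs-phoenix1.3.0-client.jar org.apache.phoenix.mapreduce.CsvBulkLoadTool -t ithbase -i /flyfish/ithbase.csv -z node-01.flyfish:2181 -d ','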

select * from ithbase;
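
A quick row count is an easy way to confirm that every line of the CSV was loaded:

select count(*) from ithbase;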

Part Four: Exporting data from HBase to HDFS with Phoenix

cat export.pig 
----
REGISTER /opt/cloudera/parcels/CLABS_PHOENIX/lib/phoenix/phoenix-4.7.0-clabs-phoenix1.3.0-client.jar;
rows = load 'hbase://query/SELECT * FROM ITHBASE' USING org.apache.phoenix.pig.PhoenixHBaseLoader('node-01.flyfish:2181');
STORE rows INTO 'flyfish1' USING PigStorage(',');
----
Run the Pig script:

pig -x mapreduce export.pig
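
The Phoenix-Pig integration can also load a whole table instead of a query; an equivalent load line for this example would look like the following (same jar and ZooKeeper string as in export.pig):

rows = load 'hbase://table/ITHBASE' USING org.apache.phoenix.pig.PhoenixHBaseLoader('node-01.flyfish:2181');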

View the output files on HDFS:
hdfs dfs -ls /user/hdfs/flyfish1
hdfs dfs -cat /user/hdfs/flyfish1/part-m-00000
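
To pull the exported part files down as a single local file, hdfs dfs -getmerge is handy (the local file name here is arbitrary):

hdfs dfs -getmerge /user/hdfs/flyfish1 ./ithbase_export.csv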
