CDH + Phoenix + Zeppelin

  • Overview

1. Install and configure Phoenix

2. Basic Phoenix operations

3. Bulk load data into HBase with Phoenix

4. Export data from HBase to HDFS with Phoenix

  • Test environment

1. CDH 5.11.2

2. RedHat 7.2

3. Phoenix 4.7.0

  • Prerequisites

1. The CDH cluster is running normally

2. The HBase service is installed and running normally

3. The test CSV data is ready

4. The httpd service on RedHat 7 is installed and working

2. Installing Phoenix on the CDH Cluster


1. Download the Phoenix parcel from the Cloudera website. Be careful to pick the build that matches your operating system; since this test uses RedHat 7, choose the files with the el7 suffix. Download location:

http://archive.cloudera.com/cloudera-labs/phoenix/parcels/latest/

The three files to download are:

http://archive.cloudera.com/cloudera-labs/phoenix/parcels/latest/CLABS_PHOENIX-4.7.0-1.clabs_phoenix1.3.0.p0.000-el7.parcel
http://archive.cloudera.com/cloudera-labs/phoenix/parcels/latest/CLABS_PHOENIX-4.7.0-1.clabs_phoenix1.3.0.p0.000-el7.parcel.sha1
http://archive.cloudera.com/cloudera-labs/phoenix/parcels/latest/manifest.json
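
For convenience, the three files can be fetched from the command line (a minimal sketch using wget; run it from any staging directory):

wget http://archive.cloudera.com/cloudera-labs/phoenix/parcels/latest/CLABS_PHOENIX-4.7.0-1.clabs_phoenix1.3.0.p0.000-el7.parcel
wget http://archive.cloudera.com/cloudera-labs/phoenix/parcels/latest/CLABS_PHOENIX-4.7.0-1.clabs_phoenix1.3.0.p0.000-el7.parcel.sha1
wget http://archive.cloudera.com/cloudera-labs/phoenix/parcels/latest/manifest.json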

2. Publish the downloaded files through the httpd service; you can verify by opening the page in a browser.

[ec2-user@ip-172-31-22-86 phoenix]$ pwd
/var/www/html/phoenix
[ec2-user@ip-172-31-22-86 phoenix]$ ll
total 192852
-rw-r--r-- 1 root root        41 Jun 24  2016 CLABS_PHOENIX-4.7.0-1.clabs_phoenix1.3.0.p0.000-el7.parcel.sha1
-rw-r--r-- 1 root root 197466534 Jun 24  2016 CLABS_PHOENIX-4.7.0-1.clabs_phoenix1.3.0.p0.000-el7.parcel
-rw-r--r-- 1 root root      4687 Jun 24  2016 manifest.json
[ec2-user@ip-172-31-22-86 phoenix]$
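
If the files are not in place yet, publishing them is just a copy into the web root (a minimal sketch, assuming httpd's default document root of /var/www/html):

sudo mkdir -p /var/www/html/phoenix
sudo cp CLABS_PHOENIX-4.7.0-1.clabs_phoenix1.3.0.p0.000-el7.parcel* manifest.json /var/www/html/phoenix/
sudo systemctl start httpd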

3. From Cloudera Manager, click "Parcels" to enter the parcel management page.

Click "Configuration" and add the HTTP address of the Phoenix parcel repository.

Click "Save Changes" to return to the parcel management page; CM has now discovered the Phoenix parcel.

Click "Download" -> "Distribute" -> "Activate".

4. Back on the CM home page, the HBase service now shows that it needs its client configuration redeployed and a restart.

Restart the HBase service.

Installation is complete.

3. Using Phoenix on the CDH Cluster

3.1 Basic Phoenix Operations


Enter the directory containing the Phoenix scripts:

[ec2-user@ip-172-31-22-86 bin]$ cd /opt/cloudera/parcels/CLABS_PHOENIX/bin
[ec2-user@ip-172-31-22-86 bin]$ ll
total 16
-rwxr-xr-x 1 root root 672 Jun 24  2016 phoenix-performance.py
-rwxr-xr-x 1 root root 665 Jun 24  2016 phoenix-psql.py
-rwxr-xr-x 1 root root 668 Jun 24  2016 phoenix-sqlline.py
-rwxr-xr-x 1 root root 674 Jun 24  2016 phoenix-utils.py

Log in to HBase with Phoenix:

[ec2-user@ip-172-31-22-86 bin]$ ./phoenix-sqlline.py
Zookeeper not specified. 
Usage: sqlline.py <zookeeper> <optional_sql_file> 
Example: 
 1. sqlline.py localhost:2181:/hbase 
 2. sqlline.py localhost:2181:/hbase ../examples/stock_symbol.sql

A ZooKeeper quorum must be specified:

[ec2-user@ip-172-31-22-86 bin]$ ./phoenix-sqlline.py ip-172-31-21-45:2181:/hbase
...
sqlline version 1.1.8
0: jdbc:phoenix:ip-172-31-21-45:2181:/hbase> !tables
+------------+--------------+-------------+---------------+----------+------------+--------------------+
| TABLE_CAT  | TABLE_SCHEM  | TABLE_NAME  |  TABLE_TYPE   | REMARKS  | TYPE_NAME  | SELF_REFERENCING_C |
+------------+--------------+-------------+---------------+----------+------------+--------------------+
|            | SYSTEM       | CATALOG     | SYSTEM TABLE  |          |            |                    |
|            | SYSTEM       | FUNCTION    | SYSTEM TABLE  |          |            |                    |
|            | SYSTEM       | SEQUENCE    | SYSTEM TABLE  |          |            |                    |
|            | SYSTEM       | STATS       | SYSTEM TABLE  |          |            |                    |
|            |              | ITEM        | TABLE         |          |            |                    |
+------------+--------------+-------------+---------------+----------+------------+--------------------+
0: jdbc:phoenix:ip-172-31-21-45:2181:/hbase>

Create a test table.

Note: a primary key must be specified when creating a table.

0: jdbc:phoenix:ip-172-31-21-45:2181:/hbase> create table hbase_test
. . . . . . . . . . . . . . . . . . . . . .> (
. . . . . . . . . . . . . . . . . . . . . .>     s1 varchar not null primary key,
. . . . . . . . . . . . . . . . . . . . . .>     s2 varchar,
. . . . . . . . . . . . . . . . . . . . . .>     s3 varchar,
. . . . . . . . . . . . . . . . . . . . . .>     s4 varchar
. . . . . . . . . . . . . . . . . . . . . .> );
No rows affected (1.504 seconds)

Verify in the hbase shell.
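
A minimal check from the hbase shell might look like this (a sketch; Phoenix upper-cases unquoted identifiers, so the table appears as HBASE_TEST). The same commands can be reused for the later checks in this section:

# inside the hbase shell
list
describe 'HBASE_TEST'
scan 'HBASE_TEST'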

Insert a row of data. Note: Phoenix has no INSERT syntax; use UPSERT instead. Reference: http://phoenix.apache.org/language/index.html

0: jdbc:phoenix:ip-172-31-21-45:2181:/hbase> upsert into hbase_test values('1','testname','testname1','testname2');
1 row affected (0.088 seconds)
0: jdbc:phoenix:ip-172-31-21-45:2181:/hbase> select * from hbase_test;
+-----+-----------+------------+------------+
| S1  |    S2     |     S3     |     S4     |
+-----+-----------+------------+------------+
| 1   | testname  | testname1  | testname2  |
+-----+-----------+------------+------------+
1 row selected (0.049 seconds)
0: jdbc:phoenix:ip-172-31-21-45:2181:/hbase>

Verify in the hbase shell.

Delete this row to test DELETE:

0: jdbc:phoenix:ip-172-31-21-45:2181:/hbase> delete from hbase_test where s1='1';
1 row affected (0.018 seconds)
0: jdbc:phoenix:ip-172-31-21-45:2181:/hbase> select * from hbase_test;
+-----+-----+-----+-----+
| S1  | S2  | S3  | S4  |
+-----+-----+-----+-----+
+-----+-----+-----+-----+
No rows selected (0.045 seconds)
0: jdbc:phoenix:ip-172-31-21-45:2181:/hbase>

Verify in the hbase shell.

Update test. Note that Phoenix has no UPDATE syntax either; UPSERT is used instead. Inserting multiple rows requires one UPSERT statement per row; there is no way to write all the rows after a single VALUES clause.

0: jdbc:phoenix:ip-172-31-21-45:2181:/hbase> upsert into hbase_test values('1','testname','testname1','testname2');
1 row affected (0.017 seconds)
0: jdbc:phoenix:ip-172-31-21-45:2181:/hbase> upsert into hbase_test values('2','testname','testname1','testname2');
1 row affected (0.007 seconds)
0: jdbc:phoenix:ip-172-31-21-45:2181:/hbase> upsert into hbase_test values('3','testname','testname1','testname2');
1 row affected (0.008 seconds)
0: jdbc:phoenix:ip-172-31-21-45:2181:/hbase> select * from hbase_test;
+-----+-----------+------------+------------+
| S1  |    S2     |     S3     |     S4     |
+-----+-----------+------------+------------+
| 1   | testname  | testname1  | testname2  |
| 2   | testname  | testname1  | testname2  |
| 3   | testname  | testname1  | testname2  |
+-----+-----------+------------+------------+
3 rows selected (0.067 seconds)
0: jdbc:phoenix:ip-172-31-21-45:2181:/hbase> upsert into hbase_test values('1','fayson','testname1','testname2');
1 row affected (0.009 seconds)
0: jdbc:phoenix:ip-172-31-21-45:2181:/hbase> select * from hbase_test;
+-----+-----------+------------+------------+
| S1  |    S2     |     S3     |     S4     |
+-----+-----------+------------+------------+
| 1   | fayson    | testname1  | testname2  |
| 2   | testname  | testname1  | testname2  |
| 3   | testname  | testname1  | testname2  |
+-----+-----------+------------+------------+
3 rows selected (0.037 seconds)
0: jdbc:phoenix:ip-172-31-21-45:2181:/hbase>

Verify in the hbase shell.

Batch update test. Create another table, hbase_test1, with the same schema as hbase_test, and insert five rows: two that hbase_test does not have (primary keys 4 and 5), one that differs from hbase_test (primary key 1), and two that are identical (primary keys 2 and 3).

0: jdbc:phoenix:ip-172-31-21-45:2181:/hbase> create table hbase_test1
. . . . . . . . . . . . . . . . . . . . . .> (
. . . . . . . . . . . . . . . . . . . . . .>     s1 varchar not null primary key,
. . . . . . . . . . . . . . . . . . . . . .>     s2 varchar,
. . . . . . . . . . . . . . . . . . . . . .>     s3 varchar,
. . . . . . . . . . . . . . . . . . . . . .>     s4 varchar
. . . . . . . . . . . . . . . . . . . . . .> );
No rows affected (1.268 seconds)
0: jdbc:phoenix:ip-172-31-21-45:2181:/hbase> 
0: jdbc:phoenix:ip-172-31-21-45:2181:/hbase> upsert into hbase_test1 values('1','fayson','testname1','testname2');
1 row affected (0.031 seconds)
0: jdbc:phoenix:ip-172-31-21-45:2181:/hbase> upsert into hbase_test1 values('2','testname','testname1','testname2');
1 row affected (0.006 seconds)
0: jdbc:phoenix:ip-172-31-21-45:2181:/hbase> upsert into hbase_test1 values('3','testname','testname1','testname2');
1 row affected (0.005 seconds)
0: jdbc:phoenix:ip-172-31-21-45:2181:/hbase> upsert into hbase_test1 values('4','testname','testname1','testname2');
1 row affected (0.005 seconds)
0: jdbc:phoenix:ip-172-31-21-45:2181:/hbase> upsert into hbase_test1 values('5','testname','testname1','testname2');
1 row affected (0.007 seconds)
0: jdbc:phoenix:ip-172-31-21-45:2181:/hbase> select * from hbase_test1;
+-----+-----------+------------+------------+
| S1  |    S2     |     S3     |     S4     |
+-----+-----------+------------+------------+
| 1   | fayson    | testname1  | testname2  |
| 2   | testname  | testname1  | testname2  |
| 3   | testname  | testname1  | testname2  |
| 4   | testname  | testname1  | testname2  |
| 5   | testname  | testname1  | testname2  |
+-----+-----------+------------+------------+
5 rows selected (0.038 seconds)

Batch update: use the data in hbase_test1 to update hbase_test.

0: jdbc:phoenix:ip-172-31-21-45:2181:/hbase> upsert into hbase_test select * from hbase_test1;
5 rows affected (0.03 seconds)
0: jdbc:phoenix:ip-172-31-21-45:2181:/hbase> select * from hbase_test;
+-----+-----------+------------+------------+
| S1  |    S2     |     S3     |     S4     |
+-----+-----------+------------+------------+
| 1   | fayson    | testname1  | testname2  |
| 2   | testname  | testname1  | testname2  |
| 3   | testname  | testname1  | testname2  |
| 4   | testname  | testname1  | testname2  |
| 5   | testname  | testname1  | testname2  |
+-----+-----------+------------+------------+
5 rows selected (0.039 seconds)
0: jdbc:phoenix:ip-172-31-21-45:2181:/hbase>

The batch update shows that existing rows whose values differ are overwritten, identical rows stay unchanged, and rows that did not exist before are inserted as new data.

3.2 Using Phoenix to Bulk Load Data into HBase


Prepare the test data to bulk load. Here we use the item table data from TPC-DS.

[ec2-user@ip-172-31-22-86 ~]$ ll item.dat
-rw-r--r-- 1 root root 28855325 Oct  3 10:23 item.dat
[ec2-user@ip-172-31-22-86 ~]$ head -1 item.dat
1|AAAAAAAABAAAAAAA|1997-10-27||Powers will not get influences. Electoral ports should show low, annual chains. Now young visitors may pose now however final pages. Bitterly right children suit increasing, leading el|27.02|23.23|5003002|exportischolar #2|3|pop|5|Music|52|ableanti|N/A|3663peru009490160959|spring|Tsp|Unknown|6|ought|

Because Phoenix's bulk load can only import CSV, first change the data's field delimiter to a comma and rename the file with a .csv suffix.

[ec2-user@ip-172-31-22-86 ~]$ sed -i 's/|/,/g' item.dat
[ec2-user@ip-172-31-22-86 ~]$ mv item.dat item.csv
[ec2-user@ip-172-31-22-86 ~]$ ll item.csv 
-rw-r--r-- 1 ec2-user ec2-user 28855325 Oct  3 10:26 item.csv
[ec2-user@ip-172-31-22-86 ~]$ head -1 item.csv 
1,AAAAAAAABAAAAAAA,1997-10-27,,Powers will not get influences. Electoral ports should show low, annual chains. Now young visitors may pose now however final pages. Bitterly right children suit increasing, leading el,27.02,23.23,5003002,exportischolar #2,3,pop,5,Music,52,ableanti,N/A,3663peru009490160959,spring,Tsp,Unknown,6,ought,

Upload the file to HDFS:

[ec2-user@ip-172-31-22-86 ~]$ hadoop fs -mkdir /fayson
[ec2-user@ip-172-31-22-86 ~]$ hadoop fs -put item.csv /fayson
[ec2-user@ip-172-31-22-86 ~]$ hadoop fs -ls /fayson
Found 1 items
-rw-r--r--   3 ec2-user supergroup   28855325 2017-10-03 10:28 /fayson/item.csv
[ec2-user@ip-172-31-22-86 ~]$

Create the item table through Phoenix. Note: for readability, only four columns are created.

0: jdbc:phoenix:ip-172-31-21-45:2181:/hbase> create table item
. . . . . . . . . . . . . . . . . . . . . .> (
. . . . . . . . . . . . . . . . . . . . . .>     i_item_sk varchar not null primary key,
. . . . . . . . . . . . . . . . . . . . . .>     i_item_id varchar,
. . . . . . . . . . . . . . . . . . . . . .>     i_rec_start_varchar varchar,
. . . . . . . . . . . . . . . . . . . . . .>     i_rec_end_date varchar
. . . . . . . . . . . . . . . . . . . . . .> );
No rows affected (1.268 seconds)
0: jdbc:phoenix:ip-172-31-21-45:2181:/hbase>

Run the bulk load command to import the data:

[ec2-user@ip-172-31-22-86 ~]$ HADOOP_CLASSPATH=/opt/cloudera/parcels/CDH/lib/hbase/hbase-protocol-1.2.0-cdh5.12.1.jar:/opt/cloudera/parcels/CDH/lib/hbase/conf hadoop jar /opt/cloudera/parcels/CLABS_PHOENIX/lib/phoenix/phoenix-4.7.0-clabs-phoenix1.3.0-client.jar org.apache.phoenix.mapreduce.CsvBulkLoadTool -t item -i /fayson/item.csv
17/10/03 10:32:24 INFO util.QueryUtil: Creating connection with the jdbc url: jdbc:phoenix:ip-172-31-21-45.ap-southeast-1.compute.internal,ip-172-31-22-86.ap-southeast-1.compute.internal,ip-172-31-26-102.ap-southeast-1.compute.internal:2181:/hbase;
...
17/10/03 10:32:24 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=ip-172-31-21-45.ap-southeast-1.compute.internal:2181,ip-172-31-22-86.ap-southeast-1.compute.internal:2181,ip-172-31-26-102.ap-southeast-1.compute.internal:2181 sessionTimeout=60000 watcher=hconnection-0x7a9c0c6b0x0, quorum=ip-172-31-21-45.ap-southeast-1.compute.internal:2181,ip-172-31-22-86.ap-southeast-1.compute.internal:2181,ip-172-31-26-102.ap-southeast-1.compute.internal:2181, baseZNode=/hbase
17/10/03 10:32:24 INFO zookeeper.ClientCnxn: Opening socket connection to server ip-172-31-21-45.ap-southeast-1.compute.internal/172.31.21.45:2181. Will not attempt to authenticate using SASL (unknown error)
...
17/10/03 10:32:30 INFO mapreduce.Job: Running job: job_1507035313248_0001
17/10/03 10:32:38 INFO mapreduce.Job: Job job_1507035313248_0001 running in uber mode : false
17/10/03 10:32:38 INFO mapreduce.Job:  map 0% reduce 0%
17/10/03 10:32:52 INFO mapreduce.Job:  map 100% reduce 0%
17/10/03 10:33:01 INFO mapreduce.Job:  map 100% reduce 100%
17/10/03 10:33:01 INFO mapreduce.Job: Job job_1507035313248_0001 completed successfully
17/10/03 10:33:01 INFO mapreduce.Job: Counters: 50
...
17/10/03 10:33:01 INFO mapreduce.AbstractBulkLoadTool: Loading HFiles from /tmp/fef0045b-8a31-4d95-985a-bee08edf2cf9
...

Query the table in Phoenix:

0: jdbc:phoenix:ip-172-31-21-45:2181:/hbase> select * from item limit 10;
+------------+-------------------+----------------------+-----------------+
| I_ITEM_SK  |     I_ITEM_ID     | I_REC_START_VARCHAR  | I_REC_END_DATE  |
+------------+-------------------+----------------------+-----------------+
| 1          | AAAAAAAABAAAAAAA  | 1997-10-27           |                 |
| 10         | AAAAAAAAKAAAAAAA  | 1997-10-27           | 1999-10-27      |
| 100        | AAAAAAAAEGAAAAAA  | 1997-10-27           | 1999-10-27      |
| 1000       | AAAAAAAAIODAAAAA  | 1997-10-27           | 1999-10-27      |
| 10000      | AAAAAAAAABHCAAAA  | 1997-10-27           | 1999-10-27      |
| 100000     | AAAAAAAAAKGIBAAA  | 1997-10-27           | 1999-10-27      |
| 100001     | AAAAAAAAAKGIBAAA  | 1999-10-28           | 2001-10-26      |
| 100002     | AAAAAAAAAKGIBAAA  | 2001-10-27           |                 |
| 100003     | AAAAAAAADKGIBAAA  | 1997-10-27           |                 |
| 100004     | AAAAAAAAEKGIBAAA  | 1997-10-27           | 2000-10-26      |
+------------+-------------------+----------------------+-----------------+
10 rows selected (0.054 seconds)
0: jdbc:phoenix:ip-172-31-21-45:2181:/hbase>

Query the table in the hbase shell:

hbase(main):002:0> scan 'ITEM', LIMIT => 10
ROW                         COLUMN+CELL                                                                 
 1                          column=0:I_ITEM_ID, timestamp=1507041176470, value=AAAAAAAABAAAAAAA         
 1                          column=0:I_REC_START_VARCHAR, timestamp=1507041176470, value=1997-10-27     
 1                          column=0:_0, timestamp=1507041176470, value=                                
 10                         column=0:I_ITEM_ID, timestamp=1507041176470, value=AAAAAAAAKAAAAAAA         
 10                         column=0:I_REC_END_DATE, timestamp=1507041176470, value=1999-10-27          
 10                         column=0:I_REC_START_VARCHAR, timestamp=1507041176470, value=1997-10-27     
 10                         column=0:_0, timestamp=1507041176470, value=                                
...
 100004                     column=0:I_REC_START_VARCHAR, timestamp=1507041176470, value=1997-10-27     
 100004                     column=0:_0, timestamp=1507041176470, value=                                
10 row(s) in 0.2360 seconds

Check the number of rows loaded.

The counts match; everything was loaded successfully.
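
One way to run the check (a sketch comparing the source file's line count against a Phoenix count):

# rows in the source CSV
hadoop fs -cat /fayson/item.csv | wc -l
# rows in the Phoenix table, from within sqlline:
#   select count(*) from item;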

3.3 Using Phoenix to Export Data from HBase to HDFS


Phoenix also offers a MapReduce-based export of data to HDFS, executed as a Pig script. First prepare the Pig script.

[ec2-user@ip-172-31-22-86 ~]$ cat export.pig 
REGISTER /opt/cloudera/parcels/CLABS_PHOENIX/lib/phoenix/phoenix-4.7.0-clabs-phoenix1.3.0-client.jar;
rows = load 'hbase://query/SELECT * FROM ITEM' USING org.apache.phoenix.pig.PhoenixHBaseLoader('ip-172-31-21-45:2181');
STORE rows INTO 'fayson1' USING PigStorage(',');
[ec2-user@ip-172-31-22-86 ~]$

Run the script:

[ec2-user@ip-172-31-22-86 ~]$ pig -x mapreduce export.pig 
...
Counters:
Total records written : 102000
Total bytes written : 4068465
Spillable Memory Manager spill count : 0
Total bags proactively spilled: 0
Total records proactively spilled: 0

Job DAG:
job_1507035313248_0002

2017-10-03 10:45:38,905 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Success!

After the export succeeds, check the data in HDFS:

[ec2-user@ip-172-31-22-86 ~]$ hadoop fs -ls /user/ec2-user/fayson1
Found 2 items
-rw-r--r--   3 ec2-user supergroup          0 2017-10-03 10:45 /user/ec2-user/fayson1/_SUCCESS
-rw-r--r--   3 ec2-user supergroup    4068465 2017-10-03 10:45 /user/ec2-user/fayson1/part-m-00000
[ec2-user@ip-172-31-22-86 ~]$ hadoop fs -cat /user/ec2-user/fayson1/part-m-00000 | head -2
1,AAAAAAAABAAAAAAA,1997-10-27,
10,AAAAAAAAKAAAAAAA,1997-10-27,1999-10-27
cat: Unable to write to output stream.
[ec2-user@ip-172-31-22-86 ~]$

The exported row count is 102000, matching the source data; everything was exported successfully.
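
The count can be taken directly from the exported part files (a sketch):

hadoop fs -cat /user/ec2-user/fayson1/part-m-* | wc -l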

4. Summary


  • With the Phoenix parcel provided by Cloudera, Phoenix is very easy to install.
  • Phoenix lets you create tables and delete and update data in HBase, all through familiar SQL.
  • Phoenix provides bulk data import/export. Bulk import only supports CSV, with a comma delimiter.
  • SQL operations in Phoenix are reflected in HBase immediately; every check through the hbase shell succeeded.
  • The Phoenix version Cloudera currently provides is fairly old, 4.7.0, while the latest community release is 4.11.0.
  • The SQL dialect Phoenix offers is fairly spare: there is no INSERT/UPDATE; UPSERT replaces both.
  • When inserting with UPSERT, rows can only be inserted one at a time; all the values cannot be written after a single VALUES clause.

1. Download

https://github.com/chiastic-security/phoenix-for-cloudera/tree/4.8-HBase-1.2-cdh5.8

2. Build (the build takes a while; be patient)

mvn clean package -DskipTests

3. Unpack

Unpack the phoenix-4.8.0-cdh5.8.0.tar.gz produced by the build:

[root@cmbigdata1 phoenix]# tar -zxvf  phoenix-4.8.0-cdh5.8.0.tar.gz
[root@cmbigdata1 phoenix]# cd phoenix-4.8.0-cdh5.8.0
[root@cmbigdata1 phoenix-4.8.0-cdh5.8.0]# ll
total 166152
drwxr-xr-x 2 root root      4096 Apr 18 16:41 bin
-rw-r--r-- 1 root root      1930 Aug  8  2016 build.txt
drwxr-xr-x 3 root root      4096 Aug  8  2016 dev
drwxr-xr-x 2 root root      4096 Aug  8  2016 docs
drwxr-xr-x 3 root root      4096 Aug  8  2016 examples
drwxr-xr-x 2 root root      4096 Apr 18 16:40 lib
-rw-r--r-- 1 root root 113247548 Apr 18 14:43 phoenix-4.8.0-cdh5.8.0-client.jar
-rw-r--r-- 1 root root   6619716 Apr 18 14:30 phoenix-4.8.0-cdh5.8.0-queryserver.jar
-rw-r--r-- 1 root root  22498517 Apr 18 14:43 phoenix-4.8.0-cdh5.8.0-server.jar
-rw-r--r-- 1 root root  27739579 Apr 18 14:29 phoenix-4.8.0-cdh5.8.0-thin-client.jar


4. Copy phoenix-4.8.0-cdh5.8.0-server.jar to every RegionServer

[root@cmbigdata2~]# find / -name 'phoenix-4.8.0-cdh5.8.0-server.jar'
/soft/bigdata/clouderamanager/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hbase/lib/phoenix-4.8.0-cdh5.8.0-server.jar

cmbigdata2, cmbigdata3, and cmbigdata4 are handled the same way.
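
A small loop keeps the copy repeatable across hosts (a sketch, assuming passwordless SSH as root and the lib path found above):

# push the server jar into HBase's lib directory on each RegionServer
for host in cmbigdata2 cmbigdata3 cmbigdata4; do
  scp phoenix-4.8.0-cdh5.8.0-server.jar root@${host}:/soft/bigdata/clouderamanager/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hbase/lib/
done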
5. Add a configuration entry to hbase-site.xml

<property>
  <name>hbase.table.sanity.checks</name>
  <value>false</value>
</property>

How to change this in CDH: on the cluster management page, click HBase to enter the HBase management screen.

Click "Configuration", select "Advanced", and add the configuration shown above (the hbase.table.sanity.checks property).

6. Restart HBase

This step is straightforward; anyone who has used Cloudera Manager knows how.

7. Log in to Phoenix

Go to the phoenix-4.8.0-cdh5.8.0/bin directory and run sqlline.
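
For example (a sketch; substitute your own ZooKeeper quorum for the host list):

./sqlline.py zk-host1,zk-host2,zk-host3:2181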

1. Introduction to Phoenix

1. Phoenix ("the phoenix") is essentially a piece of Java middleware: it provides JDBC connections for operating on HBase tables.

2. Apache Phoenix is a relational database layer built on top of HBase, used as an embedded client JDBC driver for low-latency access to data stored in HBase.

2. Downloading Phoenix

1. Phoenix downloads from the official site carry the required HBase version in the file name; make sure it matches yours.
2. Note that the builds on the official site (http://apache.fayea.com/phoenix/) conflict with CDH builds of HBase; if the HBase installed on your machine is the CDH build, connecting to HBase with sqlline.py will fail with an error.

Cause: the HBase dependency in the official Phoenix pom is not the CDH build.
Fix: to make Phoenix and CDH match, download the corresponding Phoenix source release (4.6.0) from the official site, modify the pom dependencies and part of the source, and rebuild to obtain a Phoenix suited to CDH 5.4 / HBase 1.0.0.

3. Steps

1. Download the CDH build of Phoenix; note that it is meant for HBase 1.2.
https://github.com/chiastic-security/phoenix-for-cloudera/tree/4.8-HBase-1.2-cdh5.8

2. Copy and extract the folder (phoenix-for-cloudera-4.8-HBase-1.2-cdh5.8) to a path such as:

D:\Software\Phoenix\phoenix-for-cloudera-4.8-HBase-1.2-cdh5.8

3. Rebuild the folder with Maven:

(1) First install Maven on your machine; the installation itself is well covered elsewhere and is not repeated here.

(2) Then, in a Windows terminal, change into the folder:

D:\Software\Phoenix\phoenix-for-cloudera-4.8-HBase-1.2-cdh5.8>

(3) Run the following command:

D:\Software\Phoenix\phoenix-for-cloudera-4.8-HBase-1.2-cdh5.8> mvn clean package -DskipTests -Dcdh.flume.version=1.6.0

(4) If the output ends with BUILD SUCCESS, the build succeeded.

(5) Unpack the built archive \Software\Phoenix\phoenix-for-cloudera-4.8-Hbase-1.2-cdh5.8\phoenix-assembly\target\phoenix-4.8.0-cdh5.8.0.tar.gz into phoenix-4.8.0-cdh5.8.0; the unpacked files can stay in the same path.

4. Next, upload the whole built folder (phoenix-for-cloudera-4.8-Hbase-1.2-cdh5.8) to the cluster.

5. Copy phoenix-4.8.0-cdh5.8.0-server.jar from phoenix-4.8.0-cdh5.8.0 into /opt/cloudera/parcels/CDH/lib/hbase/lib on every RegionServer.

6. Finally, restart the HBase cluster.

7. Go to the bin subdirectory under the Phoenix folder on the cluster and run the following command to start Phoenix:

./sqlline.py dsbbzx1,dsbbzx4,dsbbzx5:2181

If sqlline connects and shows its prompt, Phoenix is installed on the cluster and ready to use.
-------------------

Apache Phoenix

Phoenix supports thick and thin connection types:

Use the appropriate default.driver, default.url, and the dependency artifact for your connection type.

Thick client connection

Properties

Name              Value
default.driver    org.apache.phoenix.jdbc.PhoenixDriver
default.url       jdbc:phoenix:localhost:2181:/hbase-unsecure
default.user      phoenix_user
default.password  phoenix_password

Dependencies

Artifact                                          Excludes
org.apache.phoenix:phoenix-core:4.4.0-HBase-1.0

Maven Repository: org.apache.phoenix:phoenix-core

Thin client connection

Properties

Name              Value
default.driver    org.apache.phoenix.queryserver.client.Driver
default.url       jdbc:phoenix:thin:url=http://localhost:8765;serialization=PROTOBUF
default.user      phoenix_user
default.password  phoenix_password

Dependencies

Before adding one of the dependencies below, check your Phoenix version first.

Artifact                                                        Excludes  Description
org.apache.phoenix:phoenix-server-client:4.7.0-HBase-1.1                  For Phoenix 4.7
org.apache.phoenix:phoenix-queryserver-client:4.8.0-HBase-1.2             For Phoenix 4.8+

Maven Repository: org.apache.phoenix:phoenix-queryserver-client
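
Once configured, Phoenix can be queried straight from a Zeppelin note (a sketch, assuming a JDBC interpreter named phoenix was created with the properties above):

%phoenix
select * from hbase_test limit 10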

See: http://zeppelin.apache.org/docs/0.7.1/interpreter/jdbc.html#apache-phoenix
