HBase shell; writing to and reading from HDFS; creating HBase tables; programming; enabling debug mode
1.2 Importing and exporting data to HDFS with Hadoop shell commands (1)
HDFS provides many shell commands for accessing the file system, all of which are built on top of the HDFS FileSystem API. Hadoop's bundled shell script runs all of these operations from the command line. The script is named hadoop and is usually installed under $HADOOP_BIN, where $HADOOP_BIN is the full path of Hadoop's bin directory. It is also worth adding $HADOOP_BIN to the $PATH environment variable, so that every command can be run in the form hadoop fs -command.
To list all of the file system commands, run the hadoop command and pass the fs option with no parameters.
The commands are named after their function and closely resemble Unix shell commands. Use the help option to get a detailed description of any specific command.
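For example, to print the detailed usage of the ls command:
hadoop fs -help ls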
These shell commands and brief descriptions of their parameters can be looked up in the official online documentation at http://hadoop.apache.org/docs/r1.0.4/file_system_shell.html.
In this section we will use Hadoop shell commands to import data into HDFS and to export data out of it. These commands are most often used to load data, download processed data, manage the file system, and preview relevant data. Mastering them is a prerequisite for using HDFS efficiently.
Getting ready
You need to download the weblog_entries.txt dataset from http://www.packtpub.com/support.
How to do it
Complete the following steps to copy the weblog_entries.txt file from the local file system into a target folder on HDFS (the commands for the three steps are sketched after the list). Create a new folder in HDFS to hold the weblog_entries.txt file:
Copy weblog_entries.txt from the local file system into the folder just created on HDFS:
List the information of the weblog_entries.txt file on HDFS:
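A minimal sketch of the three commands, assuming /data/weblogs as the target folder:
hadoop fs -mkdir -p /data/weblogs
hadoop fs -copyFromLocal weblog_entries.txt /data/weblogs
hadoop fs -ls /data/weblogs/weblog_entries.txt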
Some of the results Hadoop produces may be consumed directly by external systems, may need further processing by other systems, or the MapReduce framework may simply not fit the scenario at all; in any of these cases the data has to be exported from HDFS. The simplest way to download data is the Hadoop shell.
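For example, pulling the file back out of HDFS (a sketch; the paths are assumptions):
hadoop fs -copyToLocal /data/weblogs/weblog_entries.txt ./weblog_entries.txt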
// Upload a local file to HDFS (requires java.io.*, java.net.URI, org.apache.hadoop.conf.Configuration, org.apache.hadoop.fs.*, org.apache.hadoop.io.IOUtils).
String target = "hdfs://localhost:9000/user/Administrator/geoway_portal/tes2.dmp";
FileInputStream fis = new FileInputStream(new File("C:\\tes2.dmp")); // read the local file
Configuration config = new Configuration();
FileSystem fs = FileSystem.get(URI.create(target), config);
OutputStream os = fs.create(new Path(target));
// copy; the final argument closes both streams when the copy finishes
IOUtils.copyBytes(fis, os, 4096, true);
System.out.println("Copy finished...");
Check the information of the uploaded HDFS file:
Use the hadoop fs -ls shell command to query the files under geoway_portal:
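The command, reusing the path from the snippet above:
hadoop fs -ls hdfs://localhost:9000/user/Administrator/geoway_portal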
The following code copies a local file into the HDFS cluster:
package com.njupt.Hadoop;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CopyToHDFS {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        Path source = new Path("/home/hadoop/word.txt");
        Path dst = new Path("/user/root/njupt/");
        fs.copyFromLocalFile(source, dst);
    }
}
Code for downloading a file to the local machine with the HDFS Java API:
String file="hdfs://localhost:9000/user/Administrator/fooo/j-spatial.zip";//hdfs文件 地址
Configuration config=new Configuration();
FileSystem fs=FileSystem.get(URI.create(file),config);//構建FileSystem
InputStream is=fs.open(new Path(file));//讀取文件
IOUtils.copyBytes(is, new FileOutputStream(new File("c:\\likehua.zip")),2048, true);//保存到本地 最後 關閉輸入輸出流
./start
On configuring environment variables
./hbase shell
會了和能作項目是兩個層次,徹底不一樣的概念的撒。
時間戳能夠變成時間表那樣??
slave1:
[root@slave1 ~]# ll
總用量 52
-rw-------. 1 root root 950 10月 14 16:02 anaconda-ks.cfg
-rw-r--r--. 1 root root 12081 5月 22 21:31 post-install
-rw-r--r--. 1 root root 552 5月 22 21:31 post-install.log
drwxr-xr-x. 2 root root 4096 10月 22 17:43 公共的
drwxr-xr-x. 2 root root 4096 10月 22 17:43 模板
drwxr-xr-x. 2 root root 4096 10月 22 17:43 視頻
drwxr-xr-x. 2 root root 4096 10月 22 17:43 圖片
drwxr-xr-x. 2 root root 4096 10月 22 17:43 文檔
drwxr-xr-x. 2 root root 4096 10月 22 17:43 下載
drwxr-xr-x. 2 root root 4096 10月 22 17:43 音樂
drwxr-xr-x. 2 root root 4096 10月 22 17:43 桌面
[root@slave1 ~]# cd /etc/sysconfig/network-scripts/
How do you switch quickly from the master panel to slave2? Shift+Tab
[root@slave1 network-scripts]# cat ifcfg-eth1
TYPE=Ethernet
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=yes
IPV6INIT=no
NAME=eth0
UUID=16a631ff-0a3b-4915-9d6d-99743e5b0b56
ONBOOT=yes
HWADDR=00:0C:29:66:10:7b
IPADDR=192.168.150.4
PREFIX=24
GATEWAY=192.168.150.2
DNS1=192.168.150.2
LAST_CONNECT=1477239855
[root@slave1 network-scripts]# su hadoop
[hadoop@slave1 network-scripts]$ cd
[hadoop@slave1 ~]$ ll
總用量 68
drwxrwxr-x 5 hadoop hadoop 4096 11月 8 20:31 soft
-rw-rw-r-- 1 hadoop hadoop 60181 11月 9 11:44 zookeeper.out
[hadoop@slave1 ~]$ zkServer.sh start
JMX enabled by default
Using config: /home/hadoop/soft/zookeeper-3.4.6/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[hadoop@slave1 ~]$ zkServer.sh status
JMX enabled by default
Using config: /home/hadoop/soft/zookeeper-3.4.6/bin/../conf/zoo.cfg
Mode: leader
[hadoop@slave1 ~]$ jps
2971 Jps
2787 QuorumPeerMain
2907 HRegionServer
2588 NodeManager
2517 DataNode
slave2:
[root@slave2 ~]# ifconfig
[root@slave2 ~]# su hadoop
[hadoop@slave2 ~]$ zkServer.
zkServer.cmd zkServer.sh
[hadoop@slave2 ~]$ zkServer.sh start
JMX enabled by default
Using config: /home/hadoop/soft/zookeeper-3.4.6/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[hadoop@slave2 ~]$ jps
2609 NodeManager
2797 QuorumPeerMain
2534 DataNode
2824 Jps
[hadoop@slave2 ~]$
master:
[root@master ~]# su hadoop
[hadoop@master ~]$ start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
16/11/19 10:55:16 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [master]
master: starting namenode, logging to /home/hadoop/soft/hadoop-2.6.4/logs/hadoop-hadoop-namenode-master.out
slave2: starting datanode, logging to /home/hadoop/soft/hadoop-2.6.4/logs/hadoop-hadoop-datanode-slave2.out
slave1: starting datanode, logging to /home/hadoop/soft/hadoop-2.6.4/logs/hadoop-hadoop-datanode-slave1.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /home/hadoop/soft/hadoop-2.6.4/logs/hadoop-hadoop-secondarynamenode-master.out
16/11/19 10:55:40 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
starting yarn daemons
starting resourcemanager, logging to /home/hadoop/soft/hadoop-2.6.4/logs/yarn-hadoop-resourcemanager-master.out
slave2: starting nodemanager, logging to /home/hadoop/soft/hadoop-2.6.4/logs/yarn-hadoop-nodemanager-slave2.out
slave1: starting nodemanager, logging to /home/hadoop/soft/hadoop-2.6.4/logs/yarn-hadoop-nodemanager-slave1.out
[hadoop@master ~]$ jps
5623 SecondaryNameNode
5466 NameNode
6050 Jps
5793 ResourceManager
[hadoop@master ~]$ zkServer.sh start
JMX enabled by default
Using config: /home/hadoop/soft/zookeeper-3.4.6/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[hadoop@master ~]$ zkServer.sh status
JMX enabled by default
Using config: /home/hadoop/soft/zookeeper-3.4.6/bin/../conf/zoo.cfg
Mode: follower
[hadoop@master ~]$ jps
5623 SecondaryNameNode
5466 NameNode
6117 QuorumPeerMain
5793 ResourceManager
6174 Jps
[hadoop@master ~]$ start-hbase.sh
starting master, logging to /home/hadoop/soft/hbase-1.2.4/logs/hbase-hadoop-master-master.out
192.168.150.3: starting regionserver, logging to /home/hadoop/soft/hbase-1.2.4/logs/hbase-hadoop-regionserver-master.out
192.168.150.5: starting regionserver, logging to /home/hadoop/soft/hbase-1.2.4/logs/hbase-hadoop-regionserver-slave2.out
192.168.150.4: starting regionserver, logging to /home/hadoop/soft/hbase-1.2.4/logs/hbase-hadoop-regionserver-slave1.out
[hadoop@master ~]$ jps
5623 SecondaryNameNode
6437 HRegionServer
5466 NameNode
6117 QuorumPeerMain
5793 ResourceManager
6547 Jps
6304 HMaster
[hadoop@master ~]$ hdfs dfs -ls /
16/11/19 11:01:23 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your pla
Found 3 items
drwxr-xr-x - hadoop supergroup 0 2016-11-19 10:59 /hbase
drwxr-xr-x - hadoop supergroup 0 2016-10-25 20:40 /tmp
drwxr-xr-x - hadoop supergroup 0 2016-10-25 19:43 /user
[hadoop@master ~]$ free
total used free shared buffers cached
Mem: 1012076 899848 112228 8 872 36104
-/+ buffers/cache: 862872 149204
Swap: 2031612 152652 1878960
[hadoop@master ~]$ free -m
total used free shared buffers cached
Mem: 988 880 107 0 0 35
-/+ buffers/cache: 844 144
Swap: 1983 148 1835
[hadoop@master ~]$ hbase shell
2016-11-19 11:02:55,144 WARN [main] util.NativeCodeLoader: Unable to load native-hadoop librar
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/hadoop/soft/hbase-1.2.4/lib/slf4j-log4j12-1.7.5.jar!/or
SLF4J: Found binding in [jar:file:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/common/lib/slf4j-
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 1.2.4, r67592f3d062743907f8c5ae00dbbe1ae4f69e5af, Tue Oct 25 18:10:20 CDT 2016
hbase(main):001:0> helo
NameError: undefined local variable or method `helo' for #<Object:0x45b4d4a>
hbase(main):002:0> list
TABLE
0 row(s) in 2.3030 seconds
=> []
hbase(main):003:0> creard
NameError: undefined local variable or method `creard' for #<Object:0x45b4d4a>
hbase(main):004:0> put "''
hbase(main):005:0" put
hbase(main):006:0" [hadoop@master ~]$
[hadoop@master ~]$
[hadoop@master ~]$ hbase shell
2016-11-19 11:12:23,455 WARN [main] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/hadoop/soft/hbase-1.2.4/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 1.2.4, r67592f3d062743907f8c5ae00dbbe1ae4f69e5af, Tue Oct 25 18:10:20 CDT 2016
hbase(main):001:0> list
TABLE
0 row(s) in 1.0970 seconds
=> []
hbase(main):002:0>
Deleting mistyped input in the hbase shell:
When deleting, you must press the key combination Ctrl+Backspace together.
HBase: flexible; if the web pages change, it still suits structured and semi-structured data, since you only need to change the columns and qualifiers. It stores data by column,
and is physically non-sparse: empty cells are simply not stored, so the data packs neatly into blocks.
A traditional database is sparse (that is, there are gaps: missing elements are left as blanks).
Logical structure --> physical structure
Each timestamp added in the first (logical) table may bring up to three new cells, which are then added as sub-rows of the corresponding rows in the second (physical) table.
HBase tutorial - 小碼哥BASE64 - ITeye
http://coderbase64.iteye.com/blog/2074601
HBase stores data by column and is logically a sparse matrix (few column families, so the data packs together).
HBase is column storage, with the column family as the storage unit; a traditional database stores by row.
Even with the same row and column, the values you get can differ (there are timestamps; see the sketch below).
Reclamation policy: the data is compressed and compacted, and anything useless is deleted.
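A quick hbase shell sketch of how timestamps/versions make the same row and column yield different values (the table and column names are made up):
create 't1', {NAME => 'cf', VERSIONS => 3}
put 't1', 'row1', 'cf:q', 'v1'
put 't1', 'row1', 'cf:q', 'v2'
get 't1', 'row1', {COLUMN => 'cf:q', VERSIONS => 3}
The get returns both values, each tagged with its own timestamp.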
HBase optimization: the source code needed for it
A physical machine runs one and only one HRegionServer, and may also host the HMaster (only one machine in the cluster does).
See the figure: the relationship between the HMaster and the HRegionServers is like that between the VMs master and slave1, slave2, ..., n.
This architecture also resembles Hadoop's own, where the master/namenode stores the metadata and the real data is stored on the datanodes (there are many of them).
- The HMaster is responsible for metadata storage and maintains the state of the whole hbase cluster.
- The HRegionServer actually manages the data inserts, deletes, updates, and reads (the database work); every data request a user submits is coordinated by an HRegionServer.
- Each physical machine node can run only one HRegionServer.
- An HRegionServer manages HRegions beneath it.
HRegions hold the real storage units.
HRegions are created when we create a table; by default a new table has one HRegion (the most basic storage unit, whose count is determined by the column families). Regions are partitioned by row key, so adjoining key ranges are kept together (sketched below).
Which HRegionServer a given HRegion is handed to is decided by the HMaster, which weighs load balancing and resource management.
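A sketch of row-key partitioning in the hbase shell: pre-splitting a new table into regions at chosen row-key boundaries (the table name and split points are made up):
create 'weblog', 'cf', SPLITS => ['100', '200', '300']
Rows with keys in the same range land in the same HRegion, and the HMaster spreads those regions across the HRegionServers.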
An HRegion contains a series of structures.
The HLog (hbase log) is the log file, comparable to the redo log in Oracle: it records every insert, delete, and update.
The Store unit is where the real data is kept.
Data is stored by column family, and each column family lives in its own Store.
A Store has two parts: the MemStore (in-memory storage) and the StoreFile (the storage file, kept as an HFile on the datanodes of the HDFS cluster).
An HRegionServer holds multiple HRegions, and new HRegions appear as the data keeps growing, but there is only one HLog.
In Hadoop the storage is spread evenly, so HBase's HFiles end up distributed across the datanodes. Overall the data is distributed twice: once at the HBase level and once at the Hadoop level.
First distribution: HBase --> HRegionServer
Second distribution: StoreFile --> HDFS
Reading one block at a time is efficient, hence the MemStore (an in-memory buffer); once the MemStore has filled up, the data exists as data blocks.
What finally lands on disk is HDFS.
One data block is 64 MB.
MemStore -> StoreFile, which must take the form of an HFile before it can be sent on -> HDFS
Here the HMaster is "virtual" and does none of the concrete work itself; it finds an idle, load-balanced HRegionServer to do the actual work.
The HRegionServer is like a database server; its unit of data management is the HRegion.
The Store is where the data really lives, persisted as multiple HFiles; the MemStore holds the bits and pieces until enough accumulate to be written into a StoreFile.
Storing and retrieving then go through the client.
The HLog uses logging to make the data durable.
For HBase's access methods, see document 1.4 for the details.
thrift: a serialization technology
Each Store corresponds to one column family.
The two replicas are for backup; must every file in HDFS be backed up? Don't the other places need backups too?
Impedance mismatch
Redis tutorial | 菜鳥教程
http://www.runoob.com/redis/redis-tutorial.html
On snapshots:
Copying one out to someone else generates a log that should be deleted; after copying one in from someone else you have to adjust the network configuration.
The password for the current installation is root.
The teacher's is open; it is fdq, and the login password is 111111.
Following the professor half-blind, learning it all in a muddle.
Hadoop, HBase, MapReduce
Installing hadoop - [ hadoop study ]
http://www.kancloud.cn/zizhilong/hadoop/224101
蝦皮工作室 - "Give someone roses, and the fragrance stays on your hands."
http://www.xiapistudio.com/
Typical Writable classes in hadoop, explained - wuyudong - cnblogs
http://www.cnblogs.com/wuyudong/p/hadoop-writable.html
Hadoop study notes (1): implementing and summarizing the WordCount program - JD_Beatles - cnblogs
http://www.cnblogs.com/pengyingzhi/p/5361008.html#3401994
The Hadoop sample program WordCount explained, with examples - xwdreamer - cnblogs
http://www.cnblogs.com/xwdreamer/archive/2011/01/04/2297049.html
Hadoop WordCount explained - "sharp code is always concise" - ITeye
http://a123159521.iteye.com/blog/1226924
import java.io.IOException;
import java.net.URI;
import java.util.StringTokenizer;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;
/**
 *
 *
 * Summary: the processing flow of a Hadoop program
 *
 * 1. Parse the input files
 * 2. TokenizerMapper.map() runs once for every line in the files, splitting it into words
 * 3. Every word is sent to KeyCombiner.reduce(), which counts its occurrences
 * 4. Once all the words are counted, the partial results are merged and sent to IntSumReducer.reduce() for the final pass
 *
 */
public class WordCount {
    private static Configuration conf;
    private static String hdfsPath = "hdfs://192.168.150.3:9000"; // HDFS master
    private static Path inputPath = new Path(hdfsPath + "/tmp/wordcount/in"); // full HDFS input path
    private static String outPathStr = "/tmp/wordcount/out"; // HDFS output path
    private static Path outPath = new Path(hdfsPath + outPathStr); // full HDFS output path
    /**
     * The Mapper
     *
     * Of the four type parameters, the first two are the input Key/Value types and the last two are the
     * output Key/Value types; the output types here must match KeyCombiner's input types.
     */
    public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
        /**
         * one: the count for each word, 1 to begin with; the field could also be skipped by writing
         * context.write(word, new IntWritable(1)) directly
         */
        private final static IntWritable one = new IntWritable(1);
        /**
         * word: temporarily holds each word split out of the line
         */
        private Text word = new Text();
        /**
         * @param key the byte offset of the line; of little use here
         * @param value the content of the current line
         * @param context
         */
        public void map(Object key, Text value, Context context) throws IOException, InterruptedException {
            System.out.println("mapper --- line: " + value);
            StringTokenizer itr = new StringTokenizer(value.toString()); // split the line on whitespace
            while (itr.hasMoreTokens()) { // while there are tokens left
                word.set(itr.nextToken()); // the substring up to the next delimiter
                //System.out.println("mapper --- word: " + word);
                // send each word to the reduce side as a key/value pair
                context.write(word, one);
            }
        }
    }
    /**
     * The Reducer
     *
     * Of the four type parameters, the first two are the input Key/Value types and the last two are the
     * output Key/Value types.
     *
     * The input types here must match KeyCombiner's output types.
     *
     */
    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        /**
         * result: records how many times a word occurred
         */
        private IntWritable result = new IntWritable();
        /**
         * @param key the word
         * @param values the collection of counts emitted for that word
         * @param context
         */
        public void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) { // walk the collection of values
                int getval = val.get(); // take the value
                sum += getval; // accumulate the count
                //System.out.println("reduce --- getval:" + getval);
            }
            System.out.println("reduce --- key:" + key + ",sum:" + sum);
            result.set(sum); // store the accumulated count
            context.write(key, result); // emit the accumulated count
        }
    }
    /**
     * The Combiner
     *
     * Of the four type parameters, the first two are the input Key/Value types and the last two are the
     * output Key/Value types.
     *
     * The input types here must match TokenizerMapper's output types.
     *
     * The output types here must match IntSumReducer's input types.
     *
     */
    public static class KeyCombiner extends Reducer<Text, IntWritable, Text, IntWritable> {
        /**
         * result: records how many times a word occurred
         */
        private IntWritable result = new IntWritable();
        /**
         * @param key the word
         * @param values the collection of counts emitted for that word
         * @param context
         */
        public void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) { // walk the collection of values
                int getval = val.get(); // take the value
                sum += getval; // accumulate the count
                //System.out.println("reduce --- getval:" + getval);
            }
            System.out.println("combiner --- key:" + key + ",sum:" + sum);
            result.set(sum); // store the accumulated count
            context.write(key, result); // pass the result on to IntSumReducer
        }
    }
    /**
     *
     * Execution flow
     *
     * input files -> TokenizerMapper -> KeyCombiner -> IntSumReducer -> output files
     *
     * @param args
     * @throws Exception
     */
    public static void main(String[] args) throws Exception {
        System.setProperty("hadoop.home.dir", "D:\\soft\\hadoop-2.6.4");
        // delete the temporary output directory
        WordCount.deleteDir();
        conf = new Configuration();
        Job job = Job.getInstance(); // get a Job instance
        job.setJobName("word count"); // set the job name
        job.setJarByClass(WordCount.class); // set the main class
        job.setMapperClass(TokenizerMapper.class); // the class handling the map phase
        job.setCombinerClass(KeyCombiner.class); // the class doing the combining
        job.setReducerClass(IntSumReducer.class); // the class handling the reduce phase
        job.setOutputKeyClass(Text.class); // the output KEY class
        job.setOutputValueClass(IntWritable.class); // the output VALUE class
        FileInputFormat.addInputPath(job, inputPath); // the input path
        FileOutputFormat.setOutputPath(job, outPath); // where the output is persisted
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
    /**
     * Delete the temporary output directory
     */
    private static void deleteDir() {
        Configuration conf = new Configuration();
        FileSystem fs;
        try {
            fs = FileSystem.get(URI.create(hdfsPath), conf);
            Path p = new Path(outPathStr);
            boolean a = fs.delete(p, true);
            System.out.println(a);
        } catch (IOException e) {
            e.printStackTrace();
            System.out.println("delete Dir failed!");
        }
    }
}
import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {
    public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();
        public void map(Object key, Text value, Context context) throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, one);
            }
        }
    }
    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        private IntWritable result = new IntWritable();
        public void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
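A sketch of running the job on the cluster (the jar name and paths are assumptions; the input directory must already contain the text files):
hadoop jar wordcount.jar WordCount /tmp/wordcount/in /tmp/wordcount/out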
tar jxvf is the extraction command:
it extracts files compressed with bzip2.
-j filter the archive through bzip2 (*.bz2)
-x extract
-v verbose output while extracting
-f specify the archive file
Have a look at these:
tar -cvf a.tar a        create a tar archive of file a
tar -tvf a.tar          list the files the tar archive contains
tar -xvf a.tar          extract the tar archive
tar -rvf a.tar b        append file b to the archive a.tar
tar -Avf a.tar c.tar    append the archive c.tar to a.tar
tar -zcvf a.tar.gz a    create a gzip-compressed tar archive of file a
tar -ztvf a.tar.gz      list the contents of the gzipped archive of file a
tar -zxvf aa.tar.gz     extract the aa.tar.gz archive
tar -jcvf aa.tar.bz2 aa create a bzip2-compressed tar archive of file aa
tar -jtvf aa.tar.bz2    list the contents of the bzip2-compressed archive
tar -jxvf aa.tar.bz2    extract the aa.tar.bz2 archive
The StringTokenizer class - Techome - Sina blog
http://blog.sina.com.cn/s/blog_45c6564d0100kmee.html
start-hbase.sh
stop-hbase.sh
status
Deleting by a specified column family (sketched below)
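A sketch in the hbase shell (the table, row, and family names are made up): delete a single cell, or drop an entire column family from a table:
delete 't1', 'row1', 'cf1:q1'
alter 't1', 'delete' => 'cf1'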
11.22.2016
2016-11-22 17:33:08,101 INFO [main] util.ServerCommandLine: env:QTINC=/usr/lib64/qt-3.3/include
2016-11-22 17:33:08,101 INFO [main] util.ServerCommandLine: env:USER=hadoop
2016-11-22 17:33:08,101 INFO [main] util.ServerCommandLine: env:QTLIB=/usr/lib64/qt-3.3/lib
2016-11-22 17:33:08,101 INFO [main] util.ServerCommandLine: env:KDE_IS_PRELINKED=1
2016-11-22 17:33:08,101 INFO [main] util.ServerCommandLine: env:HBASE_SECURITY_LOGGER=INFO,RFAS
2016-11-22 17:33:08,101 INFO [main] util.ServerCommandLine: env:HOME=/home/hadoop
2016-11-22 17:33:08,101 INFO [main] util.ServerCommandLine: env:LESSOPEN=||/usr/bin/lesspipe.sh %s
2016-11-22 17:33:08,102 INFO [main] util.ServerCommandLine: env:HBASE_START_FILE=/tmp/hbase-hadoop-regionserver.autorestart
2016-11-22 17:33:08,102 INFO [main] util.ServerCommandLine: env:HBASE_LOGFILE=hbase-hadoop-regionserver-master.log
2016-11-22 17:33:08,102 INFO [main] util.ServerCommandLine: env:LANG=zh_CN.UTF-8
2016-11-22 17:33:08,102 INFO [main] util.ServerCommandLine: env:SSH_ASKPASS=/usr/libexec/openssh/gnome-ssh-askpass
2016-11-22 17:33:08,102 INFO [main] util.ServerCommandLine: env:HBASE_LOG_PREFIX=hbase-hadoop-regionserver-master
2016-11-22 17:33:08,102 INFO [main] util.ServerCommandLine: env:HBASE_IDENT_STRING=hadoop
2016-11-22 17:33:08,102 INFO [main] util.ServerCommandLine: vmName=Java HotSpot(TM) 64-Bit Server VM, vmVendor=Oracle Corporation, vmVersion=24.65-b04
2016-11-22 17:33:08,111 INFO [main] util.ServerCommandLine: vmInputArguments=[-Dproc_regionserver, -XX:OnOutOfMemoryError=kill -9 %p, -Xmx8G, -XX:+UseConcMarkSweepGC, -XX:PermSize=128m, -XX:MaxPermSize=128m, -Dhbase.log.dir=/home/hadoop/soft/hbase-1.2.4/logs, -Dhbase.log.file=hbase-hadoop-regionserver-master.log, -Dhbase.home.dir=/home/hadoop/soft/hbase-1.2.4, -Dhbase.id.str=hadoop, -Dhbase.root.logger=INFO,RFA, -Djava.library.path=/home/hadoop/soft/hadoop-2.6.4/lib/native, -Dhbase.security.logger=INFO,RFAS]
2016-11-22 17:33:08,594 WARN [main] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2016-11-22 17:33:09,651 INFO [main] regionserver.RSRpcServices: regionserver/master/192.168.150.3:16020 server-side HConnection retries=350
2016-11-22 17:33:11,094 INFO [main] ipc.SimpleRpcScheduler: Using deadline as user call queue, count=3
2016-11-22 17:33:11,261 INFO [main] ipc.RpcServer: regionserver/master/192.168.150.3:16020: started 10 reader(s) listening on port=16020
2016-11-22 17:33:12,096 INFO [main] impl.MetricsConfig: loaded properties from hadoop-metrics2-hbase.properties
2016-11-22 17:33:12,567 INFO [main] impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2016-11-22 17:33:12,567 INFO [main] impl.MetricsSystemImpl: HBase metrics system started
2016-11-22 17:33:16,916 INFO [main] fs.HFileSystem: Added intercepting call to namenode#getBlockLocations so can do block reordering using class class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2016-11-22 17:33:17,793 INFO [main] zookeeper.RecoverableZooKeeper: Process identifier=regionserver:16020 connecting to ZooKeeper ensemble=192.168.150.3:2181,192.168.150.4:2181,192.168.150.5:2181
2016-11-22 17:33:17,835 INFO [main] zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT
2016-11-22 17:33:17,835 INFO [main] zookeeper.ZooKeeper: Client environment:host.name=master
2016-11-22 17:33:17,835 INFO [main] zookeeper.ZooKeeper: Client environment:java.version=1.7.0_67
2016-11-22 17:33:17,835 INFO [main] zookeeper.ZooKeeper: Client environment:java.vendor=Oracle Corporation
2016-11-22 17:33:17,835 INFO [main] zookeeper.ZooKeeper: Client environment:java.home=/usr/java/jdk1.7.0_67/jre
2016-11-22 17:33:17,835 INFO [main] zookeeper.ZooKeeper: Client environment:java.class.path=/home/hadoop/soft/hbase-1.2.4/conf:/usr/java/jdk1.7.0_67/lib/tools.jar:/home/hadoop/soft/hbase-1.2.4:/home/hadoop/soft/hbase-1.2.4/lib/activation-1.1.jar:/home/hadoop/soft/hbase-1.2.4/lib/aopalliance-1.0.jar:/home/hadoop/soft/hbase-1.2.4/lib/apacheds-i18n-2.0.0-M15.jar:/home/hadoop/soft/hbase-1.2.4/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/home/hadoop/soft/hbase-1.2.4/lib/api-asn1-api-1.0.0-M20.jar:/home/hadoop/soft/hbase-1.2.4/lib/api-util-1.0.0-M20.jar:/home/hadoop/soft/hbase-1.2.4/lib/asm-3.1.jar:/home/hadoop/soft/hbase-1.2.4/lib/avro-1.7.4.jar:/home/hadoop/soft/hbase-1.2.4/lib/aws-java-sdk-core-1.10.77.jar:/home/hadoop/soft/hbase-1.2.4/lib/aws-java-sdk-s3-1.11.34.jar:/home/hadoop/soft/hbase-1.2.4/lib/commons-beanutils-1.7.0.jar:/home/hadoop/soft/hbase-1.2.4/lib/commons-beanutils-core-1.8.0.jar:/home/hadoop/soft/hbase-1.2.4/lib/commons-cli-1.2.jar:/home/hadoop/soft/hbase-1.2.4/lib/commons-codec-1.9.jar:/home/hadoop/soft/hbase-1.2.4/lib/commons-collections-3.2.2.jar:/home/hadoop/soft/hbase-1.2.4/lib/commons-compress-1.4.1.jar:/home/hadoop/soft/hbase-1.2.4/lib/commons-configuration-1.6.jar:/home/hadoop/soft/hbase-1.2.4/lib/commons-daemon-1.0.13.jar:/home/hadoop/soft/hbase-1.2.4/lib/commons-digester-1.8.jar:/home/hadoop/soft/hbase-1.2.4/lib/commons-el-1.0.jar:/home/hadoop/soft/hbase-1.2.4/lib/commons-httpclient-3.1.jar:/home/hadoop/soft/hbase-1.2.4/lib/commons-io-2.4.jar:/home/hadoop/soft/hbase-1.2.4/lib/commons-lang-2.6.jar:/home/hadoop/soft/hbase-1.2.4/lib/commons-logging-1.2.jar:/home/hadoop/soft/hbase-1.2.4/lib/commons-math-2.2.jar:/home/hadoop/soft/hbase-1.2.4/lib/commons-math3-3.1.1.jar:/home/hadoop/soft/hbase-1.2.4/lib/commons-net-3.1.jar:/home/hadoop/soft/hbase-1.2.4/lib/disruptor-3.3.0.jar:/home/hadoop/soft/hbase-1.2.4/lib/findbugs-annotations-1.3.9-1.jar:/home/hadoop/soft/hbase-1.2.4/lib/guava-12.0.1.jar:/home/hadoop/soft/hbase-1.2.4/lib/guice-3.0.jar:/home/hadoop/soft/hbase-1.2.4/lib/guice-servlet-3.0.jar:/home/hadoop/soft/hbase-1.2.4/lib/hadoop-annotations-2.6.4.jar:/home/hadoop/soft/hbase-1.2.4/lib/hadoop-ant-2.6.4.jar:/home/hadoop/soft/hbase-1.2.4/lib/hadoop-archives-2.6.4.jar:/home/hadoop/soft/hbase-1.2.4/lib/hadoop-archives-2.6.4-sources.jar:/home/hadoop/soft/hbase-1.2.4/lib/hadoop-archives-2.6.4-test-sources.jar:/home/hadoop/soft/hbase-1.2.4/lib/hadoop-auth-2.6.4.jar:/home/hadoop/soft/hbase-1.2.4/lib/hadoop-aws-2.6.4.jar:/home/hadoop/soft/hbase-1.2.4/lib/hadoop-common-2.6.4.jar:/home/hadoop/soft/hbase-1.2.4/lib/hadoop-common-2.6.4-sources.jar:/home/hadoop/soft/hbase-1.2.4/lib/hadoop-common-2.6.4-tests.jar:/home/hadoop/soft/hbase-1.2.4/lib/hadoop-common-2.6.4-test-sources.jar:/home/hadoop/soft/hbase-1.2.4/lib/hadoop-datajoin-2.6.4.jar:/home/hadoop/soft/hbase-1.2.4/lib/hadoop-datajoin-2.6.4-sources.jar:/home/hadoop/soft/hbase-1.2.4/lib/hadoop-datajoin-2.6.4-test-sources.jar:/home/hadoop/soft/hbase-1.2.4/lib/hadoop-distcp-2.6.4.jar:/home/hadoop/soft/hbase-1.2.4/lib/hadoop-distcp-2.6.4-sources.jar:/home/hadoop/soft/hbase-1.2.4/lib/hadoop-distcp-2.6.4-test-sources.jar:/home/hadoop/soft/hbase-1.2.4/lib/hadoop-extras-2.6.4.jar:/home/hadoop/soft/hbase-1.2.4/lib/hadoop-extras-2.6.4-sources.jar:/home/hadoop/soft/hbase-1.2.4/lib/hadoop-extras-2.6.4-test-sources.jar:/home/hadoop/soft/hbase-1.2.4/lib/hadoop-gridmix-2.6.4.jar:/home/hadoop/soft/hbase-1.2.4/lib/hadoop-gridmix-2.6.4-sources.jar:/home/hadoop/soft/hbase-1.2.4/lib/hadoop-gridmix-2.6.4-test-sources.jar:/home/hadoop/soft/hbase-1.2.4/lib/
hadoop-hdfs-2.6.4.jar:/home/hadoop/soft/hbase-1.2.4/lib/hadoop-hdfs-2.6.4-sources.jar:/home/hadoop/soft/hbase-1.2.4/lib/hadoop-hdfs-2.6.4-tests.jar:/home/hadoop/soft/hbase-1.2.4/lib/hadoop-hdfs-2.6.4-test-sources.jar:/home/hadoop/soft/hbase-1.2.4/lib/hadoop-hdfs-nfs-2.6.4.jar:/home/hadoop/soft/hbase-1.2.4/lib/hadoop-kms-2.6.4.jar:/home/hadoop/soft/hbase-1.2.4/lib/hadoop-mapreduce-client-app-2.6.4.jar:/home/hadoop/soft/hbase-1.2.4/lib/hadoop-mapreduce-client-app-2.6.4-sources.jar:/home/hadoop/soft/hbase-1.2.4/lib/hadoop-mapreduce-client-app-2.6.4-test-sources.jar:/home/hadoop/soft/hbase-1.2.4/lib/hadoop-mapreduce-client-common-2.6.4.jar:/home/hadoop/soft/hbase-1.2.4/lib/hadoop-mapreduce-client-common-2.6.4-sources.jar:/home/hadoop/soft/hbase-1.2.4/lib/hadoop-mapreduce-client-common-2.6.4-test-sources.jar:/home/hadoop/soft/hbase-1.2.4/lib/hadoop-mapreduce-client-core-2.6.4.jar:/home/hadoop/soft/hbase-1.2.4/lib/hadoop-mapreduce-client-core-2.6.4-sources.jar:/home/hadoop/soft/hbase-1.2.4/lib/hadoop-mapreduce-client-core-2.6.4-test-sources.jar:/home/hadoop/soft/hbase-1.2.4/lib/hadoop-mapreduce-client-hs-2.6.4.jar:/home/hadoop/soft/hbase-1.2.4/lib/hadoop-mapreduce-client-hs-2.6.4-sources.jar:/home/hadoop/soft/hbase-1.2.4/lib/hadoop-mapreduce-client-hs-2.6.4-test-sources.jar:/home/hadoop/soft/hbase-1.2.4/lib/hadoop-mapreduce-client-hs-plugins-2.6.4.jar:/home/hadoop/soft/hbase-1.2.4/lib/hadoop-mapreduce-client-hs-plugins-2.6.4-sources.jar:/home/hadoop/soft/hbase-1.2.4/lib/hadoop-mapreduce-client-hs-plugins-2.6.4-test-sources.jar:/home/hadoop/soft/hbase-1.2.4/lib/hadoop-mapreduce-client-jobclient-2.6.4.jar:/home/hadoop/soft/hbase-1.2.4/lib/hadoop-mapreduce-client-jobclient-2.6.4-sources.jar:/home/hadoop/soft/hbase-1.2.4/lib/hadoop-mapreduce-client-jobclient-2.6.4-tests.jar:/home/hadoop/soft/hbase-1.2.4/lib/hadoop-mapreduce-client-jobclient-2.6.4-test-sources.jar:/home/hadoop/soft/hbase-1.2.4/lib/hadoop-mapreduce-client-shuffle-2.6.4.jar:/home/hadoop/soft/hbase-1.2.4/lib/hadoop-mapreduce-client-shuffle-2.6.4-sources.jar:/home/hadoop/soft/hbase-1.2.4/lib/hadoop-mapreduce-client-shuffle-2.6.4-test-sources.jar:/home/hadoop/soft/hbase-1.2.4/lib/hadoop-mapreduce-examples-2.6.4.jar:/home/hadoop/soft/hbase-1.2.4/lib/hadoop-mapreduce-examples-2.6.4-sources.jar:/home/hadoop/soft/hbase-1.2.4/lib/hadoop-mapreduce-examples-2.6.4-test-sources.jar:/home/hadoop/soft/hbase-1.2.4/lib/hadoop-nfs-2.6.4.jar:/home/hadoop/soft/hbase-1.2.4/lib/hadoop-openstack-2.6.4.jar:/home/hadoop/soft/hbase-1.2.4/lib/hadoop-rumen-2.6.4.jar:/home/hadoop/soft/hbase-1.2.4/lib/hadoop-rumen-2.6.4-sources.jar:/home/hadoop/soft/hbase-1.2.4/lib/hadoop-rumen-2.6.4-test-sources.jar:/home/hadoop/soft/hbase-1.2.4/lib/hadoop-sls-2.6.4.jar:/home/hadoop/soft/hbase-1.2.4/lib/hadoop-sls-2.6.4-sources.jar:/home/hadoop/soft/hbase-1.2.4/lib/hadoop-sls-2.6.4-test-sources.jar:/home/hadoop/soft/hbase-1.2.4/lib/hadoop-streaming-2.6.4.jar:/home/hadoop/soft/hbase-1.2.4/lib/hadoop-streaming-2.6.4-sources.jar:/home/hadoop/soft/hbase-1.2.4/lib/hadoop-streaming-2.6.4-test-sources.jar:/home/hadoop/soft/hbase-1.2.4/lib/hadoop-yarn-api-2.6.4.jar:/home/hadoop/soft/hbase-1.2.4/lib/hadoop-yarn-api-2.6.4-sources.jar:/home/hadoop/soft/hbase-1.2.4/lib/hadoop-yarn-applications-distributedshell-2.6.4.jar:/home/hadoop/soft/hbase-1.2.4/lib/hadoop-yarn-applications-distributedshell-2.6.4-sources.jar:/home/hadoop/soft/hbase-1.2.4/lib/hadoop-yarn-applications-distributedshell-2.6.4-test-sources.jar:/home/hadoop/soft/hbase-1.2.4/lib/hadoop-yarn-applications-unmanaged-am-launcher-2.6
.4.jar:/home/hadoop/soft/hbase-1.2.4/lib/hadoop-yarn-applications-unmanaged-am-launcher-2.6.4-sources.jar:/home/hadoop/soft/hbase-1.2.4/lib/hadoop-yarn-applications-unmanaged-am-launcher-2.6.4-test-sources.jar:/home/hadoop/soft/hbase-1.2.4/lib/hadoop-yarn-client-2.6.4.jar:/home/hadoop/soft/hbase-1.2.4/lib/hadoop-yarn-client-2.6.4-sources.jar:/home/hadoop/soft/hbase-1.2.4/lib/hadoop-yarn-client-2.6.4-test-sources.jar:/home/hadoop/soft/hbase-1.2.4/lib/hadoop-yarn-common-2.6.4.jar:/home/hadoop/soft/hbase-1.2.4/lib/hadoop-yarn-common-2.6.4-sources.jar:/home/hadoop/soft/hbase-1.2.4/lib/hadoop-yarn-common-2.6.4-test-sources.jar:/home/hadoop/soft/hbase-1.2.4/lib/hadoop-yarn-registry-2.6.4.jar:/home/hadoop/soft/hbase-1.2.4/lib/hadoop-yarn-server-applicationhistoryservice-2.6.4.jar:/home/hadoop/soft/hbase-1.2.4/lib/hadoop-yarn-server-applicationhistoryservice-2.6.4-sources.jar:/home/hadoop/soft/hbase-1.2.4/lib/hadoop-yarn-server-applicationhistoryservice-2.6.4-test-sources.jar:/home/hadoop/soft/hbase-1.2.4/lib/hadoop-yarn-server-common-2.6.4.jar:/home/hadoop/soft/hbase-1.2.4/lib/hadoop-yarn-server-common-2.6.4-sources.jar:/home/hadoop/soft/hbase-1.2.4/lib/hadoop-yarn-server-common-2.6.4-test-sources.jar:/home/hadoop/soft/hbase-1.2.4/lib/hadoop-yarn-server-nodemanager-2.6.4.jar:/home/hadoop/soft/hbase-1.2.4/lib/hadoop-yarn-server-nodemanager-2.6.4-sources.jar:/home/hadoop/soft/hbase-1.2.4/lib/hadoop-yarn-server-nodemanager-2.6.4-test-sources.jar:/home/hadoop/soft/hbase-1.2.4/lib/hadoop-yarn-server-resourcemanager-2.6.4.jar:/home/hadoop/soft/hbase-1.2.4/lib/hadoop-yarn-server-resourcemanager-2.6.4-sources.jar:/home/hadoop/soft/hbase-1.2.4/lib/hadoop-yarn-server-resourcemanager-2.6.4-test-sources.jar:/home/hadoop/soft/hbase-1.2.4/lib/hadoop-yarn-server-tests-2.6.4.jar:/home/hadoop/soft/hbase-1.2.4/lib/hadoop-yarn-server-tests-2.6.4-sources.jar:/home/hadoop/soft/hbase-1.2.4/lib/hadoop-yarn-server-tests-2.6.4-tests.jar:/home/hadoop/soft/hbase-1.2.4/lib/hadoop-yarn-server-tests-2.6.4-test-sources.jar:/home/hadoop/soft/hbase-1.2.4/lib/hadoop-yarn-server-web-proxy-2.6.4.jar:/home/hadoop/soft/hbase-1.2.4/lib/hadoop-yarn-server-web-proxy-2.6.4-sources.jar:/home/hadoop/soft/hbase-1.2.4/lib/hadoop-yarn-server-web-proxy-2.6.4-test-sources.jar:/home/hadoop/soft/hbase-1.2.4/lib/hbase-annotations-1.2.4.jar:/home/hadoop/soft/hbase-1.2.4/lib/hbase-annotations-1.2.4-tests.jar:/home/hadoop/soft/hbase-1.2.4/lib/hbase-client-1.2.4.jar:/home/hadoop/soft/hbase-1.2.4/lib/hbase-common-1.2.4.jar:/home/hadoop/soft/hbase-1.2.4/lib/hbase-common-1.2.4-tests.jar:/home/hadoop/soft/hbase-1.2.4/lib/hbase-examples-1.2.4.jar:/home/hadoop/soft/hbase-1.2.4/lib/hbase-external-blockcache-1.2.4.jar:/home/hadoop/soft/hbase-1.2.4/lib/hbase-hadoop2-compat-1.2.4.jar:/home/hadoop/soft/hbase-1.2.4/lib/hbase-hadoop-compat-1.2.4.jar:/home/hadoop/soft/hbase-1.2.4/lib/hbase-it-1.2.4.jar:/home/hadoop/soft/hbase-1.2.4/lib/hbase-it-1.2.4-tests.jar:/home/hadoop/soft/hbase-1.2.4/lib/hbase-prefix-tree-1.2.4.jar:/home/hadoop/soft/hbase-1.2.4/lib/hbase-procedure-1.2.4.jar:/home/hadoop/soft/hbase-1.2.4/lib/hbase-protocol-1.2.4.jar:/home/hadoop/soft/hbase-1.2.4/lib/hbase-resource-bundle-1.2.4.jar:/home/hadoop/soft/hbase-1.2.4/lib/hbase-rest-1.2.4.jar:/home/hadoop/soft/hbase-1.2.4/lib/hbase-server-1.2.4.jar:/home/hadoop/soft/hbase-1.2.4/lib/hbase-server-1.2.4-tests.jar:/home/hadoop/soft/hbase-1.2.4/lib/hbase-shell-1.2.4.jar:/home/hadoop/soft/hbase-1.2.4/lib/hbase-thrift-1.2.4.jar:/home/hadoop/soft/hbase-1.2.4/lib/htrace-core-3.1.0-incubating.jar:/home/hadoop/so
ft/hbase-1.2.4/lib/httpclient-4.2.5.jar:/home/hadoop/soft/hbase-1.2.4/lib/httpcore-4.4.1.jar:/home/hadoop/soft/hbase-1.2.4/lib/jackson-core-asl-1.9.13.jar:/home/hadoop/soft/hbase-1.2.4/lib/jackson-jaxrs-1.9.13.jar:/home/hadoop/soft/hbase-1.2.4/lib/jackson-mapper-asl-1.9.13.jar:/home/hadoop/soft/hbase-1.2.4/lib/jackson-xc-1.9.13.jar:/home/hadoop/soft/hbase-1.2.4/lib/jamon-runtime-2.4.1.jar:/home/hadoop/soft/hbase-1.2.4/lib/jasper-compiler-5.5.23.jar:/home/hadoop/soft/hbase-1.2.4/lib/jasper-runtime-5.5.23.jar:/home/hadoop/soft/hbase-1.2.4/lib/javax.inject-1.jar:/home/hadoop/soft/hbase-1.2.4/lib/java-xmlbuilder-0.4.jar:/home/hadoop/soft/hbase-1.2.4/lib/jaxb-api-2.2.2.jar:/home/hadoop/soft/hbase-1.2.4/lib/jaxb-impl-2.2.3-1.jar:/home/hadoop/soft/hbase-1.2.4/lib/jcodings-1.0.8.jar:/home/hadoop/soft/hbase-1.2.4/lib/jersey-client-1.9.jar:/home/hadoop/soft/hbase-1.2.4/lib/jersey-core-1.9.jar:/home/hadoop/soft/hbase-1.2.4/lib/jersey-guice-1.9.jar:/home/hadoop/soft/hbase-1.2.4/lib/jersey-json-1.9.jar:/home/hadoop/soft/hbase-1.2.4/lib/jersey-server-1.9.jar:/home/hadoop/soft/hbase-1.2.4/lib/jets3t-0.9.0.jar:/home/hadoop/soft/hbase-1.2.4/lib/jettison-1.3.3.jar:/home/hadoop/soft/hbase-1.2.4/lib/jetty-6.1.26.jar:/home/hadoop/soft/hbase-1.2.4/lib/jetty-sslengine-6.1.26.jar:/home/hadoop/soft/hbase-1.2.4/lib/jetty-util-6.1.26.jar:/home/hadoop/soft/hbase-1.2.4/lib/joni-2.1.2.jar:/home/hadoop/soft/hbase-1.2.4/lib/jruby-complete-1.6.8.jar:/home/hadoop/soft/hbase-1.2.4/lib/jsch-0.1.42.jar:/home/hadoop/soft/hbase-1.2.4/lib/jsp-2.1-6.1.14.jar:/home/hadoop/soft/hbase-1.2.4/lib/jsp-api-2.1-6.1.14.jar:/home/hadoop/soft/hbase-1.2.4/lib/junit-4.12.jar:/home/hadoop/soft/hbase-1.2.4/lib/leveldbjni-all-1.8.jar:/home/hadoop/soft/hbase-1.2.4/lib/libthrift-0.9.3.jar:/home/hadoop/soft/hbase-1.2.4/lib/log4j-1.2.17.jar:/home/hadoop/soft/hbase-1.2.4/lib/metrics-core-2.2.0.jar:/home/hadoop/soft/hbase-1.2.4/lib/netty-all-4.0.23.Final.jar:/home/hadoop/soft/hbase-1.2.4/lib/paranamer-2.3.jar:/home/hadoop/soft/hbase-1.2.4/lib/protobuf-java-2.5.0.jar:/home/hadoop/soft/hbase-1.2.4/lib/servlet-api-2.5-6.1.14.jar:/home/hadoop/soft/hbase-1.2.4/lib/servlet-api-2.5.jar:/home/hadoop/soft/hbase-1.2.4/lib/slf4j-api-1.7.7.jar:/home/hadoop/soft/hbase-1.2.4/lib/slf4j-log4j12-1.7.5.jar:/home/hadoop/soft/hbase-1.2.4/lib/snappy-java-1.0.4.1.jar:/home/hadoop/soft/hbase-1.2.4/lib/spymemcached-2.11.6.jar:/home/hadoop/soft/hbase-1.2.4/lib/xmlenc-0.52.jar:/home/hadoop/soft/hbase-1.2.4/lib/xz-1.0.jar:/home/hadoop/soft/hbase-1.2.4/lib/zookeeper-3.4.6.jar:/home/hadoop/soft/hadoop-2.6.4/etc/hadoop:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/common/lib/curator-recipes-2.6.0.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/common/lib/jettison-1.1.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/common/lib/jsr305-1.3.9.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/common/lib/jsp-api-2.1.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/common/lib/commons-lang-2.6.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/common/lib/commons-collections-3.2.2.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/common/lib/commons-configuration-1.6.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/common/lib/curator-client-2.6.0.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/common/lib/asm-3.2.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/common/lib/jackson-xc-1.9.13.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/common/lib/jackson-mapper-asl-1.9.13.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/common/lib/jackson-jaxrs-1.9.13.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/common
/lib/commons-beanutils-core-1.8.0.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/common/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/common/lib/jasper-runtime-5.5.23.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/common/lib/gson-2.2.4.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/common/lib/htrace-core-3.0.4.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/common/lib/java-xmlbuilder-0.4.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/common/lib/hadoop-annotations-2.6.4.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/common/lib/httpclient-4.2.5.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/common/lib/mockito-all-1.8.5.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/common/lib/api-asn1-api-1.0.0-M20.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/common/lib/commons-io-2.4.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/common/lib/hadoop-auth-2.6.4.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/common/lib/apacheds-i18n-2.0.0-M15.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/common/lib/jersey-server-1.9.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/common/lib/commons-httpclient-3.1.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/common/lib/jersey-json-1.9.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/common/lib/jsch-0.1.42.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/common/lib/activation-1.1.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/common/lib/commons-cli-1.2.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/common/lib/jersey-core-1.9.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/common/lib/paranamer-2.3.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/common/lib/zookeeper-3.4.6.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/common/lib/httpcore-4.2.5.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/common/lib/commons-el-1.0.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/common/lib/xmlenc-0.52.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/common/lib/hamcrest-core-1.3.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/common/lib/guava-11.0.2.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/common/lib/commons-digester-1.8.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/common/lib/jetty-util-6.1.26.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/common/lib/jets3t-0.9.0.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/common/lib/jasper-compiler-5.5.23.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/common/lib/jackson-core-asl-1.9.13.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/common/lib/slf4j-api-1.7.5.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/common/lib/curator-framework-2.6.0.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/common/lib/api-util-1.0.0-M20.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/common/lib/commons-codec-1.4.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/common/lib/commons-logging-1.1.3.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/common/lib/junit-4.11.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/common/lib/stax-api-1.0-2.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/common/lib/log4j-1.2.17.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/common/lib/avro-1.7.4.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/common/li
b/servlet-api-2.5.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/common/lib/jetty-6.1.26.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/common/lib/commons-compress-1.4.1.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/common/lib/commons-math3-3.1.1.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/common/lib/commons-net-3.1.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/common/lib/netty-3.6.2.Final.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/common/lib/xz-1.0.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/common/hadoop-nfs-2.6.4.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/common/hadoop-common-2.6.4.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/common/hadoop-common-2.6.4-tests.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/hdfs:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/hdfs/lib/jsr305-1.3.9.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/hdfs/lib/jsp-api-2.1.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/hdfs/lib/asm-3.2.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/hdfs/lib/jackson-mapper-asl-1.9.13.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/hdfs/lib/jasper-runtime-5.5.23.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/hdfs/lib/htrace-core-3.0.4.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/hdfs/lib/commons-io-2.4.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/hdfs/lib/xml-apis-1.3.04.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/hdfs/lib/xercesImpl-2.9.1.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/hdfs/lib/commons-el-1.0.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/hdfs/lib/guava-11.0.2.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/hdfs/lib/jackson-core-asl-1.9.13.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/hdfs/hadoop-hdfs-nfs-2.6.4.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/hdfs/hadoop-hdfs-2.6.4-tests.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/hdfs/hadoop-hdfs-2.6.4.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/yarn/lib/jettison-1.1.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/yarn/lib/jsr305-1.3.9.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/yarn/lib/leveldbjni-all-1.8.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/yarn/lib/commons-lang-2.6.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/yarn/lib/commons-collections-3.2.2.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/yarn/lib/jersey-client-1.9.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/yarn/lib/asm-3.2.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/yarn/lib/jackson-xc-1.9.13
.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/yarn/lib/jackson-mapper-asl-1.9.13.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/yarn/lib/jackson-jaxrs-1.9.13.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/yarn/lib/javax.inject-1.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/yarn/lib/commons-io-2.4.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/yarn/lib/jersey-server-1.9.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/yarn/lib/aopalliance-1.0.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/yarn/lib/commons-httpclient-3.1.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/yarn/lib/jersey-json-1.9.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/yarn/lib/jaxb-impl-2.2.3-1.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/yarn/lib/activation-1.1.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/yarn/lib/commons-cli-1.2.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/yarn/lib/jersey-core-1.9.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/yarn/lib/zookeeper-3.4.6.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/yarn/lib/guice-3.0.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/yarn/lib/guava-11.0.2.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/yarn/lib/jetty-util-6.1.26.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/yarn/lib/jackson-core-asl-1.9.13.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/yarn/lib/commons-codec-1.4.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/yarn/lib/log4j-1.2.17.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/yarn/lib/jline-0.9.94.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/yarn/lib/servlet-api-2.5.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/yarn/lib/jetty-6.1.26.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/yarn/lib/netty-3.6.2.Final.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/yarn/lib/xz-1.0.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/yarn/hadoop-yarn-registry-2.6.4.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/yarn/hadoop-yarn-client-2.6.4.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/yarn/hadoop-yarn-server-tests-2.6.4.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.6.4.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.6.4.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/yarn/hadoop-yarn-api-2.6.4.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.6.4.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.6.4.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/yarn/hadoop-yarn-common-2.6.4.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.6.4.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-2.6.4.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/yarn/hadoop-yarn-server-common-2.6.4.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/mapreduce/lib/leveldbjni-all-1.8.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/mapreduce/lib/asm-3.2.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/m
apreduce/lib/jackson-mapper-asl-1.9.13.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/mapreduce/lib/javax.inject-1.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/mapreduce/lib/hadoop-annotations-2.6.4.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/mapreduce/lib/commons-io-2.4.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/mapreduce/lib/guice-3.0.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/mapreduce/lib/hamcrest-core-1.3.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/mapreduce/lib/jackson-core-asl-1.9.13.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/mapreduce/lib/junit-4.11.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/mapreduce/lib/xz-1.0.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.6.4.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.6.4-tests.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.6.4.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.6.4.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.4.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.6.4.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.6.4.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.6.4.jar:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.6.4.jar:/home/hadoop/soft/hadoop-2.6.4/contrib/capacity-scheduler/*.jar
2016-11-22 17:33:17,836 INFO [main] zookeeper.ZooKeeper: Client environment:java.library.path=/home/hadoop/soft/hadoop-2.6.4/lib/native
2016-11-22 17:33:17,836 INFO [main] zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
2016-11-22 17:33:17,836 INFO [main] zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
2016-11-22 17:33:17,836 INFO [main] zookeeper.ZooKeeper: Client environment:os.name=Linux
2016-11-22 17:33:17,836 INFO [main] zookeeper.ZooKeeper: Client environment:os.arch=amd64
2016-11-22 17:33:17,836 INFO [main] zookeeper.ZooKeeper: Client environment:os.version=2.6.32-642.el6.x86_64
2016-11-22 17:33:17,836 INFO [main] zookeeper.ZooKeeper: Client environment:user.name=hadoop
2016-11-22 17:33:17,836 INFO [main] zookeeper.ZooKeeper: Client environment:user.home=/home/hadoop
2016-11-22 17:33:17,836 INFO [main] zookeeper.ZooKeeper: Client environment:user.dir=/home/hadoop/soft/hbase-1.2.4
2016-11-22 17:33:17,844 INFO [main] zookeeper.ZooKeeper: Initiating client connection, connectString=192.168.150.3:2181,192.168.150.4:2181,192.168.150.5:2181 sessionTimeout=90000 watcher=regionserver:160200x0, quorum=192.168.150.3:2181,192.168.150.4:2181,192.168.150.5:2181, baseZNode=/hbase
2016-11-22 17:33:17,953 INFO [main-SendThread(192.168.150.5:2181)] zookeeper.ClientCnxn: Opening socket connection to server 192.168.150.5/192.168.150.5:2181. Will not attempt to authenticate using SASL (unknown error)
2016-11-22 17:33:17,998 INFO [main-SendThread(192.168.150.5:2181)] zookeeper.ClientCnxn: Socket connection established to 192.168.150.5/192.168.150.5:2181, initiating session
2016-11-22 17:33:18,035 INFO [main-SendThread(192.168.150.5:2181)] zookeeper.ClientCnxn: Session establishment complete on server 192.168.150.5/192.168.150.5:2181, sessionid = 0x3588b8bd1a70002, negotiated timeout = 40000
2016-11-22 17:33:18,201 INFO [RpcServer.responder] ipc.RpcServer: RpcServer.responder: starting
2016-11-22 17:33:18,202 INFO [RpcServer.listener,port=16020] ipc.RpcServer: RpcServer.listener,port=16020: starting
2016-11-22 17:33:18,659 INFO [main] mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2016-11-22 17:33:18,670 INFO [main] http.HttpRequestLog: Http request log for http.requests.regionserver is not defined
2016-11-22 17:33:18,717 INFO [main] http.HttpServer: Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter)
2016-11-22 17:33:18,718 INFO [main] http.HttpServer: Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter)
2016-11-22 17:33:18,726 INFO [main] http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver
2016-11-22 17:33:18,726 INFO [main] http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2016-11-22 17:33:18,726 INFO [main] http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2016-11-22 17:33:18,810 INFO [main] http.HttpServer: Jetty bound to port 16030
2016-11-22 17:33:18,810 INFO [main] mortbay.log: jetty-6.1.26
2016-11-22 17:33:21,264 INFO [main] mortbay.log: Started SelectChannelConnector@0.0.0.0:16030
2016-11-22 17:33:21,631 INFO [regionserver/master/192.168.150.3:16020] zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x66bc473e connecting to ZooKeeper ensemble=192.168.150.3:2181,192.168.150.4:2181,192.168.150.5:2181
2016-11-22 17:33:21,632 INFO [regionserver/master/192.168.150.3:16020] zookeeper.ZooKeeper: Initiating client connection, connectString=192.168.150.3:2181,192.168.150.4:2181,192.168.150.5:2181 sessionTimeout=90000 watcher=hconnection-0x66bc473e0x0, quorum=192.168.150.3:2181,192.168.150.4:2181,192.168.150.5:2181, baseZNode=/hbase
2016-11-22 17:33:21,637 INFO [regionserver/master/192.168.150.3:16020-SendThread(192.168.150.4:2181)] zookeeper.ClientCnxn: Opening socket connection to server 192.168.150.4/192.168.150.4:2181. Will not attempt to authenticate using SASL (unknown error)
2016-11-22 17:33:21,641 INFO [regionserver/master/192.168.150.3:16020-SendThread(192.168.150.4:2181)] zookeeper.ClientCnxn: Socket connection established to 192.168.150.4/192.168.150.4:2181, initiating session
2016-11-22 17:33:21,648 INFO [regionserver/master/192.168.150.3:16020-SendThread(192.168.150.4:2181)] zookeeper.ClientCnxn: Session establishment complete on server 192.168.150.4/192.168.150.4:2181, sessionid = 0x2588b290c690004, negotiated timeout = 40000
2016-11-22 17:33:22,008 INFO [regionserver/master/192.168.150.3:16020] regionserver.HRegionServer: ClusterId : 3c1b7721-de8f-42a6-90a9-8c0a694da768
2016-11-22 17:33:22,216 INFO [regionserver/master/192.168.150.3:16020] regionserver.MemStoreFlusher: globalMemStoreLimit=3.2 G, globalMemStoreLimitLowMark=3.0 G, maxHeap=8.0 G
2016-11-22 17:33:22,241 INFO [regionserver/master/192.168.150.3:16020] regionserver.HRegionServer: CompactionChecker runs every 10sec
2016-11-22 17:33:22,362 INFO [regionserver/master/192.168.150.3:16020] regionserver.RegionServerCoprocessorHost: System coprocessor loading is enabled
2016-11-22 17:33:22,362 INFO [regionserver/master/192.168.150.3:16020] regionserver.RegionServerCoprocessorHost: Table coprocessor loading is enabled
2016-11-22 17:33:22,376 INFO [regionserver/master/192.168.150.3:16020] regionserver.HRegionServer: reportForDuty to master=master,16000,1479807189530 with port=16020, startcode=1479807192653
2016-11-22 17:33:22,996 WARN [regionserver/master/192.168.150.3:16020] regionserver.HRegionServer: reportForDuty failed; sleeping and then retrying.
2016-11-22 17:33:26,000 INFO [regionserver/master/192.168.150.3:16020] regionserver.HRegionServer: reportForDuty to master=master,16000,1479807189530 with port=16020, startcode=1479807192653
2016-11-22 17:33:26,084 INFO [regionserver/master/192.168.150.3:16020] hfile.CacheConfig: Allocating LruBlockCache size=3.20 GB, blockSize=64 KB
2016-11-22 17:33:26,103 INFO [regionserver/master/192.168.150.3:16020] hfile.CacheConfig: Created cacheConfig: blockCache=LruBlockCache{blockCount=0, currentSize=3520936, freeSize=3428966488, maxSize=3432487424, heapSize=3520936, minSize=3260862976, minFactor=0.95, multiSize=1630431488, multiFactor=0.5, singleSize=815215744, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-11-22 17:33:26,336 INFO [regionserver/master/192.168.150.3:16020] wal.WALFactory: Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.DefaultWALProvider
2016-11-22 17:33:26,390 INFO [regionserver/master/192.168.150.3:16020] wal.FSHLog: WAL configuration: blocksize=128 MB, rollsize=121.60 MB, prefix=master%2C16020%2C1479807192653.default, suffix=, logDir=hdfs://192.168.150.3:9000/hbase/WALs/master,16020,1479807192653, archiveDir=hdfs://192.168.150.3:9000/hbase/oldWALs
2016-11-22 17:33:26,655 INFO [regionserver/master/192.168.150.3:16020] wal.FSHLog: Slow sync cost: 199 ms, current pipeline: []
2016-11-22 17:33:26,656 INFO [regionserver/master/192.168.150.3:16020] wal.FSHLog: New WAL /hbase/WALs/master,16020,1479807192653/master%2C16020%2C1479807192653.default.1479807206391
2016-11-22 17:33:26,742 INFO [regionserver/master/192.168.150.3:16020] regionserver.MetricsRegionServerWrapperImpl: Computing regionserver metrics every 5000 milliseconds
2016-11-22 17:33:26,761 INFO [regionserver/master/192.168.150.3:16020] regionserver.ReplicationSourceManager: Current list of replicators: [master,16020,1479807192653, slave1,16020,1479712512545, master,16020,1479712420627] other RSs: [master,16020,1479807192653, slave2,16020,1479813638869, slave1,16020,1479807188273]
2016-11-22 17:33:27,468 INFO [main-EventThread] replication.ReplicationTrackerZKImpl: /hbase/rs/slave2,16020,1479813638869 znode expired, triggering replicatorRemoved event
2016-11-22 17:33:27,490 INFO [SplitLogWorker-master:16020] regionserver.SplitLogWorker: SplitLogWorker master,16020,1479807192653 starting
2016-11-22 17:33:27,495 INFO [regionserver/master/192.168.150.3:16020] regionserver.HeapMemoryManager: Starting HeapMemoryTuner chore.
2016-11-22 17:33:27,518 INFO [regionserver/master/192.168.150.3:16020] regionserver.HRegionServer: Serving as master,16020,1479807192653, RpcServer on master/192.168.150.3:16020, sessionid=0x3588b8bd1a70002
2016-11-22 17:33:27,542 INFO [regionserver/master/192.168.150.3:16020] quotas.RegionServerQuotaManager: Quota support disabled
[hadoop@master soft]$ hbase shell
2016-11-22 17:34:14,980 WARN [main] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/hadoop/soft/hbase-1.2.4/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/hadoop/soft/hadoop-2.6.4/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 1.2.4, r67592f3d062743907f8c5ae00dbbe1ae4f69e5af, Tue Oct 25 18:10:20 CDT 2016
hbase(main):001:0> list
TABLE
0 row(s) in 0.3460 seconds
=> []
hbase(main):002:0> create 'scores','grad','course'
0 row(s) in 1.3900 seconds
=> Hbase::Table - scores
hbase(main):003:0> put 'scores','zkd','grad:','5'
0 row(s) in 0.3950 seconds
hbase(main):004:0> list
TABLE
scores
1 row(s) in 0.0220 seconds
=> ["scores"]
hbase(main):005:0> put 'scores','zkd','course:math','97'
0 row(s) in 0.0260 seconds
hbase(main):006:0> get 'scores','zkd'
COLUMN CELL
course:math timestamp=1479807433445, value=97
grad: timestamp=1479807365109, value=5
2 row(s) in 0.0730 seconds
hbase(main):007:0> scan 'scores'
ROW COLUMN+CELL
zkd column=course:math, timestamp=1479807433445, value=97
zkd column=grad:, timestamp=1479807365109, value=5
1 row(s) in 0.0450 seconds
hbase(main):008:0> list
TABLE
scores
1 row(s) in 0.0180 seconds
=> ["scores"]
hbase(main):009:0> disable 'menmber
hbase(main):010:0' disable 'menmber'
hbase(main):011:0' disable 'member'
hbase(main):012:0'
hbase(main):013:0' disable 'scores'
hbase(main):014:0' '
SyntaxError: (hbase):10: syntax error, unexpected tIDENTIFIER
disable 'menmber'
^
(The missing closing quote after 'menmber left the shell in string-continuation mode, which is why the prompt ends in ' instead of >. The following commands were swallowed into the unterminated string until the lone ' finally closed it and produced the SyntaxError.)
hbase(main):015:0> disable 'scores'
0 row(s) in 2.3940 seconds
hbase(main):016:0> drop 'scores'
0 row(s) in 1.3230 seconds
hbase(main):017:0> describe 'member'
ERROR: Unknown table member!
Here is some help for this command:
Describe the named table. For example:
hbase> describe 't1'
hbase> describe 'ns1:t1'
Alternatively, you can use the abbreviated 'desc' for the same thing.
hbase> desc 't1'
hbase> desc 'ns1:t1'
hbase(main):018:0> create 'member','member_id','address','info'
0 row(s) in 1.2870 seconds
=> Hbase::Table - member
hbase(main):019:0> list
TABLE
member
1 row(s) in 0.0120 seconds
=> ["member"]
hbase(main):020:0> describe 'member'
Table member is ENABLED
member
COLUMN FAMILIES DESCRIPTION
{NAME => 'address', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', VERSIONS => '1', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => 'FOREVER', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}
{NAME => 'info', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', VERSIONS => '1', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => 'FOREVER', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}
{NAME => 'member_id', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', VERSIONS => '1', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => 'FOREVER', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}
3 row(s) in 0.0720 seconds
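Creating a table with explicit column families from Java goes through the Admin API. A sketch in the 1.2.x style (HTableDescriptor/HColumnDescriptor were deprecated in later releases), reusing the Configuration from the sketch above; note the shell create above used all defaults, while the setMaxVersions(3) line here only shows how the default VERSIONS => '1' could be changed:
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.Admin;
try (Connection conn = ConnectionFactory.createConnection(conf);
     Admin admin = conn.getAdmin()) {
    // equivalent of: create 'member','member_id','address','info'
    HTableDescriptor desc = new HTableDescriptor(TableName.valueOf("member"));
    desc.addFamily(new HColumnDescriptor("member_id"));
    desc.addFamily(new HColumnDescriptor("address"));
    // keep 3 versions instead of the default VERSIONS => '1' shown by describe
    desc.addFamily(new HColumnDescriptor("info").setMaxVersions(3));
    admin.createTable(desc);
}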
hbase(main):021:0> disable 'member'
0 row(s) in 2.3090 seconds
hbase(main):022:0> list
TABLE
member
1 row(s) in 0.0170 seconds
=> ["member"]
hbase(main):023:0> is_disabled 'member'
true
0 row(s) in 0.0220 seconds
hbase(main):024:0> is_disabled 'scores'
ERROR: Unknown table scores!
Here is some help for this command:
Is named table disabled? For example:
hbase> is_disabled 't1'
hbase> is_disabled 'ns1:t1'
hbase(main):025:0> enable 'member'
0 row(s) in 1.3500 seconds
hbase(main):026:0> create 'tmp_member','member'
0 row(s) in 1.2740 seconds
=> Hbase::Table - tmp_member
hbase(main):027:0> describe 'tmp_member'
Table tmp_member is ENABLED
tmp_member
COLUMN FAMILIES DESCRIPTION
{NAME => 'member', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW', REPLICATION_SCOPE => '0', VERSIONS => '1', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => 'FOREVER', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}
1 row(s) in 0.0320 seconds
hbase(main):028:0> list
TABLE
member
tmp_member
2 row(s) in 0.0160 seconds
=> ["member", "tmp_member"]
hbase(main):029:0> diable 'tmp_member'
NoMethodError: undefined method `diable' for #<Object:0x64d4ebfb>
hbase(main):030:0> disable 'tmp_member'
0 row(s) in 2.2810 seconds
hbase(main):031:0> put 'member','xueba','info:age','25'
0 row(s) in 0.0250 seconds
hbase(main):032:0> put 'member','xueba','info:birthday','1989-06-19'
0 row(s) in 0.0070 seconds
hbase(main):033:0> put 'member','xueba','info:company','tecent'
0 row(s) in 0.0070 seconds
hbase(main):034:0> put 'member','xueba','address:contry','china'
0 row(s) in 0.0070 seconds
hbase(main):035:0> put 'member','xueba','address:province','guangdong'
0 row(s) in 0.0080 seconds
hbase(main):036:0> put 'member','xueba','address:city','shenzhen'
0 row(s) in 0.0080 seconds
hbase(main):037:0> put 'member','xiaoming','info:age','24'
0 row(s) in 0.0070 seconds
hbase(main):038:0> put 'member','xiaoming','info:birthday','1990-03-22'
0 row(s) in 0.0060 seconds
hbase(main):039:0> put 'member','xiaoming','info:company','tecent'
0 row(s) in 0.0120 seconds
hbase(main):040:0> put 'member','xiaoming','info:favorite','movie'
0 row(s) in 0.0070 seconds
hbase(main):041:0> put 'member','xiaoming','address:contry','china'
0 row(s) in 0.0060 seconds
hbase(main):042:0> put 'member','xiaoming','address:province','guangdong'
0 row(s) in 0.0070 seconds
hbase(main):043:0> put 'member','xiaoming','address:city','guangzhou'
0 row(s) in 0.0100 seconds
hbase(main):044:0> scan 'member'
ROW COLUMN+CELL
xiaoming column=address:city, timestamp=1479808315381, value=guangzhou
xiaoming column=address:contry, timestamp=1479808315320, value=china
xiaoming column=address:province, timestamp=1479808315354, value=guangdong
xiaoming column=info:age, timestamp=1479808315155, value=24
xiaoming column=info:birthday, timestamp=1479808315187, value=1990-03-22
xiaoming column=info:company, timestamp=1479808315230, value=tecent
xiaoming column=info:favorite, timestamp=1479808315283, value=movie
xueba column=address:city, timestamp=1479808315123, value=shenzhen
xueba column=address:contry, timestamp=1479808315056, value=china
xueba column=address:province, timestamp=1479808315089, value=guangdong
xueba column=info:age, timestamp=1479808314937, value=25
xueba column=info:birthday, timestamp=1479808314975, value=1989-06-19
xueba column=info:company, timestamp=1479808315013, value=tecent
2 row(s) in 0.0960 seconds
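A full-table scan from Java uses a ResultScanner. A sketch, assuming a Table handle opened on 'member' the same way as in the first sketch:
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.CellUtil;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
// equivalent of: scan 'member'
Scan scan = new Scan();
try (ResultScanner scanner = table.getScanner(scan)) {
    for (Result row : scanner) {
        for (Cell cell : row.rawCells()) {
            System.out.println(Bytes.toString(CellUtil.cloneRow(cell)) + " "
                + Bytes.toString(CellUtil.cloneFamily(cell)) + ":"
                + Bytes.toString(CellUtil.cloneQualifier(cell)) + " = "
                + Bytes.toString(CellUtil.cloneValue(cell)));
        }
    }
}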
hbase(main):045:0> list
TABLE
member
tmp_member
2 row(s) in 0.0220 seconds
=> ["member", "tmp_member"]
hbase(main):046:0> create 'scores','grad','course'
0 row(s) in 2.3230 seconds
=> Hbase::Table - scores
hbase(main):047:0> put 'scores','zkd','grade','5'
ERROR: Unknown column family! Valid column names: course:*, grad:*
Here is some help for this command:
Put a cell 'value' at specified table/row/column and optionally
timestamp coordinates. To put a cell value into table 'ns1:t1' or 't1'
at row 'r1' under column 'c1' marked with the time 'ts1', do:
hbase> put 'ns1:t1', 'r1', 'c1', 'value'
hbase> put 't1', 'r1', 'c1', 'value'
hbase> put 't1', 'r1', 'c1', 'value', ts1
hbase> put 't1', 'r1', 'c1', 'value', {ATTRIBUTES=>{'mykey'=>'myvalue'}}
hbase> put 't1', 'r1', 'c1', 'value', ts1, {ATTRIBUTES=>{'mykey'=>'myvalue'}}
hbase> put 't1', 'r1', 'c1', 'value', ts1, {VISIBILITY=>'PRIVATE|SECRET'}
The same commands also can be run on a table reference. Suppose you had a reference
t to table 't1', the corresponding command would be:
hbase> t.put 'r1', 'c1', 'value', ts1, {ATTRIBUTES=>{'mykey'=>'myvalue'}}
hbase(main):048:0> put 'scores','zkd','course:math','97'
0 row(s) in 0.0290 seconds
hbase(main):049:0> get 'scores','zkd'
COLUMN CELL
course:math timestamp=1479808523355, value=97
1 row(s) in 0.0310 seconds
hbase(main):050:0> scan
ERROR: wrong number of arguments (0 for 1)
Here is some help for this command:
Scan a table; pass table name and optionally a dictionary of scanner
specifications. Scanner specifications may include one or more of:
TIMERANGE, FILTER, LIMIT, STARTROW, STOPROW, ROWPREFIXFILTER, TIMESTAMP,
MAXLENGTH or COLUMNS, CACHE or RAW, VERSIONS, ALL_METRICS or METRICS
If no columns are specified, all columns will be scanned.
To scan all members of a column family, leave the qualifier empty as in
'col_family'.
The filter can be specified in two ways:
1. Using a filterString - more information on this is available in the
Filter Language document attached to the HBASE-4176 JIRA
2. Using the entire package name of the filter.
If you wish to see metrics regarding the execution of the scan, the
ALL_METRICS boolean should be set to true. Alternatively, if you would
prefer to see only a subset of the metrics, the METRICS array can be
defined to include the names of only the metrics you care about.
Some examples:
hbase> scan 'hbase:meta'
hbase> scan 'hbase:meta', {COLUMNS => 'info:regioninfo'}
hbase> scan 'ns1:t1', {COLUMNS => ['c1', 'c2'], LIMIT => 10, STARTROW => 'xyz'}
hbase> scan 't1', {COLUMNS => ['c1', 'c2'], LIMIT => 10, STARTROW => 'xyz'}
hbase> scan 't1', {COLUMNS => 'c1', TIMERANGE => [1303668804, 1303668904]}
hbase> scan 't1', {REVERSED => true}
hbase> scan 't1', {ALL_METRICS => true}
hbase> scan 't1', {METRICS => ['RPC_RETRIES', 'ROWS_FILTERED']}
hbase> scan 't1', {ROWPREFIXFILTER => 'row2', FILTER => "
(QualifierFilter (>=, 'binary:xyz')) AND (TimestampsFilter ( 123, 456))"}
hbase> scan 't1', {FILTER =>
org.apache.hadoop.hbase.filter.ColumnPaginationFilter.new(1, 0)}
hbase> scan 't1', {CONSISTENCY => 'TIMELINE'}
For setting the Operation Attributes
hbase> scan 't1', { COLUMNS => ['c1', 'c2'], ATTRIBUTES => {'mykey' => 'myvalue'}}
hbase> scan 't1', { COLUMNS => ['c1', 'c2'], AUTHORIZATIONS => ['PRIVATE','SECRET']}
For experts, there is an additional option -- CACHE_BLOCKS -- which
switches block caching for the scanner on (true) or off (false). By
default it is enabled. Examples:
hbase> scan 't1', {COLUMNS => ['c1', 'c2'], CACHE_BLOCKS => false}
Also for experts, there is an advanced option -- RAW -- which instructs the
scanner to return all cells (including delete markers and uncollected deleted
cells). This option cannot be combined with requesting specific COLUMNS.
Disabled by default. Example:
hbase> scan 't1', {RAW => true, VERSIONS => 10}
Besides the default 'toStringBinary' format, 'scan' supports custom formatting
by column. A user can define a FORMATTER by adding it to the column name in
the scan specification. The FORMATTER can be stipulated:
1. either as a org.apache.hadoop.hbase.util.Bytes method name (e.g, toInt, toString)
2. or as a custom class followed by method name: e.g. 'c(MyFormatterClass).format'.
Example formatting cf:qualifier1 and cf:qualifier2 both as Integers:
hbase> scan 't1', {COLUMNS => ['cf:qualifier1:toInt',
'cf:qualifier2:c(org.apache.hadoop.hbase.util.Bytes).toInt'] }
Note that you can specify a FORMATTER by column only (cf:qualifier). You cannot
specify a FORMATTER for all columns of a column family.
Scan can also be used directly from a table, by first getting a reference to a
table, like such:
hbase> t = get_table 't'
hbase> t.scan
Note in the above situation, you can still provide all the filtering, columns,
options, etc as described above.
hbase(main):051:0> scan 'member'
ROW COLUMN+CELL
xiaoming column=address:city, timestamp=1479808315381, value=guangzhou
xiaoming column=address:contry, timestamp=1479808315320, value=china
xiaoming column=address:province, timestamp=1479808315354, value=guangdong
xiaoming column=info:age, timestamp=1479808315155, value=24
xiaoming column=info:birthday, timestamp=1479808315187, value=1990-03-22
xiaoming column=info:company, timestamp=1479808315230, value=tecent
xiaoming column=info:favorite, timestamp=1479808315283, value=movie
xueba column=address:city, timestamp=1479808315123, value=shenzhen
xueba column=address:contry, timestamp=1479808315056, value=china
xueba column=address:province, timestamp=1479808315089, value=guangdong
xueba column=info:age, timestamp=1479808314937, value=25
xueba column=info:birthday, timestamp=1479808314975, value=1989-06-19
xueba column=info:company, timestamp=1479808315013, value=tecent
2 row(s) in 0.1080 seconds
hbase(main):052:0> list
TABLE
member
scores
tmp_member
3 row(s) in 0.0150 seconds
=> ["member", "scores", "tmp_member"]
hbase(main):053:0> get 'member','xueba'
COLUMN CELL
address:city timestamp=1479808315123, value=shenzhen
address:contry timestamp=1479808315056, value=china
address:province timestamp=1479808315089, value=guangdong
info:age timestamp=1479808314937, value=25
info:birthday timestamp=1479808314975, value=1989-06-19
info:company timestamp=1479808315013, value=tecent
6 row(s) in 0.0310 seconds
(list, get 'member','xueba' and scan 'member' were then re-run several more times; the output was identical each time and is omitted here.)
hbase(main):062:0> get 'member','xueba','info'
COLUMN CELL
info:age timestamp=1479808314937, value=25
info:birthday timestamp=1479808314975, value=1989-06-19
info:company timestamp=1479808315013, value=tecent
3 row(s) in 0.0240 seconds
hbase(main):063:0> get 'member','xueba','info:birthday'
COLUMN CELL
info:birthday timestamp=1479808314975, value=1989-06-19
1 row(s) in 0.0130 seconds
hbase(main):064:0> put 'member','xueba','info:age','26'
0 row(s) in 0.0120 seconds
hbase(main):065:0> put 'member','xueba','info:birthday','1990-06-22'
0 row(s) in 0.0140 seconds
hbase(main):066:0> get 'member','xueba','info:birthday'
COLUMN CELL
info:birthday timestamp=1479808994771, value=1990-06-22
1 row(s) in 0.0160 seconds
hbase(main):067:0> delete 'member','xueba','info:age'
0 row(s) in 0.0390 seconds
hbase(main):068:0> get 'member','xueba','info:age'
COLUMN CELL
0 row(s) in 0.0170 seconds
hbase(main):069:0> get 'member','xueba','info'
COLUMN CELL
info:birthday timestamp=1479808994771, value=1990-06-22
info:company timestamp=1479808315013, value=tecent
2 row(s) in 0.0190 seconds
hbase(main):070:0> get 'member','xueba'
COLUMN CELL
address:city timestamp=1479808315123, value=shenzhen
address:contry timestamp=1479808315056, value=china
address:province timestamp=1479808315089, value=guangdong
info:birthday timestamp=1479808994771, value=1990-06-22
info:company timestamp=1479808315013, value=tecent
5 row(s) in 0.0330 seconds
hbase(main):071:0> deleteall 'member','xiaoming'
0 row(s) in 0.0150 seconds
hbase(main):072:0> get 'member','xiaoming'
COLUMN CELL
0 row(s) in 0.0180 seconds
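In the shell, delete removes a single cell while deleteall removes a whole row. A Java sketch of both, with the same assumed Table handle on 'member'; note that Delete.addColumn marks only the newest version of the cell, while addColumns would mark all versions:
import org.apache.hadoop.hbase.client.Delete;
// equivalent of: delete 'member','xueba','info:age'
Delete oneColumn = new Delete(Bytes.toBytes("xueba"));
oneColumn.addColumn(Bytes.toBytes("info"), Bytes.toBytes("age"));
table.delete(oneColumn);
// equivalent of: deleteall 'member','xiaoming' (no column given, so the whole row goes)
table.delete(new Delete(Bytes.toBytes("xiaoming")));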
hbase(main):073:0> scan
ERROR: wrong number of arguments (0 for 1)
(the same scan usage help as shown above is printed again and is omitted here.)
hbase(main):074:0> count 'member'
1 row(s) in 0.0410 seconds
=> 1
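count is implemented as a client-side scan, so it gets slow on big tables. A Java sketch that counts rows the same way, using FirstKeyOnlyFilter so at most one cell per row travels back to the client (same assumed Table handle on 'member'):
import org.apache.hadoop.hbase.filter.FirstKeyOnlyFilter;
// equivalent of: count 'member'
Scan countScan = new Scan();
countScan.setFilter(new FirstKeyOnlyFilter()); // one cell per row is enough for counting
long rows = 0;
try (ResultScanner scanner = table.getScanner(countScan)) {
    for (Result ignored : scanner) {
        rows++;
    }
}
System.out.println(rows);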
hbase(main):075:0> scan 'member'
ROW COLUMN+CELL
xueba column=address:city, timestamp=1479808315123, value=shenzhen
xueba column=address:contry, timestamp=1479808315056, value=china
xueba column=address:province, timestamp=1479808315089, value=guangdong
xueba column=info:birthday, timestamp=1479808994771, value=1990-06-22
xueba column=info:company, timestamp=1479808315013, value=tecent
1 row(s) in 0.0410 seconds
hbase(main):076:0> truncate 'member'
Truncating 'member' table (it may take a while):
- Disabling table...
- Truncating table...
0 row(s) in 4.0050 seconds
hbase(main):077:0> scan 'member'
ROW COLUMN+CELL
0 row(s) in 0.1440 seconds
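The shell's truncate disables and recreates the table behind the scenes; from the Java Admin API the disable step has to be explicit. A sketch, with the same assumed Admin handle as in the table-creation sketch:
// equivalent of: truncate 'member'
TableName member = TableName.valueOf("member");
admin.disableTable(member);
admin.truncateTable(member, false); // false = do not preserve region splits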
hbase(main):078:0> lisyt
NameError: undefined local variable or method `lisyt' for #<Object:0x64d4ebfb>
hbase(main):079:0> list
TABLE
member
scores
tmp_member
3 row(s) in 0.0200 seconds
=> ["member", "scores", "tmp_member"]
hbase(main):080:0> disabled 'score'
NoMethodError: undefined method `disabled' for #<Object:0x64d4ebfb>
hbase(main):081:0> drop 'score'
ERROR: Table score does not exist.
Here is some help for this command:
Drop the named table. Table must first be disabled:
hbase> drop 't1'
hbase> drop 'ns1:t1'
hbase(main):082:0> list
TABLE
member
scores
tmp_member
3 row(s) in 0.0220 seconds
=> ["member", "scores", "tmp_member"]
hbase(main):083:0> drop 'scores'
ERROR: Table scores is enabled. Disable it first.
Here is some help for this command:
Drop the named table. Table must first be disabled:
hbase> drop 't1'
hbase> drop 'ns1:t1'
hbase(main):084:0> list
TABLE
member
scores
tmp_member
3 row(s) in 0.0190 seconds
=> ["member", "scores", "tmp_member"]
hbase(main):085:0> disabled 'scores'
NoMethodError: undefined method `disabled' for #<Object:0x64d4ebfb>
hbase(main):086:0> dis
disable disable_all disable_peer disable_table_replication display
hbase(main):086:0> disable 'scores'
0 row(s) in 2.3300 seconds
hbase(main):087:0> drop 'scores'
0 row(s) in 1.3060 seconds
hbase(main):088:0> list
TABLE
member
tmp_member
2 row(s) in 0.0200 seconds
=> ["member", "tmp_member"]
hbase(main):089:0> create 'scores','grad','course'
0 row(s) in 1.3220 seconds
=> Hbase::Table - scores
hbase(main):090:0> put 'scores','Tom','grade','5'
ERROR: Unknown column family! Valid column names: course:*, grad:*
Here is some help for this command:
Put a cell 'value' at specified table/row/column and optionally
timestamp coordinates. To put a cell value into table 'ns1:t1' or 't1'
at row 'r1' under column 'c1' marked with the time 'ts1', do:
hbase> put 'ns1:t1', 'r1', 'c1', 'value'
hbase> put 't1', 'r1', 'c1', 'value'
hbase> put 't1', 'r1', 'c1', 'value', ts1
hbase> put 't1', 'r1', 'c1', 'value', {ATTRIBUTES=>{'mykey'=>'myvalue'}}
hbase> put 't1', 'r1', 'c1', 'value', ts1, {ATTRIBUTES=>{'mykey'=>'myvalue'}}
hbase> put 't1', 'r1', 'c1', 'value', ts1, {VISIBILITY=>'PRIVATE|SECRET'}
The same commands also can be run on a table reference. Suppose you had a reference
t to table 't1', the corresponding command would be:
hbase> t.put 'r1', 'c1', 'value', ts1, {ATTRIBUTES=>{'mykey'=>'myvalue'}}
hbase(main):091:0> put 'scores','Tom','course:math','97'
0 row(s) in 0.0210 seconds
hbase(main):092:0> put 'scores','Tom','course:art','87'
0 row(s) in 0.0060 seconds
hbase(main):093:0> put 'scores','Jim','grade','4'
ERROR: Unknown column family! Valid column names: course:*, grad:*
Here is some help for this command:
Put a cell 'value' at specified table/row/column and optionally
timestamp coordinates. To put a cell value into table 'ns1:t1' or 't1'
at row 'r1' under column 'c1' marked with the time 'ts1', do:
hbase> put 'ns1:t1', 'r1', 'c1', 'value'
hbase> put 't1', 'r1', 'c1', 'value'
hbase> put 't1', 'r1', 'c1', 'value', ts1
hbase> put 't1', 'r1', 'c1', 'value', {ATTRIBUTES=>{'mykey'=>'myvalue'}}
hbase> put 't1', 'r1', 'c1', 'value', ts1, {ATTRIBUTES=>{'mykey'=>'myvalue'}}
hbase> put 't1', 'r1', 'c1', 'value', ts1, {VISIBILITY=>'PRIVATE|SECRET'}
The same commands also can be run on a table reference. Suppose you had a reference
t to table 't1', the corresponding command would be:
hbase> t.put 'r1', 'c1', 'value', ts1, {ATTRIBUTES=>{'mykey'=>'myvalue'}}
hbase(main):094:0> put 'scores','Jim','course:english','88'
0 row(s) in 0.0110 seconds
hbase(main):095:0> put 'scores','Jim','course:chinese','88'
0 row(s) in 0.0050 seconds
hbase(main):096:0> scan 'scores'
ROW COLUMN+CELL
Jim column=course:chinese, timestamp=1479809653115, value=88
Jim column=course:english, timestamp=1479809653084, value=88
Tom column=course:art, timestamp=1479809653019, value=87
Tom column=course:math, timestamp=1479809652982, value=97
2 row(s) in 0.0270 seconds
hbase(main):097:0> get 'scores','Tom'
COLUMN CELL
course:art timestamp=1479809653019, value=87
course:math timestamp=1479809652982, value=97
2 row(s) in 0.0210 seconds
hbase(main):098:0> put 'scores','Tom','course:math','100'
0 row(s) in 0.0100 seconds
hbase(main):099:0> get 'scores','Tom'
COLUMN CELL
course:art timestamp=1479809653019, value=87
course:math timestamp=1479809698610, value=100
2 row(s) in 0.0210 seconds
hbase(main):100:0> get 'scores','Tom',{COLUMN=>'course:math',VERSIONS=>3}
COLUMN CELL
course:math timestamp=1479809698610, value=100
1 row(s) in 0.0250 seconds
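Only one version comes back even with VERSIONS=>3 because the families of 'scores' were created with the default VERSIONS => '1' (compare the describe output earlier), so older cells are not retained for reads. For a family configured to keep more versions, multiple cells can be requested from Java like this (sketch, same assumed Table handle, here on 'scores'):
// equivalent of: get 'scores','Tom',{COLUMN=>'course:math',VERSIONS=>3}
Get versioned = new Get(Bytes.toBytes("Tom"));
versioned.addColumn(Bytes.toBytes("course"), Bytes.toBytes("math"));
versioned.setMaxVersions(3); // returns more than one cell only if the family keeps >1 version
Result r = table.get(versioned);
for (Cell cell : r.getColumnCells(Bytes.toBytes("course"), Bytes.toBytes("math"))) {
    System.out.println(cell.getTimestamp() + " -> " + Bytes.toString(CellUtil.cloneValue(cell)));
}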
hbase(main):101:0> quit
[hadoop@master soft]$
[root@slave1 ~]# su hadoop
[hadoop@slave1 root]$ cd ~
[hadoop@slave1 ~]$ zkServer.sh start
[hadoop@slave1 ~]$ start-all.sh
[hadoop@slave1 ~]$ start-hbase.sh
[hadoop@slave1 ~]$ hbase shell
hbase(main):002:0> version
1.2.4, r67592f3d062743907f8c5ae00dbbe1ae4f69e5af, Tue Oct 25 18:10:20 CDT 2016
hbase(main):003:0> exit
2016-11-22 17:33:08,101 INFO [main] util.ServerCommandLine: env:QTINC=/usr/lib64/qt-3.3/include
2016-11-22 17:33:08,101 INFO [main] util.ServerCommandLine: env:USER=hadoop
2016-11-22 17:33:08,101 INFO [main] util.ServerCommandLine: env:QTLIB=/usr/lib64/qt-3.3/lib
2016-11-22 17:33:08,101 INFO [main] util.ServerCommandLine: env:KDE_IS_PRELINKED=1
2016-11-22 17:33:08,101 INFO [main] util.ServerCommandLine: env:HBASE_SECURITY_LOGGER=INFO,RFAS
2016-11-22 17:33:08,101 INFO [main] util.ServerCommandLine: env:HOME=/home/hadoop
2016-11-22 17:33:08,101 INFO [main] util.ServerCommandLine: env:LESSOPEN=||/usr/bin/lesspipe.sh %s
2016-11-22 17:33:08,102 INFO [main] util.ServerCommandLine: env:HBASE_START_FILE=/tmp/hbase-hadoop-regionserver.autorestart
2016-11-22 17:33:08,102 INFO [main] util.ServerCommandLine: env:HBASE_LOGFILE=hbase-hadoop-regionserver-master.log
2016-11-22 17:33:08,102 INFO [main] util.ServerCommandLine: env:LANG=zh_CN.UTF-8
2016-11-22 17:33:08,102 INFO [main] util.ServerCommandLine: env:SSH_ASKPASS=/usr/libexec/openssh/gnome-ssh-askpass
2016-11-22 17:33:08,102 INFO [main] util.ServerCommandLine: env:HBASE_LOG_PREFIX=hbase-hadoop-regionserver-master
2016-11-22 17:33:08,102 INFO [main] util.ServerCommandLine: env:HBASE_IDENT_STRING=hadoop
2016-11-22 17:33:08,102 INFO [main] util.ServerCommandLine: vmName=Java HotSpot(TM) 64-Bit Server VM, vmVendor=Oracle Corporation, vmVersion=24.65-b04
2016-11-22 17:33:08,111 INFO [main] util.ServerCommandLine: vmInputArguments=[-Dproc_regionserver, -XX:OnOutOfMemoryError=kill -9 %p, -Xmx8G, -XX:+UseConcMarkSweepGC, -XX:PermSize=128m, -XX:MaxPermSize=128m, -Dhbase.log.dir=/home/hadoop/soft/hbase-1.2.4/logs, -Dhbase.log.file=hbase-hadoop-regionserver-master.log, -Dhbase.home.dir=/home/hadoop/soft/hbase-1.2.4, -Dhbase.id.str=hadoop, -Dhbase.root.logger=INFO,RFA, -Djava.library.path=/home/hadoop/soft/hadoop-2.6.4/lib/native, -Dhbase.security.logger=INFO,RFAS]
2016-11-22 17:33:08,594 WARN [main] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2016-11-22 17:33:09,651 INFO [main] regionserver.RSRpcServices: regionserver/master/192.168.150.3:16020 server-side HConnection retries=350
2016-11-22 17:33:11,094 INFO [main] ipc.SimpleRpcScheduler: Using deadline as user call queue, count=3
2016-11-22 17:33:11,261 INFO [main] ipc.RpcServer: regionserver/master/192.168.150.3:16020: started 10 reader(s) listening on port=16020
2016-11-22 17:33:12,096 INFO [main] impl.MetricsConfig: loaded properties from hadoop-metrics2-hbase.properties
2016-11-22 17:33:12,567 INFO [main] impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2016-11-22 17:33:12,567 INFO [main] impl.MetricsSystemImpl: HBase metrics system started
2016-11-22 17:33:16,916 INFO [main] fs.HFileSystem: Added intercepting call to namenode#getBlockLocations so can do block reordering using class class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2016-11-22 17:33:17,793 INFO [main] zookeeper.RecoverableZooKeeper: Process identifier=regionserver:16020 connecting to ZooKeeper ensemble=192.168.150.3:2181,192.168.150.4:2181,192.168.150.5:2181
2016-11-22 17:33:17,835 INFO [main] zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT
2016-11-22 17:33:17,835 INFO [main] zookeeper.ZooKeeper: Client environment:host.name=master
2016-11-22 17:33:17,835 INFO [main] zookeeper.ZooKeeper: Client environment:java.version=1.7.0_67
2016-11-22 17:33:17,835 INFO [main] zookeeper.ZooKeeper: Client environment:java.vendor=Oracle Corporation
2016-11-22 17:33:17,835 INFO [main] zookeeper.ZooKeeper: Client environment:java.home=/usr/java/jdk1.7.0_67/jre
2016-11-22 17:33:17,835 INFO [main] zookeeper.ZooKeeper: Client environment:java.class.path=/home/hadoop/soft/hbase-1.2.4/conf:... (the full HBase/Hadoop classpath, several thousand characters of jar paths, is omitted here)
2016-11-22 17:33:17,836 INFO [main] zookeeper.ZooKeeper: Client environment:java.library.path=/home/hadoop/soft/hadoop-2.6.4/lib/native
2016-11-22 17:33:17,836 INFO [main] zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
2016-11-22 17:33:17,836 INFO [main] zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
2016-11-22 17:33:17,836 INFO [main] zookeeper.ZooKeeper: Client environment:os.name=Linux
2016-11-22 17:33:17,836 INFO [main] zookeeper.ZooKeeper: Client environment:os.arch=amd64
2016-11-22 17:33:17,836 INFO [main] zookeeper.ZooKeeper: Client environment:os.version=2.6.32-642.el6.x86_64
2016-11-22 17:33:17,836 INFO [main] zookeeper.ZooKeeper: Client environment:user.name=hadoop
2016-11-22 17:33:17,836 INFO [main] zookeeper.ZooKeeper: Client environment:user.home=/home/hadoop
2016-11-22 17:33:17,836 INFO [main] zookeeper.ZooKeeper: Client environment:user.dir=/home/hadoop/soft/hbase-1.2.4
2016-11-22 17:33:17,844 INFO [main] zookeeper.ZooKeeper: Initiating client connection, connectString=192.168.150.3:2181,192.168.150.4:2181,192.168.150.5:2181 sessionTimeout=90000 watcher=regionserver:160200x0, quorum=192.168.150.3:2181,192.168.150.4:2181,192.168.150.5:2181, baseZNode=/hbase
2016-11-22 17:33:17,953 INFO [main-SendThread(192.168.150.5:2181)] zookeeper.ClientCnxn: Opening socket connection to server 192.168.150.5/192.168.150.5:2181. Will not attempt to authenticate using SASL (unknown error)
2016-11-22 17:33:17,998 INFO [main-SendThread(192.168.150.5:2181)] zookeeper.ClientCnxn: Socket connection established to 192.168.150.5/192.168.150.5:2181, initiating session
2016-11-22 17:33:18,035 INFO [main-SendThread(192.168.150.5:2181)] zookeeper.ClientCnxn: Session establishment complete on server 192.168.150.5/192.168.150.5:2181, sessionid = 0x3588b8bd1a70002, negotiated timeout = 40000
2016-11-22 17:33:18,201 INFO [RpcServer.responder] ipc.RpcServer: RpcServer.responder: starting
2016-11-22 17:33:18,202 INFO [RpcServer.listener,port=16020] ipc.RpcServer: RpcServer.listener,port=16020: starting
2016-11-22 17:33:18,659 INFO [main] mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2016-11-22 17:33:18,670 INFO [main] http.HttpRequestLog: Http request log for http.requests.regionserver is not defined
2016-11-22 17:33:18,717 INFO [main] http.HttpServer: Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter)
2016-11-22 17:33:18,718 INFO [main] http.HttpServer: Added global filter 'clickjackingprevention' (class=org.apache.hadoop.hbase.http.ClickjackingPreventionFilter)
2016-11-22 17:33:18,726 INFO [main] http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver
2016-11-22 17:33:18,726 INFO [main] http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2016-11-22 17:33:18,726 INFO [main] http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2016-11-22 17:33:18,810 INFO [main] http.HttpServer: Jetty bound to port 16030
2016-11-22 17:33:18,810 INFO [main] mortbay.log: jetty-6.1.26
2016-11-22 17:33:21,264 INFO [main] mortbay.log: Started SelectChannelConnector@0.0.0.0:16030
2016-11-22 17:33:21,631 INFO [regionserver/master/192.168.150.3:16020] zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x66bc473e connecting to ZooKeeper ensemble=192.168.150.3:2181,192.168.150.4:2181,192.168.150.5:2181
2016-11-22 17:33:21,632 INFO [regionserver/master/192.168.150.3:16020] zookeeper.ZooKeeper: Initiating client connection, connectString=192.168.150.3:2181,192.168.150.4:2181,192.168.150.5:2181 sessionTimeout=90000 watcher=hconnection-0x66bc473e0x0, quorum=192.168.150.3:2181,192.168.150.4:2181,192.168.150.5:2181, baseZNode=/hbase
2016-11-22 17:33:21,637 INFO [regionserver/master/192.168.150.3:16020-SendThread(192.168.150.4:2181)] zookeeper.ClientCnxn: Opening socket connection to server 192.168.150.4/192.168.150.4:2181. Will not attempt to authenticate using SASL (unknown error)
2016-11-22 17:33:21,641 INFO [regionserver/master/192.168.150.3:16020-SendThread(192.168.150.4:2181)] zookeeper.ClientCnxn: Socket connection established to 192.168.150.4/192.168.150.4:2181, initiating session
2016-11-22 17:33:21,648 INFO [regionserver/master/192.168.150.3:16020-SendThread(192.168.150.4:2181)] zookeeper.ClientCnxn: Session establishment complete on server 192.168.150.4/192.168.150.4:2181, sessionid = 0x2588b290c690004, negotiated timeout = 40000
2016-11-22 17:33:22,008 INFO [regionserver/master/192.168.150.3:16020] regionserver.HRegionServer: ClusterId : 3c1b7721-de8f-42a6-90a9-8c0a694da768
2016-11-22 17:33:22,216 INFO [regionserver/master/192.168.150.3:16020] regionserver.MemStoreFlusher: globalMemStoreLimit=3.2 G, globalMemStoreLimitLowMark=3.0 G, maxHeap=8.0 G
2016-11-22 17:33:22,241 INFO [regionserver/master/192.168.150.3:16020] regionserver.HRegionServer: CompactionChecker runs every 10sec
2016-11-22 17:33:22,362 INFO [regionserver/master/192.168.150.3:16020] regionserver.RegionServerCoprocessorHost: System coprocessor loading is enabled
2016-11-22 17:33:22,362 INFO [regionserver/master/192.168.150.3:16020] regionserver.RegionServerCoprocessorHost: Table coprocessor loading is enabled
2016-11-22 17:33:22,376 INFO [regionserver/master/192.168.150.3:16020] regionserver.HRegionServer: reportForDuty to master=master,16000,1479807189530 with port=16020, startcode=1479807192653
2016-11-22 17:33:22,996 WARN [regionserver/master/192.168.150.3:16020] regionserver.HRegionServer: reportForDuty failed; sleeping and then retrying.
2016-11-22 17:33:26,000 INFO [regionserver/master/192.168.150.3:16020] regionserver.HRegionServer: reportForDuty to master=master,16000,1479807189530 with port=16020, startcode=1479807192653
2016-11-22 17:33:26,084 INFO [regionserver/master/192.168.150.3:16020] hfile.CacheConfig: Allocating LruBlockCache size=3.20 GB, blockSize=64 KB
2016-11-22 17:33:26,103 INFO [regionserver/master/192.168.150.3:16020] hfile.CacheConfig: Created cacheConfig: blockCache=LruBlockCache{blockCount=0, currentSize=3520936, freeSize=3428966488, maxSize=3432487424, heapSize=3520936, minSize=3260862976, minFactor=0.95, multiSize=1630431488, multiFactor=0.5, singleSize=815215744, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-11-22 17:33:26,336 INFO [regionserver/master/192.168.150.3:16020] wal.WALFactory: Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.DefaultWALProvider
2016-11-22 17:33:26,390 INFO [regionserver/master/192.168.150.3:16020] wal.FSHLog: WAL configuration: blocksize=128 MB, rollsize=121.60 MB, prefix=master%2C16020%2C1479807192653.default, suffix=, logDir=hdfs://192.168.150.3:9000/hbase/WALs/master,16020,1479807192653, archiveDir=hdfs://192.168.150.3:9000/hbase/oldWALs
2016-11-22 17:33:26,655 INFO [regionserver/master/192.168.150.3:16020] wal.FSHLog: Slow sync cost: 199 ms, current pipeline: []
2016-11-22 17:33:26,656 INFO [regionserver/master/192.168.150.3:16020] wal.FSHLog: New WAL /hbase/WALs/master,16020,1479807192653/master%2C16020%2C1479807192653.default.1479807206391
2016-11-22 17:33:26,742 INFO [regionserver/master/192.168.150.3:16020] regionserver.MetricsRegionServerWrapperImpl: Computing regionserver metrics every 5000 milliseconds
2016-11-22 17:33:26,761 INFO [regionserver/master/192.168.150.3:16020] regionserver.ReplicationSourceManager: Current list of replicators: [master,16020,1479807192653, slave1,16020,1479712512545, master,16020,1479712420627] other RSs: [master,16020,1479807192653, slave2,16020,1479813638869, slave1,16020,1479807188273]
2016-11-22 17:33:27,468 INFO [main-EventThread] replication.ReplicationTrackerZKImpl: /hbase/rs/slave2,16020,1479813638869 znode expired, triggering replicatorRemoved event
2016-11-22 17:33:27,490 INFO [SplitLogWorker-master:16020] regionserver.SplitLogWorker: SplitLogWorker master,16020,1479807192653 starting
2016-11-22 17:33:27,495 INFO [regionserver/master/192.168.150.3:16020] regionserver.HeapMemoryManager: Starting HeapMemoryTuner chore.
2016-11-22 17:33:27,518 INFO [regionserver/master/192.168.150.3:16020] regionserver.HRegionServer: Serving as master,16020,1479807192653, RpcServer on master/192.168.150.3:16020, sessionid=0x3588b8bd1a70002
2016-11-22 17:33:27,542 INFO [regionserver/master/192.168.150.3:16020] quotas.RegionServerQuotaManager: Quota support disabled
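The ZooKeeper quorum, baseZNode and WAL directory in this log are all driven by hbase-site.xml. A configuration consistent with the log above would look roughly like the following (property values are inferred from the log output, not copied from the actual file):
[hadoop@master ~]$ cat ~/soft/hbase-1.2.4/conf/hbase-site.xml
<configuration>
  <property>
    <!-- matches the WAL directory hdfs://192.168.150.3:9000/hbase seen above -->
    <name>hbase.rootdir</name>
    <value>hdfs://192.168.150.3:9000/hbase</value>
  </property>
  <property>
    <!-- fully distributed mode -->
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <!-- matches the connectString in the ZooKeeper client log -->
    <name>hbase.zookeeper.quorum</name>
    <value>192.168.150.3,192.168.150.4,192.168.150.5</value>
  </property>
</configuration>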
11.26.2016
1. Create a new project.
2. Import the required JARs.
3. Write the project code, as shown below. The class wraps the common HBase client operations used in this note: creating a table, batch-writing Puts, reading a row by key, and scanning a table. The deprecated HConnection/HBaseAdmin calls are replaced with the Connection/Admin API.
package abcd;
import java.io.IOException;
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.CellUtil;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.io.compress.Compression.Algorithm;
import org.apache.hadoop.hbase.io.encoding.DataBlockEncoding;
import org.apache.hadoop.hbase.util.Bytes;

public class HbaseConnection {
    private final Configuration conf;
    private final Connection connection;

    public HbaseConnection(String zkServer) throws IOException {
        conf = HBaseConfiguration.create();
        conf.set("hbase.zookeeper.quorum", zkServer);
        // HConnectionManager.createConnection(conf) is deprecated;
        // ConnectionFactory is the replacement in the current client API.
        connection = ConnectionFactory.createConnection(conf);
    }

    public void createTable(String tableName, List<String> cols) {
        try (Admin admin = connection.getAdmin()) {
            if (admin.tableExists(TableName.valueOf(tableName))) {
                throw new IOException("table exists");
            }
            HTableDescriptor tableDesc = new HTableDescriptor(TableName.valueOf(tableName));
            for (String col : cols) {
                HColumnDescriptor colDesc = new HColumnDescriptor(col);
                // Compression algorithm applied when HFiles are written and compacted.
                colDesc.setCompressionType(Algorithm.GZ);
                // Block encoding shrinks data in memory and on disk; useful when memory is tight.
                colDesc.setDataBlockEncoding(DataBlockEncoding.DIFF);
                tableDesc.addFamily(colDesc);
            }
            admin.createTable(tableDesc);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    public void saveData(String tableName, List<Put> puts) {
        // Table is a lightweight object; open it per call and close it when done.
        try (Table table = connection.getTable(TableName.valueOf(tableName))) {
            table.put(puts);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    public Result getData(String tableName, String rowKey) {
        try (Table table = connection.getTable(TableName.valueOf(tableName))) {
            return table.get(new Get(Bytes.toBytes(rowKey)));
        } catch (IOException e) {
            e.printStackTrace();
            return null;
        }
    }

    public void format(Result result) {
        for (Cell cell : result.rawCells()) {
            System.out.print("Row Name: " + new String(CellUtil.cloneRow(cell)) + " ");
            System.out.print("Timestamp: " + cell.getTimestamp() + " ");
            System.out.print("Column Family: " + new String(CellUtil.cloneFamily(cell)) + " ");
            System.out.print("Column Qualifier: " + new String(CellUtil.cloneQualifier(cell)) + " ");
            System.out.println("Value: " + new String(CellUtil.cloneValue(cell)));
        }
    }

    public void hbaseScan(String tableName) {
        Scan scan = new Scan();
        scan.setCaching(1000); // fetch 1000 rows per RPC to cut round trips
        try (Table table = connection.getTable(TableName.valueOf(tableName));
             ResultScanner scanner = table.getScanner(scan)) {
            for (Result res : scanner) {
                format(res);
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    public void close() throws IOException {
        connection.close();
    }

    public static void main(String[] args) throws IOException {
        // Hostnames such as "slave1,slave2" also work here if they resolve.
        HbaseConnection conn = new HbaseConnection("192.168.150.3,192.168.150.4,192.168.150.5");

        List<String> families = new ArrayList<String>(); // column family names
        families.add("basicInfo");
        families.add("moreInfo");
        conn.createTable("person", families);

        // Put.add(...) is deprecated; addColumn(family, qualifier, value) replaces it.
        List<Put> puts = new LinkedList<Put>();
        Put put1 = new Put(Bytes.toBytes("Tom"));
        put1.addColumn(Bytes.toBytes("basicInfo"), Bytes.toBytes("age"), Bytes.toBytes("27"));
        put1.addColumn(Bytes.toBytes("moreInfo"), Bytes.toBytes("tel"), Bytes.toBytes("111"));
        Put put2 = new Put(Bytes.toBytes("Jim"));
        put2.addColumn(Bytes.toBytes("basicInfo"), Bytes.toBytes("age"), Bytes.toBytes("28"));
        put2.addColumn(Bytes.toBytes("moreInfo"), Bytes.toBytes("tel"), Bytes.toBytes("112"));
        puts.add(put1);
        puts.add(put2);
        conn.saveData("person", puts);

        Result result = conn.getData("person", "Tom");
        conn.format(result);
        conn.hbaseScan("person");

        conn.close();
    }
}
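Once main has run, the table can be spot-checked from the HBase shell (the table and row names come from the code above; the shell prompt numbering will differ):
hbase(main):001:0> list
hbase(main):002:0> get 'person', 'Tom'
hbase(main):003:0> scan 'person'
Note the design choice in the class: each method opens and closes its own Table, a lightweight object, while the heavyweight Connection is created once in the constructor and closed only at the end of main.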
# (standard Microsoft hosts-file sample header omitted)
192.168.150.3 master
192.168.150.4 slave1
192.168.150.5 slave2
On Windows, append the three mappings above to the end of C:\Windows\System32\drivers\etc\hosts. These are two separate steps. A quick verification is shown below.
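To confirm the mappings took effect, ping each alias from a Windows command prompt (a hypothetical session; -n 1 sends a single echo request):
C:\> ping -n 1 master
C:\> ping -n 1 slave1
C:\> ping -n 1 slave2
If a name does not resolve, re-check that the hosts file was saved with administrator rights.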
[hadoop@master ~]$ start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
[hadoop@master ~]$ hbase shell
[hadoop@master ~]$ jps
2675 SecondaryNameNode
2375 QuorumPeerMain
3462 Jps
2845 ResourceManager
2517 NameNode
3380 HRegionServer
3246 HMaster
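In the jps output, QuorumPeerMain is the ZooKeeper peer, while HMaster and HRegionServer confirm HBase is running alongside HDFS and YARN. Since start-all.sh is deprecated, a roughly equivalent explicit startup sequence would be the following (assuming the ZooKeeper and HBase bin directories are on PATH; zkServer.sh must be run on every quorum node):
[hadoop@master ~]$ zkServer.sh start     # ZooKeeper (QuorumPeerMain)
[hadoop@master ~]$ start-dfs.sh          # NameNode, SecondaryNameNode, DataNodes
[hadoop@master ~]$ start-yarn.sh         # ResourceManager, NodeManagers
[hadoop@master ~]$ start-hbase.sh        # HMaster and HRegionServers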
Reference links:
http://www.cnblogs.com/shishanyuan/p/4172806.html
http://blog.csdn.net/whuqin/article/details/6728603
http://wenku.baidu.com/link?url=iTigQgQkM6lgpf3HKPuJcS4_X2_WPNoUoUF0jEQnTQzhgjKYmyrDJs-gCwzPDdOPawmqGzqGCNZSie3lcZqBaQcSSxvDcF4HqrWGgkCtCbW