1. Cluster planning
IP address | Hostname | Roles |
192.168.1.101 | palo101 | hadoop namenode, hadoop datanode, yarn nodeManager, zookeeper, hive, hbase master,hbase region server |
192.168.1.102 | palo102 | hadoop namenode, hadoop datanode, yarn nodeManager, yarn resource manager, zookeeper, hive, hbase master,hbase region server |
192.168.1.103 | palo103 | hadoop namenode, hadoop datanode, yarn nodeManager, zookeeper, hive,hbase region server,mysql |
2. Environment preparation
Install JDK
Install ZooKeeper 3.4.12
Install MySQL 5.7
Install Hive 2.3.4
Install Hadoop 2.7.3
HBase and Hadoop have a version dependency, so before installing anything, confirm that the HBase and Hadoop versions you plan to use are compatible. The supported combinations are listed on the official HBase page: https://hbase.apache.org/book.html#basic.prerequisites — search that page for "Hadoop version support matrix".
The compatibility of the Hadoop and HBase versions used in this guide can be checked against that matrix.
3. Download HBase 1.3.3
Note: Steps 3 and 4 are both performed on 192.168.1.101. After HBase is configured there, copy it to the other two machines with scp.
wget https://mirrors.tuna.tsinghua.edu.cn/apache/hbase/1.3.3/hbase-1.3.3-bin.tar.gz
mkdir -p /usr/local/hbase/
tar xzvf hbase-1.3.3-bin.tar.gz
mv hbase-1.3.3 /usr/local/hbase/    # the -bin tarball extracts to a directory named hbase-1.3.3
4. Configure HBase
4.1 Edit hbase-env.sh
vim /usr/local/hbase/hbase-1.3.3/conf/hbase-env.sh
# Add the JAVA_HOME path; on this machine it is:
export JAVA_HOME=/usr/java/jdk1.8.0_172-amd64
# Disable HBase's bundled ZooKeeper and use the external ZooKeeper cluster
export HBASE_MANAGES_ZK=false
The HBASE_MANAGES_ZK variable defaults to true and controls whether HBase starts and stops a ZooKeeper ensemble as part of its own start/stop. If true, HBase manages ZooKeeper itself; if false, ZooKeeper is managed independently of HBase, which is what we want here since we run our own ZooKeeper cluster.
# Set the HBASE_PID_DIR directory
export HBASE_PID_DIR=/var/hadoop/pids
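If /var/hadoop/pids does not already exist from the Hadoop setup, it is worth creating it up front on every node so the PID files can be written there (a precaution, not a step from the original guide; run as the user that starts HBase):
mkdir -p /var/hadoop/pids    # directory referenced by HBASE_PID_DIR above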
Comment out the following two lines:
# export HBASE_MASTER_OPTS="$HBASE_MASTER_OPTS -XX:PermSize=128m -XX:MaxPermSize=128m -XX:ReservedCodeCacheSize=256m"
# export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS -XX:PermSize=128m -XX:MaxPermSize=128m -XX:ReservedCodeCacheSize=256m"
These two options are intended for JDK 1.7; this setup uses JDK 1.8, where the PermGen options no longer exist, so they are removed.
# Set the HBase heap size to 4G
export HBASE_HEAPSIZE=4G
The complete set of changes looks like this:
# JAVA_HOME
export JAVA_HOME=/usr/java/jdk1.8.0_172-amd64
# Disable HBase's bundled ZooKeeper and use the external ZooKeeper cluster
export HBASE_MANAGES_ZK=false
# Set the HBASE_PID_DIR directory
export HBASE_PID_DIR=/var/hadoop/pids
# HBase log directory
export HBASE_LOG_DIR=/usr/local/hbase/hbase-1.3.3/logs
# HBASE_CLASSPATH
export HBASE_CLASSPATH=/usr/local/hbase/hbase-1.3.3/conf
# HBASE_HOME
export HBASE_HOME=/usr/local/hbase/hbase-1.3.3
# HADOOP_HOME
export HADOOP_HOME=/opt/software/hadoop-2.7.3
# Set HBase heap size
export HBASE_HEAPSIZE=4G
4.2 Edit hbase-site.xml
vim /usr/local/hbase/hbase-1.3.3/conf/hbase-site.xml
Add the following configuration:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>hbase.rootdir</name>
        <value>hdfs://192.168.1.101:9000/hbase</value>
    </property>
    <property>
        <name>hbase.cluster.distributed</name>
        <value>true</value>
    </property>
    <property>
        <name>hbase.zookeeper.property.clientPort</name>
        <value>2181</value>
    </property>
    <property>
        <name>hbase.zookeeper.quorum</name>
        <value>192.168.1.101,192.168.1.102,192.168.1.103</value>
    </property>
    <property>
        <name>hbase.zookeeper.property.dataDir</name>
        <value>/opt/software/zookeeper-3.4.12/data</value>
    </property>
    <property>
        <name>hbase.master.maxclockskew</name>
        <value>120000</value>
    </property>
</configuration>
Notes:
a) hbase.zookeeper.property.dataDir is the data directory of the ZooKeeper cluster installed earlier; in this example it is /opt/software/zookeeper-3.4.12/data.
b) The ZooKeeper port must be specified. If you do not add the hbase.zookeeper.property.clientPort property, you can instead specify the port directly in hbase.zookeeper.quorum, e.g. 192.168.1.101:2181,192.168.1.102:2181, with multiple hosts separated by commas.
c) hbase.rootdir sets the shared directory used by the RegionServers to store HBase data. The format must be hdfs://{HDFS_NAME_NODE_IP}:{HDFS_NAME_NODE_PORT}/{HBASE_HDFS_ROOT_DIR_PATH}; otherwise HBase cannot reach HDFS and the installation fails (HBase relies on HDFS for storage). In this example the HDFS NameNode runs on 192.168.1.101 with the default port 9000 and the HBase root path is /hbase, so the value is hdfs://192.168.1.101:9000/hbase (a quick sanity check is sketched after these notes).
d) hbase.cluster.distributed: the HBase run mode. false means standalone mode, true means distributed mode.
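For note c), the address in hbase.rootdir should match the fs.defaultFS value in Hadoop's core-site.xml, and HDFS should be reachable before HBase is started. A minimal check, assuming $HADOOP_HOME and the hdfs command come from the existing Hadoop installation:
grep -A1 'fs.defaultFS' $HADOOP_HOME/etc/hadoop/core-site.xml    # the value line should show hdfs://192.168.1.101:9000
hdfs dfs -ls /                                                   # confirms that HDFS itself is up and reachable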
4.3 Edit regionservers
vim /usr/local/hbase/hbase-1.3.3/conf/regionservers
Add the following content:
192.168.1.101
192.168.1.102
192.168.1.103
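HBase also refers to the nodes by their hostnames (palo101–palo103 from the cluster plan, as seen in the startup log later), so every machine should be able to resolve those names. This is normally already in place from the Hadoop installation; if not, one way (a sketch, not part of the original guide) is to add the mappings to /etc/hosts on each node:
cat >> /etc/hosts <<'EOF'
192.168.1.101 palo101
192.168.1.102 palo102
192.168.1.103 palo103
EOF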
5. Copy the configured HBase to the other two machines
Copy the configured HBase to 192.168.1.102 and 192.168.1.103:
scp -r /usr/local/hbase 192.168.1.102:/usr/local/
scp -r /usr/local/hbase 192.168.1.103:/usr/local/
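start-hbase.sh later launches the RegionServers over SSH, so the machine where you run it must be able to log in to the other nodes without a password. This is usually already configured for Hadoop; if it is not, a minimal sketch (run as root on 192.168.1.101, matching the prompts used in this guide):
ssh-keygen -t rsa                  # accept the defaults if no key exists yet
ssh-copy-id root@192.168.1.102
ssh-copy-id root@192.168.1.103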
6. Configure environment variables
vim /etc/profile
Append the following at the end of the file:
#hbase
export HBASE_HOME=/usr/local/hbase/hbase-1.3.3
export PATH=$HBASE_HOME/bin:$PATH
Save and exit with :wq.
Run source /etc/profile in the terminal to make the environment variables take effect.
Note: this must be done on every machine.
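A quick way to confirm the variables took effect on a node:
echo $HBASE_HOME     # should print /usr/local/hbase/hbase-1.3.3
which hbase          # should resolve to /usr/local/hbase/hbase-1.3.3/bin/hbase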
7. Configure time synchronization
Odd, hard-to-diagnose problems with HDFS are often caused by the servers' clocks being out of sync, so synchronize them against a network time server.
yum -y install ntp ntpdate    # install the ntp/ntpdate time-sync tools
ntpdate cn.pool.ntp.org       # synchronize the clock
hwclock --systohc             # write the system time to the hardware clock
timedatectl                   # check the system time
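ntpdate only syncs the clock once. To keep the clocks aligned continuously you can also enable the ntpd service that was installed above (a suggestion beyond the original steps; assumes systemd, as on CentOS 7 where yum and timedatectl are used):
systemctl enable ntpd
systemctl start ntpd
systemctl status ntpd    # confirm the service is running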
Note: the commands above must be run on every machine.
8. Start HBase
Start HBase on 192.168.1.101:
$HBASE_HOME/bin/start-hbase.sh
The output looks like this:
[root@palo101 conf]# $HBASE_HOME/bin/start-hbase.sh
palo101: starting zookeeper, logging to /usr/local/hbase/hbase-1.3.3/bin/../logs/hbase-root-zookeeper-palo103.out
palo102: starting zookeeper, logging to /usr/local/hbase/hbase-1.3.3/bin/../logs/hbase-root-zookeeper-palo101.out
palo103: starting zookeeper, logging to /usr/local/hbase/hbase-1.3.3/bin/../logs/hbase-root-zookeeper-palo102.out
starting master, logging to /usr/local/hbase/hbase-1.3.3/logs/hbase-root-master-palo102.out
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option PermSize=128m; support was removed in 8.0
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=128m; support was removed in 8.0
192.168.1.101: starting regionserver, logging to /usr/local/hbase/hbase-1.3.3/bin/../logs/hbase-root-regionserver-palo101.out
192.168.1.103: starting regionserver, logging to /usr/local/hbase/hbase-1.3.3/bin/../logs/hbase-root-regionserver-palo103.out
192.168.1.102: starting regionserver, logging to /usr/local/hbase/hbase-1.3.3/bin/../logs/hbase-root-regionserver-palo102.out
192.168.1.101: Java HotSpot(TM) 64-Bit Server VM warning: ignoring option PermSize=128m; support was removed in 8.0
192.168.1.101: Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=128m; support was removed in 8.0
192.168.1.103: Java HotSpot(TM) 64-Bit Server VM warning: ignoring option PermSize=128m; support was removed in 8.0
192.168.1.103: Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=128m; support was removed in 8.0
192.168.1.102: Java HotSpot(TM) 64-Bit Server VM warning: ignoring option PermSize=128m; support was removed in 8.0
192.168.1.102: Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=128m; support was removed in 8.0
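To confirm the daemons are actually up, jps (bundled with the JDK) should show an HMaster process on the master node and an HRegionServer process on every host listed in regionservers, alongside the existing Hadoop and ZooKeeper processes:
jps    # expect HMaster (on the master) and HRegionServer (on each region server) in the list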
9. Check the HBase version
On any machine, run:
$HBASE_HOME/bin/hbase shell
[root@palo103 conf]# $HBASE_HOME/bin/hbase shell
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/local/hbase/hbase-1.3.3/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/workspace/hadoop-2.7.3/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/workspace/apache-hive-2.3.4-bin/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 1.3.3, rfd0d55b1e5ef54eb9bf60cce1f0a8e4c1da073ef, Sat Nov 17 21:43:34 CST 2018

hbase(main):001:0> version
1.3.3, rfd0d55b1e5ef54eb9bf60cce1f0a8e4c1da073ef, Sat Nov 17 21:43:34 CST 2018
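Beyond printing the version, a short smoke test from the shell confirms that tables can be created and data written; smoke_test and cf are arbitrary names chosen here, and the table is dropped afterwards:
create 'smoke_test', 'cf'               # arbitrary table and column-family names for this test
put 'smoke_test', 'row1', 'cf:a', 'value1'
scan 'smoke_test'                       # should show row1 with cf:a=value1
disable 'smoke_test'
drop 'smoke_test'                       # clean up the test table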
10. HBase web UI
Open http://{HBASE_MASTER_IP}:16010 in a browser to view HBase's monitoring information; in this example that is http://192.168.1.101:16010/.
Installation complete!