Fully Distributed Cluster (5): HBase 1.2.6.1 Installation and Configuration

Environment

Fully Distributed Cluster (1): Base Cluster Environment and zookeeper-3.4.10 Installation and Deployment

Hadoop cluster installation and configuration process

A Hadoop cluster must be deployed before installing HBase.

Fully Distributed Cluster (2): hadoop-2.6.5 Installation and Deployment

HBase Cluster Installation and Deployment

Download hbase-1.2.6.1-bin.tar.gz, upload it to the server with an FTP tool, and extract it:

[root@node222 ~]# ls /home/hadoop/hbase-1.2.6.1-bin.tar.gz
/home/hadoop/hbase-1.2.6.1-bin.tar.gz
[root@node222 ~]# gtar -zxf /home/hadoop/hbase-1.2.6.1-bin.tar.gz -C /usr/local/
[root@node222 ~]# ls /usr/local/hbase-1.2.6.1/
bin  CHANGES.txt  conf  docs  hbase-webapps  LEGAL  lib  LICENSE.txt  NOTICE.txt  README.txt

Configuring HBase

1. Configure hbase-env.sh

# Uncomment the corresponding variables (remove the leading "#") and set JAVA_HOME for your environment
export JAVA_HOME=/usr/local/jdk1.8.0_66
# Disable HBase's bundled ZooKeeper (the external quorum is used instead)
export HBASE_MANAGES_ZK=false
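The same two hbase-env.sh edits can be scripted instead of made by hand. A minimal sketch, operating on a demo copy of the file (the JDK path is the one used in this article; adjust it for your hosts):

```shell
# Illustrative only: apply the hbase-env.sh edits non-interactively with sed.
# /tmp/hbase-env-demo.sh stands in for /usr/local/hbase-1.2.6.1/conf/hbase-env.sh.
ENV_FILE=/tmp/hbase-env-demo.sh
cat > "$ENV_FILE" <<'EOF'
# export JAVA_HOME=/usr/java/jdk1.6.0/
# export HBASE_MANAGES_ZK=true
EOF
sed -i \
  -e 's|^# export JAVA_HOME=.*|export JAVA_HOME=/usr/local/jdk1.8.0_66|' \
  -e 's|^# export HBASE_MANAGES_ZK=.*|export HBASE_MANAGES_ZK=false|' \
  "$ENV_FILE"
grep '^export' "$ENV_FILE"
```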

2. Create the HBase temporary data directory

[root@node222 ~]# mkdir -p /usr/local/hbase-1.2.6.1/hbaseData

3. Configure hbase-site.xml, cross-checking values against Hadoop's core-site.xml and ZooKeeper's zoo.cfg:

<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://ns1/user/hbase/hbase_db</value>
    <!-- HBase data directory on HDFS; use the nameservice/port defined in core-site.xml -->
  </property>
  <property>
    <name>hbase.tmp.dir</name>
    <value>/usr/local/hbase-1.2.6.1/hbaseData</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
    <!-- Run in fully distributed mode -->
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>node222:2181,node224:2181,node225:2181</value>
    <!-- Address:port of each ZooKeeper quorum node -->
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/usr/local/zookeeper-3.4.10/zkdata</value>
    <!-- ZooKeeper data/log directory; must match dataDir in zoo.cfg -->
  </property>
</configuration>
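Before copying this file to the other nodes, it is worth checking that it is well-formed XML, since a stray tag here is an easy way to break startup. A small sketch using a demo file path and python3's stdlib parser (both assumptions, not part of the original setup):

```shell
# Sanity-check that hbase-site.xml parses as XML before distributing it.
# /tmp/hbase-site-demo.xml is a stand-in for the real conf file.
SITE=/tmp/hbase-site-demo.xml
cat > "$SITE" <<'EOF'
<configuration>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
</configuration>
EOF
python3 -c 'import sys, xml.etree.ElementTree as ET; ET.parse(sys.argv[1])' "$SITE" \
  && echo "hbase-site.xml is well-formed"
```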

Once the above is configured, because this is a cluster deployment you must also copy the Hadoop cluster's core-site.xml and hdfs-site.xml into hbase/conf, so that HBase can resolve HDFS nameservice paths correctly. Otherwise the master node starts normally after launch but the HRegionServer processes do not, failing with a log like the following:

2018-10-16 11:40:46,252 ERROR [main] regionserver.HRegionServerCommandLine: Region server exiting
java.lang.RuntimeException: Failed construction of Regionserver: class org.apache.hadoop.hbase.regionserver.HRegionServer
        at org.apache.hadoop.hbase.regionserver.HRegionServer.constructRegionServer(HRegionServer.java:2682)
        at org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.start(HRegionServerCommandLine.java:64)
        at org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.run(HRegionServerCommandLine.java:87)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
        at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126)
        at org.apache.hadoop.hbase.regionserver.HRegionServer.main(HRegionServer.java:2697)
Caused by: java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
        at org.apache.hadoop.hbase.regionserver.HRegionServer.constructRegionServer(HRegionServer.java:2680)
        ... 5 more
Caused by: java.lang.IllegalArgumentException: java.net.UnknownHostException: ns1
        at org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:373)
        at org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:258)
        at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:153)
        at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:602)
        at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:547)
        at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:139)
        at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2591)
        at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:89)
        at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2625)
        at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2607)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:368)
        at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
        at org.apache.hadoop.hbase.util.FSUtils.getRootDir(FSUtils.java:1003)
        at org.apache.hadoop.hbase.regionserver.HRegionServer.initializeFileSystem(HRegionServer.java:609)
        at org.apache.hadoop.hbase.regionserver.HRegionServer.<init>(HRegionServer.java:564)
        ... 10 more
Caused by: java.net.UnknownHostException: ns1
        ... 25 more
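The root cause is the final `java.net.UnknownHostException: ns1`: HBase's HDFS client has no definition for the ns1 nameservice until it can see hdfs-site.xml. A sketch of the copy step, using temporary stand-in directories so the commands can be exercised anywhere; on a real node HADOOP_CONF would typically be /usr/local/hadoop-2.6.5/etc/hadoop and HBASE_CONF the HBase conf directory:

```shell
# Demo paths; substitute the real Hadoop and HBase conf directories per node.
HADOOP_CONF=/tmp/demo-hadoop-conf
HBASE_CONF=/tmp/demo-hbase-conf
mkdir -p "$HADOOP_CONF" "$HBASE_CONF"
touch "$HADOOP_CONF/core-site.xml" "$HADOOP_CONF/hdfs-site.xml"   # stand-in files
# The actual fix: make the ns1 nameservice definition visible to HBase
cp "$HADOOP_CONF/core-site.xml" "$HADOOP_CONF/hdfs-site.xml" "$HBASE_CONF/"
ls "$HBASE_CONF"
```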

Configure regionservers with the hostnames of the nodes that will run HRegionServer:

# node224 and node225 will serve as the HRegionServer nodes
[root@node222 ~]# vi /usr/local/hbase-1.2.6.1/conf/regionservers
node224
node225

Configuring backup-master nodes

There are two ways to configure a backup master. The first is to skip the backup-masters file entirely and, once the cluster is up, start a backup master directly on a node:

# No backup-masters file needed; start directly on the node. The number after "start" becomes the offset on the web UI port
[hadoop@node225 ~]$ /usr/local/hbase-1.2.6.1/bin/local-master-backup.sh  start 1
starting master, logging to /usr/local/hbase-1.2.6.1/logs/hbase-hadoop-1-master-node225.out
[hadoop@node225 ~]$ jps
2883 JournalNode
2980 NodeManager
2792 DataNode
88175 HMaster
88239 Jps
87966 HRegionServer

For easier centralized management of multiple masters, this deployment instead adds a backup-masters file under $HBASE_HOME/conf (it does not exist by default), listing the servers that should act as backup masters:

[root@node222 ~]# vi /usr/local/hbase-1.2.6.1/conf/backup-masters
node225

With the configuration complete, copy the HBase directory to the other two nodes:

[root@node222 ~]# scp -r /usr/local/hbase-1.2.6.1 root@node224:/usr/local
[root@node222 ~]# scp -r /usr/local/hbase-1.2.6.1 root@node225:/usr/local

配置環境變量,3節點都操做

vi /etc/profile
# Add the following lines
export HBASE_HOME=/usr/local/hbase-1.2.6.1
export PATH=$HBASE_HOME/bin:$PATH
# Apply the changes
source /etc/profile
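After sourcing the profile, a quick check confirms the variables took effect. A minimal sketch, using the values from this article:

```shell
# Verify HBASE_HOME is set and its bin directory is on PATH.
export HBASE_HOME=/usr/local/hbase-1.2.6.1
export PATH=$HBASE_HOME/bin:$PATH
case ":$PATH:" in
  *":$HBASE_HOME/bin:"*) echo "PATH contains \$HBASE_HOME/bin" ;;
  *) echo "PATH is missing \$HBASE_HOME/bin" >&2; exit 1 ;;
esac
```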

Grant ownership of the HBase directory to the hadoop user (the cluster's passwordless SSH login is configured for the hadoop user), on all three nodes:

[root@node222 ~]# chown -R hadoop:hadoop /usr/local/hbase-1.2.6.1
[root@node224 ~]# chown -R hadoop:hadoop /usr/local/hbase-1.2.6.1
[root@node225 ~]# chown -R hadoop:hadoop /usr/local/hbase-1.2.6.1
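The recursive ownership change can be verified per node. A hedged sketch, using a temp directory and the current user as stand-ins for the real HBase directory and hadoop:hadoop, so it runs without root:

```shell
# Demo directory; on a real node check /usr/local/hbase-1.2.6.1 for user hadoop.
DEMO=/tmp/demo-hbase-owner
mkdir -p "$DEMO/conf" "$DEMO/bin"
chown -R "$(id -un)" "$DEMO"                   # stand-in for chown -R hadoop:hadoop
# Any file NOT owned by the expected user means the chown missed something
if find "$DEMO" ! -user "$(id -un)" | grep -q .; then
  echo "stray ownership found" >&2; exit 1
else
  echo "ownership OK"
fi
```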

Start HBase on the master node, then check the HBase service processes on each node:

[hadoop@node222 ~]$ /usr/local/hbase-1.2.6.1/bin/start-hbase.sh
starting master, logging to /usr/local/hbase-1.2.6.1/logs/hbase-hadoop-master-node222.out
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option PermSize=128m; support was removed in 8.0
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=128m; support was removed in 8.0
node224: starting regionserver, logging to /usr/local/hbase-1.2.6.1/bin/../logs/hbase-hadoop-regionserver-node224.out
node225: starting regionserver, logging to /usr/local/hbase-1.2.6.1/bin/../logs/hbase-hadoop-regionserver-node225.out
node224: Java HotSpot(TM) 64-Bit Server VM warning: ignoring option PermSize=128m; support was removed in 8.0
node224: Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=128m; support was removed in 8.0
node225: Java HotSpot(TM) 64-Bit Server VM warning: ignoring option PermSize=128m; support was removed in 8.0
node225: Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=128m; support was removed in 8.0
node225: starting master, logging to /usr/local/hbase-1.2.6.1/bin/../logs/hbase-hadoop-master-node225.out
node225: Java HotSpot(TM) 64-Bit Server VM warning: ignoring option PermSize=128m; support was removed in 8.0
node225: Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=128m; support was removed in 8.0
[hadoop@node222 ~]$ jps
4037 NameNode
4597 NodeManager
4327 JournalNode
129974 HMaster
4936 DFSZKFailoverController
4140 DataNode
4495 ResourceManager
130206 Jps

[hadoop@node224 ~]$ jps
3555 JournalNode
3683 NodeManager
3907 DFSZKFailoverController
116853 HRegionServer
3464 DataNode
45724 ResourceManager
3391 NameNode
117006 Jps

[hadoop@node225 ~]$ jps
88449 HRegionServer
88787 Jps
2883 JournalNode
2980 NodeManager
88516 HMaster
2792 DataNode

Check the cluster status via the HBase shell: 1 active master, 1 backup master, 2 servers:

[hadoop@node222 ~]$ /usr/local/hbase-1.2.6.1/bin/hbase shell
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/local/hbase-1.2.6.1/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/hadoop-2.6.5/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 1.2.6.1, rUnknown, Sun Jun  3 23:19:26 CDT 2018

hbase(main):001:0> status
1 active master, 1 backup masters, 2 servers, 0 dead, 1.0000 average load

The cluster status can also be viewed through the web UI.

Configuration problems you may hit in the cluster

Many guides online set hbase.rootdir in hbase-site.xml to the master node's hostname. That works in normal operation, but when the HDFS active and standby NameNodes fail over, HBase starts with the HMaster unable to come up while the HRegionServers start fine, producing the error below. The fix is to use the HDFS nameservice as shown in the configuration notes in this article.

activeMasterManager] master.HMaster: Failed to become active master

"SLF4J: Class path contains multiple SLF4J bindings" indicates duplicate SLF4J bindings; here it is resolved by modifying the HBase side so that Hadoop's binding wins:

[hadoop@node222 ~]$ /usr/local/hbase-1.2.6.1/bin/hbase shell
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/local/hbase-1.2.6.1/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/hadoop-2.6.5/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 1.2.6.1, rUnknown, Sun Jun  3 23:19:26 CDT 2018

hbase(main):001:0> exit
# Sideline HBase's own SLF4J binding jar
[hadoop@node222 ~]$ mv  /usr/local/hbase-1.2.6.1/lib/slf4j-log4j12-1.7.5.jar  /usr/local/hbase-1.2.6.1/lib/slf4j-log4j12-1.7.5.jar.bak
[hadoop@node222 ~]$ /usr/local/hbase-1.2.6.1/bin/hbase shell
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 1.2.6.1, rUnknown, Sun Jun  3 23:19:26 CDT 2018

Handling the following HBase startup warnings

starting master, logging to /usr/local/hbase-1.2.6.1/logs/hbase-root-master-node222.out
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option PermSize=128m; support was removed in 8.0
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=128m; support was removed in 8.0
starting regionserver, logging to /usr/local/hbase-1.2.6.1/logs/hbase-root-1-regionserver-node222.out

Edit hbase-env.sh and comment out the following two export lines:
[root@node222 ~]# vi /usr/local/hbase-1.2.6.1/conf/hbase-env.sh

# Configure PermSize. Only needed in JDK7. You can safely remove it for JDK8+
#export HBASE_MASTER_OPTS="$HBASE_MASTER_OPTS -XX:PermSize=128m -XX:MaxPermSize=128m"
#export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS -XX:PermSize=128m -XX:MaxPermSize=128m"

[root@node222 ~]# start-hbase.sh
starting master, logging to /usr/local/hbase-1.2.6.1/logs/hbase-root-master-node222.out
starting regionserver, logging to /usr/local/hbase-1.2.6.1/logs/hbase-root-1-regionserver-node222.out
