HBase 0.98.4 / Hadoop 2.4.1 Integration Notes [Original]

  Set HBase's data directory by editing conf/hbase-site.xml:

<configuration>    
    <property>
        <name>hbase.cluster.distributed</name>
        <value>true</value>
        <description>The mode the cluster will be in. Possible values are
            false: standalone and pseudo-distributed setups with managed Zookeeper
            true: fully-distributed with unmanaged Zookeeper Quorum (see hbase-env.sh)
        </description>
    </property>
    <property>
        <name>hbase.rootdir</name>
        <value>hdfs://Master:9000/hbase</value>
        <description>The directory shared by RegionServers.
        </description>
    </property>
    <property>
        <name>hbase.zookeeper.property.clientPort</name>
        <value>2222</value>
        <description>Property from ZooKeeper's config zoo.cfg.
        The port at which the clients will connect.
        </description>
    </property>
    <property>
        <name>hbase.zookeeper.quorum</name>
        <value>Master</value><!-- list every ZooKeeper host here if there is more than one -->
        <description>Comma separated list of servers in the ZooKeeper Quorum.
        For example, "host1.mydomain.com,host2.mydomain.com,host3.mydomain.com".
        By default this is set to localhost for local and pseudo-distributed modes
        of operation. For a fully-distributed setup, this should be set to a full
        list of ZooKeeper quorum servers. If HBASE_MANAGES_ZK is set in hbase-env.sh
        this is the list of servers which we will start/stop ZooKeeper on.
        </description>
    </property>
    <property>
        <name>hbase.zookeeper.property.dataDir</name>
        <value>/usr/local/hbase/zookeeper</value>
        <description>Property from ZooKeeper's config zoo.cfg.
        The directory where the snapshot is stored.
        </description>
    </property>
</configuration>
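
  These settings assume HBase manages its own ZooKeeper instance. A minimal sketch of a matching conf/hbase-env.sh would look roughly like this (the JDK path is only an example, point it at whatever your machines actually have):

export JAVA_HOME=/usr/local/jdk1.7.0_60   # example path, use your own JDK location
export HBASE_MANAGES_ZK=true              # let HBase start/stop the ZooKeeper quorum itself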

  Edit conf/regionservers, the same kind of change as Hadoop's slaves file; I still remove localhost. Likewise, if you have several region servers, list them all, one hostname per line (see the example below).

Master
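
  With more than one region server the file is simply one hostname per line; Slave1/Slave2/Slave3 below are placeholder names, not hosts from the cluster above:

Slave1
Slave2
Slave3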

  Replace the Hadoop jars used under HBase's lib directory so that they match the Hadoop version actually installed.

  The Hadoop-related jars that originally ship in lib are built against Hadoop 2.2.0.

  Create a small shell script in HBase's lib directory (so you don't copy over all sorts of unrelated jars by hand), then run it to pull the required jars over from the Hadoop installation directory:

# list the bundled hadoop jars, rewriting their 2.2.0 names to the 2.4.1 equivalents
find . -name "hadoop*jar" | sed 's/2.2.0/2.4.1/g' | sed 's/.\///g' > f.log
# delete the old 2.2.0 jars
rm ./hadoop*jar
# copy the matching 2.4.1 jars over from the hadoop installation
cat ./f.log | while read Line
do
    find /usr/local/hadoop/share/hadoop -name "$Line" | xargs -i cp {} ./
done

rm ./f.log
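
  A quick listing from the lib directory should now show only 2.4.1 jars (the exact file names depend on your HBase build):

ls hadoop*jar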

  lib also contains an slf4j-log4j12-XXX.jar. On a machine that has Hadoop installed, the same jar from Hadoop is already on the classpath and the two conflict, so just delete the copy in HBase's lib; if Hadoop is not on the machine, there is nothing to delete. You can see the warning when the HBase shell starts, or find it in the logs.
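
  Removing it is a one-liner (the version part is left as a wildcard since it differs between releases):

rm /usr/local/hbase/lib/slf4j-log4j12-*.jar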

  When that is done, just start HBase; Hadoop's DFS side will show the new directories:
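
  A minimal sanity check, assuming both HBase and Hadoop live under /usr/local as in the paths above:

/usr/local/hbase/bin/start-hbase.sh          # start the HMaster and region servers
/usr/local/hadoop/bin/hadoop fs -ls /hbase   # hbase.rootdir (hdfs://Master:9000/hbase) should now exist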

  The rest is up to you to tinker with.
