I've been working through Hadoop books lately, and this one, *HBase: The Definitive Guide*, caught my eye. Since the book is in English, I may have misunderstood parts of it; corrections are welcome.
Some HBase settings are tied to HDFS settings. If a value is changed on the HDFS side but not in HBase, HBase will not know about it. Take dfs.replication: sometimes we want more replicas and set it to 5 in HDFS, but HBase's default is 3, so HBase's files are still stored with only 3 replicas.
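As a sketch of the setting in question (the value 5 here is just the example from above), dfs.replication lives in hdfs-site.xml, and HBase only sees it if the same value is visible on its own classpath:

```xml
<!-- hdfs-site.xml: raise the HDFS replication factor.
     HBase will only honor this if the same file (or property)
     is visible to HBase, e.g. via one of the methods below. -->
<property>
  <name>dfs.replication</name>
  <value>5</value>
</property>
```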
So how can the two sets of configuration files be kept in sync? There are three methods:
(1) Add HADOOP_CONF_DIR to the HBASE_CLASSPATH environment variable in hbase-env.sh.
(2) Place a copy of Hadoop's configuration file, hdfs-site.xml (or hadoop-site.xml), under ${HBASE_HOME}/conf.
(3) Add the properties directly to hbase-site.xml.
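For method (1), a minimal sketch of the hbase-env.sh change (the Hadoop install path below is an assumption; substitute your own):

```shell
# In ${HBASE_HOME}/conf/hbase-env.sh: put Hadoop's conf directory on
# HBase's classpath so HBase reads the same hdfs-site.xml that the
# HDFS daemons use.
export HADOOP_CONF_DIR=/usr/local/hadoop/conf    # assumed install path
export HBASE_CLASSPATH=${HADOOP_CONF_DIR}:${HBASE_CLASSPATH}
```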
Of the three, the first looks like the most reliable. Of course there are other ways to synchronize configuration files; I'll cover those later.
Either of the two scripts below can synchronize the HBase configuration files across the cluster. The second one also deletes the previous configuration files on the destination, so be careful when using it.
```bash
#!/bin/bash
# Rsyncs HBase files across all slaves. Must run on master. Assumes
# all files are located in /usr/local
if [ "$#" != "2" ]; then
  echo "usage: $(basename $0) <dir-name> <ln-name>"
  echo "  example: $(basename $0) hbase-0.1 hbase"
  exit 1
fi

SRC_PATH="/usr/local/$1/conf/regionservers"

for srv in $(cat $SRC_PATH); do
  echo "Sending command to $srv..."
  rsync -vaz --exclude='logs/*' /usr/local/$1 $srv:/usr/local/
  ssh $srv "rm -fR /usr/local/$2 ; ln -s /usr/local/$1 /usr/local/$2"
done

echo "done."
```
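The key trick in the script above is the rm + ln -s symlink swap it runs on each server, which repoints the generic name (e.g. /usr/local/hbase) at the newly synced versioned directory. A local sketch of just that step (the /tmp paths are made up for the demo):

```shell
#!/bin/bash
# Demo of the symlink-swap step, run locally instead of over ssh.
# /tmp/demo stands in for /usr/local; paths are illustrative only.
set -e
mkdir -p /tmp/demo/hbase-0.1           # the versioned install directory
rm -fR /tmp/demo/hbase                 # drop the old link (or directory)
ln -s /tmp/demo/hbase-0.1 /tmp/demo/hbase   # repoint the generic name
readlink /tmp/demo/hbase               # prints /tmp/demo/hbase-0.1
```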
The other script does the same job and is a bit simpler:
```bash
#!/bin/bash
# Rsync's HBase config files across all region servers. Must run on master.
for srv in $(cat /usr/local/hbase/conf/regionservers); do
  echo "Sending command to $srv..."
  rsync -vaz --delete --exclude='logs/*' /usr/local/hadoop/ $srv:/usr/local/hadoop/
  rsync -vaz --delete --exclude='logs/*' /usr/local/hbase/ $srv:/usr/local/hbase/
done
echo "done."
```