These are the main steps I used at the time to set up hadoop + zookeeper + hbase + solr on three ECS nodes. The write-up has not been polished, so please read it alongside other blog posts and remember to adjust the parameters to your own environment.
Append the HBase environment to /etc/profile, then reload it (note: ${HBASE_HOME}/bin is added to PATH so the HBase scripts can be run directly):

export HBASE_HOME=/opt/hbase/hbase-2.1.9
export PATH=.:${JAVA_HOME}/bin:${SCALA_HOME}/bin:${SPARK_HOME}/bin:${HBASE_HOME}/bin:$PATH

source /etc/profile
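After sourcing the profile, it is worth confirming the variables actually resolved. A minimal sketch; the HBASE_HOME fallback below mirrors the value exported above and only kicks in when run outside the cluster:

```shell
# Sanity check after `source /etc/profile` (paths assumed from the profile above).
export HBASE_HOME="${HBASE_HOME:-/opt/hbase/hbase-2.1.9}"
echo "HBASE_HOME=$HBASE_HOME"
# Is $HBASE_HOME/bin on the PATH?
case ":$PATH:" in
  *":$HBASE_HOME/bin:"*) echo "hbase bin is on PATH" ;;
  *)                     echo "hbase bin is NOT on PATH" ;;
esac
```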
In conf/hbase-env.sh:

export JAVA_HOME=/opt/jdk/jdk1.8.0_191
export HADOOP_HOME=/opt/hadoop/hadoop-2.7.7
export HBASE_HOME=/opt/hbase/hbase-2.1.9
export HBASE_CLASSPATH=/opt/hadoop/hadoop-2.7.7/etc/hadoop/
export HBASE_PID_DIR=/opt/DonotDelete/hbasepid
export HBASE_MANAGES_ZK=false

Notes:
HBASE_CLASSPATH --> the directory holding the Hadoop configuration files
HBASE_MANAGES_ZK=false --> do not start HBase's bundled ZooKeeper; use the external ZooKeeper cluster
HBASE_PID_DIR --> store the PID files outside the tmp directory, where they could otherwise be deleted and leave the processes impossible to stop via the scripts. For details see:
https://blog.csdn.net/xiao_jun_0820/article/details/35222699
https://www.cnblogs.com/qindongliang/p/4894572.html
https://www.cnblogs.com/weiyiming007/p/12018288.html

Likewise, to keep Hadoop's PID files safe:
vi /opt/hadoop/hadoop-2.7.7/etc/hadoop/hadoop-env.sh
export HADOOP_PID_DIR=/opt/DonotDelete/hadooppid

and the same for Spark:
vi ~/spark-env.sh
export SPARK_PID_DIR=/opt/DonotDelete/sparkpid
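The three *_PID_DIR values above all live under /opt/DonotDelete, and those directories must exist before the daemons start, or the PID files cannot be written. A minimal sketch; PID_BASE falls back to a temp directory here so it can run anywhere, but on the actual nodes it would be /opt/DonotDelete:

```shell
# Create the directories referenced by HBASE_PID_DIR / HADOOP_PID_DIR / SPARK_PID_DIR.
# On the real nodes PID_BASE is /opt/DonotDelete; the mktemp fallback is for illustration.
PID_BASE="${PID_BASE:-$(mktemp -d)}"
mkdir -p "$PID_BASE/hbasepid" "$PID_BASE/hadooppid" "$PID_BASE/sparkpid"
ls "$PID_BASE"
```

Run this on every node (the PID directory is per-machine, not shared).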
Note: if hbase.rootdir points at a directory on HDFS, the port number must match the one configured in hdfs-site.xml. In conf/hbase-site.xml:

<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://Gwj:8020/hbase</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
  </property>
  <property>
    <name>zookeeper.session.timeout</name>
    <value>120000</value>
  </property>
  <property>
    <name>hbase.master.maxclockskew</name>
    <value>150000</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>Gwj,Ssj,Pyf</value>
  </property>
  <property>
    <name>hbase.tmp.dir</name>
    <value>/opt/hbase/temphbasedata</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.master</name>
    <value>Gwj:60000</value>
  </property>
</configuration>
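Since the same hbase-site.xml has to be present on every node, it helps to be able to read a property back for verification. A small grep/sed pipeline is enough; the sketch below runs against a throwaway copy of the rootdir property so it is self-contained — point conf at the real conf/hbase-site.xml instead:

```shell
# Extract the <value> of hbase.rootdir from an hbase-site.xml-style file.
conf=$(mktemp)
cat > "$conf" <<'EOF'
<property>
  <name>hbase.rootdir</name>
  <value>hdfs://Gwj:8020/hbase</value>
</property>
EOF
grep -A1 '<name>hbase.rootdir</name>' "$conf" | sed -n 's|.*<value>\(.*\)</value>.*|\1|p'
# → hdfs://Gwj:8020/hbase
```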
List the RegionServer hosts in conf/regionservers, one per line:

Ssj
Pyf
Start / stop the cluster from the master node:

/opt/hbase/hbase-2.1.9/bin/start-hbase.sh
stop-hbase.sh
status-hbase.sh
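Because HBASE_PID_DIR was pointed at a safe location earlier, stop-hbase.sh can always find its PID files, and the same files allow a quick liveness check. This sketch seeds a temp directory with the current shell's PID as a stand-in; on the cluster, PID_DIR would be /opt/DonotDelete/hbasepid and the .pid files are written by the start scripts themselves:

```shell
# Liveness check based on the .pid files the HBase scripts keep in HBASE_PID_DIR.
PID_DIR="${PID_DIR:-$(mktemp -d)}"
echo $$ > "$PID_DIR/hbase-demo-master.pid"   # stand-in for a file like hbase-root-master.pid
for f in "$PID_DIR"/*.pid; do
  pid=$(cat "$f")
  if kill -0 "$pid" 2>/dev/null; then
    echo "running: ${f##*/} (pid $pid)"
  else
    echo "stale:   ${f##*/}"
  fi
done
```

`kill -0` sends no signal; it only tests whether the process with that PID still exists.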
HBase processes: the master node runs HMaster, and the slave nodes run HRegionServer.