Host      IP               Installed software       Running processes
hadoop1   192.168.111.143  jdk, hadoop              NameNode, DFSZKFailoverController (zkfc), ResourceManager
hadoop2   192.168.111.144  jdk, hadoop              NameNode, DFSZKFailoverController (zkfc), ResourceManager
hadoop3   192.168.111.145  jdk, hadoop, zookeeper   DataNode, NodeManager, JournalNode, QuorumPeerMain
hadoop4   192.168.111.146  jdk, hadoop, zookeeper   DataNode, NodeManager, JournalNode, QuorumPeerMain
hadoop5   192.168.111.147  jdk, hadoop, zookeeper   DataNode, NodeManager, JournalNode, QuorumPeerMain
tar -zxvf zookeeper-3.4.9.tar.gz -C /home/hbase
cd /home/hbase/zookeeper-3.4.9/conf/
cp zoo_sample.cfg zoo.cfg
vim zoo.cfg
Modify:
dataDir=/home/hbase/zookeeper-3.4.9/tmp
Append to the end of zoo.cfg:
server.1=hadoop3:2888:3888
server.2=hadoop4:2888:3888
server.3=hadoop5:2888:3888
Then create a tmp directory:
mkdir /home/hbase/zookeeper-3.4.9/tmp
Create an empty file in it:
touch /home/hbase/zookeeper-3.4.9/tmp/myid
Finally, write the server ID into that file:
echo 1 >> /home/hbase/zookeeper-3.4.9/tmp/myid
scp -r /home/hbase/zookeeper-3.4.9/ hadoop4:/home/hbase/
scp -r /home/hbase/zookeeper-3.4.9/ hadoop5:/home/hbase/
Note: update the contents of /home/hbase/zookeeper-3.4.9/tmp/myid on hadoop4 and hadoop5 accordingly:
# on hadoop4:
echo 2 >> /home/hbase/zookeeper-3.4.9/tmp/myid
# on hadoop5:
echo 3 >> /home/hbase/zookeeper-3.4.9/tmp/myid
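The myid on each node must match the `server.N=` line for that host in zoo.cfg, and it is easy for the two to drift apart when copying directories around. A small sketch of a helper that derives the ID directly from the config (the hostnames and `server.N` entries below are the ones used in this walkthrough; `myid_for` is a hypothetical name):

```shell
# Build a throwaway zoo.cfg fragment with the server entries from above.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
server.1=hadoop3:2888:3888
server.2=hadoop4:2888:3888
server.3=hadoop5:2888:3888
EOF

# myid_for <hostname> <zoo.cfg>: print the N from the matching server.N line.
myid_for() {
  sed -n "s/^server\.\([0-9][0-9]*\)=$1:.*/\1/p" "$2"
}

myid_for hadoop4 "$cfg"   # prints 2
```

In a real run you would write the result into the myid file, e.g. `myid_for "$(hostname)" zoo.cfg > /home/hbase/zookeeper-3.4.9/tmp/myid`, instead of hand-typing the echo on each node.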
tar -zxvf hadoop-2.7.2.tar.gz -C /home/hbase/
# Add hadoop to the environment variables
vim /etc/profile
export JAVA_HOME=/home/hbase/jdk/jdk1.7.0_79
export HADOOP_HOME=/home/hbase/hadoop-2.7.2
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin
# All Hadoop 2.x configuration files live under $HADOOP_HOME/etc/hadoop
cd /home/hbase/hadoop-2.7.2/etc/hadoop
export JAVA_HOME=/home/hbase/jdk/jdk1.7.0_79
<configuration>
  <!-- Set the HDFS nameservice to ns1 -->
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://ns1</value>
  </property>
  <!-- Hadoop temporary directory -->
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hbase/hadoop-2.7.2/tmp</value>
  </property>
  <!-- ZooKeeper quorum address -->
  <property>
    <name>ha.zookeeper.quorum</name>
    <value>hadoop3:2181,hadoop4:2181,hadoop5:2181</value>
  </property>
</configuration>
<configuration>
  <!-- HDFS nameservice ns1; must match core-site.xml -->
  <property>
    <name>dfs.nameservices</name>
    <value>ns1</value>
  </property>
  <!-- ns1 has two NameNodes: nn1 and nn2 -->
  <property>
    <name>dfs.ha.namenodes.ns1</name>
    <value>nn1,nn2</value>
  </property>
  <!-- RPC address of nn1 -->
  <property>
    <name>dfs.namenode.rpc-address.ns1.nn1</name>
    <value>hadoop1:9000</value>
  </property>
  <!-- HTTP address of nn1 -->
  <property>
    <name>dfs.namenode.http-address.ns1.nn1</name>
    <value>hadoop1:50070</value>
  </property>
  <!-- RPC address of nn2 -->
  <property>
    <name>dfs.namenode.rpc-address.ns1.nn2</name>
    <value>hadoop2:9000</value>
  </property>
  <!-- HTTP address of nn2 -->
  <property>
    <name>dfs.namenode.http-address.ns1.nn2</name>
    <value>hadoop2:50070</value>
  </property>
  <!-- Where the NameNode metadata is stored on the JournalNodes -->
  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://hadoop3:8485;hadoop4:8485;hadoop5:8485/ns1</value>
  </property>
  <!-- Local directory where each JournalNode keeps its data -->
  <property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/home/hbase/hadoop-2.7.2/journal</value>
  </property>
  <!-- Enable automatic NameNode failover -->
  <property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
  </property>
  <!-- Failover proxy provider used by clients -->
  <property>
    <name>dfs.client.failover.proxy.provider.ns1</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
  <!-- Fencing methods, separated by newlines: one method per line -->
  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>
      sshfence
      shell(/bin/true)
    </value>
  </property>
  <!-- sshfence needs passwordless SSH; point it at the private key -->
  <property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/root/.ssh/id_rsa</value>
  </property>
  <!-- sshfence connect timeout (milliseconds) -->
  <property>
    <name>dfs.ha.fencing.ssh.connect-timeout</name>
    <value>30000</value>
  </property>
</configuration>
<configuration>
  <!-- Run the MapReduce framework on YARN -->
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
<configuration>
  <!-- Site specific YARN configuration properties -->
  <!-- Enable ResourceManager high availability -->
  <property>
    <name>yarn.resourcemanager.ha.enabled</name>
    <value>true</value>
  </property>
  <!-- Cluster id of the RM pair -->
  <property>
    <name>yarn.resourcemanager.cluster-id</name>
    <value>yrc</value>
  </property>
  <!-- Logical names of the two RMs -->
  <property>
    <name>yarn.resourcemanager.ha.rm-ids</name>
    <value>rm1,rm2</value>
  </property>
  <!-- Host of each RM -->
  <property>
    <name>yarn.resourcemanager.hostname.rm1</name>
    <value>hadoop1</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname.rm2</name>
    <value>hadoop2</value>
  </property>
  <!-- ZooKeeper cluster address -->
  <property>
    <name>yarn.resourcemanager.zk-address</name>
    <value>hadoop3:2181,hadoop4:2181,hadoop5:2181</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>
The slaves file specifies the worker nodes; the slaves file on hadoop1 lists the hosts that run the datanode and nodemanager:
hadoop3
hadoop4
hadoop5
# First configure passwordless SSH from hadoop1 to hadoop2, hadoop3, hadoop4, hadoop5
# Generate a key pair on hadoop1
ssh-keygen -t rsa
# Copy the public key to every node, including hadoop1 itself
ssh-copy-id hadoop1
ssh-copy-id hadoop2
ssh-copy-id hadoop3
ssh-copy-id hadoop4
ssh-copy-id hadoop5
# Note: the two namenodes need passwordless SSH to each other,
# so don't forget to configure hadoop2 -> hadoop1 as well.
# Generate a key pair on hadoop2:
ssh-keygen -t rsa
ssh-copy-id hadoop1
scp -r /home/hbase/hadoop-2.7.2/ root@hadoop2:/home/hbase/
scp -r /home/hbase/hadoop-2.7.2/ root@hadoop3:/home/hbase/
scp -r /home/hbase/hadoop-2.7.2/ root@hadoop4:/home/hbase/
scp -r /home/hbase/hadoop-2.7.2/ root@hadoop5:/home/hbase/
cd /home/hbase/zookeeper-3.4.9/bin/
./zkServer.sh start
# Check the status: one leader, two followers
./zkServer.sh status
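A healthy three-node ensemble reports exactly one "Mode: leader" and two "Mode: follower". A small sketch of a tally you could run over the `zkServer.sh status` output collected from each node (in a real run the lines would come from ssh-ing to hadoop3-5; `count_modes` is a hypothetical helper name):

```shell
# count_modes: reads "Mode: leader" / "Mode: follower" lines on stdin
# and prints how many of each were seen.
count_modes() {
  awk '/^Mode:/ { n[$2]++ }
       END { printf "leader=%d follower=%d\n", n["leader"], n["follower"] }'
}

# Simulated status lines from the three nodes:
printf 'Mode: follower\nMode: leader\nMode: follower\n' | count_modes
# prints: leader=1 follower=2
```

If the tally is anything other than `leader=1 follower=2`, recheck the `server.N` entries in zoo.cfg and the myid files before moving on.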
cd /home/hbase/hadoop-2.7.2
sbin/hadoop-daemon.sh start journalnode
# Run jps to verify: hadoop3, hadoop4 and hadoop5 now each have a JournalNode process
# On hadoop1:
hdfs namenode -format
# On hadoop2 (after the formatted NameNode on hadoop1 has been started):
hdfs namenode -bootstrapStandby
hdfs zkfc -formatZK
sbin/start-dfs.sh
Note:
If starting a datanode fails because its host cannot be found, first check that the slaves file is correct; if it looks fine, delete the file and recreate it.
sbin/start-yarn.sh
Check the processes on each machine:
At this point hadoop-2.7.2 is fully configured, and you can check it in a browser:
http://192.168.111.143:50070
NameNode 'hadoop1:9000' (active)
http://192.168.111.144:50070
NameNode 'hadoop2:9000' (standby)
Datanode:
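Besides the web UIs, the HA state can be checked from the command line with `hdfs haadmin -getServiceState nn1` / `nn2` (nn1 and nn2 are the ids from hdfs-site.xml above). A minimal sketch of a sanity check on the two reported states; `check_ha` is a hypothetical helper, and the real states would come from the haadmin calls:

```shell
# check_ha <state-of-nn1> <state-of-nn2>:
# succeeds only when exactly one NameNode is active and the other standby.
check_ha() {
  case "$1,$2" in
    active,standby|standby,active) echo ok ;;
    *) echo "bad HA state: $1/$2" >&2; return 1 ;;
  esac
}

# In a real run:
#   check_ha "$(hdfs haadmin -getServiceState nn1)" "$(hdfs haadmin -getServiceState nn2)"
check_ha active standby   # prints ok
```

Two actives (split brain) or two standbys would both fail this check and point at a fencing or ZKFC problem.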
So after the hadoop cluster is installed, first start zookeeper and the journalnodes, then format HDFS and ZKFC, and then start the namenode, resourcemanager and datanode processes:
1. ./zkServer.sh start (hadoop3, hadoop4, hadoop5)
2. ./hadoop-daemon.sh start journalnode (hadoop3, hadoop4, hadoop5)
3. hdfs namenode -format (hadoop1)
4. hdfs namenode -bootstrapStandby(hadoop2)
5. hdfs zkfc -formatZK(hadoop1)
6. ./start-dfs.sh (hadoop1)
7. ./start-yarn.sh(hadoop1)
8. If any process fails to start, run its start command individually on that machine
9. ./yarn-daemon.sh start proxyserver
10. ./mr-jobhistory-daemon.sh start historyserver
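Step 8 amounts to comparing the `jps` output on each host against the role table at the top of this document. A sketch of that comparison (the expected process set below is the master set from that table; `missing` is a hypothetical helper):

```shell
# Daemons expected on a master node (hadoop1/hadoop2), per the table above.
expected_master="DFSZKFailoverController NameNode ResourceManager"

# missing "<expected names>": reads jps output on stdin and prints
# every expected daemon name that does not appear in it.
missing() {
  local exp="$1" out
  out=$(cat)
  for p in $exp; do
    echo "$out" | grep -qw "$p" || echo "$p"
  done
}

# In a real run on hadoop1:  jps | missing "$expected_master"
# Simulated jps output where the zkfc has not started:
printf '1234 NameNode\n2345 ResourceManager\n' | missing "$expected_master"
# prints: DFSZKFailoverController
```

Each name the helper prints is a daemon you start individually with the commands in the notes below.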
Notes:
The format steps (steps 2, 3, 4, 5) are only performed before the first startup of hadoop and are not needed afterwards; if problems come up during a later startup, you can reformat.
Start the resourcemanager individually: ./yarn-daemon.sh start resourcemanager
Start a namenode individually: ./hadoop-daemon.sh start namenode
Start the zkfc individually: ./hadoop-daemon.sh start zkfc
1. ./stop-dfs.sh
2. ./stop-yarn.sh
3. ./yarn-daemon.sh stop proxyserver
4. ./mr-jobhistory-daemon.sh stop historyserver
Kill the namenode process on hadoop1, whose state is currently active; you will see hadoop2 switch from standby to active. Start hadoop1's namenode again, and hadoop1 comes back in the standby state.