Official documentation:
https://zookeeper.apache.org/doc/r3.5.5/zookeeperStarted.html
Extract ZooKeeper and distribute it across the cluster:
[test@hadoop102 opt]$ tar -zxvf zookeeper-3.4.10.tar.gz -C /opt/module/
[test@hadoop102 module]$ xsync zookeeper-3.4.10/
Create the data directory and a myid file inside it:
[test@hadoop102 zookeeper-3.4.10]$ mkdir zkData
[test@hadoop102 zkData]$ touch myid
[test@hadoop102 zkData]$ vim myid
Set the content of myid to 2.
Likewise set myid to 3 on hadoop103 and to 4 on hadoop104.
Rename the sample configuration and edit it:
[test@hadoop102 conf]$ mv zoo_sample.cfg zoo.cfg
[test@hadoop102 conf]$ vim zoo.cfg
Modify:
dataDir=/opt/module/zookeeper-3.4.10/zkData
#######################cluster##########################
server.2=hadoop102:2888:3888
server.3=hadoop103:2888:3888
server.4=hadoop104:2888:3888
[test@hadoop102 conf]$ xsync zoo.cfg
Format of the cluster entries: server.A=B:C:D
A is a number identifying the server. In cluster mode each server has a myid file under dataDir whose only content is this value; at startup ZooKeeper reads the file and matches it against the entries in zoo.cfg to determine which server it is.
B is the server's hostname or IP address.
C is the port this server uses, as a Follower, to exchange information with the cluster Leader.
D is the election port: if the Leader dies, the servers communicate over this port to elect a new Leader.
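To keep the myid files consistent with the server.A lines above, a quick alternative to editing each file by hand (run the matching command on each host):
[test@hadoop102 zkData]$ echo 2 > myid
[test@hadoop103 zkData]$ echo 3 > myid
[test@hadoop104 zkData]$ echo 4 > myid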
Start the ZooKeeper server (repeat on hadoop103 and hadoop104):
[test@hadoop102 zookeeper-3.4.10]$ bin/zkServer.sh start
Check the status:
[test@hadoop102 zookeeper-3.4.10]$ bin/zkServer.sh status
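On a healthy three-node ensemble, one server reports Mode: leader and the other two report Mode: follower.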
Start the ZooKeeper client:
[test@hadoop102 zookeeper-3.4.10]$ bin/zkCli.sh
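A quick sanity check from the client shell (the znode /test and its value are just examples):
[zk: localhost:2181(CONNECTED) 0] ls /
[zk: localhost:2181(CONNECTED) 1] create /test "hello"
[zk: localhost:2181(CONNECTED) 2] get /test
[zk: localhost:2181(CONNECTED) 3] quit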
Official documentation:
http://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithQJM.html
Create an HA directory, copy Hadoop into it, and edit the configuration of the HA copy:
mkdir /opt/ha
cp -r hadoop-2.7.2/ /opt/ha/
cd /opt/ha/hadoop-2.7.2/etc/hadoop
[test@hadoop102 hadoop]$ vim hadoop-env.sh
export JAVA_HOME=/opt/module/jdk1.8.0_144
cd /opt/ha/hadoop-2.7.2/etc/hadoop
[test@hadoop102 hadoop]$ vim core-site.xml
<configuration>
<!-- Combine the addresses of the two NameNodes into a single nameservice, mycluster -->
<property>
<name>fs.defaultFS</name>
<value>hdfs://mycluster</value>
</property>
<!-- Storage directory for files Hadoop generates at runtime -->
<property>
<name>hadoop.tmp.dir</name>
<value>/opt/ha/hadoop-2.7.2/data/tmp</value>
</property>
<!-- ZooKeeper quorum for automatic failover -->
<property>
<name>ha.zookeeper.quorum</name>
<value>hadoop102:2181,hadoop103:2181,hadoop104:2181</value>
</property>
</configuration>
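With fs.defaultFS pointing at the nameservice, clients address HDFS as hdfs://mycluster rather than a single NameNode host. For example, once the cluster is up:
[test@hadoop102 hadoop-2.7.2]$ bin/hdfs dfs -ls hdfs://mycluster/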
cd /opt/ha/hadoop-2.7.2/etc/hadoop
[test@hadoop102 hadoop]$ vim hdfs-site.xml
<configuration>
<!-- Logical name of the nameservice for the fully distributed cluster -->
<property>
<name>dfs.nameservices</name>
<value>mycluster</value>
</property>
<!-- The NameNodes in the nameservice -->
<property>
<name>dfs.ha.namenodes.mycluster</name>
<value>nn1,nn2</value>
</property>
<!-- RPC address of nn1 -->
<property>
<name>dfs.namenode.rpc-address.mycluster.nn1</name>
<value>hadoop102:9000</value>
</property>
<!-- RPC address of nn2 -->
<property>
<name>dfs.namenode.rpc-address.mycluster.nn2</name>
<value>hadoop103:9000</value>
</property>
<!-- HTTP address of nn1 -->
<property>
<name>dfs.namenode.http-address.mycluster.nn1</name>
<value>hadoop102:50070</value>
</property>
<!-- HTTP address of nn2 -->
<property>
<name>dfs.namenode.http-address.mycluster.nn2</name>
<value>hadoop103:50070</value>
</property>
<!-- Shared edits directory: where NameNode metadata is stored on the JournalNodes -->
<property>
<name>dfs.namenode.shared.edits.dir</name>
<value>qjournal://hadoop102:8485;hadoop103:8485;hadoop104:8485/mycluster</value>
</property>
<!-- Fencing: ensure only one NameNode serves clients at any given time -->
<property>
<name>dfs.ha.fencing.methods</name>
<value>sshfence</value>
</property>
<!-- sshfence requires passwordless SSH login to the NameNode hosts -->
<property>
<name>dfs.ha.fencing.ssh.private-key-files</name>
<value>/home/test/.ssh/id_rsa</value>
</property>
<!-- Disable permission checking -->
<property>
<name>dfs.permissions.enabled</name>
<value>false</value>
</property>
<!-- Failover proxy provider: the class HDFS clients use to determine which NameNode is currently Active -->
<property>
<name>dfs.client.failover.proxy.provider.mycluster</name>
<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<!-- Enable automatic failover -->
<property>
<name>dfs.ha.automatic-failover.enabled</name>
<value>true</value>
</property>
</configuration>
Distribute the HA directory to the other nodes:
xsync /opt/ha
Start a JournalNode on each of hadoop102, hadoop103, and hadoop104:
[test@hadoop102 hadoop-2.7.2]$ sbin/hadoop-daemon.sh start journalnode
Format nn1 on hadoop102 and start it:
[test@hadoop102 hadoop-2.7.2]$ bin/hdfs namenode -format
[test@hadoop102 hadoop-2.7.2]$ sbin/hadoop-daemon.sh start namenode
On hadoop103, copy nn1's metadata and start nn2:
[test@hadoop103 hadoop-2.7.2]$ bin/hdfs namenode -bootstrapStandby
[test@hadoop103 hadoop-2.7.2]$ sbin/hadoop-daemon.sh start namenode
Start the DataNodes, then make nn1 Active and check its state (note: with automatic failover enabled in hdfs-site.xml, a manual transition requires the --forcemanual flag):
[test@hadoop102 hadoop-2.7.2]$ sbin/hadoop-daemons.sh start datanode
[test@hadoop102 hadoop-2.7.2]$ bin/hdfs haadmin -transitionToActive nn1
[test@hadoop102 hadoop-2.7.2]$ bin/hdfs haadmin -getServiceState nn1
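nn1 should report active and nn2 standby; checking the standby as well:
[test@hadoop102 hadoop-2.7.2]$ bin/hdfs haadmin -getServiceState nn2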
To enable automatic failover, stop HDFS, make sure the ZooKeeper ensemble is running on every node, initialize the HA state in ZooKeeper, and restart HDFS:
[test@hadoop102 hadoop-2.7.2]$ sbin/stop-dfs.sh
[test@hadoop102 zookeeper-3.4.10]$ bin/zkServer.sh start
[test@hadoop102 hadoop-2.7.2]$ bin/hdfs zkfc -formatZK
[test@hadoop102 hadoop-2.7.2]$ sbin/start-dfs.sh
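zkfc -formatZK creates a /hadoop-ha znode in ZooKeeper (visible from zkCli with ls /hadoop-ha). A rough end-to-end check of automatic failover, assuming nn1 is currently Active (the NameNode PID comes from jps and will differ on your machine):
[test@hadoop102 hadoop-2.7.2]$ jps
[test@hadoop102 hadoop-2.7.2]$ kill -9 <NameNode PID>
[test@hadoop102 hadoop-2.7.2]$ bin/hdfs haadmin -getServiceState nn2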
Official documentation:
http://hadoop.apache.org/docs/r2.7.2/hadoop-yarn/hadoop-yarn-site/ResourceManagerHA.html
cd /opt/ha/hadoop-2.7.2/etc/hadoop
[test@hadoop102 hadoop]$ vim yarn-site.xml
<configuration>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<!-- Enable ResourceManager HA -->
<property>
<name>yarn.resourcemanager.ha.enabled</name>
<value>true</value>
</property>
<!-- Declare the two ResourceManagers -->
<property>
<name>yarn.resourcemanager.cluster-id</name>
<value>cluster-yarn1</value>
</property>
<property>
<name>yarn.resourcemanager.ha.rm-ids</name>
<value>rm1,rm2</value>
</property>
<property>
<name>yarn.resourcemanager.hostname.rm1</name>
<value>hadoop102</value>
</property>
<property>
<name>yarn.resourcemanager.hostname.rm2</name>
<value>hadoop103</value>
</property>
<!-- Address of the ZooKeeper cluster -->
<property>
<name>yarn.resourcemanager.zk-address</name>
<value>hadoop102:2181,hadoop103:2181,hadoop104:2181</value>
</property>
<!-- Enable automatic recovery -->
<property>
<name>yarn.resourcemanager.recovery.enabled</name>
<value>true</value>
</property>
<!-- Store ResourceManager state in the ZooKeeper cluster -->
<property>
<name>yarn.resourcemanager.store.class</name>
<value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
</property>
</configuration>
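yarn.resourcemanager.store.class above persists ResourceManager state in ZooKeeper, so an RM that takes over after a failover can recover running applications. Distribute the updated file to the other nodes (assuming the same xsync helper used earlier):
[test@hadoop102 hadoop]$ xsync yarn-site.xml
Bring up HDFS first (the same sequence as in the HDFS HA section), then start YARN: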
[test@hadoop102 hadoop-2.7.2]$ bin/hdfs namenode -format
[test@hadoop102 hadoop-2.7.2]$ sbin/hadoop-daemon.sh start namenode
[test@hadoop103 hadoop-2.7.2]$ bin/hdfs namenode -bootstrapStandby
[test@hadoop103 hadoop-2.7.2]$ sbin/hadoop-daemon.sh start namenode
[test@hadoop102 hadoop-2.7.2]$ sbin/hadoop-daemons.sh start datanode
[test@hadoop102 hadoop-2.7.2]$ bin/hdfs haadmin -transitionToActive nn1
Start YARN on hadoop102, then start the second ResourceManager on hadoop103:
[test@hadoop102 hadoop-2.7.2]$ sbin/start-yarn.sh
[test@hadoop103 hadoop-2.7.2]$ sbin/yarn-daemon.sh start resourcemanager
[test@hadoop102 hadoop-2.7.2]$ bin/yarn rmadmin -getServiceState rm1
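rm1 should report active or standby depending on which RM won the election (the standby RM's web UI at port 8088 redirects to the active one); check rm2 as well:
[test@hadoop102 hadoop-2.7.2]$ bin/yarn rmadmin -getServiceState rm2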