This tutorial walks through building a Hadoop HA cluster. A few notes on the setup:
software | version |
---|---|
OS | CentOS-7-x86_64-DVD-1810.iso |
Hadoop | hadoop-2.8.4 |
Zookeeper | zookeeper-3.4.10 |
node | roles |
---|---|
master1 | NameNode、DFSZKFailoverController(zkfc)、ResourceManager |
master2 | NameNode、DFSZKFailoverController(zkfc)、ResourceManager |
node1 | DataNode、NodeManager、JournalNode、QuorumPeerMain |
node2 | DataNode、NodeManager、JournalNode、QuorumPeerMain |
node3 | DataNode、NodeManager、JournalNode、QuorumPeerMain |
Add the host mappings to `/etc/hosts` on every node:

```
192.168.56.101 node1
192.168.56.102 node2
192.168.56.103 node3
192.168.56.201 master1
192.168.56.202 master2
```
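An optional sanity check before continuing — a small sketch, assuming the `/etc/hosts` entries above are already in place on the node you run it from:

```bash
# Confirm every hostname resolves and answers.
for host in master1 master2 node1 node2 node3; do
    ping -c 1 "$host" > /dev/null && echo "$host ok" || echo "$host UNREACHABLE"
done
```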
```bash
useradd hadoop
passwd hadoop
```
```bash
chmod -v u+w /etc/sudoers
vi /etc/sudoers
# append: hadoop ALL=(ALL) ALL
chmod -v u-w /etc/sudoers
```
```bash
hostnamectl set-hostname $hostname    # one of: master1 | master2 | node1 | node2 | node3
systemctl reboot -i
```
```bash
ssh-keygen -t rsa
cat .ssh/id_rsa.pub >> .ssh/authorized_keys
scp .ssh/authorized_keys $next_node:~/.ssh/
```
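Once the keys have been propagated around the ring, it is worth verifying that every node reaches every other node without a password — a minimal sketch using the hostnames defined above:

```bash
# Each call should print the remote hostname with no password prompt.
# BatchMode makes ssh fail fast instead of prompting if keys are wrong.
for host in master1 master2 node1 node2 node3; do
    ssh -o BatchMode=yes "$host" hostname
done
```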
```bash
sudo vi /etc/ssh/sshd_config
```

```
RSAAuthentication yes
StrictModes no
PubkeyAuthentication yes
AuthorizedKeysFile .ssh/authorized_keys
```

```bash
systemctl restart sshd.service
```
```bash
sudo systemctl stop firewalld
sudo firewall-cmd --state
sudo systemctl disable firewalld.service
```
```bash
sudo vi /etc/selinux/config
# set: SELINUX=disabled
```
```bash
sudo mkdir -p /opt/env
sudo chown -R hadoop:hadoop /opt/env
tar -xvf jdk-8u121-linux-i586.tar.gz -C /opt/env/
```

Note that this is the 32-bit (i586) JDK on a 64-bit OS, which is why the "Client VM ... stack guard" warnings show up in the logs below; the x64 build avoids them.
```bash
sudo vi /etc/profile
```

```sh
export JAVA_HOME=/opt/env/jdk1.8.0_121
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export PATH=$JAVA_HOME/bin:$PATH
```

```bash
source /etc/profile
```
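A quick check that the JDK is actually on the PATH (it should report the version unpacked above):

```bash
java -version    # should report java version "1.8.0_121"
which java       # should point into /opt/env/jdk1.8.0_121/bin
```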
```bash
sudo mkdir -p /opt/zookeeper
tar -zxvf /tmp/zookeeper-3.4.10.tar.gz -C /opt/zookeeper/
sudo chown -R hadoop:hadoop /opt/zookeeper
```
```bash
vi conf/zoo.cfg
```

```
dataDir=/opt/zookeeper/zookeeper-3.4.10/data
......
server.1=node1:2888:3888
server.2=node2:2888:3888
server.3=node3:2888:3888
```
```bash
mkdir data
echo $zk_id > data/myid    # 1 on node1, 2 on node2, 3 on node3
```
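The `myid` value must differ per node. A hypothetical one-pass helper, run from any node with the passwordless SSH set up earlier; it assumes ZooKeeper is unpacked at the same path everywhere:

```bash
# Write each node's ZooKeeper id in one pass over SSH.
id=1
for host in node1 node2 node3; do
    ssh "$host" "mkdir -p /opt/zookeeper/zookeeper-3.4.10/data && echo $id > /opt/zookeeper/zookeeper-3.4.10/data/myid"
    id=$((id + 1))
done
```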
```bash
sudo mkdir -p /opt/hadoop/data
sudo chown -R hadoop:hadoop /opt/hadoop/
tar -zxvf hadoop-2.8.4.tar.gz -C /opt/hadoop/
mkdir /opt/hadoop/journaldata    # matches dfs.journalnode.edits.dir below
```
`vi core-site.xml`:

```xml
<configuration>
  <!-- Use the nameservice ns1 as the HDFS entry point -->
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://ns1/</value>
  </property>
  <!-- Hadoop temporary directory -->
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/hadoop/data</value>
  </property>
  <!-- ZooKeeper quorum addresses -->
  <property>
    <name>ha.zookeeper.quorum</name>
    <value>node1:2181,node2:2181,node3:2181</value>
  </property>
  <property>
    <name>ha.zookeeper.session-timeout.ms</name>
    <value>3000</value>
  </property>
</configuration>
```
`vi hdfs-site.xml`:

```xml
<configuration>
  <!-- The HDFS nameservice is ns1; must match core-site.xml -->
  <property>
    <name>dfs.nameservices</name>
    <value>ns1</value>
  </property>
  <!-- ns1 has two NameNodes: nn1 and nn2 -->
  <property>
    <name>dfs.ha.namenodes.ns1</name>
    <value>nn1,nn2</value>
  </property>
  <!-- RPC address of nn1 -->
  <property>
    <name>dfs.namenode.rpc-address.ns1.nn1</name>
    <value>master1:9000</value>
  </property>
  <!-- HTTP address of nn1 -->
  <property>
    <name>dfs.namenode.http-address.ns1.nn1</name>
    <value>master1:50070</value>
  </property>
  <!-- RPC address of nn2 -->
  <property>
    <name>dfs.namenode.rpc-address.ns1.nn2</name>
    <value>master2:9000</value>
  </property>
  <!-- HTTP address of nn2 -->
  <property>
    <name>dfs.namenode.http-address.ns1.nn2</name>
    <value>master2:50070</value>
  </property>
  <!-- Where the NameNode metadata is stored on the JournalNodes -->
  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://node1:8485;node2:8485;node3:8485/ns1</value>
  </property>
  <!-- Local directory where each JournalNode keeps its data -->
  <property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/opt/hadoop/journaldata</value>
  </property>
  <!-- Enable automatic NameNode failover -->
  <property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
  </property>
  <!-- Proxy provider clients use to find the active NameNode -->
  <property>
    <name>dfs.client.failover.proxy.provider.ns1</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
  <!-- Fencing methods; multiple methods are separated by newlines, one per line -->
  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>
      sshfence
      shell(/bin/true)
    </value>
  </property>
  <!-- sshfence needs passwordless SSH; path to the install user's id_rsa (the hadoop user here) -->
  <property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/home/hadoop/.ssh/id_rsa</value>
  </property>
  <!-- Timeout for the sshfence mechanism -->
  <property>
    <name>dfs.ha.fencing.ssh.connect-timeout</name>
    <value>30000</value>
  </property>
  <!-- Storage location of the NameNode namespace -->
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:///opt/hadoop/hdfs/name</value>
  </property>
  <!-- Storage location of DataNode blocks -->
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:///opt/hadoop/hdfs/data</value>
  </property>
  <!-- Replication factor -->
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
</configuration>
```
`vi mapred-site.xml`:

```xml
<configuration>
  <!-- Run MapReduce on YARN -->
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <!-- MapReduce JobHistory Server address, default port 10020 -->
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>0.0.0.0:10020</value>
  </property>
  <!-- MapReduce JobHistory Server web UI address, default port 19888 -->
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>0.0.0.0:19888</value>
  </property>
</configuration>
```
`vi yarn-site.xml`:

```xml
<configuration>
  <!-- Site specific YARN configuration properties -->
  <!-- Enable ResourceManager HA -->
  <property>
    <name>yarn.resourcemanager.ha.enabled</name>
    <value>true</value>
  </property>
  <!-- Enable RM state recovery -->
  <property>
    <name>yarn.resourcemanager.recovery.enabled</name>
    <value>true</value>
  </property>
  <!-- RM cluster id -->
  <property>
    <name>yarn.resourcemanager.cluster-id</name>
    <value>yrc</value>
  </property>
  <!-- Logical ids of the RMs -->
  <property>
    <name>yarn.resourcemanager.ha.rm-ids</name>
    <value>rm1,rm2</value>
  </property>
  <!-- Host of each RM -->
  <property>
    <name>yarn.resourcemanager.hostname.rm1</name>
    <value>master1</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname.rm2</name>
    <value>master2</value>
  </property>
  <!-- Set to rm1 on master1 and to rm2 on master2 -->
  <property>
    <name>yarn.resourcemanager.ha.id</name>
    <value>$ResourceManager_Id</value>
    <description>Identifies this RM; needed when the RMs share one configuration.</description>
  </property>
  <!-- ZooKeeper quorum for the RM -->
  <property>
    <name>yarn.resourcemanager.zk-address</name>
    <value>node1:2181,node2:2181,node3:2181</value>
  </property>
  <property>
    <name>ha.zookeeper.quorum</name>
    <value>node1:2181,node2:2181,node3:2181</value>
  </property>
  <!-- ZooKeeper address for the RM state store -->
  <property>
    <name>yarn.resourcemanager.zk-state-store.address</name>
    <value>node1:2181,node2:2181,node3:2181</value>
  </property>
  <property>
    <name>yarn.resourcemanager.store.class</name>
    <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
  </property>
  <property>
    <name>yarn.resourcemanager.ha.automatic-failover.zk-base-path</name>
    <value>/yarn-leader-election</value>
    <description>Optional setting. The default value is /yarn-leader-election.</description>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>
```
Add the following to `hadoop-env.sh`, `mapred-env.sh`, and `yarn-env.sh`:

```sh
export JAVA_HOME=/opt/env/jdk1.8.0_121
export CLASS_PATH=$JAVA_HOME/lib:$JAVA_HOME/jre/lib
export HADOOP_HOME=/opt/hadoop/hadoop-2.8.4
export HADOOP_PID_DIR=/opt/hadoop/hadoop-2.8.4/pids
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="$HADOOP_OPTS -Djava.library.path=$HADOOP_HOME/lib/native"
export HADOOP_PREFIX=$HADOOP_HOME
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export HDFS_CONF_DIR=$HADOOP_HOME/etc/hadoop
export YARN_CONF_DIR=$HADOOP_HOME/etc/hadoop
export JAVA_LIBRARY_PATH=$HADOOP_HOME/lib/native
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
```
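A quick check that the unpacked distribution runs with this environment:

```bash
cd /opt/hadoop/hadoop-2.8.4
bin/hadoop version    # should report Hadoop 2.8.4
```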
`vi masters`:

```
master2
```
`vi slaves`:

```
node1
node2
node3
```
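The same configuration has to land on all five machines. A sketch for pushing it out from master1, assuming identical install paths everywhere; note that `yarn.resourcemanager.ha.id` still has to be edited to `rm2` on master2 afterwards:

```bash
# Distribute the finished configuration directory to the other nodes.
for host in master2 node1 node2 node3; do
    scp -r /opt/hadoop/hadoop-2.8.4/etc/hadoop/* "$host":/opt/hadoop/hadoop-2.8.4/etc/hadoop/
done
# Remember: on master2, change yarn.resourcemanager.ha.id from rm1 to rm2.
```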
Start ZooKeeper on node1, node2, and node3:

```bash
./zkServer.sh start
```
```
[hadoop@node1 bin]$ ./zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper/zookeeper-3.4.10/bin/../conf/zoo.cfg
Mode: leader
[hadoop@node2 bin]$ ./zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper/zookeeper-3.4.10/bin/../conf/zoo.cfg
Mode: follower
[hadoop@node3 bin]$ ./zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper/zookeeper-3.4.10/bin/../conf/zoo.cfg
Mode: follower
```
Format HDFS (run this on master1 only; master2 is bootstrapped from it later):

```bash
bin/hdfs namenode -format
```

In an HA setup the format step writes to the shared edits directory, so if it cannot reach the JournalNodes, start them first on node1-node3 with `sbin/hadoop-daemon.sh start journalnode` and re-run the format.
```
[hadoop@master2 ~]$ ll /opt/hadoop/data/dfs/name/current/
total 16
-rw-rw-r--. 1 hadoop hadoop 323 Jul 11 01:17 fsimage_0000000000000000000
-rw-rw-r--. 1 hadoop hadoop  62 Jul 11 01:17 fsimage_0000000000000000000.md5
-rw-rw-r--. 1 hadoop hadoop   2 Jul 11 01:17 seen_txid
-rw-rw-r--. 1 hadoop hadoop 219 Jul 11 01:17 VERSION
```
```bash
bin/hdfs zkfc -formatZK
```
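To confirm the format worked, you can look for the HA parent znode in ZooKeeper — a sketch using the ZooKeeper CLI at the paths installed above:

```bash
# The znode /hadoop-ha should contain the nameservice after a successful formatZK.
/opt/zookeeper/zookeeper-3.4.10/bin/zkCli.sh -server node1:2181 ls /hadoop-ha
# Expected to list: [ns1]
```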
```bash
sbin/start-dfs.sh
```
```
[hadoop@master1 sbin]$ sh start-dfs.sh
which: no start-dfs.sh in (/opt/env/jdk1.8.0_121/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/hadoop/.local/bin:/home/hadoop/bin)
Java HotSpot(TM) Client VM warning: You have loaded library /opt/hadoop/hadoop-2.8.4/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
19/07/25 01:00:52 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [master1 master2]
master2: starting namenode, logging to /opt/hadoop/hadoop-2.8.4/logs/hadoop-hadoop-namenode-master2.out
master1: starting namenode, logging to /opt/hadoop/hadoop-2.8.4/logs/hadoop-hadoop-namenode-master1.out
node2: starting datanode, logging to /opt/hadoop/hadoop-2.8.4/logs/hadoop-hadoop-datanode-node2.out
node1: starting datanode, logging to /opt/hadoop/hadoop-2.8.4/logs/hadoop-hadoop-datanode-node1.out
node3: starting datanode, logging to /opt/hadoop/hadoop-2.8.4/logs/hadoop-hadoop-datanode-node3.out
Starting journal nodes [node1 node2 node3]
node2: starting journalnode, logging to /opt/hadoop/hadoop-2.8.4/logs/hadoop-hadoop-journalnode-node2.out
node3: starting journalnode, logging to /opt/hadoop/hadoop-2.8.4/logs/hadoop-hadoop-journalnode-node3.out
node1: starting journalnode, logging to /opt/hadoop/hadoop-2.8.4/logs/hadoop-hadoop-journalnode-node1.out
Java HotSpot(TM) Client VM warning: You have loaded library /opt/hadoop/hadoop-2.8.4/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
19/07/25 01:01:11 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting ZK Failover Controllers on NN hosts [master1 master2]
master2: starting zkfc, logging to /opt/hadoop/hadoop-2.8.4/logs/hadoop-hadoop-zkfc-master2.out
master1: starting zkfc, logging to /opt/hadoop/hadoop-2.8.4/logs/hadoop-hadoop-zkfc-master1.out
[hadoop@master1 sbin]$ jps
5552 NameNode
5940 Jps
5869 DFSZKFailoverController
```
```bash
sbin/start-yarn.sh
```
```
[hadoop@master1 sbin]$ sh start-yarn.sh
starting yarn daemons
which: no start-yarn.sh in (/opt/env/jdk1.8.0_121/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/hadoop/.local/bin:/home/hadoop/bin)
starting resourcemanager, logging to /opt/hadoop/hadoop-2.8.4/logs/yarn-hadoop-resourcemanager-master1.out
node2: starting nodemanager, logging to /opt/hadoop/hadoop-2.8.4/logs/yarn-hadoop-nodemanager-node2.out
node3: starting nodemanager, logging to /opt/hadoop/hadoop-2.8.4/logs/yarn-hadoop-nodemanager-node3.out
node1: starting nodemanager, logging to /opt/hadoop/hadoop-2.8.4/logs/yarn-hadoop-nodemanager-node1.out
[hadoop@master1 sbin]$ jps
5552 NameNode
5994 ResourceManager
6092 Jps
5869 DFSZKFailoverController
```
At this point, on the DataNode hosts:
```
[hadoop@node1 hadoop-2.8.4]$ jps
3808 QuorumPeerMain
5062 Jps
4506 DataNode
4620 JournalNode
4732 NodeManager
```
At this point, on master2:
```
[hadoop@master2 sbin]$ jps
6092 Jps
5869 DFSZKFailoverController
```
On master2, copy the NameNode metadata from the active NameNode, then start the standby NameNode and the second ResourceManager:

```bash
bin/hdfs namenode -bootstrapStandby
sbin/hadoop-daemon.sh start namenode
sbin/yarn-daemon.sh start resourcemanager
```
```
[hadoop@master2 hadoop-2.8.4]$ jps
4233 Jps
3885 DFSZKFailoverController
4189 ResourceManager
4030 NameNode
```
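With both NameNodes running, their HA states can be checked the same way the ResourceManagers are checked below, using the nn1/nn2 ids from hdfs-site.xml:

```bash
bin/hdfs haadmin -getServiceState nn1    # typically "active" here
bin/hdfs haadmin -getServiceState nn2    # typically "standby"
```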
Check the state of each ResourceManager:
```
[hadoop@master2 hadoop-2.8.4]$ bin/yarn rmadmin -getServiceState rm2
Java HotSpot(TM) Client VM warning: You have loaded library /opt/hadoop/hadoop-2.8.4/lib/native/libhadoop.so which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
19/07/25 01:48:37 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
active
[hadoop@master2 hadoop-2.8.4]$ bin/yarn rmadmin -getServiceState rm1
Java HotSpot(TM) Client VM warning: You have loaded library /opt/hadoop/hadoop-2.8.4/lib/native/libhadoop.so which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
19/07/25 01:48:41 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
standby
```
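As a final exercise, you can verify that automatic failover actually works by killing the active daemon and watching the standby take over — a sketch, with the pid being whatever `jps` reports on the active node:

```bash
# On the node hosting the active NameNode:
jps                        # note the NameNode pid
kill -9 <namenode_pid>     # simulate a crash (substitute the real pid)
# From either master, the former standby should report "active" shortly after:
bin/hdfs haadmin -getServiceState nn2
# Restart the killed NameNode afterwards:
sbin/hadoop-daemon.sh start namenode
```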
enjoy