Notes
- This guide uses three machines, all running CentOS 6.
- Except for the final start and stop steps, every operation must be performed on all three machines.
- Some configuration files can be prepared on one machine and then sent to the others with scp to cut down the work (a sketch appears after the configuration files below).
Set the hostname (on each machine):
```
[root@hadoop1 ~]# vim /etc/sysconfig/network
```
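On CentOS 6 the hostname lives in this file; on hadoop1 it would look roughly like this (hadoop2 and hadoop3 use their own names):
```
NETWORKING=yes
HOSTNAME=hadoop1
```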
Map the hostnames to IP addresses:
```
[root@hadoop1 ~]# vim /etc/hosts
```
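The file needs one entry per node; the IPs below are placeholders for illustration, substitute your own:
```
192.168.1.101 hadoop1
192.168.1.102 hadoop2
192.168.1.103 hadoop3
```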
Create the hadoop user and give it sudo rights:
```
[root@hadoop3 ~]# useradd hadoop
[root@hadoop2 ~]# passwd hadoop
[root@hadoop3 ~]# vim /etc/sudoers
```
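In /etc/sudoers, a line like the following under the existing root entry is a common minimal edit to grant the hadoop user sudo (adjust to your own policy):
```
hadoop  ALL=(ALL)       ALL
```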
Switch to the hadoop user and generate a key pair:
```
[hadoop@hadoop1 ~]$ ssh-keygen
```
Copy the public key to every node:
```
[hadoop@hadoop1 ~]$ ssh-copy-id hadoop1
[hadoop@hadoop1 ~]$ ssh-copy-id hadoop2
[hadoop@hadoop1 ~]$ ssh-copy-id hadoop3
```
Verify that ssh no longer asks for a password:
```
[hadoop@hadoop1 ~]$ ssh hadoop1
[hadoop@hadoop1 ~]$ ssh hadoop2
[hadoop@hadoop1 ~]$ ssh hadoop3
```
Unpack the JDK and rename the directory:
```
[hadoop@hadoop1 ~]$ tar -zxvf jdk-8u161-linux-x64.tar.gz
[hadoop@hadoop1 ~]$ mv jdk1.8.0_161 jdk8
```
Switch to root and edit /etc/profile:
```
[root@hadoop1 ~]# vim /etc/profile
```
Append at the end of the file:
```
#Java
export JAVA_HOME=/home/hadoop/jdk8
export PATH=$PATH:$JAVA_HOME/bin
```
Switch back to the hadoop user, reload, and verify:
```
[hadoop@hadoop1 ~]$ source /etc/profile
[hadoop@hadoop1 ~]$ java -version
```
Cluster role plan:

| node | hdfs | yarn |
| --- | --- | --- |
| hadoop1 | namenode, datanode | nodemanager |
| hadoop2 | datanode, secondarynamenode | nodemanager |
| hadoop3 | datanode | resourcemanager, nodemanager |
Download and unpack Hadoop:
```
[hadoop@hadoop1 ~]$ wget https://www-us.apache.org/dist/hadoop/common/hadoop-2.7.6/hadoop-2.7.6.tar.gz
[hadoop@hadoop1 ~]$ tar -zxvf hadoop-2.7.6.tar.gz
```
As root, update the Java section of /etc/profile so it reads:
```
[root@hadoop1 ~]# vim /etc/profile

#Java
export JAVA_HOME=/home/hadoop/jdk8
export HADOOP_HOME=/home/hadoop/hadoop-2.7.6
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
```
Then reload as the hadoop user and verify:
```
[hadoop@hadoop1 ~]$ source /etc/profile
[hadoop@hadoop1 ~]$ hadoop version
```
The configuration files below all live in one directory:
```
[hadoop@hadoop3 ~]$ cd hadoop-2.7.6/etc/hadoop/
```
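One assumption worth stating before editing the files: in many 2.7.x installs the daemons do not inherit JAVA_HOME over ssh, so it is usually also hard-coded in hadoop-env.sh in this directory (path matches the JDK installed earlier):
```
# hadoop-env.sh: point the daemons at the JDK explicitly
export JAVA_HOME=/home/hadoop/jdk8
```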
core-site.xml:
```
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://hadoop1:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hadoop/hadoopdata/tmp</value>
  </property>
</configuration>
```
hdfs-site.xml:
```
<configuration>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>hadoop2:50090</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/home/hadoop/hadoopdata/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/home/hadoop/hadoopdata/data</value>
  </property>
</configuration>
```
yarn-site.xml:
```
<configuration>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>hadoop3</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>
```
mapred-site.xml:
```
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>hadoop1:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>hadoop1:19888</value>
  </property>
</configuration>
```
slaves (one hostname per line):
```
hadoop1
hadoop2
hadoop3
```
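Per the note at the top, once these files are finished on one machine they can be pushed to the other two with scp rather than re-edited; a sketch using this guide's paths (assumes hadoop-2.7.6 was already unpacked on every node):
```
[hadoop@hadoop1 ~]$ scp -r ~/hadoop-2.7.6/etc/hadoop/ hadoop2:~/hadoop-2.7.6/etc/
[hadoop@hadoop1 ~]$ scp -r ~/hadoop-2.7.6/etc/hadoop/ hadoop3:~/hadoop-2.7.6/etc/
```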
Format the namenode (a one-time step, on hadoop1 only) and start HDFS:
```
[hadoop@hadoop1 hadoop-2.7.6]$ hadoop namenode -format
[hadoop@hadoop1 hadoop-2.7.6]$ start-dfs.sh
```
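A quick sanity check that all three datanodes registered: hdfs dfsadmin -report lists the live nodes (the namenode web UI, port 50070 by default in 2.x, shows the same picture):
```
[hadoop@hadoop1 hadoop-2.7.6]$ hdfs dfsadmin -report
```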
Note: YARN is best started on the resourcemanager node (hadoop3 here), because start-yarn.sh launches the resourcemanager on whichever machine runs it.
```
[hadoop@hadoop3 hadoop-2.7.6]$ start-yarn.sh
```
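To confirm the daemons match the role table above, run jps on each node; no output is reproduced here, but assuming everything started cleanly, the expected process set per node is:
```
[hadoop@hadoop1 ~]$ jps   # expect NameNode, DataNode, NodeManager
[hadoop@hadoop2 ~]$ jps   # expect DataNode, SecondaryNameNode, NodeManager
[hadoop@hadoop3 ~]$ jps   # expect DataNode, ResourceManager, NodeManager
```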
Finally, fix the timezone and synchronize the clocks (on all three machines, so cluster time agrees):
```
[root@hadoop1 ~]# date
Thu Jan 17 00:10:16 EST 2019    // overseas VPS, so the timezone is wrong
[root@hadoop1 ~]# rm -rf /etc/localtime
[root@hadoop1 ~]# ln -s /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
[root@hadoop1 ~]# yum -y install ntpdate ntp
[root@hadoop1 ~]# ntpdate time.google.com
17 Jan 13:11:43 ntpdate[1691]: step time server 216.239.35.0 offset 2.120207 sec
```
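ntpdate is a one-shot sync; since the ntp package was installed as well, a sketch of keeping the clock synced continuously on CentOS 6 (stock service names):
```
[root@hadoop1 ~]# service ntpd start
[root@hadoop1 ~]# chkconfig ntpd on
```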