1. Hardware Environment
The hardware I used is a Yunchuang (雲創) minicloud appliance, consisting of three nodes (each with 8 GB of RAM, a 128 GB SSD, and three 3 TB SATA drives) and a gigabit switch.
2. Preparation Before Installation
1. Create a hadoop user on CentOS 7. The official recommendation is to install Hadoop, MapReduce, and YARN under separate users, but for simplicity I installed everything under the hadoop user.
2. Download the installation packages:
1) JDK: jdk-8u112-linux-x64.rpm
Download: http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html
2) Hadoop 2.7.3: hadoop-2.7.3.tar.gz
Download: http://archive.apache.org/dist/hadoop/common/stable2/
3. Uninstall the OpenJDK that ships with CentOS 7 (as root)
1) First, check which OpenJDK packages are already installed:
rpm -qa|grep jdk
You should see output similar to the following:
[hadoop@localhost Desktop]$ rpm -qa|grep jdk
java-1.7.0-openjdk-1.7.0.111-2.6.7.2.el7_2.x86_64
java-1.8.0-openjdk-headless-1.8.0.101-3.b13.el7_2.x86_64
java-1.8.0-openjdk-1.8.0.101-3.b13.el7_2.x86_64
java-1.7.0-openjdk-headless-1.7.0.111-2.6.7.2.el7_2.x86_64
2) Remove the OpenJDK packages found above:
yum -y remove java-1.7.0-openjdk-1.7.0.111-2.6.7.2.el7_2.x86_64
yum -y remove java-1.8.0-openjdk-headless-1.8.0.101-3.b13.el7_2.x86_64
yum -y remove java-1.8.0-openjdk-1.8.0.101-3.b13.el7_2.x86_64
yum -y remove java-1.7.0-openjdk-headless-1.7.0.111-2.6.7.2.el7_2.x86_64
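Once the removal finishes, you can optionally re-run the query from step 1) to confirm that no OpenJDK packages remain; it should produce no output:
rpm -qa | grep jdk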
4. Install the Oracle JDK (as root)
rpm -ivh jdk-8u112-linux-x64.rpm
After installation, the JDK is located at /usr/java/jdk1.8.0_112.
Next, add the JDK path to the system environment variables:
vi /etc/profile
Append the following lines to the end of the file:
export JAVA_HOME=/usr/java/jdk1.8.0_112
export JRE_HOME=/usr/java/jdk1.8.0_112/jre
export PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
Save and close the profile file, then run the following command for the configuration to take effect:
source /etc/profile
You can now check whether the JDK is configured correctly with the java -version command, as shown below:
[root@localhost jdk1.8.0_112]# java -version
java version "1.8.0_112"
Java(TM) SE Runtime Environment (build 1.8.0_112-b15)
Java HotSpot(TM) 64-Bit Server VM (build 25.112-b15, mixed mode)
[root@localhost jdk1.8.0_112]#
5. Disable the firewall (as root)
Stop and disable the firewall with the following commands:
systemctl stop firewalld.service
systemctl disable firewalld.service
The terminal output looks like this:
[root@localhost Desktop]# systemctl stop firewalld.service
[root@localhost Desktop]# systemctl disable firewalld.service
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
Removed symlink /etc/systemd/system/basic.target.wants/firewalld.service.
[root@localhost Desktop]#
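If you want to double-check, the following optional commands should report that firewalld is no longer running and is disabled:
firewall-cmd --state
systemctl is-enabled firewalld.service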
6. Set the hostnames and configure the network (as root)
1) Set the hostname
On the master host:
hostnamectl set-hostname master
On the slave1 host:
hostnamectl set-hostname slave1
On the slave2 host:
hostnamectl set-hostname slave2
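The new hostname takes effect immediately, although an existing shell prompt only updates after you log in again; you can confirm it on each node with:
hostnamectl status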
2) Configure the network
Taking the master host as an example, here is how to configure a static IP and the hosts file.
Each of my nodes has two network interfaces; I configured one of them with a static IP to be used for internal communication between the nodes.
vi /etc/sysconfig/network-scripts/ifcfg-enp7s0
(Note: on my master machine the interface configuration file to edit is ifcfg-enp7s0.)
The original contents of ifcfg-enp7s0 are as follows:
TYPE=Ethernet
BOOTPROTO=dhcp
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
IPV6_FAILURE_FATAL=no
NAME=enp7s0
UUID=914595f1-e6f9-4c9b-856a-c4bd79ffe987
DEVICE=enp7s0
ONBOOT=no
Change it to:
TYPE=Ethernet
ONBOOT=yes
DEVICE=enp7s0
UUID=914595f1-e6f9-4c9b-856a-c4bd79ffe987
BOOTPROTO=static
IPADDR=59.71.229.189
GATEWAY=59.71.229.254
DEFROUTE=yes
IPV6INIT=no
IPV4_FAILURE_FATAL=yes
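After saving the file, restart the network service so the static IP takes effect; a minimal sketch for CentOS 7 (the interface name is the one configured above):
systemctl restart network
ip addr show enp7s0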
3) Edit the /etc/hosts file
vi /etc/hosts
Add the following lines:
59.71.229.189 master
59.71.229.190 slave1
59.71.229.191 slave2
Apply the network configuration and hosts file changes above on every node in the cluster; a quick connectivity check is sketched below.
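Once every node has its hostname and hosts entries in place, the nodes should be able to reach each other by name. From any node, for example:
ping -c 3 master
ping -c 3 slave1
ping -c 3 slave2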
7. Configure passwordless SSH login between cluster nodes (as the hadoop user)
For convenience, I set things up so that any node in the cluster can SSH into any other node without a password. The steps are as follows:
1) On every machine, run the following command as the hadoop user:
ssh-keygen -t rsa -P ''
Press Enter at every prompt to accept the defaults.
2) On every machine, first append its own public key to authorized_keys (in the /home/hadoop/.ssh/ directory) so that ssh localhost works without a password:
cat id_rsa.pub >> authorized_keys
3) Then send each machine's public key to every other machine; you will need to enter the other machines' passwords during this step:
master:
scp /home/hadoop/.ssh/id_rsa.pub hadoop@slave1:/home/hadoop/.ssh/id_rsa_master.pub
scp /home/hadoop/.ssh/id_rsa.pub hadoop@slave2:/home/hadoop/.ssh/id_rsa_master.pub
slave1:
scp /home/hadoop/.ssh/id_rsa.pub hadoop@master:/home/hadoop/.ssh/id_rsa_slave1.pub
scp /home/hadoop/.ssh/id_rsa.pub hadoop@slave2:/home/hadoop/.ssh/id_rsa_slave1.pub
slave2:
scp /home/hadoop/.ssh/id_rsa.pub hadoop@master:/home/hadoop/.ssh/id_rsa_slave2.pub
scp /home/hadoop/.ssh/id_rsa.pub hadoop@slave1:/home/hadoop/.ssh/id_rsa_slave2.pub
4) On each host, go into /home/hadoop/.ssh/ and use cat to append the received public keys (everything except the locally generated id_rsa.pub) to authorized_keys. Once they are added, set the permissions on authorized_keys with chmod and delete all the public key files with rm:
master:
cat id_rsa_slave1.pub >> authorized_keys
cat id_rsa_slave2.pub >> authorized_keys
chmod 600 authorized_keys
rm id_rsa*.pub
slave1:
cat id_rsa_master.pub >> authorized_keys
cat id_rsa_slave2.pub >> authorized_keys
chmod 600 authorized_keys
rm id_rsa*.pub
slave2:
cat id_rsa_master.pub >> authorized_keys
cat id_rsa_slave1.pub >> authorized_keys
chmod 600 authorized_keys
rm id_rsa*.pub
After completing these steps, any machine can log in to any other machine over SSH without a password; a quick check is sketched below.
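To verify, run a remote command from any node; it should print the remote hostname without asking for a password (the very first connection may still ask you to confirm the host key):
ssh slave1 hostname
ssh slave2 hostname
ssh master hostname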
3. Installing and Configuring Hadoop (perform the following steps as the hadoop user)
1. Extract the hadoop-2.7.3.tar.gz archive into /home/hadoop/. (In this walkthrough the archive is on the hadoop user's Desktop.) First extract it where it is with:
tar -zxvf hadoop-2.7.3.tar.gz
Then copy everything in the extracted hadoop-2.7.3 directory to /home/hadoop/, and delete the hadoop-2.7.3 folder at the original location afterwards:
cp -r /home/hadoop/Desktop/hadoop-2.7.3 /home/hadoop/
2. Detailed configuration:
1) On master, first create the following directories under /home/hadoop/ (an equivalent one-line form follows the list):
mkdir -p /home/hadoop/hadoopdir/name
mkdir -p /home/hadoop/hadoopdir/data
mkdir -p /home/hadoop/hadoopdir/temp
mkdir -p /home/hadoop/hadoopdir/logs
mkdir -p /home/hadoop/hadoopdir/pids
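Equivalently, the five directories can be created with a single command using shell brace expansion (just a shorthand for the commands above):
mkdir -p /home/hadoop/hadoopdir/{name,data,temp,logs,pids}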
2) Then copy the hadoopdir directory to the other nodes with scp:
scp -r /home/hadoop/hadoopdir hadoop@slave1:/home/hadoop/
scp -r /home/hadoop/hadoopdir hadoop@slave2:/home/hadoop/
3) Go into the /home/hadoop/hadoop-2.7.3/etc/hadoop directory and edit the following files:
hadoop-env.sh:
export JAVA_HOME=/usr/java/jdk1.8.0_112
export HADOOP_LOG_DIR=/home/hadoop/hadoopdir/logs
export HADOOP_PID_DIR=/home/hadoop/hadoopdir/pids
mapred-env.sh:
export JAVA_HOME=/usr/java/jdk1.8.0_112
export HADOOP_MAPRED_LOG_DIR=/home/hadoop/hadoopdir/logs
export HADOOP_MAPRED_PID_DIR=/home/hadoop/hadoopdir/pids
yarn-env.sh:
export JAVA_HOME=/usr/java/jdk1.8.0_112
YARN_LOG_DIR=/home/hadoop/hadoopdir/logs
The slaves file:
#localhost
slave1
slave2
(Note: if localhost is not commented out in the slaves file, the local machine will also act as a DataNode.)
core-site.xml:
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://master:9000</value>
    </property>
    <property>
        <name>io.file.buffer.size</name>
        <value>131072</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>file:///home/hadoop/hadoopdir/temp</value>
    </property>
</configuration>
hdfs-site.xml:
<configuration>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:///home/hadoop/hadoopdir/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:///home/hadoop/hadoopdir/data</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>
    <property>
        <name>dfs.blocksize</name>
        <value>64m</value>
    </property>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>master:9001</value>
    </property>
    <property>
        <name>dfs.webhdfs.enabled</name>
        <value>true</value>
    </property>
</configuration>
mapred-site.xml:
cp mapred-site.xml.template mapred-site.xml
vi mapred-site.xml
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
        <final>true</final>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>master:10020</value>
    </property>
    <property>
        <name>mapreduce.jobtracker.http.address</name>
        <value>master:50030</value>
    </property>
    <property>
        <name>mapred.job.tracker</name>
        <value>http://master:9001</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>master:19888</value>
    </property>
</configuration>
yarn-site.xml:
<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>master</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>master:8030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>master:8031</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>master:8032</value>
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>master:8033</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>master:8088</value>
    </property>
</configuration>
4) On the master machine, copy the entire /home/hadoop/hadoop-2.7.3 directory to the other nodes:
scp -r /home/hadoop/hadoop-2.7.3 hadoop@slave1:/home/hadoop/
scp -r /home/hadoop/hadoop-2.7.3 hadoop@slave2:/home/hadoop/
5) Go into /home/hadoop/hadoop-2.7.3/bin and format the file system:
./hdfs namenode -format
Formatting the file system produces a long stream of terminal output; an exit status of 0 in the last few lines indicates that the format succeeded. If formatting fails, examine the logs carefully to determine the cause.
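As a rough sanity check (assuming the dfs.namenode.name.dir configured above), a successful format creates a current/ subdirectory containing a VERSION file under the NameNode directory:
ls /home/hadoop/hadoopdir/name/current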
6) Go into /home/hadoop/hadoop-2.7.3/sbin:
./start-dfs.sh
./start-yarn.sh
These commands start HDFS and YARN, and the Hadoop cluster is now running. To shut it down, run the following in the sbin directory:
./stop-yarn.sh
./stop-dfs.sh
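While the cluster is up, you can confirm that the expected daemons are running with the jps tool that ships with the JDK (process IDs will vary; this assumes the JDK path configured earlier is on the PATH):
# on master
jps
# expected: NameNode, SecondaryNameNode, ResourceManager, Jps
# on slave1 and slave2
jps
# expected: DataNode, NodeManager, Jps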
7) HDFS startup example
After running start-dfs.sh, open master:50070 in a browser; the page shows the cluster overview and DataNode information.
After running start-yarn.sh, open master:8088 in a browser; the page shows the cluster and application information.
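If you prefer the command line over the web UIs, roughly the same information can be pulled with the standard Hadoop tools from the bin directory, for example:
./hdfs dfsadmin -report
./yarn node -list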