In a previous post we covered how to set up a pseudo-distributed Hadoop environment on a single machine. In practice, though, you will almost always be running a distributed cluster across multiple machines and nodes, so this post walks through setting up a distributed Hadoop environment on several machines.
I prepared three machines for this; their IP addresses are 192.168.77.128, 192.168.77.130, and 192.168.77.134.
First, edit the /etc/hosts file on all three machines to map the hostnames of all three nodes, and set each machine's own hostname to match (on CentOS 7, hostnamectl set-hostname <name> does this; the new name shows up after the reboot below):
[root@localhost ~]# vim /etc/hosts  # do this on all three machines
192.168.77.128 hadoop000
192.168.77.130 hadoop001
192.168.77.134 hadoop002
[root@localhost ~]# reboot
The roles the three machines will play in the cluster:

hadoop000: master and worker (NameNode, SecondaryNameNode, ResourceManager, plus DataNode and NodeManager)
hadoop001: worker (DataNode, NodeManager)
hadoop002: worker (DataNode, NodeManager)
The machines in the cluster need to talk to each other, so we first have to set up passwordless SSH login. Run the following command on each of the three machines to generate a key pair:
[root@hadoop000 ~]# ssh-keygen -t rsa  # run this on all three machines to generate a key pair
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
0d:00:bd:a3:69:b7:03:d5:89:dc:a8:a2:ca:28:d6:06 root@hadoop000
The key's randomart image is:
+--[ RSA 2048]----+
|          .o.    |
|          ..     |
|       . *..     |
|        B +o     |
|       = .S .    |
|       E. * .    |
|      .oo o .    |
|     =. o o      |
|    *.. .        |
+-----------------+
[root@hadoop000 ~]# ls .ssh/
authorized_keys  id_rsa  id_rsa.pub  known_hosts
[root@hadoop000 ~]#
With hadoop000 as the main node, run the following commands to copy its public key to every machine, including itself:
[root@hadoop000 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop000
[root@hadoop000 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop001
[root@hadoop000 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop002
Note: the other two machines also need to run these same three ssh-copy-id commands.
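By the way, instead of typing the three commands on each machine by hand, a small loop does the same job (just a convenience sketch; each ssh-copy-id will still prompt for the root password once):

for host in hadoop000 hadoop001 hadoop002; do
    ssh-copy-id -i ~/.ssh/id_rsa.pub $host
done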
Once the keys have been copied, test that passwordless login works:
[root@hadoop000 ~]# ssh hadoop000
Last login: Mon Apr  2 17:20:02 2018 from localhost
[root@hadoop000 ~]# ssh hadoop001
Last login: Tue Apr  3 00:49:59 2018 from 192.168.77.1
[root@hadoop001 ~]# logout
Connection to hadoop001 closed.
[root@hadoop000 ~]# ssh hadoop002
Last login: Tue Apr  3 00:50:03 2018 from 192.168.77.1
[root@hadoop002 ~]# logout
Connection to hadoop002 closed.
[root@hadoop000 ~]# logout
Connection to hadoop000 closed.
[root@hadoop000 ~]#
As shown above, hadoop000 can now log in to the other two machines without a password, so the configuration succeeded.
Grab the JDK download link from the Oracle website. I'm using JDK 1.8 here; the address is:
http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html
Use wget to download the JDK into the /usr/local/src/ directory. I've already downloaded it here:
[root@hadoop000 ~]# cd /usr/local/src/
[root@hadoop000 /usr/local/src]# ls
jdk-8u151-linux-x64.tar.gz
[root@hadoop000 /usr/local/src]#
Unpack the tarball and move the extracted directory to /usr/local/:
[root@hadoop000 /usr/local/src]# tar -zxvf jdk-8u151-linux-x64.tar.gz
[root@hadoop000 /usr/local/src]# mv ./jdk1.8.0_151 /usr/local/jdk1.8
Edit the /etc/profile file to configure the environment variables:
[root@hadoop000 ~]# vim /etc/profile  # append the following
JAVA_HOME=/usr/local/jdk1.8
JAVA_BIN=$JAVA_HOME/bin
JRE_HOME=$JAVA_HOME/jre
CLASSPATH=$JRE_HOME/lib:$JAVA_HOME/lib:$JRE_HOME/lib/charsets.jar
PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin
export JAVA_HOME JAVA_BIN JRE_HOME CLASSPATH PATH
Load the configuration file with the source command so it takes effect; after that, running java -version should print the JDK version:
[root@hadoop000 ~]# source /etc/profile
[root@hadoop000 ~]# java -version
java version "1.8.0_151"
Java(TM) SE Runtime Environment (build 1.8.0_151-b12)
Java HotSpot(TM) 64-Bit Server VM (build 25.151-b12, mixed mode)
[root@hadoop000 ~]#
With the JDK installed on hadoop000, use rsync to sync both the JDK and the profile configuration over to the other machines:
[root@hadoop000 ~]# rsync -av /usr/local/jdk1.8 hadoop001:/usr/local
[root@hadoop000 ~]# rsync -av /usr/local/jdk1.8 hadoop002:/usr/local
[root@hadoop000 ~]# rsync -av /etc/profile hadoop001:/etc/profile
[root@hadoop000 ~]# rsync -av /etc/profile hadoop002:/etc/profile
Once the sync completes, run source on each of the two machines so the environment variables take effect, then run java -version to confirm the JDK was installed successfully.
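For reference, the check on hadoop001 (and likewise on hadoop002) should look like this, printing the same version we saw on hadoop000:

[root@hadoop001 ~]# source /etc/profile
[root@hadoop001 ~]# java -version
java version "1.8.0_151"
Java(TM) SE Runtime Environment (build 1.8.0_151-b12)
Java HotSpot(TM) 64-Bit Server VM (build 25.151-b12, mixed mode)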
Download the Hadoop 2.6.0-cdh5.7.0 tar.gz package and unpack it:
[root@hadoop000 ~]# cd /usr/local/src/
[root@hadoop000 /usr/local/src]# wget http://archive.cloudera.com/cdh5/cdh/5/hadoop-2.6.0-cdh5.7.0.tar.gz
[root@hadoop000 /usr/local/src]# tar -zxvf hadoop-2.6.0-cdh5.7.0.tar.gz -C /usr/local/
Note: if the download is slow on Linux, you can fetch this link with Xunlei (Thunder) on Windows and then upload the file to the Linux machine; that's usually faster.
After unpacking, cd into the extracted directory; Hadoop's directory structure looks like this:
[root@hadoop000 /usr/local/src]# cd /usr/local/hadoop-2.6.0-cdh5.7.0/
[root@hadoop000 /usr/local/hadoop-2.6.0-cdh5.7.0]# ls
bin             cloudera  examples             include  libexec      NOTICE.txt  sbin   src
bin-mapreduce1  etc       examples-mapreduce1  lib      LICENSE.txt  README.txt  share
[root@hadoop000 /usr/local/hadoop-2.6.0-cdh5.7.0]#
A quick rundown of what a few of these directories hold: bin contains the Hadoop client commands, etc contains the configuration files, sbin contains the scripts for starting and stopping the cluster daemons, and share contains Hadoop's jars, dependencies, and example programs.
With that, Hadoop itself is installed. Next we edit the configuration files, starting with setting JAVA_HOME:
[root@hadoop000 /usr/local/hadoop-2.6.0-cdh5.7.0]# cd etc/
[root@hadoop000 /usr/local/hadoop-2.6.0-cdh5.7.0/etc]# cd hadoop
[root@hadoop000 /usr/local/hadoop-2.6.0-cdh5.7.0/etc/hadoop]# vim hadoop-env.sh
export JAVA_HOME=/usr/local/jdk1.8/  # adjust to match your own environment
[root@hadoop000 /usr/local/hadoop-2.6.0-cdh5.7.0/etc/hadoop]#
Then add Hadoop's installation directory to the environment variables so its commands are convenient to use later:
[root@hadoop000 ~]# vim ~/.bash_profile  # append the following
export HADOOP_HOME=/usr/local/hadoop-2.6.0-cdh5.7.0/
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
[root@hadoop000 ~]# source !$
source ~/.bash_profile
[root@hadoop000 ~]#
Next, edit the core-site.xml and hdfs-site.xml configuration files:
[root@hadoop000 ~]# cd $HADOOP_HOME
[root@hadoop000 /usr/local/hadoop-2.6.0-cdh5.7.0]# cd etc/hadoop
[root@hadoop000 /usr/local/hadoop-2.6.0-cdh5.7.0/etc/hadoop]# vim core-site.xml  # add the following
<configuration>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://hadoop000:8020</value>  <!-- the default HDFS access address and port -->
    </property>
</configuration>
[root@hadoop000 /usr/local/hadoop-2.6.0-cdh5.7.0/etc/hadoop]# vim hdfs-site.xml  # add the following
<configuration>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>/data/hadoop/app/tmp/dfs/name</value>  <!-- directory where the NameNode stores its files -->
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/data/hadoop/app/tmp/dfs/data</value>  <!-- directory where DataNodes store their files -->
    </property>
</configuration>
[root@hadoop000 /usr/local/hadoop-2.6.0-cdh5.7.0/etc/hadoop]# mkdir -p /data/hadoop/app/tmp/dfs/name
[root@hadoop000 /usr/local/hadoop-2.6.0-cdh5.7.0/etc/hadoop]# mkdir -p /data/hadoop/app/tmp/dfs/data
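As an optional sanity check (assuming $HADOOP_HOME/bin is already on your PATH from the previous step), hdfs getconf shows whether Hadoop picked up the new settings; fs.default.name is the deprecated alias of fs.defaultFS, so querying the latter works:

[root@hadoop000 ~]# hdfs getconf -confKey fs.defaultFS
hdfs://hadoop000:8020
[root@hadoop000 ~]# hdfs getconf -namenodes
hadoop000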
We also need to edit the yarn-site.xml configuration file:
[root@hadoop000 /usr/local/hadoop-2.6.0-cdh5.7.0/etc/hadoop]# vim yarn-site.xml  # add the following
<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>hadoop000</value>
    </property>
</configuration>
[root@hadoop000 /usr/local/hadoop-2.6.0-cdh5.7.0/etc/hadoop]#
Copy and edit the MapReduce configuration file:
[root@hadoop000 /usr/local/hadoop-2.6.0-cdh5.7.0/etc/hadoop]# cp mapred-site.xml.template mapred-site.xml
[root@hadoop000 /usr/local/hadoop-2.6.0-cdh5.7.0/etc/hadoop]# vim !$  # add the following
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>
[root@hadoop000 /usr/local/hadoop-2.6.0-cdh5.7.0/etc/hadoop]#
Finally, configure the hostnames of the worker nodes in the slaves file (use IPs if you haven't configured hostnames). Note that hadoop000 is listed too, so the master also runs as a worker:
[root@hadoop000 /usr/local/hadoop-2.6.0-cdh5.7.0/etc/hadoop]# vim slaves
hadoop000
hadoop001
hadoop002
[root@hadoop000 /usr/local/hadoop-2.6.0-cdh5.7.0/etc/hadoop]#
At this point the Hadoop environment for our master node is set up on hadoop000, but the other two machines acting as slave nodes still have no Hadoop environment. So next we distribute hadoop000's Hadoop installation directory and environment variable file to the other two machines, by running the following commands:
[root@hadoop000 ~]# rsync -av /usr/local/hadoop-2.6.0-cdh5.7.0/ hadoop001:/usr/local/hadoop-2.6.0-cdh5.7.0/
[root@hadoop000 ~]# rsync -av /usr/local/hadoop-2.6.0-cdh5.7.0/ hadoop002:/usr/local/hadoop-2.6.0-cdh5.7.0/
[root@hadoop000 ~]# rsync -av ~/.bash_profile hadoop001:~/.bash_profile
[root@hadoop000 ~]# rsync -av ~/.bash_profile hadoop002:~/.bash_profile
After the distribution finishes, run source on each of the two machines and create the temporary directories:
[root@hadoop001 ~]# source .bash_profile
[root@hadoop001 ~]# mkdir -p /data/hadoop/app/tmp/dfs/name
[root@hadoop001 ~]# mkdir -p /data/hadoop/app/tmp/dfs/data

[root@hadoop002 ~]# source .bash_profile
[root@hadoop002 ~]# mkdir -p /data/hadoop/app/tmp/dfs/name
[root@hadoop002 ~]# mkdir -p /data/hadoop/app/tmp/dfs/data
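If you'd rather not log in to each slave machine, the directories can also be created remotely from hadoop000 (a convenience sketch; the synced ~/.bash_profile is read automatically on the next login, so the source step takes care of itself):

[root@hadoop000 ~]# for host in hadoop001 hadoop002; do
>     ssh $host "mkdir -p /data/hadoop/app/tmp/dfs/name /data/hadoop/app/tmp/dfs/data"
> done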
Format the NameNode; this only needs to be done on hadoop000:
[root@hadoop000 ~]# hdfs namenode -format
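The format spits out a lot of INFO logging; the lines worth looking for near the end should read roughly like the following (exact timestamps and prefixes will differ on your machine):

INFO common.Storage: Storage directory /data/hadoop/app/tmp/dfs/name has been successfully formatted.
INFO util.ExitUtil: Exiting with status 0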
Once the format completes, we can start the Hadoop cluster:
[root@hadoop000 ~]# start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
18/04/02 20:10:59 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [hadoop000]
hadoop000: starting namenode, logging to /usr/local/hadoop-2.6.0-cdh5.7.0/logs/hadoop-root-namenode-hadoop000.out
hadoop000: starting datanode, logging to /usr/local/hadoop-2.6.0-cdh5.7.0/logs/hadoop-root-datanode-hadoop000.out
hadoop001: starting datanode, logging to /usr/local/hadoop-2.6.0-cdh5.7.0/logs/hadoop-root-datanode-hadoop001.out
hadoop002: starting datanode, logging to /usr/local/hadoop-2.6.0-cdh5.7.0/logs/hadoop-root-datanode-hadoop002.out
Starting secondary namenodes [0.0.0.0]
The authenticity of host '0.0.0.0 (0.0.0.0)' can't be established.
ECDSA key fingerprint is 4d:5a:9d:31:65:75:30:47:a3:9c:f5:56:63:c4:0f:6a.
Are you sure you want to continue connecting (yes/no)? yes  # type yes here
0.0.0.0: Warning: Permanently added '0.0.0.0' (ECDSA) to the list of known hosts.
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop-2.6.0-cdh5.7.0/logs/hadoop-root-secondarynamenode-hadoop000.out
18/04/02 20:11:21 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop-2.6.0-cdh5.7.0/logs/yarn-root-resourcemanager-hadoop000.out
hadoop001: starting nodemanager, logging to /usr/local/hadoop-2.6.0-cdh5.7.0/logs/yarn-root-nodemanager-hadoop001.out
hadoop002: starting nodemanager, logging to /usr/local/hadoop-2.6.0-cdh5.7.0/logs/yarn-root-nodemanager-hadoop002.out
hadoop000: starting nodemanager, logging to /usr/local/hadoop-2.6.0-cdh5.7.0/logs/yarn-root-nodemanager-hadoop000.out
[root@hadoop000 ~]# jps  # check that the following processes are running
6256 Jps
5538 DataNode
5843 ResourceManager
5413 NameNode
5702 SecondaryNameNode
5945 NodeManager
[root@hadoop000 ~]#
Check the processes on the other two machines:
hadoop001:
[root@hadoop001 ~]# jps
3425 DataNode
3538 NodeManager
3833 Jps
[root@hadoop001 ~]#
hadoop002:
[root@hadoop002 ~]# jps
3171 DataNode
3273 NodeManager
3405 Jps
[root@hadoop002 ~]#
With the processes verified on every machine, open the master node's port 50070 in a browser, e.g. 192.168.77.128:50070. You should land on the NameNode overview page.
Clicking "Live Nodes" shows the nodes that are alive.
If you can reach port 50070 as above, HDFS in the cluster is working.
Next, visit the master node's port 8088, which serves YARN's web UI, e.g. 192.168.77.128:8088.
Clicking "Active Nodes" shows the live nodes.
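If you prefer checking from the shell instead of a browser, curl against the same two ports works as well (a quick sketch; a 200, or a 3xx redirect in YARN's case, means the web service is answering):

[root@hadoop000 ~]# curl -s -o /dev/null -w "%{http_code}\n" http://192.168.77.128:50070   # HDFS web UI
[root@hadoop000 ~]# curl -s -o /dev/null -w "%{http_code}\n" http://192.168.77.128:8088    # YARN web UI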
And that's it: our distributed Hadoop cluster is up and running, simple as that. So how do we shut the cluster down after starting it? Also simple; just run the following command on the master node:
[root@hadoop000 ~]# stop-all.sh
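Since start-all.sh itself warned us that it is deprecated, the equivalent with the newer scripts is (same effect, shown for completeness):

[root@hadoop000 ~]# start-dfs.sh && start-yarn.sh    # start the cluster
[root@hadoop000 ~]# stop-yarn.sh && stop-dfs.sh      # stop the cluster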
In practice, using HDFS and YARN in a distributed environment is exactly the same as in pseudo-distributed mode; the HDFS shell commands, for instance, work just as they did before. For example:
[root@hadoop000 ~]# hdfs dfs -ls /
[root@hadoop000 ~]# hdfs dfs -mkdir /data
[root@hadoop000 ~]# hdfs dfs -put ./test.sh /data
[root@hadoop000 ~]# hdfs dfs -ls /
Found 1 items
drwxr-xr-x   - root supergroup          0 2018-04-02 20:29 /data
[root@hadoop000 ~]# hdfs dfs -ls /data
Found 1 items
-rw-r--r--   3 root supergroup         68 2018-04-02 20:29 /data/test.sh
[root@hadoop000 ~]#
The other nodes in the cluster can access HDFS too, and HDFS is shared across the cluster, so every node sees the same data. For example, on the hadoop001 node I upload a directory:
[root@hadoop001 ~]# hdfs dfs -ls /
Found 1 items
drwxr-xr-x   - root supergroup          0 2018-04-02 20:29 /data
[root@hadoop001 ~]# hdfs dfs -put ./logs /
[root@hadoop001 ~]# hdfs dfs -ls /
drwxr-xr-x   - root supergroup          0 2018-04-02 20:29 /data
drwxr-xr-x   - root supergroup          0 2018-04-02 20:31 /logs
[root@hadoop001 ~]#
Then check it from hadoop002:
[root@hadoop002 ~]# hdfs dfs -ls /
Found 2 items
drwxr-xr-x   - root supergroup          0 2018-04-02 20:29 /data
drwxr-xr-x   - root supergroup          0 2018-04-02 20:31 /logs
[root@hadoop002 ~]#
As you can see, different nodes access the same data. Since the operations are identical to the pseudo-distributed case, I won't demonstrate more here.
After this quick tour of HDFS operations, let's run one of Hadoop's bundled examples and see whether YARN picks up the job's execution information. Run the following on any node:
[root@hadoop002 ~]# cd /usr/local/hadoop-2.6.0-cdh5.7.0/share/hadoop/mapreduce
[root@hadoop002 /usr/local/hadoop-2.6.0-cdh5.7.0/share/hadoop/mapreduce]# hadoop jar ./hadoop-mapreduce-examples-2.6.0-cdh5.7.0.jar pi 3 4
On the YARN web UI you can watch the job first request resources and then execute.
However, my run unfortunately failed (allow me a brief moment to vent):
Nothing for it but to troubleshoot. The error output in the terminal looked like this:
Note: System times on machines may be out of sync. Check system time and time zones.
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
        at org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.instantiateException(SerializedExceptionPBImpl.java:168)
        at org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.deSerialize(SerializedExceptionPBImpl.java:106)
        at org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl$Container.launch(ContainerLauncherImpl.java:159)
        at org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl$EventProcessor.run(ContainerLauncherImpl.java:379)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
18/04/03 04:32:17 INFO mapreduce.Job: Task Id : attempt_1522671083370_0001_m_000002_0, Status : FAILED
Container launch failed for container_1522671083370_0001_01_000004 : org.apache.hadoop.yarn.exceptions.YarnException: Unauthorized request to start container.
This token is expired. current time is 1522701136752 found 1522673393827
Note: System times on machines may be out of sync. Check system time and time zones.
        ... (same stack trace as above)
18/04/03 04:32:18 INFO mapreduce.Job: Task Id : attempt_1522671083370_0001_m_000001_1, Status : FAILED
Container launch failed for container_1522671083370_0001_01_000005 : org.apache.hadoop.yarn.exceptions.YarnException: Unauthorized request to start container.
This token is expired. current time is 1522701157769 found 1522673395895
Note: System times on machines may be out of sync. Check system time and time zones.
        ... (same stack trace as above)
18/04/03 04:32:20 INFO mapreduce.Job: Task Id : attempt_1522671083370_0001_m_000001_2, Status : FAILED
Container launch failed for container_1522671083370_0001_01_000007 : org.apache.hadoop.yarn.exceptions.YarnException: Unauthorized request to start container.
This token is expired. current time is 1522701159832 found 1522673397934
Note: System times on machines may be out of sync. Check system time and time zones.
        ... (same stack trace as above)
18/04/03 04:32:23 INFO mapreduce.Job:  map 33% reduce 100%
18/04/03 04:32:24 INFO mapreduce.Job:  map 100% reduce 100%
18/04/03 04:32:24 INFO mapreduce.Job: Job job_1522671083370_0001 failed with state FAILED due to: Task failed task_1522671083370_0001_m_000001
Job failed as tasks failed. failedMaps:1 failedReduces:0

18/04/03 04:32:24 INFO mapreduce.Job: Counters: 12
        Job Counters
                Killed map tasks=2
                Launched map tasks=2
                Other local map tasks=4
                Data-local map tasks=3
                Total time spent by all maps in occupied slots (ms)=10890
                Total time spent by all reduces in occupied slots (ms)=0
                Total time spent by all map tasks (ms)=10890
                Total vcore-seconds taken by all map tasks=10890
                Total megabyte-seconds taken by all map tasks=11151360
        Map-Reduce Framework
                CPU time spent (ms)=0
                Physical memory (bytes) snapshot=0
                Virtual memory (bytes) snapshot=0
Job Finished in 23.112 seconds
java.io.FileNotFoundException: File does not exist: hdfs://hadoop000:8020/user/root/QuasiMonteCarlo_1522701120069_2085123424/out/reduce-out
        at org.apache.hadoop.hdfs.DistributedFileSystem$19.doCall(DistributedFileSystem.java:1219)
        at org.apache.hadoop.hdfs.DistributedFileSystem$19.doCall(DistributedFileSystem.java:1211)
        at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
        at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1211)
        at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1750)
        at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1774)
        at org.apache.hadoop.examples.QuasiMonteCarlo.estimatePi(QuasiMonteCarlo.java:314)
        at org.apache.hadoop.examples.QuasiMonteCarlo.run(QuasiMonteCarlo.java:354)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
        at org.apache.hadoop.examples.QuasiMonteCarlo.main(QuasiMonteCarlo.java:363)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
        at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
        at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:74)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
It's a big wall of errors, but the message begins with System times on machines may be out of sync. Check system time and time zones., i.e. the system clocks on the machines may be out of sync. Sure enough, when I checked the time on all of the cluster machines, they disagreed. So how do we sync them? That's what the ntpdate command is for: install the ntp package on every machine and run the time-sync command, as follows:
[root@hadoop000 ~]# yum install -y ntp
[root@hadoop000 ~]# ntpdate -u ntp.api.bz
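Note that ntpdate is a one-shot fix, so the clocks can drift apart again over time. One simple way to keep them aligned (my own addition, not part of the original steps; any reachable NTP server will do) is a cron entry on each machine:

[root@hadoop000 ~]# crontab -e
# resync the clock every 30 minutes
*/30 * * * * /usr/sbin/ntpdate -u ntp.api.bz >/dev/null 2>&1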
After that, rerun the earlier command, and this time the job completes successfully.
A while back I wrote a small Hadoop project that crunches log data; now that our cluster is up, of course we have to give it a spin. First upload the log file and the jar to the server:
[root@hadoop000 ~]# ls
10000_access.log  hadoop-train-1.0-jar-with-dependencies.jar
[root@hadoop000 ~]#
Put the log file into the HDFS filesystem:
[root@hadoop000 ~]# hdfs dfs -put ./10000_access.log /
[root@hadoop000 ~]# hdfs dfs -ls /
Found 5 items
-rw-r--r--   3 root supergroup    2769741 2018-04-02 21:13 /10000_access.log
drwxr-xr-x   - root supergroup          0 2018-04-02 20:29 /data
drwxr-xr-x   - root supergroup          0 2018-04-02 20:31 /logs
drwx------   - root supergroup          0 2018-04-02 20:39 /tmp
drwxr-xr-x   - root supergroup          0 2018-04-02 20:39 /user
[root@hadoop000 ~]#
Run the following command to submit the project to the Hadoop cluster:
[root@hadoop000 ~]# hadoop jar ./hadoop-train-1.0-jar-with-dependencies.jar org.zero01.hadoop.project.LogApp /10000_access.log /browserout
On the YARN web UI you can follow the job's execution information: requesting resources, executing, and finally completing successfully.
Check the contents of the output file:
[root@hadoop000 ~]# hdfs dfs -ls /browserout
Found 2 items
-rw-r--r--   3 root supergroup          0 2018-04-02 21:22 /browserout/_SUCCESS
-rw-r--r--   3 root supergroup         56 2018-04-02 21:22 /browserout/part-r-00000
[root@hadoop000 ~]# hdfs dfs -text /browserout/part-r-00000
Chrome  2775
Firefox 327
MSIE    78
Safari  115
Unknown 6705
[root@hadoop000 ~]#
The results look right. That wraps up our testing; from here on we can happily let the Hadoop cluster crunch data for us (you still have to write the code, of course).
Looking back over the whole process, from building the distributed Hadoop cluster to using it, you can see that apart from a few differences during setup, it works essentially the same as the pseudo-distributed environment. So for learning purposes I'd actually recommend sticking with a pseudo-distributed setup: a cluster environment is more complex and prone to inter-node communication problems, and if you get stuck on those, the frustration can easily outweigh the learning.