Hadoop generally has three run modes: local (standalone) mode, pseudo-distributed mode, and fully distributed mode.

After installation, Hadoop's default configuration is local mode. In this mode Hadoop uses the local filesystem rather than a distributed filesystem and starts no Hadoop daemons; the Map and Reduce tasks run as different parts of a single process. Local mode therefore runs only on the local machine and is used solely for developing or debugging MapReduce applications without the overhead of a full setup. In pseudo-distributed mode Hadoop still runs every process on a single host, but it uses the distributed filesystem, and jobs are separate processes managed by the JobTracker service.
Because a pseudo-distributed cluster has only one node, HDFS block replication is limited to a single copy, and the secondary master and slaves also run on the local host. Apart from not being truly distributed, its execution logic is essentially the same as fully distributed mode, so developers often use it to test their programs. To really exploit Hadoop's power, you need fully distributed mode.
Because features such as ZooKeeper-based high availability depend on an odd-numbered quorum, a fully distributed environment requires at least three nodes.
1. Deployment environment
2. Name resolution and firewall shutdown (on all machines)
/etc/hosts:
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.250   master
192.168.1.249   slave1
192.168.1.248   slave2

Disable SELinux:
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/sysconfig/selinux
setenforce 0

Disable iptables:
service iptables stop
chkconfig iptables off
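A quick check that these changes took effect (a sketch; getenforce and the iptables service script are assumed to be available on this CentOS 6-style system):

getenforce                  # Permissive now, Disabled after a reboot
service iptables status     # should report that the firewall is not running
ping -c 1 slave1            # name resolution should work via /etc/hosts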
3. Synchronize the clocks and configure the yum and EPEL repositories
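The exact commands are not shown in the original; a minimal sketch (assuming a CentOS 6 host with network access, and using a public NTP pool purely as an example) might be:

yum install -y epel-release ntpdate    # EPEL repository plus an NTP client
ntpdate pool.ntp.org                   # one-shot clock sync; any reachable NTP server will do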
4. Configure passwordless SSH trust between all machines
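The commands for this step are not shown either; one common way to set up passwordless SSH (a sketch, to be repeated on every host that needs to reach the others, with root assumed as in the rest of the article) is:

ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa   # generate a key pair without a passphrase
ssh-copy-id root@master                    # push the public key to each node, including this one
ssh-copy-id root@slave1
ssh-copy-id root@slave2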
[root@master ~]# rpm -ivh jdk-8u25-linux-x64.rpm        # install the JDK
Preparing...                ########################################### [100%]
   1:jdk1.8.0_25            ########################################### [100%]
Unpacking JAR files...
        rt.jar...
        jsse.jar...
        charsets.jar...
        tools.jar...
        localedata.jar...
        jfxrt.jar...
[root@master ~]# cat /etc/profile.d/java.sh             # set the Java environment variables
export JAVA_HOME=/usr/java/latest
export CLASSPATH=$JAVA_HOME/lib/tools.jar
export PATH=$JAVA_HOME/bin:$PATH
[root@master ~]# . /etc/profile.d/java.sh
[root@master ~]# java -version                          # verify that the Java variables took effect
java version "1.8.0_25"
Java(TM) SE Runtime Environment (build 1.8.0_25-b17)
Java HotSpot(TM) 64-Bit Server VM (build 25.25-b02, mixed mode)
[root@master ~]# echo $JAVA_HOME
/usr/java/latest
[root@master ~]# echo $CLASSPATH
/usr/java/latest/lib/tools.jar
[root@master ~]# echo $PATH
/usr/java/latest/bin:/usr/lib64/qt-3.3/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin
1. Install Hadoop and configure the environment variables (on master)
[root@master ~]# tar xf hadoop-2.6.5.tar.gz -C /usr/local
[root@master ~]# cd /usr/local
[root@master local]# ln -sv hadoop-2.6.5 hadoop
"hadoop" -> "hadoop-2.6.5"
[root@master local]# ll
[root@master local]# cat /etc/profile.d/hadoop.sh
export HADOOP_HOME=/usr/local/hadoop
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
[root@master local]# . /etc/profile.d/hadoop.sh
2. Modify the following configuration files (all of them live under /usr/local/hadoop/etc/hadoop)
hadoop-env.sh
# The java implementation to use.
export JAVA_HOME=/usr/java/latest        # change JAVA_HOME to a fixed path
core-site.xml
<configuration>
    <!-- Communication address of the HDFS master (namenode) -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://master:9000</value>
    </property>
    <!-- Directory for files Hadoop generates at runtime -->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/Hadoop/tmp</value>
    </property>
</configuration>
hdfs-site.xml
<configuration>
    <!-- HTTP address of the namenode -->
    <property>
        <name>dfs.namenode.http-address</name>
        <value>master:50070</value>
    </property>
    <!-- HTTP address of the secondary namenode -->
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>slave1:50090</value>
    </property>
    <!-- Directory where the namenode stores its metadata -->
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>/Hadoop/name</value>
    </property>
    <!-- Number of HDFS replicas -->
    <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>
    <!-- Directory where the datanodes store their blocks -->
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/Hadoop/data</value>
    </property>
</configuration>
mapred-site.xml
[root@master hadoop]# mv mapred-site.xml.template mapred-site.xml
<configuration>
    <!-- Tell the MapReduce framework to run on YARN -->
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>
yarn-site.xml
<configuration>
    <!-- Which node the resourcemanager runs on -->
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>master</value>
    </property>
    <!-- Reducers fetch data via mapreduce_shuffle -->
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
</configuration>
masters
slave1        # this file names the host that runs the secondary namenode
slaves
slave1
slave2
Create the required directories
[root@master local]# mkdir -pv /Hadoop/{data,name,tmp}
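The datanodes need the directories named in hdfs-site.xml as well; Hadoop will usually create them on startup, but creating them up front on the slaves (a sketch, assuming the same /Hadoop layout as on master) does no harm:

[root@master local]# ssh slave1 'mkdir -pv /Hadoop/{data,tmp}'
[root@master local]# ssh slave2 'mkdir -pv /Hadoop/{data,tmp}'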
3. Copy the Hadoop installation directory and the environment file to the other hosts
On master:
[root@master local]# scp -r hadoop-2.6.5 slave1:/usr/local/
[root@master local]# scp -r hadoop-2.6.5 slave2:/usr/local/
[root@master local]# scp -r /etc/profile.d/hadoop.sh slave1:/etc/profile.d/
[root@master local]# scp -r /etc/profile.d/hadoop.sh slave2:/etc/profile.d/

On the slaves (repeat the same steps on the other slave):
[root@slave1 ~]# cd /usr/local
[root@slave1 local]# ln -sv hadoop-2.6.5 hadoop
"hadoop" -> "hadoop-2.6.5"
[root@slave1 local]# . /etc/profile.d/hadoop.sh
1. Format the name node (on master)
[root@master local]# hdfs namenode -format
18/03/28 16:34:20 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = master/192.168.1.250
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 2.6.5
.............................................
.............................................
.............................................
18/03/28 16:34:23 INFO common.Storage: Storage directory /Hadoop/name has been successfully formatted.   # this line indicates the storage directory was formatted successfully
18/03/28 16:34:23 INFO namenode.FSImageFormatProtobuf: Saving image file /Hadoop/name/current/fsimage.ckpt_0000000000000000000 using no compression
18/03/28 16:34:24 INFO namenode.FSImageFormatProtobuf: Image file /Hadoop/name/current/fsimage.ckpt_0000000000000000000 of size 321 bytes saved in 0 seconds.
18/03/28 16:34:24 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
18/03/28 16:34:24 INFO util.ExitUtil: Exiting with status 0
18/03/28 16:34:24 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at master/192.168.1.250
************************************************************/
2. Start DFS and YARN
[root@master local]# cd hadoop/sbin/
[root@master sbin]# start-dfs.sh
[root@master sbin]# start-yarn.sh
注:其他主機操做相同
View the results
On master:
[root@master sbin]# jps|grep -v Jps
3746 ResourceManager
3496 NameNode

On slave1:
[root@slave1 ~]# jps|grep -v Jps
3906 DataNode
4060 NodeManager
3996 SecondaryNameNode

On slave2:
[root@slave2 ~]# jps|grep -v Jps
3446 NodeManager
3351 DataNode
1. Check the cluster status
[root@master sbin]# hdfs dfsadmin -report
18/03/28 21:28:02 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Configured Capacity: 30694719488 (28.59 GB)
Present Capacity: 22252130304 (20.72 GB)
DFS Remaining: 22251991040 (20.72 GB)
DFS Used: 139264 (136 KB)
DFS Used%: 0.00%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0

-------------------------------------------------
Live datanodes (2):

Name: 192.168.1.249:50010 (slave1)
Hostname: slave1
Decommission Status : Normal
Configured Capacity: 15347359744 (14.29 GB)
DFS Used: 69632 (68 KB)
Non DFS Used: 4247437312 (3.96 GB)
DFS Remaining: 11099852800 (10.34 GB)
DFS Used%: 0.00%
DFS Remaining%: 72.32%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Wed Mar 28 21:28:02 CST 2018

Name: 192.168.1.248:50010 (slave2)
Hostname: slave2
Decommission Status : Normal
Configured Capacity: 15347359744 (14.29 GB)
DFS Used: 69632 (68 KB)
Non DFS Used: 4195151872 (3.91 GB)
DFS Remaining: 11152138240 (10.39 GB)
DFS Used%: 0.00%
DFS Remaining%: 72.66%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Wed Mar 28 21:28:02 CST 2018
2. Test YARN
You can open YARN's web management interface to verify that YARN is working, as shown below:
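The article does not give the address of that interface; since the yarn-site.xml above does not override the web UI port, the ResourceManager UI should be reachable on port 8088 of the resourcemanager host (an assumption based on the default value of yarn.resourcemanager.webapp.address):

# open http://master:8088/cluster in a browser, or probe it from the shell
curl -s -o /dev/null -w '%{http_code}\n' http://master:8088/cluster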
3. Test browsing HDFS
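The HDFS web UI address comes straight from dfs.namenode.http-address in the hdfs-site.xml above, so it can be checked like this (a sketch; the curl probe is only one way to do it):

# open http://master:50070 in a browser, or probe it from the shell
curl -s -o /dev/null -w '%{http_code}\n' http://master:50070/
# listing the filesystem root from the command line works as well
hdfs dfs -ls /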
4. Test submitting a MapReduce job to the Hadoop cluster
[root@master sbin]# hdfs dfs -mkdir -p /Hadoop/test
[root@master ~]# hdfs dfs -put install.log /Hadoop/test
[root@master ~]# hdfs dfs -ls /Hadoop/test
Found 1 items
-rw-r--r--   2 root supergroup      28207 2018-03-28 16:48 /Hadoop/test/install.log
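The commands above only copy a file into HDFS; to actually run a MapReduce job on YARN, one option is the wordcount example that ships with Hadoop (a sketch: the examples jar path assumes the 2.6.5 binary tarball laid out under /usr/local/hadoop, and the output directory is just an example and must not exist beforehand):

[root@master ~]# hadoop jar /usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.5.jar \
    wordcount /Hadoop/test/install.log /Hadoop/test/wc-out
[root@master ~]# hdfs dfs -cat /Hadoop/test/wc-out/part-r-00000 | head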
Everything looks normal; at this point our three-node Hadoop cluster is fully set up.
其餘問題:
緣由及解決辦法見此連接:http://www.javashuo.com/article/p-wpoyrnah-mq.html