Reading guide
1. What problems come up during configuration, and how are they solved?
2. What configuration is needed to call Hadoop 2.6 from Java and run an MR program?
3. How can Hadoop be called from a web program?

1. Hadoop cluster:

1.1 System and hardware configuration:
Hadoop version: 2.6; three virtual machines: node101 (192.168.0.101), node102 (192.168.0.102), node103 (192.168.0.103); each machine has 2 GB of RAM and 1 CPU core.
node101: NodeManager, NameNode, ResourceManager, DataNode
node102: NodeManager, DataNode, SecondaryNameNode, JobHistoryServer
node103: NodeManager, DataNode

1.2 Problems encountered during configuration:
1) The NodeManager would not start.
The virtual machines were originally configured with 512 MB of RAM, so "yarn.nodemanager.resource.memory-mb" in yarn-site.xml was set to 512 (the default is 1024). The log showed this error:
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: Recieved SHUTDOWN signal from Resourcemanager ,Registration of NodeManager failed, Message from ResourceManager: NodeManager from node101 doesn't satisfy minimum allocations, Sending SHUTDOWN signal to the NodeManager.
Changing it to 1024 or more lets the NodeManager start normally; I set it to 2048.
2) Jobs could be submitted but would not run.
a. Each virtual machine has only one core, while the default for "yarn.nodemanager.resource.cpu-vcores" in yarn-site.xml is 8, which causes problems when resources are allocated, so this parameter was set to 1.
b. The following error appeared:
is running beyond virtual memory limits. Current usage: 96.6 MB of 1.5 GB physical memory used; 1.6 GB of 1.5 GB virtual memory used. Killing container.
This should be caused by the map, reduce and NodeManager resource settings not being sized correctly. I adjusted them for a long time and they looked fine to me, but the error kept coming back, so in the end I removed the check, i.e. set "yarn.nodemanager.vmem-check-enabled" in yarn-site.xml to false; after that, jobs could be submitted. (Note that the 1.5 GB virtual-memory limit in the error equals the physical allocation, which suggests yarn.nodemanager.vmem-pmem-ratio was effectively 1.0; the default is 2.1, and a JVM normally reserves far more virtual memory than it actually touches, so a ratio of 1.0 is easy to exceed.)

1.3 Configuration files (I hope someone more experienced can suggest a resource configuration that avoids error b above, instead of removing the check):
1) In hadoop-env.sh and yarn-env.sh, configure the JDK, and set HADOOP_HEAPSIZE and YARN_HEAPSIZE to 512.
2) hdfs-site.xml configures the data storage paths and the node that runs the SecondaryNameNode:
<configuration>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:///data/hadoop/hdfs/name</value>
    <description>Determines where on the local filesystem the DFS name node
      should store the name table(fsimage). If this is a comma-delimited list
      of directories then the name table is replicated in all of the
      directories, for redundancy. </description>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:///data/hadoop/hdfs/data</value>
    <description>Determines where on the local filesystem an DFS data node
      should store its blocks. If this is a comma-delimited
      list of directories, then data will be stored in all named
      directories, typically on different devices.
      Directories that do not exist are ignored.
    </description>
  </property>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>node102:50090</value>
  </property>
</configuration>
3) core-site.xml configures the NameNode:
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://node101:8020</value>
  </property>
</configuration>
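As a quick sanity check of this fs.defaultFS setting, a small client can list HDFS from any machine that can reach node101. This is only a sketch, assuming the Hadoop 2.6 client jars are on the classpath; the class name is made up:

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsSmokeTest {
    public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration();
        // Same NameNode address as fs.defaultFS in core-site.xml.
        conf.set("fs.defaultFS", "hdfs://node101:8020");
        FileSystem fs = FileSystem.get(conf);
        // List the HDFS root directory to confirm the NameNode is reachable.
        for (FileStatus status : fs.listStatus(new Path("/"))) {
            System.out.println(status.getPath() + "\t" + status.getLen());
        }
        fs.close();
    }
}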
4) mapred-site.xml configures the map and reduce resources:
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
    <description>The runtime framework for executing MapReduce jobs.
      Can be one of local, classic or yarn.
    </description>
  </property>

  <!-- jobhistory properties -->
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>node102:10020</value>
    <description>MapReduce JobHistory Server IPC host:port</description>
  </property>

  <property>
    <name>mapreduce.map.memory.mb</name>
    <value>1024</value>
  </property>
  <property>
    <name>mapreduce.reduce.memory.mb</name>
    <value>1024</value>
  </property>
  <property>
    <name>mapreduce.map.java.opts</name>
    <value>-Xmx512m</value>
  </property>
  <property>
    <name>mapreduce.reduce.java.opts</name>
    <value>-Xmx512m</value>
  </property>
</configuration>
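These map/reduce sizes are only the cluster-side defaults; they are job-level properties, so they can also be overridden from the driver for an individual job. A minimal sketch (the job name is made up, and the rest of the job setup is omitted):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class JobMemoryOverride {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Container sizes and JVM heaps should stay consistent:
        // the -Xmx heap should fit comfortably inside the container.
        conf.setInt("mapreduce.map.memory.mb", 1024);
        conf.setInt("mapreduce.reduce.memory.mb", 1024);
        conf.set("mapreduce.map.java.opts", "-Xmx512m");
        conf.set("mapreduce.reduce.java.opts", "-Xmx512m");
        Job job = Job.getInstance(conf, "memory-override-demo");
        // ... set mapper/reducer/input/output as usual, then submit the job.
    }
}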
5) yarn-site.xml configures the ResourceManager and the related resources:
<configuration>

  <property>
    <description>The hostname of the RM.</description>
    <name>yarn.resourcemanager.hostname</name>
    <value>node101</value>
  </property>

  <property>
    <description>The address of the applications manager interface in the RM.</description>
    <name>yarn.resourcemanager.address</name>
    <value>${yarn.resourcemanager.hostname}:8032</value>
  </property>

  <property>
    <description>The address of the scheduler interface.</description>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>${yarn.resourcemanager.hostname}:8030</value>
  </property>

  <property>
    <description>The http address of the RM web application.</description>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>${yarn.resourcemanager.hostname}:8088</value>
  </property>

  <property>
    <description>The https adddress of the RM web application.</description>
    <name>yarn.resourcemanager.webapp.https.address</name>
    <value>${yarn.resourcemanager.hostname}:8090</value>
  </property>

  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>${yarn.resourcemanager.hostname}:8031</value>
  </property>

  <property>
    <description>The address of the RM admin interface.</description>
    <name>yarn.resourcemanager.admin.address</name>
    <value>${yarn.resourcemanager.hostname}:8033</value>
  </property>

  <property>
    <description>List of directories to store localized files in. An
      application's localized file directory will be found in:
      ${yarn.nodemanager.local-dirs}/usercache/${user}/appcache/application_${appid}.
      Individual containers' work directories, called container_${contid}, will
      be subdirectories of this.
    </description>
    <name>yarn.nodemanager.local-dirs</name>
    <value>/data/hadoop/yarn/local</value>
  </property>

  <property>
    <description>Whether to enable log aggregation</description>
    <name>yarn.log-aggregation-enable</name>
    <value>true</value>
  </property>

  <property>
    <description>Where to aggregate logs to.</description>
    <name>yarn.nodemanager.remote-app-log-dir</name>
    <value>/data/tmp/logs</value>
  </property>

  <property>
    <description>Amount of physical memory, in MB, that can be allocated
      for containers.</description>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>2048</value>
  </property>
  <property>
    <name>yarn.scheduler.minimum-allocation-mb</name>
    <value>512</value>
  </property>
  <property>
    <name>yarn.nodemanager.vmem-pmem-ratio</name>
    <value>1.0</value>
  </property>
  <property>
    <name>yarn.nodemanager.vmem-check-enabled</name>
    <value>false</value>
  </property>
  <!--
  <property>
    <description>The class to use as the resource scheduler.</description>
    <name>yarn.resourcemanager.scheduler.class</name>
    <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
  </property>

  <property>
    <description>fair-scheduler conf location</description>
    <name>yarn.scheduler.fair.allocation.file</name>
    <value>${yarn.home.dir}/etc/hadoop/fairscheduler.xml</value>
  </property>
  -->
  <property>
    <name>yarn.nodemanager.resource.cpu-vcores</name>
    <value>1</value>
  </property>
  <property>
    <description>the valid service name should only contain a-zA-Z0-9_ and can not start with numbers</description>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
</configuration>
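To check that the memory-mb and cpu-vcores values above actually took effect, and that all three NodeManagers registered with the ResourceManager (the problem from section 1.2), the RM can be queried programmatically. A minimal sketch using the YARN client API; the class name is made up:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.api.records.NodeReport;
import org.apache.hadoop.yarn.api.records.NodeState;
import org.apache.hadoop.yarn.client.api.YarnClient;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class YarnNodeCheck {
    public static void main(String[] args) throws Exception {
        Configuration conf = new YarnConfiguration();
        conf.set("yarn.resourcemanager.address", "node101:8032");
        YarnClient yarnClient = YarnClient.createYarnClient();
        yarnClient.init(conf);
        yarnClient.start();
        // Print every NodeManager the ResourceManager knows about, together with
        // the memory and vcores it advertises (yarn.nodemanager.resource.*).
        for (NodeReport node : yarnClient.getNodeReports(NodeState.RUNNING)) {
            System.out.println(node.getNodeId() + "\t" + node.getCapability());
        }
        yarnClient.stop();
    }
}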
2. Calling Hadoop 2.6 from Java to run an MR program:
Two changes are needed:
1) The Configuration of the driver program needs the following settings:
Configuration conf = new Configuration();

conf.setBoolean("mapreduce.app-submission.cross-platform", true);// enable cross-platform job submission
conf.set("fs.defaultFS", "hdfs://node101:8020");// point at the NameNode
conf.set("mapreduce.framework.name", "yarn"); // use the YARN framework
conf.set("yarn.resourcemanager.address", "node101:8032"); // point at the ResourceManager
conf.set("yarn.resourcemanager.scheduler.address", "node101:8030");// point at the scheduler
2) Add the following classes to the classpath: == ==
No other changes are needed, and with that the program runs.

3. Calling Hadoop 2.6 from a web program to run an MR job
The program can be downloaded from Baidu Netdisk: link: http://pan.baidu.com/s/1cv7WU  password: x7c9 (java web program calling hadoop2.6).
The part of this web program that calls Hadoop is the same as the Java code above, essentially unchanged, and all the jar packages it uses are placed under lib (a minimal servlet sketch is given after this section).

One last point: I ran three maps, but the three maps were not evenly distributed. node103 was assigned two maps and node101 one map; in another run node101 got two maps and node103 one map; in both runs node102 was not assigned any map task, which is probably because something is still not quite right in resource management and task assignment.
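For illustration only, a minimal sketch of how the same submission could be wrapped in a servlet inside such a web application; the servlet name, jar path and HDFS paths are made up, and it reuses the mapper/reducer classes from the driver sketch above:

import java.io.IOException;

import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class SubmitJobServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        try {
            // Same client-side settings as in the plain Java driver.
            Configuration conf = new Configuration();
            conf.setBoolean("mapreduce.app-submission.cross-platform", true);
            conf.set("fs.defaultFS", "hdfs://node101:8020");
            conf.set("mapreduce.framework.name", "yarn");
            conf.set("yarn.resourcemanager.address", "node101:8032");
            conf.set("yarn.resourcemanager.scheduler.address", "node101:8030");

            Job job = Job.getInstance(conf, "wordcount-from-web");
            // Jar shipped to the cluster; here it is assumed to sit under WEB-INF/lib.
            job.setJar(getServletContext().getRealPath("/WEB-INF/lib/wordcount.jar"));
            job.setMapperClass(RemoteWordCountDriver.WordCountMapper.class);
            job.setReducerClass(RemoteWordCountDriver.WordCountReducer.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);
            FileInputFormat.addInputPath(job, new Path("/user/test/input"));
            FileOutputFormat.setOutputPath(job, new Path("/user/test/web-output"));

            // Submit asynchronously and let the caller track progress on the RM web UI (port 8088).
            job.submit();
            resp.getWriter().println("submitted job " + job.getJobID());
        } catch (Exception e) {
            throw new ServletException(e);
        }
    }
}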