1. A MapReduce job runs, but it never shows up in YARN.
bin/yarn jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.2.jar wordcount /test /out1
The ResourceManager web UI I configured is at http://192.168.31.136:8088
The cause: mapred-site.xml was missing this property:
<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>
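Without this property, mapreduce.framework.name falls back to its default, "local", so the job runs inside the client JVM via the LocalJobRunner and never reaches the ResourceManager at all, which is why nothing appears in the YARN web UI. A minimal sketch of that lookup logic (Python; the embedded XML snippet and the parse_hadoop_conf helper are illustrative, not Hadoop APIs):

```python
import xml.etree.ElementTree as ET

def parse_hadoop_conf(xml_text):
    """Parse a Hadoop *-site.xml fragment into a {name: value} dict."""
    root = ET.fromstring(xml_text)
    return {p.findtext("name"): p.findtext("value")
            for p in root.iter("property")}

MAPRED_SITE = """
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
"""

conf = parse_hadoop_conf(MAPRED_SITE)
# Hadoop uses "local" (LocalJobRunner) when the key is absent.
framework = conf.get("mapreduce.framework.name", "local")
print(framework)  # -> yarn
```

Delete the property from the snippet and the same lookup yields "local", reproducing the symptom.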
2. After fixing issue 1, running an MR job fails with "The auxService:mapreduce_shuffle does not exist".
The cause: yarn-site.xml was missing the yarn.nodemanager.aux-services property:
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>
The error reported:
16/11/29 23:10:45 INFO mapreduce.Job: Task Id : attempt_1480432102879_0001_m_000000_2, Status : FAILED
Container launch failed for container_e02_1480432102879_0001_01_000004 : org.apache.hadoop.yarn.exceptions.InvalidAuxServiceException: The auxService:mapreduce_shuffle does not exist
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.instantiateException(SerializedExceptionPBImpl.java:168)
    at org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.deSerialize(SerializedExceptionPBImpl.java:106)
    at org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl$Container.launch(ContainerLauncherImpl.java:155)
    at org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl$EventProcessor.run(ContainerLauncherImpl.java:375)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
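The NodeManager only starts auxiliary services listed in yarn.nodemanager.aux-services, so when the list is empty, the ApplicationMaster's request for the mapreduce_shuffle service is rejected with the InvalidAuxServiceException above. A hedged sketch of a pre-flight check (Python; the embedded snippet mirrors the fix, and this is not a Hadoop API):

```python
import xml.etree.ElementTree as ET

YARN_SITE = """
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>
"""

root = ET.fromstring(YARN_SITE)
conf = {p.findtext("name"): p.findtext("value") for p in root.iter("property")}

# The property is a comma-separated list of service names.
aux = [s.strip() for s in conf.get("yarn.nodemanager.aux-services", "").split(",") if s.strip()]
missing = [] if "mapreduce_shuffle" in aux else ["mapreduce_shuffle"]
print(missing)  # -> []
```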
-----------------------------------------------------------------------------------------------------------------
Configuration file notes
A three-node ZooKeeper cluster was set up in advance; the configuration below enables both HDFS HA and YARN HA.
1. hadoop-env.sh — changed:
export JAVA_HOME=/usr/lib/jvm/jdk8/jdk1.8.0_111
2. yarn-env.sh — changed:
export JAVA_HOME=/usr/lib/jvm/jdk8/jdk1.8.0_111
3. core-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <!-- Logical name of the nameservice: mycluster -->
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://mycluster</value>
  </property>
  <!-- Directory where JournalNodes store the edit logs -->
  <property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/home/jxlgzwh/hadoop-2.7.2/data/jn</value>
  </property>
  <!-- Base directory for Hadoop temporary files -->
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/jxlgzwh/hadoop-2.7.2/data/tmp</value>
  </property>
  <property>
    <name>ha.zookeeper.quorum</name>
    <value>master:2181,slave01:2181,slave02:2181</value>
  </property>
</configuration>
4. hdfs-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <!-- Name of the HDFS nameservice: mycluster -->
  <property>
    <name>dfs.nameservices</name>
    <value>mycluster</value>
  </property>
  <!-- The two NameNodes of mycluster: nn1 and nn2 -->
  <property>
    <name>dfs.ha.namenodes.mycluster</name>
    <value>nn1,nn2</value>
  </property>
  <!-- RPC addresses of nn1 and nn2 -->
  <property>
    <name>dfs.namenode.rpc-address.mycluster.nn1</name>
    <value>master:8020</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.mycluster.nn2</name>
    <value>slave01:8020</value>
  </property>
  <!-- HTTP addresses of nn1 and nn2 -->
  <property>
    <name>dfs.namenode.http-address.mycluster.nn1</name>
    <value>master:50070</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.mycluster.nn2</name>
    <value>slave01:50070</value>
  </property>
  <!-- JournalNode URI through which the NameNodes share edit logs -->
  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://master:8485;slave01:8485;slave02:8485/mycluster</value>
  </property>
  <!-- Client-side failover proxy provider -->
  <property>
    <name>dfs.client.failover.proxy.provider.mycluster</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
  <!-- Fencing method -->
  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>sshfence</value>
  </property>
  <!-- Location of the SSH private key used for fencing -->
  <property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/home/jxlgzwh/.ssh/id_dsa</value>
  </property>
  <!-- Replication factor -->
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <!-- Disable permission checking -->
  <property>
    <name>dfs.permissions.enabled</name>
    <value>false</value>
  </property>
  <property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
  </property>
</configuration>
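Clients never connect to a host literally called "mycluster"; the ConfiguredFailoverProxyProvider expands the logical nameservice into the concrete NameNode RPC addresses and fails over between them. That resolution step can be sketched like this (Python; the dict mirrors the hdfs-site.xml above, and resolve_nameservice is a made-up name, not a Hadoop API):

```python
HDFS_SITE = {
    "dfs.nameservices": "mycluster",
    "dfs.ha.namenodes.mycluster": "nn1,nn2",
    "dfs.namenode.rpc-address.mycluster.nn1": "master:8020",
    "dfs.namenode.rpc-address.mycluster.nn2": "slave01:8020",
}

def resolve_nameservice(conf, ns):
    """Expand a logical nameservice into its NameNode RPC addresses."""
    nn_ids = conf[f"dfs.ha.namenodes.{ns}"].split(",")
    return {nn: conf[f"dfs.namenode.rpc-address.{ns}.{nn}"] for nn in nn_ids}

addrs = resolve_nameservice(HDFS_SITE, "mycluster")
print(addrs)  # -> {'nn1': 'master:8020', 'nn2': 'slave01:8020'}
```

This is also why fs.defaultFS in core-site.xml can safely point at hdfs://mycluster: the per-nn properties above carry the real host:port pairs.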
5. mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
6. slaves
192.168.31.136
192.168.31.130
192.168.31.229
7. yarn-site.xml
<?xml version="1.0"?>
<configuration>
  <property>
    <name>yarn.resourcemanager.ha.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.resourcemanager.cluster-id</name>
    <value>cluster1</value>
  </property>
  <property>
    <name>yarn.resourcemanager.ha.rm-ids</name>
    <value>rm1,rm2</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname.rm1</name>
    <value>master</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname.rm2</name>
    <value>slave01</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address.rm1</name>
    <value>master:8088</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address.rm2</name>
    <value>slave01:8088</value>
  </property>
  <property>
    <name>yarn.resourcemanager.zk-address</name>
    <value>master:2181,slave01:2181,slave02:2181</value>
  </property>
  <!-- YARN restart/recovery -->
  <!-- Enable ResourceManager recovery -->
  <property>
    <name>yarn.resourcemanager.recovery.enabled</name>
    <value>true</value>
  </property>
  <!-- ZooKeeper path for the RM state store -->
  <property>
    <name>yarn.resourcemanager.zk-state-store.parent-path</name>
    <value>/rmstore</value>
  </property>
  <!-- Persist RM state to ZooKeeper -->
  <property>
    <name>yarn.resourcemanager.store.class</name>
    <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
  </property>
  <!-- Enable NodeManager recovery -->
  <property>
    <name>yarn.nodemanager.recovery.enabled</name>
    <value>true</value>
  </property>
  <!-- Pin the NodeManager address so it keeps the same port across restarts -->
  <property>
    <name>yarn.nodemanager.address</name>
    <value>0.0.0.0:45454</value>
  </property>
  <!-- NodeManager recovery directory -->
  <property>
    <name>yarn.nodemanager.recovery.dir</name>
    <value>/home/jxlgzwh/hadoop-2.7.2/data/tmp/yarn-nm-recovery</value>
  </property>
  <!-- Site specific YARN configuration properties -->
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>
8. Configure the /etc/hosts file and set up passwordless SSH login
192.168.31.136 master.com master
192.168.31.130 slave01
192.168.31.229 slave02
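Every hostname used in the configs above (master, slave01, slave02) must resolve identically on every node, and the IPs in the slaves file should line up with the hosts entries; a mismatch here produces confusing connection failures. A small consistency check over the two files' contents shown above (Python; embedding the file contents as strings is just for illustration):

```python
HOSTS = """\
192.168.31.136 master.com master
192.168.31.130 slave01
192.168.31.229 slave02
"""

SLAVES = ["192.168.31.136", "192.168.31.130", "192.168.31.229"]

# Build hostname -> IP from /etc/hosts-style lines (IP first, then aliases).
name_to_ip = {}
for line in HOSTS.splitlines():
    parts = line.split()
    ip, names = parts[0], parts[1:]
    for n in names:
        name_to_ip[n] = ip

# Each IP listed in slaves should be covered by a hosts entry.
unmapped = [ip for ip in SLAVES if ip not in name_to_ip.values()]
print(unmapped)  # -> []
```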