1. First complete everything in the preparation guide: Hadoop 1.2.1 Distributed Installation - 1 - Preparation.
2. Hadoop 2.x releases ship with a small problem: the bundled libhadoop.so.1.0.0 is compiled for 32-bit, so on a 64-bit OS Hadoop prints a WARN in the log at startup. This library backs the native API, which improves Hadoop's performance; if it fails to load, work such as compression falls back to the JVM implementation and runs much less efficiently. The fix is to recompile Hadoop from source; see xxx (link article).
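To check whether your build is affected, you can inspect the bundled library directly (the path below assumes the install directory used throughout this post); a 32-bit build will report "ELF 32-bit". Recent 2.x releases also include a checknative command that lists which native libraries actually load:

[wukong@bd23 hadoop-2.4.1]$ file lib/native/libhadoop.so.1.0.0
[wukong@bd23 hadoop-2.4.1]$ ./bin/hadoop checknative -a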
3. On the machine that will serve as the namenode, download the Hadoop tarball with wget (or any other means) and extract it to a local directory of your choice. See the Common Linux Commands post for the download and extraction commands; a sketch follows.
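A minimal sketch, assuming the Apache archive mirror and the directory layout used in the rest of this post:

[wukong@bd23 ~]$ wget https://archive.apache.org/dist/hadoop/common/hadoop-2.4.1/hadoop-2.4.1.tar.gz
[wukong@bd23 ~]$ tar -zxvf hadoop-2.4.1.tar.gz -C /home/wukong/a_usr/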
4. The configuration files differ somewhat from Hadoop 1. There are seven files in all, each described below.
/hadoop-2.4.1/etc/hadoop/hadoop-env.sh
# The java implementation to use.
export JAVA_HOME=${JAVA_HOME}
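Note that ${JAVA_HOME} only resolves if the variable is already exported in the environment of the user starting the daemons; when daemons are launched over ssh it often is not. In that case hard-code the JDK path instead (the path below is an assumption; substitute your own):

export JAVA_HOME=/usr/java/jdk1.7.0_65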
/hadoop-2.4.1/etc/hadoop/yarn-env.sh
# some Java parameters
# export JAVA_HOME=/home/y/libexec/jdk1.6.0/
if [ "$JAVA_HOME" != "" ]; then
  #echo "run java in $JAVA_HOME"
  JAVA_HOME=$JAVA_HOME
fi

if [ "$JAVA_HOME" = "" ]; then
  echo "Error: JAVA_HOME is not set."
  exit 1
fi

JAVA=$JAVA_HOME/bin/java
JAVA_HEAP_MAX=-Xmx512m   # the default heap max is 1000m; my VM does not have that much memory, so I lowered it
/hadoop-2.4.1/etc/hadoop/slaves
# List your slave nodes here; with multiple slaves, put one hostname per line:
bd24
bd25
/hadoop-2.4.1/etc/hadoop/core-site.xml
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://bd23:9000</value>
    </property>
    <property>
        <name>io.file.buffer.size</name>
        <value>131072</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>file:/home/wukong/a_usr/hadoop-2.4.1/tmp</value>
        <description>A base for other temporary directories.</description>
    </property>
    <property>
        <name>hadoop.proxyuser.hduser.hosts</name>
        <value>*</value>
    </property>
    <property>
        <name>hadoop.proxyuser.hduser.groups</name>
        <value>*</value>
    </property>
</configuration>
/hadoop-2.4.1/etc/hadoop/hdfs-site.xml
<configuration>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>bd23:9001</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/home/wukong/a_usr/hadoop-2.4.1/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/home/wukong/a_usr/hadoop-2.4.1/data</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.webhdfs.enabled</name>
        <value>true</value>
    </property>
</configuration>
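The tmp, name, and data directories referenced above must be writable by the user running Hadoop. The format and startup steps will create what they need, but creating the paths up front avoids permission surprises (a sketch, assuming the same layout on every node):

[wukong@bd23 hadoop-2.4.1]$ mkdir -p tmp name data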
/hadoop-2.4.1/etc/hadoop/mapred-site.xml
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>bd23:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>bd23:19888</value>
    </property>
</configuration>
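Note that a stock 2.4.1 tarball ships only mapred-site.xml.template; create the real file from it before editing:

[wukong@bd23 hadoop-2.4.1]$ cp etc/hadoop/mapred-site.xml.template etc/hadoop/mapred-site.xml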
/hadoop-2.4.1/etc/hadoop/yarn-site.xml
<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>bd23:8032</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>bd23:8030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>bd23:8031</value>
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>bd23:8033</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>bd23:8088</value>
    </property>
</configuration>
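Since all of the ResourceManager addresses above use the default ports (8032, 8030, 8031, 8033, and 8088), an equivalent and shorter alternative in 2.x is to set only the hostname and let the per-service addresses default:

<property>
    <name>yarn.resourcemanager.hostname</name>
    <value>bd23</value>
</property>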
5. Copy the hadoop directory to every host. See the Common Linux Commands post for remote copy; a scp sketch is shown below.
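A minimal sketch using scp, assuming the same user and parent directory on every node:

[wukong@bd23 a_usr]$ scp -r hadoop-2.4.1 wukong@bd24:/home/wukong/a_usr/
[wukong@bd23 a_usr]$ scp -r hadoop-2.4.1 wukong@bd25:/home/wukong/a_usr/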
6. Format the namenode
[wukong@bd23 hadoop-2.4.1]$ ./bin/hdfs namenode -format
Output like the following indicates success:
14/07/31 13:58:30 INFO common.Storage: Storage directory /home/wukong/a_usr/hadoop-2.4.1/name has been successfully formatted.
7. Start HDFS
[wukong@bd23 hadoop-2.4.1]$ ./sbin/start-dfs.sh
Output like the following indicates success:
Starting namenodes on [bd23]
bd23: starting namenode, logging to /home/wukong/a_usr/hadoop-2.4.1/logs/hadoop-wukong-namenode-bd23.out
bd24: starting datanode, logging to /home/wukong/a_usr/hadoop-2.4.1/logs/hadoop-wukong-datanode-bd24.out
bd25: starting datanode, logging to /home/wukong/a_usr/hadoop-2.4.1/logs/hadoop-wukong-datanode-bd25.out
Starting secondary namenodes [bd23]
bd23: starting secondarynamenode, logging to /home/wukong/a_usr/hadoop-2.4.1/logs/hadoop-wukong-secondarynamenode-bd23.out
8. Use jps to check which processes are running on each machine. Normally the master should have NameNode and SecondaryNameNode, and each slave should have DataNode.
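A sketch of what jps might print on the master at this point (the PIDs are illustrative):

[wukong@bd23 hadoop-2.4.1]$ jps
3412 NameNode
3598 SecondaryNameNode
3720 Jps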
9. Start YARN with the script:
[wukong@bd23 hadoop-2.4.1]$ ./sbin/start-yarn.sh
10. Use jps again to check the processes. The master should now have NameNode, SecondaryNameNode, and ResourceManager; each slave should have DataNode and NodeManager.
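And a corresponding sketch on a slave (PIDs again illustrative):

[wukong@bd24 ~]$ jps
2841 DataNode
2957 NodeManager
3066 Jps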
Additional notes:
1. In Hadoop 2, running start-all.sh prints a warning that the script is deprecated and that start-dfs.sh should be used instead. Even so, it still brings up both HDFS and YARN.
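The warning looks roughly like this (exact wording may vary between releases):

[wukong@bd23 hadoop-2.4.1]$ ./sbin/start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh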
2. A diagram worth noting (the image did not survive in this text version).