【Hadoop大數據分析與挖掘實戰】(Part 3) ---------- P23~25

6. Installing Hadoop

  1) From the Hadoop website, download the stable, pre-built binary package and extract it:

[hadoop@master ~]$ wget http://mirrors.hust.edu.cn/apache/hadoop/common/hadoop-2.7.3/hadoop-2.7.3.tar.gz
[hadoop@master ~]$ tar -zxvf hadoop-2.7.3.tar.gz -C ~/opt
[hadoop@master ~]$ cd ~/opt/hadoop-2.7.3
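
  As a quick sanity check, you can ask the freshly unpacked distribution for its version (optional; the command needs a working java on the PATH, so if it complains about JAVA_HOME, come back to this after step 3):

[hadoop@master hadoop-2.7.3]$ bin/hadoop version
# the first line of output should read: Hadoop 2.7.3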

  2) Set the environment variables:

[hadoop@master hadoop-2.7.3]$ vim ~/.bashrc
# User specific aliases and functions
export HADOOP_PREFIX=$HOME/opt/hadoop-2.7.3
export HADOOP_COMMON_HOME=$HADOOP_PREFIX
export HADOOP_HDFS_HOME=$HADOOP_PREFIX
export HADOOP_MAPRED_HOME=$HADOOP_PREFIX
export HADOOP_YARN_HOME=$HADOOP_PREFIX
export HADOOP_CONF_DIR=$HADOOP_PREFIX/etc/hadoop
export PATH=$PATH:$HADOOP_PREFIX/bin:$HADOOP_PREFIX/sbin
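
  The new variables only take effect in a shell that has re-read the file. A minimal way to load and check them (assuming the paths above):

[hadoop@master hadoop-2.7.3]$ source ~/.bashrc
[hadoop@master hadoop-2.7.3]$ echo $HADOOP_PREFIX
/home/hadoop/opt/hadoop-2.7.3
[hadoop@master hadoop-2.7.3]$ which hdfs
/home/hadoop/opt/hadoop-2.7.3/bin/hdfs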

  3) Edit the configuration file (etc/hadoop/hadoop-env.sh) and add the line below (note that JAVA_HOME must be set according to the actual situation on your own machine):

## Change the value after JAVA_HOME to the JAVA_HOME set on your machine

# I added
export JAVA_HOME=/usr/lib/jvm/java

## I changed mine to

export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.111-2.b15.el7_3.x86_64
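
  If you are unsure what to put here, resolving the java symlink chain usually reveals the real JDK directory (a quick sketch, assuming java was installed via yum as in the earlier parts of this series):

[hadoop@master hadoop-2.7.3]$ readlink -f $(which java)
/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.111-2.b15.el7_3.x86_64/jre/bin/java
# JAVA_HOME is that path without the trailing /jre/bin/java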

  4) Edit the configuration file (etc/hadoop/core-site.xml) as follows (see Note ①):

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://192.168.0.131:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>file:/home/hadoop/opt/var/hadoop/tmp/hadoop-${user.name}</value>
    </property>
</configuration>
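
  fs.defaultFS tells every client where the NameNode RPC endpoint lives, and hadoop.tmp.dir is the base directory several other defaults derive from. Hadoop can usually create the latter itself, but creating it up front avoids permission surprises (an optional precaution matching the value above):

[hadoop@master hadoop-2.7.3]$ mkdir -p ~/opt/var/hadoop/tmp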

  5) Edit the configuration file (etc/hadoop/hdfs-site.xml) as follows:

<configuration>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/home/hadoop/opt/var/hadoop/hdfs/datanode</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/home/hadoop/opt/var/hadoop/hdfs/namenode</value>
    </property>
    <property>
        <name>dfs.namenode.checkpoint.dir</name>
        <value>file:/home/hadoop/opt/var/hadoop/hdfs/namesecondary</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
<!--
    <property>
        <name>dfs.datanode.max.xcievers</name>
        <value>2</value>
    </property>
-->
</configuration>
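
  The directories named above hold the DataNode blocks, the NameNode metadata, and the SecondaryNameNode checkpoints; dfs.replication=3 means each block is stored on three DataNodes. Hadoop generally creates these directories on demand, but making them yourself avoids permission surprises (an optional step; run it on master now, and on slave1/slave2 after step 8):

[hadoop@master hadoop-2.7.3]$ mkdir -p ~/opt/var/hadoop/hdfs/{namenode,datanode,namesecondary}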

  6) Edit the configuration file (etc/hadoop/yarn-site.xml) as follows:

<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>master</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
</configuration>
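
  Since yarn.resourcemanager.hostname is given as the bare name master, every node must be able to resolve that name (this series set that up via /etc/hosts in an earlier part). A quick way to confirm:

[hadoop@master hadoop-2.7.3]$ grep -E 'master|slave' /etc/hosts
# should list master, slave1 and slave2 with their IPs (master is 192.168.0.131 here)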

  7) First, copy etc/hadoop/mapred-site.xml.template to etc/hadoop/mapred-site.xml:

[hadoop@master hadoop]$ cp mapred-site.xml.template mapred-site.xml

  Then edit the configuration file (etc/hadoop/mapred-site.xml) as follows:

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.jobtracker.staging.root.dir</name>
        <value>/home</value>
    </property>
</configuration>
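
  One more file worth checking before copying everything out: etc/hadoop/slaves lists the hosts on which the start scripts launch the DataNode and NodeManager daemons. Judging by the startup log in step 10, mine lists all three machines (leave master out if you do not want worker daemons on the master):

[hadoop@master hadoop]$ cat ~/opt/hadoop-2.7.3/etc/hadoop/slaves
master
slave1
slave2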

  8) Copy Hadoop to slave1 and slave2 (see Note ②):

[hadoop@master opt]$ scp -r /home/hadoop/opt/hadoop-2.7.3 hadoop@slave1:/home/hadoop/opt/
[hadoop@master opt]$ scp -r /home/hadoop/opt/hadoop-2.7.3 hadoop@slave2:/home/hadoop/opt/
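
  The environment variables from step 2 are needed on the slaves too. If you have not set them there already, copying the same ~/.bashrc across is the simplest route (assuming the home directories are laid out identically on all three machines):

[hadoop@master opt]$ scp ~/.bashrc hadoop@slave1:~/
[hadoop@master opt]$ scp ~/.bashrc hadoop@slave2:~/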

  9) Format HDFS:

[hadoop@master hadoop]$ hdfs namenode -format
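
  The format run prints a long log; success shows up near the end as a line roughly like the following (the path is the dfs.namenode.name.dir from step 5):

... Storage directory /home/hadoop/opt/var/hadoop/hdfs/namenode has been successfully formatted.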

  10) Start the Hadoop cluster. Once startup finishes, use the jps command to list the daemons and verify that the installation succeeded (see Note ③).

# While using the cluster I found that I had to type master's password every time, so I copied the public key to master as well; after that, no more entering the password "three times"!
[hadoop@master ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub master
# Start the Hadoop cluster
[hadoop@master ~]$ start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [master]
master: starting namenode, logging to /home/hadoop/opt/hadoop-2.7.3/logs/hadoop-hadoop-namenode-master.out
slave1: starting datanode, logging to /home/hadoop/opt/hadoop-2.7.3/logs/hadoop-hadoop-datanode-slave1.out
slave2: starting datanode, logging to /home/hadoop/opt/hadoop-2.7.3/logs/hadoop-hadoop-datanode-slave2.out
master: starting datanode, logging to /home/hadoop/opt/hadoop-2.7.3/logs/hadoop-hadoop-datanode-master.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /home/hadoop/opt/hadoop-2.7.3/logs/hadoop-hadoop-secondarynamenode-master.out
starting yarn daemons
starting resourcemanager, logging to /home/hadoop/opt/hadoop-2.7.3/logs/yarn-hadoop-resourcemanager-master.out
slave1: starting nodemanager, logging to /home/hadoop/opt/hadoop-2.7.3/logs/yarn-hadoop-nodemanager-slave1.out
slave2: starting nodemanager, logging to /home/hadoop/opt/hadoop-2.7.3/logs/yarn-hadoop-nodemanager-slave2.out
master: starting nodemanager, logging to /home/hadoop/opt/hadoop-2.7.3/logs/yarn-hadoop-nodemanager-master.out
# master node
[hadoop@master ~]$ jps
44469 DataNode
45256 Jps
44651 SecondaryNameNode
44811 ResourceManager
44939 NodeManager
44319 NameNode
# slave1 node
[hadoop@slave1 ~]$ jps
35973 NodeManager
35847 DataNode
36106 Jps
# slave2 node
[hadoop@slave2 ~]$ jps
36360 NodeManager
36234 DataNode
36493 Jps
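
  Besides jps, two quick health checks (standard ports for this version): browse the NameNode web UI at http://192.168.0.131:50070 and the ResourceManager UI at http://192.168.0.131:8088, and ask HDFS for a DataNode report:

[hadoop@master ~]$ hdfs dfsadmin -report
# the report should show 3 live datanodes (master, slave1, slave2)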

Note ①: Each <property> element is one option: <name> holds the option's name and <value> its value. The individual settings can be looked up in the official Hadoop 2.7.3 documentation at http://hadoop.apache.org/docs/r2.7.3/ . The settings I kept after reading through it are not necessarily all required, but after half a month of configuring I am too worn out to keep trial-and-erroring. If anyone with the same setup can slim this configuration down, I would be very grateful.

Note ②: This step is not in 《Hadoop大數據分析與挖掘實戰》. I have not tested leaving it out, but I believe it is necessary.

Note ③: The book starts the cluster in two steps, start-dfs.sh followed by start-yarn.sh, but after starting that way I could not complete the exercise on page P30. After some troubleshooting I found that start-all.sh works. (Judging by the deprecation warning in the log above, start-all.sh in 2.7.3 simply invokes start-dfs.sh and start-yarn.sh itself, so I am not sure where the difference came from.) Corrections from anyone more knowledgeable are welcome.

This took half a month of work. These notes are for my own use; please do not repost.
