Big Data Learning (1): Installing Hadoop

Cluster Architecture

Installing Hadoop is essentially a matter of configuring the HDFS and YARN clusters. As the architecture diagram below shows, every DataNode in HDFS must be configured with the location of the NameNode, and likewise every NodeManager in YARN must be configured with the location of the ResourceManager.

Given how critical the NameNode and ResourceManager are, do they become single points of failure in a cluster? In Hadoop 1.0 they did; Hadoop 2.0 resolves this. For details see:

https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithNFS.html

https://www.ibm.com/developerworks/cn/opensource/os-cn-hadoop-name-node/index.html

[Architecture diagram: HDFS (NameNode and DataNodes) and YARN (ResourceManager and NodeManagers)]

Configuration

Because every machine uses the same configuration, the usual approach is to configure one server first and then copy the files to the other servers.

JAVA_HOME

Set JAVA_HOME in the hadoop-env.sh file.
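
A minimal sketch, assuming the JDK was unpacked under /usr/local (the exact path is hypothetical; point it at your actual installation):

# etc/hadoop/hadoop-env.sh: set the JDK location explicitly
export JAVA_HOME=/usr/local/jdk1.8.0_141   # hypothetical path; adjust to your JDK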

core-site.xml

Configure the HDFS file system: fs.defaultFS points to the HDFS NameNode.

<property>
    <name>fs.defaultFS</name>
    <value>hdfs://{hdfs-name-node-server-host}:9000</value>
</property>

hadoop.tmp.dir sets the base directory for the files Hadoop generates at runtime.

<property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/local/hadoop-data/tmp</value>
</property>
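
Both properties live inside the <configuration> root element of etc/hadoop/core-site.xml; a minimal sketch of the complete file, with the NameNode host left as a placeholder:

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://{hdfs-name-node-server-host}:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/usr/local/hadoop-data/tmp</value>
    </property>
</configuration>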

 

hdfs-site.xml

Configure the block replication factor and the secondary NameNode (dfs.secondary.http.address is the older property name; Hadoop 2.x maps it to dfs.namenode.secondary.http-address):

<property>
    <name>dfs.replication</name>
    <value>1</value>
</property>
        
<property>
    <name>dfs.secondary.http.address</name>
    <value>{second-namenode-host}:50090</value>
</property>
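
A quick way to confirm that a setting is picked up is hdfs getconf, which prints the value from the loaded configuration:

# print the effective value of a configuration key
hdfs getconf -confKey dfs.replication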

 

yarn-site.xml

Configure YARN's ResourceManager:

<property>
    <name>yarn.resourcemanager.hostname</name>
    <value>{resource-manager-host}</value>
</property>

and the way reducers fetch map output (the MapReduce shuffle service):

<property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
</property>
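
The shuffle service is only exercised when MapReduce jobs run on YARN; the usual companion setting, which is not shown above and is therefore an assumption about your setup, is mapreduce.framework.name=yarn in etc/hadoop/mapred-site.xml:

<property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
</property>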

 

Finally, remember to add Hadoop's bin and sbin directories to the environment variables:

export HADOOP_HOME=/usr/local/hadoop-2.6.5
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
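
These exports typically go in /etc/profile or ~/.bashrc (an assumption about where you keep environment variables); after reloading the shell, the hadoop command should resolve:

source /etc/profile    # or: source ~/.bashrc
hadoop version         # should print the 2.6.5 version banner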

 

Format the NameNode

hdfs namenode -format (hadoop namenode -format)
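
With the core-site.xml above, and assuming the default dfs.namenode.name.dir (which lives under hadoop.tmp.dir), a quick check that the format succeeded:

# the format should have created a current/ directory containing VERSION and fsimage files
ls /usr/local/hadoop-data/tmp/dfs/name/current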

 

Start Hadoop

First, start the HDFS NameNode:

 hadoop-daemon.sh start namenode

Then start a DataNode on each of the cluster's DataNode machines:

 hadoop-daemon.sh start datanode

Check the result with jps:

[root@server1 ~]# jps
2111 Jps
2077 NameNode

If startup succeeds, you can open http://server1:50070 in a browser and see the NameNode web UI.


Next, start YARN:

[root@vcentos1 sbin]# start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop-2.6.5/logs/yarn-root-resourcemanager-vcentos1.out
vcentos3: starting nodemanager, logging to /usr/local/hadoop-2.6.5/logs/yarn-root-nodemanager-vcentos3.out
vcentos2: starting nodemanager, logging to /usr/local/hadoop-2.6.5/logs/yarn-root-nodemanager-vcentos2.out
[root@server1 sbin]# jps
2450 ResourceManager
2516 Jps
2077 NameNode
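
To confirm that the NodeManagers registered with the ResourceManager, yarn node -list should report them; the ResourceManager also serves a web UI, by default on port 8088:

yarn node -list    # lists the registered NodeManagers and their state
# ResourceManager web UI (default port): http://{resource-manager-host}:8088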

The scripts in Hadoop's sbin directory are used to manage the Hadoop services:

hadoop-daemon.sh: starts or stops a single daemon, such as the NameNode or a DataNode;

start/stop-dfs.sh: together with the etc/hadoop/slaves file, starts/stops the NameNode and all of the cluster's DataNodes in one go;

start/stop-yarn.sh: together with the etc/hadoop/slaves file, starts/stops the ResourceManager and all of the cluster's NodeManagers in one go; a sketch of the slaves file follows this list.
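
A sketch of the slaves file and the batch scripts; the worker host names vcentos2 and vcentos3 are taken from the startup log above:

# $HADOOP_HOME/etc/hadoop/slaves: one worker host name per line
vcentos2
vcentos3

# then, from the master node:
start-dfs.sh     # starts the NameNode, SecondaryNameNode, and the DataNodes listed in slaves
start-yarn.sh    # starts the ResourceManager and the NodeManagers listed in slaves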

The commands in the bin directory provide the hdfs, yarn, and mapreduce clients:

[root@server1 bin]# hadoop fs 
Usage: hadoop fs [generic options]
        [-appendToFile <localsrc> ... <dst>]
        [-cat [-ignoreCrc] <src> ...]
        [-checksum <src> ...]
        [-chgrp [-R] GROUP PATH...]
        [-chmod [-R] <MODE[,MODE]... | OCTALMODE> PATH...]
        [-chown [-R] [OWNER][:[GROUP]] PATH...]
        [-copyFromLocal [-f] [-p] [-l] <localsrc> ... <dst>]
        [-copyToLocal [-p] [-ignoreCrc] [-crc] <src> ... <localdst>]
        [-count [-q] [-h] <path> ...]
        [-cp [-f] [-p | -p[topax]] <src> ... <dst>]
        [-createSnapshot <snapshotDir> [<snapshotName>]]
        [-deleteSnapshot <snapshotDir> <snapshotName>]
        [-df [-h] [<path> ...]]
        [-du [-s] [-h] <path> ...]
        [-expunge]
        [-get [-p] [-ignoreCrc] [-crc] <src> ... <localdst>]
        [-getfacl [-R] <path>]
        [-getfattr [-R] {-n name | -d} [-e en] <path>]
        [-getmerge [-nl] <src> <localdst>]
        [-help [cmd ...]]
        [-ls [-d] [-h] [-R] [<path> ...]]
        [-mkdir [-p] <path> ...]
        [-moveFromLocal <localsrc> ... <dst>]
        [-moveToLocal <src> <localdst>]
        [-mv <src> ... <dst>]
        [-put [-f] [-p] [-l] <localsrc> ... <dst>]
        [-renameSnapshot <snapshotDir> <oldName> <newName>]
        [-rm [-f] [-r|-R] [-skipTrash] <src> ...]
        [-rmdir [--ignore-fail-on-non-empty] <dir> ...]
        [-setfacl [-R] [{-b|-k} {-m|-x <acl_spec>} <path>]|[--set <acl_spec> <path>]]
        [-setfattr {-n name [-v value] | -x name} <path>]
        [-setrep [-R] [-w] <rep> <path> ...]
        [-stat [format] <path> ...]
        [-tail [-f] <file>]
        [-test -[defsz] <path>]
        [-text [-ignoreCrc] <src> ...]
        [-touchz <path> ...]
        [-usage [cmd ...]]
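
A few typical commands (the HDFS path /demo and the local file name are just examples):

hadoop fs -mkdir -p /demo                  # create a directory in HDFS
hadoop fs -put ./localfile.txt /demo/      # upload a local file
hadoop fs -ls /demo                        # list the directory
hadoop fs -cat /demo/localfile.txt         # print the file contents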

 

 


 

References:

Latest installation docs: http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/ClusterSetup.html

2.6.5 installation docs: http://hadoop.apache.org/docs/r2.6.5/hadoop-project-dist/hadoop-common/SingleCluster.html

Secondary NameNode: http://blog.madhukaraphatak.com/secondary-namenode---what-it-really-do/
