Hadoop Cluster Installation - CDH5 (5-Server Cluster)

CDH5 package download: http://archive.cloudera.com/cdh5/html

Architecture design:

Host planning:

| IP | Host | Deployment modules | Processes |
| --- | --- | --- | --- |
| 192.168.254.151 | Hadoop-NN-01 | NameNode, ResourceManager | NameNode, DFSZKFailoverController, ResourceManager |
| 192.168.254.152 | Hadoop-NN-02 | NameNode, ResourceManager | NameNode, DFSZKFailoverController, ResourceManager |
| 192.168.254.153 | Hadoop-DN-01 / Zookeeper-01 | DataNode, NodeManager, Zookeeper | DataNode, NodeManager, JournalNode, QuorumPeerMain |
| 192.168.254.154 | Hadoop-DN-02 / Zookeeper-02 | DataNode, NodeManager, Zookeeper | DataNode, NodeManager, JournalNode, QuorumPeerMain |
| 192.168.254.155 | Hadoop-DN-03 / Zookeeper-03 | DataNode, NodeManager, Zookeeper | DataNode, NodeManager, JournalNode, QuorumPeerMain |

Explanation of each process:

  • NameNode
  • ResourceManager
  • DFSZKFailoverController (DFSZKFC): the DFS ZooKeeper Failover Controller, which promotes the standby NameNode to active on failover.
  • DataNode
  • NodeManager
  • JournalNode: shared edit-log service for the NameNodes (if NFS shared storage is used instead, this process and all of its related startup configuration can be omitted).
  • QuorumPeerMain: the main ZooKeeper process.

Directory planning:

| Name | Path |
| --- | --- |
| $HADOOP_HOME | /home/hadoopuser/hadoop-2.6.0-cdh5.6.0 |
| Data | $HADOOP_HOME/data |
| Log | $HADOOP_HOME/logs |
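
A minimal sketch of laying these directories out up front (Hadoop creates most of them on first start, so this only makes the layout explicit); it assumes $HADOOP_HOME has already been exported as above:

[hadoopuser@Linux01 ~]$ mkdir -p $HADOOP_HOME/data $HADOOP_HOME/logs   # planned data and log directories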

 

Cluster installation:

1. Disable the firewall (it can be configured properly later)
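
A minimal sketch, assuming CentOS 6 with iptables (use the firewalld commands instead on CentOS 7):

[root@Linux01 ~]# service iptables stop      # stop the firewall now
[root@Linux01 ~]# chkconfig iptables off     # keep it disabled after reboot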

2. Install the JDK (omitted)
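
For reference, a hedged sketch that matches the JAVA_HOME used later in hadoop-env.sh; it assumes an Oracle JDK already unpacked to /usr/java/jdk1.8.0_73:

[root@Linux01 ~]# vim /etc/profile     # append the two lines below, then run: source /etc/profile
export JAVA_HOME=/usr/java/jdk1.8.0_73
export PATH=$PATH:$JAVA_HOME/bin
[root@Linux01 ~]# java -version        # verify the installation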

3. Set the hostname and configure /etc/hosts (all 5 hosts)

[root@Linux01 ~]# vim /etc/sysconfig/network
[root@Linux01 ~]# vim /etc/hosts
192.168.254.151 Hadoop-NN-01
192.168.254.152 Hadoop-NN-02
192.168.254.153 Hadoop-DN-01 Zookeeper-01
192.168.254.154 Hadoop-DN-02 Zookeeper-02
192.168.254.155 Hadoop-DN-03 Zookeeper-03
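
The hostname itself goes in /etc/sysconfig/network; a sketch for the first node (each host gets its own HOSTNAME value, e.g. Hadoop-NN-02, Hadoop-DN-01, and so on):

NETWORKING=yes
HOSTNAME=Hadoop-NN-01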

4. For security, create a dedicated login user for Hadoop (all 5 hosts)

[root@Linux01 ~]# useradd hadoopuser
[root@Linux01 ~]# passwd hadoopuser
[root@Linux01 ~]# su - hadoopuser   # switch to the new user

5. Configure passwordless SSH login (both NameNodes)

[hadoopuser@Linux05 hadoop-2.6.0-cdh5.6.0]$ ssh-keygen   # generate the public/private key pair
[hadoopuser@Linux05 hadoop-2.6.0-cdh5.6.0]$ ssh-copy-id -i ~/.ssh/id_rsa.pub hadoopuser@Hadoop-NN-01

-i specifies the identity file (the public key) to copy

~/.ssh/id_rsa.pub is the public key being copied

Or, in shorter form:

[hadoopuser@Linux05 hadoop-2.6.0-cdh5.6.0]$ ssh-copy-id Hadoop-NN-01   # copy the public key to the remote server (the IP, e.g. 10.10.51.231, also works)
[hadoopuser@Linux05 hadoop-2.6.0-cdh5.6.0]$ ssh-copy-id "-p 6000 Hadoop-NN-01"  # use this form if SSH listens on a non-default port

Note: also update the corresponding Hadoop configuration file:

vi hadoop-env.sh
export HADOOP_SSH_OPTS="-p 6000"

[hadoopuser@Linux05 hadoop-2.6.0-cdh5.6.0]$ ssh Hadoop-NN-01  # verify (leave the session with exit or logout)
[hadoopuser@Linux05 hadoop-2.6.0-cdh5.6.0]$ ssh -p 6000 Hadoop-NN-01  # use this form if SSH listens on a non-default port
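
A hedged sketch for copying the key from each NameNode to every host in one pass, assuming the default SSH port and the hostnames from the planning table:

for h in Hadoop-NN-01 Hadoop-NN-02 Hadoop-DN-01 Hadoop-DN-02 Hadoop-DN-03; do
    ssh-copy-id -i ~/.ssh/id_rsa.pub hadoopuser@$h   # prompts once per host for hadoopuser's password
done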

6. Configure environment variables: vi ~/.bashrc, then source ~/.bashrc (all 5 hosts)

[hadoopuser@Linux01 ~]$ vi ~/.bashrc
# hadoop cdh5
export HADOOP_HOME=/home/hadoopuser/hadoop-2.6.0-cdh5.6.0
export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin

[hadoopuser@Linux01 ~]$ source ~/.bashrc  # apply the changes

7. Install ZooKeeper (on the 3 DataNodes)

  Installation guide: http://www.cnblogs.com/hunttown/p/5807383.html
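
For quick reference, a minimal sketch of the ensemble settings on the three ZooKeeper nodes (the dataDir path is an assumption; the linked guide has the full steps). Each node also needs a myid file containing 1, 2, or 3 in its dataDir:

# conf/zoo.cfg on Zookeeper-01/02/03
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/home/hadoopuser/zookeeper-3.4.5-cdh5.6.0/data
clientPort=2181
server.1=Zookeeper-01:2888:3888
server.2=Zookeeper-02:2888:3888
server.3=Zookeeper-03:2888:3888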

8. Install and configure Hadoop (install on one node only; after configuring, distribute it to the other nodes)

1. Extract the archive
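
A minimal sketch, assuming the CDH tarball has been downloaded to the hadoopuser home directory:

[hadoopuser@Linux01 ~]$ tar -zxvf hadoop-2.6.0-cdh5.6.0.tar.gz -C /home/hadoopuser/   # unpack to the planned $HADOOP_HOME location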

2. Modify the configuration files

| Configuration file | Type | Description |
| --- | --- | --- |
| hadoop-env.sh | Bash script | Hadoop runtime environment variables |
| core-site.xml | XML | Hadoop core settings, such as I/O |
| hdfs-site.xml | XML | HDFS daemon settings: NN, JN, DN |
| yarn-env.sh | Bash script | YARN runtime environment variables |
| yarn-site.xml | XML | YARN framework settings |
| mapred-site.xml | XML | MapReduce properties |
| capacity-scheduler.xml | XML | YARN scheduler properties |
| container-executor.cfg | Cfg | YARN container executor settings |
| mapred-queues.xml | XML | MapReduce queue settings |
| hadoop-metrics.properties | Java properties | Hadoop metrics settings |
| hadoop-metrics2.properties | Java properties | Hadoop metrics2 settings |
| slaves | Plain text | DataNode list |
| exclude | Plain text | DataNodes to decommission |
| log4j.properties | Java properties | System logging settings |
| configuration.xsl | XSL stylesheet | |

(1) Modify $HADOOP_HOME/etc/hadoop/hadoop-env.sh

#--------------------Java Env------------------------------
export JAVA_HOME="/usr/java/jdk1.8.0_73"

#--------------------Hadoop Env----------------------------
#export HADOOP_PID_DIR=${HADOOP_PID_DIR}
export HADOOP_PREFIX="/home/hadoopuser/hadoop-2.6.0-cdh5.6.0"

#--------------------Hadoop Daemon Options-----------------
# export HADOOP_NAMENODE_OPTS="-Dhadoop.security.logger=${HADOOP_SECURITY_LOGGER:-INFO,RFAS} -Dhdfs.audit.logger=${HDFS_AUDIT_LOGGER:-INFO,NullAppender} $HADOOP_NAMENODE_OPTS"
# export HADOOP_DATANODE_OPTS="-Dhadoop.security.logger=ERROR,RFAS $HADOOP_DATANODE_OPTS"

#--------------------Hadoop Logs---------------------------
#export HADOOP_LOG_DIR=${HADOOP_LOG_DIR}/$USER

#--------------------SSH PORT-------------------------------
export HADOOP_SSH_OPTS="-p 6000"        # If you changed the SSH login port, you must set this accordingly.

 

(2) Modify $HADOOP_HOME/etc/hadoop/core-site.xml

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
        <!-- YARN needs fs.defaultFS to specify the NameNode URI -->
        <property>
                <name>fs.defaultFS</name>
                <value>hdfs://mycluster</value>
                <description>This value comes from the dfs.nameservices setting in hdfs-site.xml</description>
        </property>
        <!-- HDFS superuser group -->
        <property>
                <name>dfs.permissions.superusergroup</name>
                <value>zero</value>
        </property>
        <!-- ============================== Trash mechanism ============================== -->
        <property>
                <!-- How often the checkpointer running on the NameNode creates a checkpoint from the Current trash folder; default 0 means it follows fs.trash.interval -->
                <name>fs.trash.checkpoint.interval</name>
                <value>0</value>
        </property>
        <property>
                <!-- Minutes after which checkpoints under .Trash are deleted; the server-side setting overrides the client's; default 0 means nothing is deleted -->
                <name>fs.trash.interval</name>
                <value>1440</value>
        </property>
</configuration>

 

(3) Modify $HADOOP_HOME/etc/hadoop/hdfs-site.xml

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
        <!-- Enable WebHDFS -->
        <property>
                <name>dfs.webhdfs.enabled</name>
                <value>true</value>
        </property>
        <property>
                <name>dfs.namenode.name.dir</name>
                <value>/home/hadoopuser/hadoop-2.6.0-cdh5.6.0/data/dfs/name</value>
                <description>Local directory where the NameNode stores the name table (fsimage); adjust as needed</description>
        </property>
        <property>
                <name>dfs.namenode.edits.dir</name>
                <value>${dfs.namenode.name.dir}</value>
                <description>Local directory where the NameNode stores the transaction files (edits); adjust as needed</description>
        </property>
        <property>
                <name>dfs.datanode.data.dir</name>
                <value>/home/hadoopuser/hadoop-2.6.0-cdh5.6.0/data/dfs/data</value>
                <description>Local directory where the DataNode stores blocks; adjust as needed</description>
        </property>
        <property>
                <name>dfs.replication</name>
                <value>1</value>
                <description>Number of block replicas; the Hadoop default is 3</description>
        </property>
        <!-- Block size -->
        <property>
                <name>dfs.blocksize</name>
                <value>268435456</value>
                <description>Block size: 256 MB</description>
        </property>
        <!--======================================================================= -->
        <!-- HDFS high-availability configuration -->
        <!-- Logical name of the nameservice -->
        <property>
                <name>dfs.nameservices</name>
                <value>mycluster</value>
        </property>
        <property>
                <!-- NameNode IDs; this version supports at most two NameNodes -->
                <name>dfs.ha.namenodes.mycluster</name>
                <value>nn1,nn2</value>
        </property>
        <!-- HDFS HA: dfs.namenode.rpc-address.[nameservice ID], RPC address -->
        <property>
                <name>dfs.namenode.rpc-address.mycluster.nn1</name>
                <value>Hadoop-NN-01:8020</value>
        </property>
        <property>
                <name>dfs.namenode.rpc-address.mycluster.nn2</name>
                <value>Hadoop-NN-02:8020</value>
        </property>
        <!-- HDFS HA: dfs.namenode.http-address.[nameservice ID], HTTP address -->
        <property>
                <name>dfs.namenode.http-address.mycluster.nn1</name>
                <value>Hadoop-NN-01:50070</value>
        </property>
        <property>
                <name>dfs.namenode.http-address.mycluster.nn2</name>
                <value>Hadoop-NN-02:50070</value>
        </property>

        <!-- ================== NameNode edit-log synchronization ============================================ -->
        <!-- Ensures the edit log can be recovered -->
        <property>
                <name>dfs.journalnode.http-address</name>
                <value>0.0.0.0:8480</value>
        </property>
        <property>
                <name>dfs.journalnode.rpc-address</name>
                <value>0.0.0.0:8485</value>
        </property>
        <property>
                <!-- JournalNode addresses; the QuorumJournalManager stores the edit log on them -->
                <!-- Format: qjournal://<host1:port1>;<host2:port2>;<host3:port3>/<journalId>; the port matches dfs.journalnode.rpc-address -->
                <name>dfs.namenode.shared.edits.dir</name>
                <value>qjournal://Hadoop-DN-01:8485;Hadoop-DN-02:8485;Hadoop-DN-03:8485/mycluster</value>
        </property>
        <property>
                <!-- Local directory where the JournalNode stores its data -->
                <name>dfs.journalnode.edits.dir</name>
                <value>/home/hadoopuser/hadoop-2.6.0-cdh5.6.0/data/dfs/jn</value>
        </property>
        <!-- ================== Client failover proxy ============================================ -->
        <property>
                <!-- Strategy DataNodes and clients use to determine which NameNode is currently active -->
                <name>dfs.client.failover.proxy.provider.mycluster</name>
                <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
        </property>
        <!-- ================== NameNode fencing =============================================== -->
        <!-- After a failover, prevents the previously active NameNode from coming back up and causing split-brain -->
        <property>
                <name>dfs.ha.fencing.methods</name>
                <value>sshfence</value>
        </property>
        <property>
                <name>dfs.ha.fencing.ssh.private-key-files</name>
                <value>/home/hadoopuser/.ssh/id_rsa</value>
        </property>
        <property>
                <!-- Milliseconds after which fencing is considered to have failed -->
                <name>dfs.ha.fencing.ssh.connect-timeout</name>
                <value>30000</value>
        </property>

        <!-- ================== NameNode automatic failover based on ZKFC and ZooKeeper ====================== -->
        <!-- Enable automatic failover backed by ZooKeeper and the ZKFC processes, which monitor whether a NameNode has died -->
        <property>
                <name>dfs.ha.automatic-failover.enabled</name>
                <value>true</value>
        </property>
        <property>
                <name>ha.zookeeper.quorum</name>
                <!--<value>Zookeeper-01:2181,Zookeeper-02:2181,Zookeeper-03:2181</value>-->
                <value>Hadoop-DN-01:2181,Hadoop-DN-02:2181,Hadoop-DN-03:2181</value>
        </property>
        <property>
                <!-- ZooKeeper session timeout, in milliseconds -->
                <name>ha.zookeeper.session-timeout.ms</name>
                <value>2000</value>
        </property>
</configuration>

 

(4) Modify $HADOOP_HOME/etc/hadoop/yarn-env.sh

#Yarn Daemon Options
#export YARN_RESOURCEMANAGER_OPTS
#export YARN_NODEMANAGER_OPTS
#export YARN_PROXYSERVER_OPTS
#export HADOOP_JOB_HISTORYSERVER_OPTS

#Yarn Logs
export YARN_LOG_DIR="/home/hadoopuser/hadoop-2.6.0-cdh5.6.0/logs"

 

(5) Modify $HADOOP_HOME/etc/hadoop/mapred-site.xml

<configuration>
        <!-- JVM heap size for task child processes -->
        <property>
                <name>mapred.child.java.opts</name>
                <value>-Xmx1000m</value>
                <final>true</final>
                <description>final=true prevents users from overriding this JVM size</description>
        </property>
        <!-- MapReduce framework -->
        <property>
                <name>mapreduce.framework.name</name>
                <value>yarn</value>
        </property>
        <!-- JobHistory Server ============================================================== -->
        <!-- MapReduce JobHistory Server address; default port 10020 -->
        <property>
                <name>mapreduce.jobhistory.address</name>
                <value>0.0.0.0:10020</value>
        </property>
        <!-- MapReduce JobHistory Server web UI address; default port 19888 -->
        <property>
                <name>mapreduce.jobhistory.webapp.address</name>
                <value>0.0.0.0:19888</value>
        </property>
</configuration>

Additional settings when HBase is used:

<!-- for HBase: start -->
<property>
    <name>mapred.remote.os</name>
    <value>Linux</value>
</property>
<property>
    <name>mapreduce.app-submission.cross-platform</name>
    <value>true</value>
</property>
<property>
    <name>mapreduce.application.classpath</name>
    <value>
        /home/hadoopuser/hadoop-2.6.0-cdh5.6.0/etc/hadoop,
        /home/hadoopuser/hadoop-2.6.0-cdh5.6.0/share/hadoop/common/*,
        /home/hadoopuser/hadoop-2.6.0-cdh5.6.0/share/hadoop/common/lib/*,
        /home/hadoopuser/hadoop-2.6.0-cdh5.6.0/share/hadoop/hdfs/*,
        /home/hadoopuser/hadoop-2.6.0-cdh5.6.0/share/hadoop/hdfs/lib/*,
        /home/hadoopuser/hadoop-2.6.0-cdh5.6.0/share/hadoop/mapreduce/*,
        /home/hadoopuser/hadoop-2.6.0-cdh5.6.0/share/hadoop/mapreduce/lib/*,
        /home/hadoopuser/hadoop-2.6.0-cdh5.6.0/share/hadoop/yarn/*,
        /home/hadoopuser/hadoop-2.6.0-cdh5.6.0/share/hadoop/yarn/lib/*,
        /usr/local/hbase/lib/*
    </value>
</property>
<!-- for HBase: end -->

 

Alternatively, the JVM options can also be written like this:

<property>
    <name>mapred.task.java.opts</name>
    <value>-Xmx2000m</value>
</property>
<property>
    <name>mapred.child.java.opts</name>
    <value>${mapred.task.java.opts} -Xmx1000m</value>
    <final>true</final>
    <description>When the same JVM flag appears more than once, e.g. "-Xmx2000m -Xmx1000m", the later value overrides the earlier one, so "-Xmx1000m" is what actually takes effect.</description>
</property>

Alternatively, to set the map and reduce JVM sizes separately:

<property>
    <name>mapred.map.child.java.opts</name>
    <value>-Xmx512M</value>
</property>
<property>
    <name>mapred.reduce.child.java.opts</name>
    <value>-Xmx1024M</value>
</property>

 

(6) Modify $HADOOP_HOME/etc/hadoop/yarn-site.xml

<configuration>
        <!-- NodeManager configuration ================================================= -->
        <property>
                <name>yarn.nodemanager.aux-services</name>
                <value>mapreduce_shuffle</value>
        </property>
        <property>
                <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
                <value>org.apache.hadoop.mapred.ShuffleHandler</value>
        </property>
        <property>
                <description>Address where the localizer IPC is.</description>
                <name>yarn.nodemanager.localizer.address</name>
                <value>0.0.0.0:23344</value>
        </property>
        <property>
                <description>NM Webapp address.</description>
                <name>yarn.nodemanager.webapp.address</name>
                <value>0.0.0.0:23999</value>
        </property>

        <!-- HA configuration =============================================================== -->
        <!-- Resource Manager Configs -->
        <property>
                <name>yarn.resourcemanager.connect.retry-interval.ms</name>
                <value>2000</value>
        </property>
        <property>
                <name>yarn.resourcemanager.ha.enabled</name>
                <value>true</value>
        </property>
        <property>
                <name>yarn.resourcemanager.ha.automatic-failover.enabled</name>
                <value>true</value>
        </property>
        <!-- Enable the embedded automatic failover; in an HA setup it works with the ZKRMStateStore to handle fencing -->
        <property>
                <name>yarn.resourcemanager.ha.automatic-failover.embedded</name>
                <value>true</value>
        </property>
        <!-- Cluster ID, so that HA elections are scoped to this cluster -->
        <property>
                <name>yarn.resourcemanager.cluster-id</name>
                <value>yarn-cluster</value>
        </property>
        <property>
                <name>yarn.resourcemanager.ha.rm-ids</name>
                <value>rm1,rm2</value>
        </property>
        <!-- The active and standby RM nodes each need their own value here if this is set at all (optional):
        <property>
                <name>yarn.resourcemanager.ha.id</name>
                <value>rm2</value>
        </property>
        -->
        <property>
                <name>yarn.resourcemanager.scheduler.class</name>
                <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
        </property>
        <property>
                <name>yarn.resourcemanager.recovery.enabled</name>
                <value>true</value>
        </property>
        <property>
                <name>yarn.app.mapreduce.am.scheduler.connection.wait.interval-ms</name>
                <value>5000</value>
        </property>
        <!-- ZKRMStateStore configuration -->
        <property>
                <name>yarn.resourcemanager.store.class</name>
                <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
        </property>
        <property>
                <name>yarn.resourcemanager.zk-address</name>
                <!--<value>Zookeeper-01:2181,Zookeeper-02:2181,Zookeeper-03:2181</value>-->
                <value>Hadoop-DN-01:2181,Hadoop-DN-02:2181,Hadoop-DN-03:2181</value>
        </property>
        <property>
                <name>yarn.resourcemanager.zk.state-store.address</name>
                <!--<value>Zookeeper-01:2181,Zookeeper-02:2181,Zookeeper-03:2181</value>-->
                <value>Hadoop-DN-01:2181,Hadoop-DN-02:2181,Hadoop-DN-03:2181</value>
        </property>
        <!-- RPC address clients use to reach the RM (applications manager interface) -->
        <property>
                <name>yarn.resourcemanager.address.rm1</name>
                <value>Hadoop-NN-01:23140</value>
        </property>
        <property>
                <name>yarn.resourcemanager.address.rm2</name>
                <value>Hadoop-NN-02:23140</value>
        </property>
        <!-- RPC address ApplicationMasters use to reach the RM (scheduler interface) -->
        <property>
                <name>yarn.resourcemanager.scheduler.address.rm1</name>
                <value>Hadoop-NN-01:23130</value>
        </property>
        <property>
                <name>yarn.resourcemanager.scheduler.address.rm2</name>
                <value>Hadoop-NN-02:23130</value>
        </property>
        <!-- RM admin interface -->
        <property>
                <name>yarn.resourcemanager.admin.address.rm1</name>
                <value>Hadoop-NN-01:23141</value>
        </property>
        <property>
                <name>yarn.resourcemanager.admin.address.rm2</name>
                <value>Hadoop-NN-02:23141</value>
        </property>
        <!-- RPC address NodeManagers use to reach the RM -->
        <property>
                <name>yarn.resourcemanager.resource-tracker.address.rm1</name>
                <value>Hadoop-NN-01:23125</value>
        </property>
        <property>
                <name>yarn.resourcemanager.resource-tracker.address.rm2</name>
                <value>Hadoop-NN-02:23125</value>
        </property>
        <!-- RM web application addresses -->
        <property>
                <name>yarn.resourcemanager.webapp.address.rm1</name>
                <value>Hadoop-NN-01:23188</value>
        </property>
        <property>
                <name>yarn.resourcemanager.webapp.address.rm2</name>
                <value>Hadoop-NN-02:23188</value>
        </property>
        <property>
                <name>yarn.resourcemanager.webapp.https.address.rm1</name>
                <value>Hadoop-NN-01:23189</value>
        </property>
        <property>
                <name>yarn.resourcemanager.webapp.https.address.rm2</name>
                <value>Hadoop-NN-02:23189</value>
        </property>
</configuration>

Additional settings when HBase is used:

<!-- for HBase: start -->
<property>
    <name>mapreduce.application.classpath</name>
    <value>
        /home/hadoopuser/hadoop-2.6.0-cdh5.6.0/etc/hadoop,
        /home/hadoopuser/hadoop-2.6.0-cdh5.6.0/share/hadoop/common/*,
        /home/hadoopuser/hadoop-2.6.0-cdh5.6.0/share/hadoop/common/lib/*,
        /home/hadoopuser/hadoop-2.6.0-cdh5.6.0/share/hadoop/hdfs/*,
        /home/hadoopuser/hadoop-2.6.0-cdh5.6.0/share/hadoop/hdfs/lib/*,
        /home/hadoopuser/hadoop-2.6.0-cdh5.6.0/share/hadoop/mapreduce/*,
        /home/hadoopuser/hadoop-2.6.0-cdh5.6.0/share/hadoop/mapreduce/lib/*,
        /home/hadoopuser/hadoop-2.6.0-cdh5.6.0/share/hadoop/yarn/*,
        /home/hadoopuser/hadoop-2.6.0-cdh5.6.0/share/hadoop/yarn/lib/*,
        /usr/local/hbase/lib/*
    </value>
</property>
<!-- for HBase: end -->

 

(7) Modify $HADOOP_HOME/etc/hadoop/slaves

Hadoop-DN-01
Hadoop-DN-02
Hadoop-DN-03

3. Distribute the program to the other nodes

# Because the SSH login port was changed, scp needs -P 6000
scp -P 6000 -r /home/hadoopuser/hadoop-2.6.0-cdh5.6.0 hadoopuser@Hadoop-NN-02:/home/hadoopuser
scp -P 6000 -r /home/hadoopuser/hadoop-2.6.0-cdh5.6.0 hadoopuser@Hadoop-DN-01:/home/hadoopuser
scp -P 6000 -r /home/hadoopuser/hadoop-2.6.0-cdh5.6.0 hadoopuser@Hadoop-DN-02:/home/hadoopuser
scp -P 6000 -r /home/hadoopuser/hadoop-2.6.0-cdh5.6.0 hadoopuser@Hadoop-DN-03:/home/hadoopuser

4. Start HDFS

(1) Start the JournalNodes

[hadoopuser@Linux01 hadoop-2.6.0-cdh5.6.0]$ hadoop-daemon.sh start journalnode
starting journalnode, logging to /home/hadoopuser/hadoop-2.6.0-cdh5.6.0/logs/hadoop-puppet-journalnode-BigData-03.out

Verify the JournalNode:

[hadoopuser@Linux01 hadoop-2.6.0-cdh5.6.0]$ jps

5652 QuorumPeerMain
9076 Jps
9029 JournalNode

Stop the JournalNode:

[hadoopuser@Linux01 hadoop-2.6.0-cdh5.6.0]$ hadoop-daemon.sh stop journalnode
stopping journalnode

(2) Format the NameNode:

On node Hadoop-NN-01: hdfs namenode -format

[hadoopuser@Linux01 hadoop-2.6.0-cdh5.6.0]$ hdfs namenode -format

(3) Synchronize the NameNode metadata:

Copy the metadata from Hadoop-NN-01 to Hadoop-NN-02.

This mainly covers dfs.namenode.name.dir and dfs.namenode.edits.dir; also make sure the shared edits directory (dfs.namenode.shared.edits.dir) contains all of the NameNode's metadata.

[hadoopuser@Linux01 hadoop-2.6.0-cdh5.6.0]$ scp -P 6000 -r data/ hadoopuser@Hadoop-NN-02:/home/hadoopuser/hadoop-2.6.0-cdh5.6.0
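
Alternatively (a hedged sketch of the standard approach), the standby can pull the metadata itself once its configuration is in place and the JournalNodes are running:

hdfs namenode -bootstrapStandby   # run on Hadoop-NN-02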

(4) Initialize the ZKFC

This creates the znode that records the HA state.

On node Hadoop-NN-01: hdfs zkfc -formatZK

[hadoopuser@Linux01 hadoop-2.6.0-cdh5.6.0]$ hdfs zkfc -formatZK

(5) Start

Cluster start (on Hadoop-NN-01): start-dfs.sh

[hadoopuser@Linux01 hadoop-2.6.0-cdh5.6.0]$ start-dfs.sh

Per-daemon start:

<1> NameNode (Hadoop-NN-01, Hadoop-NN-02): hadoop-daemon.sh start namenode

<2> DataNode (Hadoop-DN-01, Hadoop-DN-02, Hadoop-DN-03): hadoop-daemon.sh start datanode

<3> JournalNode (Hadoop-DN-01, Hadoop-DN-02, Hadoop-DN-03): hadoop-daemon.sh start journalnode

<4> ZKFC (Hadoop-NN-01, Hadoop-NN-02): hadoop-daemon.sh start zkfc

(6) Verify

<1> Processes

On the NameNodes: jps

[hadoopuser@Linux01 hadoop-2.6.0-cdh5.6.0]$ jps

9329 JournalNode
9875 NameNode
10155 DFSZKFailoverController
10223 Jps

On the DataNodes: jps

[hadoopuser@Linux05 hadoop-2.6.0-cdh5.6.0]$ jps

9498 Jps
9019 JournalNode
9389 DataNode
5613 QuorumPeerMain

<2> Web UI:

Active node: http://192.168.254.151:50070
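
The HA state can also be checked from the command line (nn1 and nn2 are the NameNode IDs defined in hdfs-site.xml):

[hadoopuser@Linux01 hadoop-2.6.0-cdh5.6.0]$ hdfs haadmin -getServiceState nn1   # expected: active
[hadoopuser@Linux01 hadoop-2.6.0-cdh5.6.0]$ hdfs haadmin -getServiceState nn2   # expected: standby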

(7) Stop: stop-dfs.sh

[hadoopuser@Linux01 hadoop-2.6.0-cdh5.6.0]$ stop-dfs.sh

5. Start YARN

(1) Start

<1> Cluster start

Start YARN on Hadoop-NN-01 (the scripts live in $HADOOP_HOME/sbin):

[hadoopuser@Linux01 hadoop-2.6.0-cdh5.6.0]$ start-yarn.sh

Start the standby ResourceManager on Hadoop-NN-02:

[hadoopuser@Linux01 hadoop-2.6.0-cdh5.6.0]$ yarn-daemon.sh start resourcemanager

<2> Per-daemon start

ResourceManager (Hadoop-NN-01, Hadoop-NN-02): yarn-daemon.sh start resourcemanager

NodeManager (Hadoop-DN-01, Hadoop-DN-02, Hadoop-DN-03): yarn-daemon.sh start nodemanager

(2) Verify

<1> Processes:

ResourceManager nodes (Hadoop-NN-01, Hadoop-NN-02):

[hadoopuser@Linux01 hadoop-2.6.0-cdh5.6.0]$ jps

9329 JournalNode
9875 NameNode
10355 ResourceManager
10646 Jps
10155 DFSZKFailoverController

NodeManager nodes (Hadoop-DN-01, Hadoop-DN-02, Hadoop-DN-03):

[hadoopuser@Linux05 hadoop-2.6.0-cdh5.6.0]$ jps

9552 NodeManager
9680 Jps
9019 JournalNode
9389 DataNode
5613 QuorumPeerMain

<2> Web UI

ResourceManager (Active): http://192.168.254.151:23188

ResourceManager (Standby): http://192.168.254.152:23188
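
The ResourceManager HA state can likewise be checked from the command line (rm1 and rm2 are the RM IDs defined in yarn-site.xml):

[hadoopuser@Linux01 hadoop-2.6.0-cdh5.6.0]$ yarn rmadmin -getServiceState rm1   # expected: active
[hadoopuser@Linux01 hadoop-2.6.0-cdh5.6.0]$ yarn rmadmin -getServiceState rm2   # expected: standby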

(3) Stop

On Hadoop-NN-01: stop-yarn.sh

[hadoopuser@Linux01 hadoop-2.6.0-cdh5.6.0]$ stop-yarn.sh

On Hadoop-NN-02: yarn-daemon.sh stop resourcemanager

[hadoopuser@Linux01 hadoop-2.6.0-cdh5.6.0]$ yarn-daemon.sh stop resourcemanager

 

Appendix: Summary of common Hadoop commands

# Step 1: start ZooKeeper
[hadoopuser@Linux01 ~]$ zkServer.sh start
[hadoopuser@Linux01 ~]$ zkServer.sh stop  # stop

# Step 2: start the JournalNodes (on the JournalNode hosts, i.e. Hadoop-DN-01/02/03):
[hadoopuser@Linux01 hadoop-2.6.0-cdh5.6.0]$ hadoop-daemon.sh start journalnode
starting journalnode, logging to /home/hadoopuser/hadoop-dir/hadoop-2.6.0-cdh5.6.0/logs/hadoop-puppet-journalnode-BigData-03.out
[hadoopuser@Linux01 hadoop-2.6.0-cdh5.6.0]$ hadoop-daemon.sh stop journalnode  # stop
stopping journalnode

# Step 3: start HDFS:
[hadoopuser@Linux01 hadoop-2.6.0-cdh5.6.0]$ start-dfs.sh
[hadoopuser@Linux01 hadoop-2.6.0-cdh5.6.0]$ stop-dfs.sh  # stop

# Step 4: start YARN:
# start YARN on Hadoop-NN-01
[hadoopuser@Linux01 hadoop-2.6.0-cdh5.6.0]$ start-yarn.sh
[hadoopuser@Linux01 hadoop-2.6.0-cdh5.6.0]$ stop-yarn.sh  # stop
# start the standby ResourceManager on Hadoop-NN-02
[hadoopuser@Linux01 hadoop-2.6.0-cdh5.6.0]$ yarn-daemon.sh start resourcemanager
[hadoopuser@Linux01 hadoop-2.6.0-cdh5.6.0]$ yarn-daemon.sh stop resourcemanager  # stop


# If HBase is installed
# start the HBase Thrift server on Hadoop-NN-01:
[hadoopuser@Linux01 bin]$ hbase-daemon.sh start thrift
[hadoopuser@Linux01 bin]$ hbase-daemon.sh stop thrift    # stop

# start HBase on Hadoop-NN-01:
[hadoopuser@Linux01 bin]$ hbase/bin/start-hbase.sh
[hadoopuser@Linux01 bin]$ hbase/bin/stop-hbase.sh    # stop

# If RHive is installed
# start Rserve on Hadoop-NN-01:
[hadoopuser@Linux01 ~]$ Rserve --RS-conf /usr/local/lib64/R/Rserv.conf    # to stop, just kill the process

# start the Hive remote service on Hadoop-NN-01 (RHive connects to HiveServer over Thrift, so the background Thrift service must be running):
[hadoopuser@Linux01 ~]$ nohup hive --service hiveserver2 &   # note: hiveserver2 here
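
A quick hedged check that HiveServer2 is accepting connections (assuming the default port 10000 and that beeline is on the PATH):

[hadoopuser@Linux01 ~]$ beeline -u jdbc:hive2://localhost:10000   # should connect and show the beeline prompt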

 

Appendix: Common environment variable settings for the Hadoop stack

# JAVA
export JAVA_HOME=/usr/java/jdk1.8.0_73
export PATH=$PATH:$JAVA_HOME/bin
export CLASSPATH=.:$JAVA_HOME/jre/lib/rt.jar:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar

# MYSQL
export PATH=/usr/local/mysql/bin:/usr/local/mysql/lib:$PATH

# Hive
export HIVE_HOME=/home/hadoopuser/hive
export PATH=$PATH:$HIVE_HOME/bin

# Hadoop
export HADOOP_HOME=/home/hadoopuser/hadoop-2.6.0-cdh5.6.0
export HADOOP_CONF_DIR=/home/hadoopuser/hadoop-2.6.0-cdh5.6.0/etc/hadoop
export HADOOP_CMD=/home/hadoopuser/hadoop-2.6.0-cdh5.6.0/bin/hadoop
export HADOOP_STREAMING=/home/hadoopuser/hadoop-2.6.0-cdh5.6.0/share/hadoop/tools/lib/hadoop-streaming-2.6.0-cdh5.6.0.jar
export JAVA_LIBRARY_PATH=/home/hadoopuser/hadoop-2.6.0-cdh5.6.0/lib/native/
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

# R
export R_HOME=/usr/local/lib64/R
export PATH=$PATH:$R_HOME/bin
export RHIVE_DATA=/usr/local/lib64/R/rhive/data
export CLASSPATH=.:/usr/local/lib64/R/library/rJava/jri
export LD_LIBRARY_PATH=/usr/local/lib64/R/library/rJava/jri
export RServe_HOME=/usr/local/lib64/R/library/Rserve

# thrift
export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/local/lib/pkgconfig/

# HBase
export HBASE_HOME=/usr/local/hbase
export PATH=$PATH:$HBASE_HOME/bin

# Zookeeper
export ZOOKEEPER_HOME=/home/hadoopuser/zookeeper-3.4.5-cdh5.6.0
export PATH=$PATH:$ZOOKEEPER_HOME/bin

# Sqoop2
export SQOOP2_HOME=/home/hadoopuser/sqoop2-1.99.5-cdh5.6.0
export CATALINA_BASE=$SQOOP2_HOME/server
export PATH=$PATH:$SQOOP2_HOME/bin

# Scala
export SCALA_HOME=/usr/local/scala
export PATH=$PATH:${SCALA_HOME}/bin

# Spark
export SPARK_HOME=/home/hadoopuser/spark-1.5.0-cdh5.6.0
export PATH=$PATH:${SPARK_HOME}/bin

# Storm
export STORM_HOME=/home/hadoopuser/apache-storm-0.9.6
export PATH=$PATH:$STORM_HOME/bin

#kafka
export KAFKA_HOME=/home/hadoopuser/kafka_2.10-0.9.0.1
export PATH=$PATH:$KAFKA_HOME/bin