Suppose we have only 3 Linux virtual machines, with hostnames hadoop01, hadoop02, and hadoop03. The Hadoop cluster is deployed across these 3 machines as follows:
hadoop01: 1 namenode, 1 datanode, 1 journalnode, 1 zkfc, 1 resourcemanager, 1 nodemanager;
hadoop02: 1 namenode, 1 datanode, 1 journalnode, 1 zkfc, 1 resourcemanager, 1 nodemanager;
hadoop03: 1 datanode, 1 journalnode, 1 nodemanager;
Below we walk through the commands for starting and stopping HDFS and YARN.
1. Start the HDFS cluster (using Hadoop's batch start script)
/root/apps/hadoop/sbin/start-dfs.sh
[root@hadoop01 ~]# /root/apps/hadoop/sbin/start-dfs.sh
Starting namenodes on [hadoop01 hadoop02]
hadoop01: starting namenode, logging to /root/apps/hadoop/logs/hadoop-root-namenode-hadoop01.out
hadoop02: starting namenode, logging to /root/apps/hadoop/logs/hadoop-root-namenode-hadoop02.out
hadoop03: starting datanode, logging to /root/apps/hadoop/logs/hadoop-root-datanode-hadoop03.out
hadoop02: starting datanode, logging to /root/apps/hadoop/logs/hadoop-root-datanode-hadoop02.out
hadoop01: starting datanode, logging to /root/apps/hadoop/logs/hadoop-root-datanode-hadoop01.out
Starting journal nodes [hadoop01 hadoop02 hadoop03]
hadoop03: starting journalnode, logging to /root/apps/hadoop/logs/hadoop-root-journalnode-hadoop03.out
hadoop02: starting journalnode, logging to /root/apps/hadoop/logs/hadoop-root-journalnode-hadoop02.out
hadoop01: starting journalnode, logging to /root/apps/hadoop/logs/hadoop-root-journalnode-hadoop01.out
Starting ZK Failover Controllers on NN hosts [hadoop01 hadoop02]
hadoop01: starting zkfc, logging to /root/apps/hadoop/logs/hadoop-root-zkfc-hadoop01.out
hadoop02: starting zkfc, logging to /root/apps/hadoop/logs/hadoop-root-zkfc-hadoop02.out
[root@hadoop01 ~]#
As the startup log shows, the start-dfs.sh script uses SSH to batch-start the namenode, datanode, journalnode, and zkfc processes across the nodes.
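Conceptually, the batch script composes one remote command per (host, daemon) pair and runs it over SSH. The sketch below is not the real start-dfs.sh (which reads the workers/slaves file and hadoop-env.sh); it only prints the commands it would run, so the logic can be inspected safely:

```shell
#!/bin/sh
# Minimal sketch of a batch start: build an ssh command for each
# (host, daemon) pair. Commands are printed, not executed.
build_cmd() {
  # $1=host  $2=action (start|stop)  $3=daemon name
  echo "ssh $1 /root/apps/hadoop/sbin/hadoop-daemon.sh $2 $3"
}

# Namenodes live on hadoop01/hadoop02; datanodes on all three nodes.
for h in hadoop01 hadoop02; do
  build_cmd "$h" start namenode
done
for h in hadoop01 hadoop02 hadoop03; do
  build_cmd "$h" start datanode
done
```

Piping each printed line to sh would perform the actual remote start, provided passwordless SSH is set up between the nodes (which the real script also requires).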
2. Stop the HDFS cluster (using Hadoop's batch stop script)
/root/apps/hadoop/sbin/stop-dfs.sh
[root@hadoop01 ~]# /root/apps/hadoop/sbin/stop-dfs.sh
Stopping namenodes on [hadoop01 hadoop02]
hadoop02: stopping namenode
hadoop01: stopping namenode
hadoop02: stopping datanode
hadoop03: stopping datanode
hadoop01: stopping datanode
Stopping journal nodes [hadoop01 hadoop02 hadoop03]
hadoop03: stopping journalnode
hadoop02: stopping journalnode
hadoop01: stopping journalnode
Stopping ZK Failover Controllers on NN hosts [hadoop01 hadoop02]
hadoop01: stopping zkfc
hadoop02: stopping zkfc
[root@hadoop01 ~]#
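After stopping, it is worth confirming that nothing was left behind: on each node, jps should show only ZooKeeper's QuorumPeerMain (and Jps itself). A small grep filter over the jps output makes the check scriptable; the daemon names are the standard HDFS ones seen in the listings below:

```shell
#!/bin/sh
# Filter a jps listing down to HDFS daemon lines; empty output means
# stop-dfs.sh took every HDFS process down on this node.
hdfs_daemons() {
  grep -E 'NameNode|DataNode|JournalNode|DFSZKFailoverController'
}

# Usage on a live node:   jps | hdfs_daemons
# Demo with a captured listing (before the stop), so the filter matches:
printf '6695 DataNode\n2002 QuorumPeerMain\n6580 NameNode\n' | hdfs_daemons
```

Note that grep exits nonzero when nothing matches, which is exactly the "all stopped" case, so invert the exit status if you use this in a health-check script.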
3. Start a single process
[root@hadoop01 ~]# /root/apps/hadoop/sbin/hadoop-daemon.sh start namenode
starting namenode, logging to /root/apps/hadoop/logs/hadoop-root-namenode-hadoop01.out
[root@hadoop02 ~]# /root/apps/hadoop/sbin/hadoop-daemon.sh start namenode
starting namenode, logging to /root/apps/hadoop/logs/hadoop-root-namenode-hadoop02.out
[root@hadoop01 ~]# /root/apps/hadoop/sbin/hadoop-daemon.sh start datanode
starting datanode, logging to /root/apps/hadoop/logs/hadoop-root-datanode-hadoop01.out
[root@hadoop02 ~]# /root/apps/hadoop/sbin/hadoop-daemon.sh start datanode
starting datanode, logging to /root/apps/hadoop/logs/hadoop-root-datanode-hadoop02.out
[root@hadoop03 apps]# /root/apps/hadoop/sbin/hadoop-daemon.sh start datanode
starting datanode, logging to /root/apps/hadoop/logs/hadoop-root-datanode-hadoop03.out
[root@hadoop01 ~]# /root/apps/hadoop/sbin/hadoop-daemon.sh start journalnode
starting journalnode, logging to /root/apps/hadoop/logs/hadoop-root-journalnode-hadoop01.out
[root@hadoop02 ~]# /root/apps/hadoop/sbin/hadoop-daemon.sh start journalnode
starting journalnode, logging to /root/apps/hadoop/logs/hadoop-root-journalnode-hadoop02.out
[root@hadoop03 apps]# /root/apps/hadoop/sbin/hadoop-daemon.sh start journalnode
starting journalnode, logging to /root/apps/hadoop/logs/hadoop-root-journalnode-hadoop03.out
[root@hadoop01 ~]# /root/apps/hadoop/sbin/hadoop-daemon.sh start zkfc
starting zkfc, logging to /root/apps/hadoop/logs/hadoop-root-zkfc-hadoop01.out
[root@hadoop02 ~]# /root/apps/hadoop/sbin/hadoop-daemon.sh start zkfc
starting zkfc, logging to /root/apps/hadoop/logs/hadoop-root-zkfc-hadoop02.out
Check the processes on each of the 3 virtual machines after startup:
[root@hadoop01 ~]# jps
6695 DataNode
2002 QuorumPeerMain
6879 DFSZKFailoverController
7035 Jps
6800 JournalNode
6580 NameNode
[root@hadoop01 ~]#
[root@hadoop02 ~]# jps
6360 JournalNode
6436 DFSZKFailoverController
2130 QuorumPeerMain
6541 Jps
6255 DataNode
6155 NameNode
[root@hadoop02 ~]#
[root@hadoop03 apps]# jps
5331 Jps
5103 DataNode
5204 JournalNode
2258 QuorumPeerMain
[root@hadoop03 apps]#
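The one-by-one starts above follow a fixed order on each node (journalnode and namenode before the rest, zkfc last). A small wrapper can replay that order locally; this is a sketch assuming the install path used throughout this article, and DRY_RUN=1 prints the commands instead of executing them:

```shell
#!/bin/sh
# Start HDFS daemons on the local node in dependency order.
# Set DRY_RUN=1 to print the commands instead of executing them.
SBIN=/root/apps/hadoop/sbin   # install path assumed from this article

start_daemons() {
  for d in "$@"; do
    if [ "${DRY_RUN:-0}" = "1" ]; then
      echo "$SBIN/hadoop-daemon.sh start $d"
    else
      "$SBIN/hadoop-daemon.sh" start "$d"
    fi
  done
}

# hadoop01/hadoop02 run the full set; hadoop03 would pass only
# "journalnode datanode".
DRY_RUN=1 start_daemons journalnode namenode datanode zkfc
```

Run it once with DRY_RUN=1 to review the sequence, then without it on each node.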
4. Stop a single process
[root@hadoop01 ~]# jps
6695 DataNode
2002 QuorumPeerMain
8486 Jps
6879 DFSZKFailoverController
6800 JournalNode
6580 NameNode
[root@hadoop01 ~]# /root/apps/hadoop/sbin/hadoop-daemon.sh stop zkfc
stopping zkfc
[root@hadoop01 ~]# /root/apps/hadoop/sbin/hadoop-daemon.sh stop journalnode
stopping journalnode
[root@hadoop01 ~]# /root/apps/hadoop/sbin/hadoop-daemon.sh stop datanode
stopping datanode
[root@hadoop01 ~]# /root/apps/hadoop/sbin/hadoop-daemon.sh stop namenode
stopping namenode
[root@hadoop01 ~]# jps
2002 QuorumPeerMain
8572 Jps
[root@hadoop01 ~]#
[root@hadoop02 ~]# jps
6360 JournalNode
6436 DFSZKFailoverController
2130 QuorumPeerMain
7378 Jps
6255 DataNode
6155 NameNode
[root@hadoop02 ~]# /root/apps/hadoop/sbin/hadoop-daemon.sh stop zkfc
stopping zkfc
[root@hadoop02 ~]# /root/apps/hadoop/sbin/hadoop-daemon.sh stop journalnode
stopping journalnode
[root@hadoop02 ~]# /root/apps/hadoop/sbin/hadoop-daemon.sh stop datanode
stopping datanode
[root@hadoop02 ~]# /root/apps/hadoop/sbin/hadoop-daemon.sh stop namenode
stopping namenode
[root@hadoop02 ~]# jps
7455 Jps
2130 QuorumPeerMain
[root@hadoop02 ~]#
[root@hadoop03 apps]# jps
5103 DataNode
5204 JournalNode
5774 Jps
2258 QuorumPeerMain
[root@hadoop03 apps]# /root/apps/hadoop/sbin/hadoop-daemon.sh stop journalnode
stopping journalnode
[root@hadoop03 apps]# /root/apps/hadoop/sbin/hadoop-daemon.sh stop datanode
stopping datanode
[root@hadoop03 apps]# jps
5818 Jps
2258 QuorumPeerMain
[root@hadoop03 apps]#
5. Start the YARN cluster (using Hadoop's batch start script)
/root/apps/hadoop/sbin/start-yarn.sh
[root@hadoop01 ~]# /root/apps/hadoop/sbin/start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /root/apps/hadoop/logs/yarn-root-resourcemanager-hadoop01.out
hadoop03: starting nodemanager, logging to /root/apps/hadoop/logs/yarn-root-nodemanager-hadoop03.out
hadoop02: starting nodemanager, logging to /root/apps/hadoop/logs/yarn-root-nodemanager-hadoop02.out
hadoop01: starting nodemanager, logging to /root/apps/hadoop/logs/yarn-root-nodemanager-hadoop01.out
[root@hadoop01 ~]#
As the startup log shows, start-yarn.sh starts only one ResourceManager process, on the local machine, while the nodemanagers on all 3 machines are started via SSH. The ResourceManager on hadoop02 therefore has to be started by hand.
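Once both ResourceManagers are up, YARN can tell you which one is active. The snippet below assumes RM HA is configured with the logical ids rm1 and rm2 (yarn.resourcemanager.ha.rm-ids in yarn-site.xml); adjust the ids to your own configuration:

```shell
#!/bin/sh
# Query the HA state of each ResourceManager via "yarn rmadmin".
# Prints "unknown" when the query fails (RM down, or the id is not
# defined in yarn-site.xml).
rm_state() {
  state=$(yarn rmadmin -getServiceState "$1" 2>/dev/null) || state=""
  echo "$1: ${state:-unknown}"
}

rm_state rm1   # typically "active"
rm_state rm2   # typically "standby"
```

The first RM to register with ZooKeeper wins the election, so whichever one was started first usually reports "active".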
6. Start the ResourceManager process on hadoop02
/root/apps/hadoop/sbin/yarn-daemon.sh start resourcemanager
7. Stop YARN
/root/apps/hadoop/sbin/stop-yarn.sh
[root@hadoop01 ~]# /root/apps/hadoop/sbin/stop-yarn.sh
stopping yarn daemons
stopping resourcemanager
hadoop01: stopping nodemanager
hadoop03: stopping nodemanager
hadoop02: stopping nodemanager
no proxyserver to stop
[root@hadoop01 ~]#
As the stop log shows, the stop-yarn.sh script stops only the local ResourceManager process, so the resourcemanager on hadoop02 has to be stopped separately.
8. Stop the ResourceManager on hadoop02
/root/apps/hadoop/sbin/yarn-daemon.sh stop resourcemanager
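Putting the last two steps together, a complete YARN shutdown is just two commands. The sketch below (paths as used throughout this article) prints the sequence so it can be reviewed before piping it to sh:

```shell
#!/bin/sh
# Print the full YARN shutdown sequence: the batch script stops the local
# RM and every nodemanager; the second RM on hadoop02 needs an ssh hop.
SBIN=/root/apps/hadoop/sbin

yarn_stop_cmds() {
  echo "$SBIN/stop-yarn.sh"
  echo "ssh hadoop02 $SBIN/yarn-daemon.sh stop resourcemanager"
}

yarn_stop_cmds          # review the commands first
# yarn_stop_cmds | sh   # then actually run them
```

Keeping the shutdown as printable commands makes it easy to audit (and to extend if more ResourceManagers are ever added).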