Analysis of Hadoop Startup Scripts


Author: 尹正傑

Copyright notice: This is original work. Reproduction without permission is prohibited and will be pursued legally.

 

 

    If you have found this post, you presumably already have a systematic understanding of Hadoop; at the very least you should know the various ways to deploy it. If not, no problem: you can refer to my notes, which cover all the deployment methods, haha~

 

[yinzhengjie@s101 ~]$ cat `which xcall.sh`
#!/bin/bash
#@author :yinzhengjie
#blog:http://www.cnblogs.com/yinzhengjie
#EMAIL:y1053419035@qq.com


# Check whether the user passed any arguments
if [ $# -lt 1 ];then
        echo "Please provide a command to run"
        exit 1
fi

# Capture the command entered by the user
cmd="$@"

for (( i=101;i<=104;i++ ))
do
        # Switch the terminal text color to green
        tput setaf 2
        echo ============= s$i $cmd ============
        # Restore the default terminal color (light gray)
        tput setaf 7
        # Run the command on the remote host
        ssh s$i "$cmd"
        # Check whether the command succeeded
        if [ $? -eq 0 ];then
                echo "Command executed successfully"
        fi
done
[yinzhengjie@s101 ~]$ 
These are the contents of xcall.sh, a helper script I use constantly while testing.
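For instance, running the command below from s101 executes jps on s101 through s104 in turn; this is how daemon status is checked in all the transcripts that follow:

[yinzhengjie@s101 ~]$ xcall.sh jps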

 

 

I. Analysis of the start-all.sh script

[yinzhengjie@s101 ~]$ cat `which start-all.sh`  | grep -v ^# | grep -v ^$
echo "This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh"
bin=`dirname "${BASH_SOURCE-$0}"`
bin=`cd "$bin"; pwd`
DEFAULT_LIBEXEC_DIR="$bin"/../libexec
HADOOP_LIBEXEC_DIR=${HADOOP_LIBEXEC_DIR:-$DEFAULT_LIBEXEC_DIR}
. $HADOOP_LIBEXEC_DIR/hadoop-config.sh
if [ -f "${HADOOP_HDFS_HOME}"/sbin/start-dfs.sh ]; then
  "${HADOOP_HDFS_HOME}"/sbin/start-dfs.sh --config $HADOOP_CONF_DIR
fi
if [ -f "${HADOOP_YARN_HOME}"/sbin/start-yarn.sh ]; then
  "${HADOOP_YARN_HOME}"/sbin/start-yarn.sh --config $HADOOP_CONF_DIR
fi
[yinzhengjie@s101 ~]$ 

  From the first line of this script we can see that it is deprecated: "This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh". In other words, use "start-dfs.sh" and "start-yarn.sh" instead.
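Concretely, on a cluster like this one the deprecated script reduces to the following two calls (a minimal equivalent, assuming the default HADOOP_CONF_DIR):

start-dfs.sh     # start the HDFS daemons: NameNode, DataNode, SecondaryNameNode, etc.
start-yarn.sh    # start the YARN daemons: ResourceManager and NodeManager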

 

II. Analysis of the start-dfs.sh script

[yinzhengjie@s101 ~]$ more `which start-dfs.sh` | grep -v ^# | grep -v ^$
usage="Usage: start-dfs.sh [-upgrade|-rollback] [other options such as -clusterId]"
bin=`dirname "${BASH_SOURCE-$0}"`
bin=`cd "$bin"; pwd`
DEFAULT_LIBEXEC_DIR="$bin"/../libexec
HADOOP_LIBEXEC_DIR=${HADOOP_LIBEXEC_DIR:-$DEFAULT_LIBEXEC_DIR}
. $HADOOP_LIBEXEC_DIR/hdfs-config.sh
if [[ $# -ge 1 ]]; then
  startOpt="$1"
  shift
  case "$startOpt" in
    -upgrade)
      nameStartOpt="$startOpt"
    ;;
    -rollback)
      dataStartOpt="$startOpt"
    ;;
    *)
      echo $usage
      exit 1
    ;;
  esac
fi
nameStartOpt="$nameStartOpt $@"
NAMENODES=$($HADOOP_PREFIX/bin/hdfs getconf -namenodes)
echo "Starting namenodes on [$NAMENODES]"
"$HADOOP_PREFIX/sbin/hadoop-daemons.sh" \
  --config "$HADOOP_CONF_DIR" \
  --hostnames "$NAMENODES" \
  --script "$bin/hdfs" start namenode $nameStartOpt
if [ -n "$HADOOP_SECURE_DN_USER" ]; then
  echo \
    "Attempting to start secure cluster, skipping datanodes. " \
    "Run start-secure-dns.sh as root to complete startup."
else
  "$HADOOP_PREFIX/sbin/hadoop-daemons.sh" \
    --config "$HADOOP_CONF_DIR" \
    --script "$bin/hdfs" start datanode $dataStartOpt
fi
SECONDARY_NAMENODES=$($HADOOP_PREFIX/bin/hdfs getconf -secondarynamenodes 2>/dev/null)
if [ -n "$SECONDARY_NAMENODES" ]; then
  echo "Starting secondary namenodes [$SECONDARY_NAMENODES]"
  "$HADOOP_PREFIX/sbin/hadoop-daemons.sh" \
      --config "$HADOOP_CONF_DIR" \
      --hostnames "$SECONDARY_NAMENODES" \
      --script "$bin/hdfs" start secondarynamenode
fi
SHARED_EDITS_DIR=$($HADOOP_PREFIX/bin/hdfs getconf -confKey dfs.namenode.shared.edits.dir 2>&-)
case "$SHARED_EDITS_DIR" in
qjournal://*)
  JOURNAL_NODES=$(echo "$SHARED_EDITS_DIR" | sed 's,qjournal://\([^/]*\)/.*,\1,g; s/;/ /g; s/:[0-9]*//g')
  echo "Starting journal nodes [$JOURNAL_NODES]"
  "$HADOOP_PREFIX/sbin/hadoop-daemons.sh" \
      --config "$HADOOP_CONF_DIR" \
      --hostnames "$JOURNAL_NODES" \
      --script "$bin/hdfs" start journalnode ;;
esac
AUTOHA_ENABLED=$($HADOOP_PREFIX/bin/hdfs getconf -confKey dfs.ha.automatic-failover.enabled)
if [ "$(echo "$AUTOHA_ENABLED" | tr A-Z a-z)" = "true" ]; then
  echo "Starting ZK Failover Controllers on NN hosts [$NAMENODES]"
  "$HADOOP_PREFIX/sbin/hadoop-daemons.sh" \
    --config "$HADOOP_CONF_DIR" \
    --hostnames "$NAMENODES" \
    --script "$bin/hdfs" start zkfc
fi
[yinzhengjie@s101 ~]$ 

   I have filtered out the comments above. From this script you can see that it starts the HDFS daemons, namely the NameNode, the DataNodes, and the SecondaryNameNode (plus JournalNodes and ZK Failover Controllers when HA is configured).
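Note how start-dfs.sh decides which hosts get which daemons: it queries the configuration through hdfs getconf rather than hard-coding a host list. The same queries can be run by hand:

hdfs getconf -namenodes              # hosts that run a NameNode (s101 on this cluster)
hdfs getconf -secondarynamenodes     # hosts that run a SecondaryNameNode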

1>. Starting the NameNode by itself:

[yinzhengjie@s101 ~]$ hadoop-daemon.sh --hostnames s101 start namenode
starting namenode, logging to /soft/hadoop-2.7.3/logs/hadoop-yinzhengjie-namenode-s101.out
[yinzhengjie@s101 ~]$ 
[yinzhengjie@s101 ~]$ xcall.sh jps
============= s101 jps ============
11531 Jps
11453 NameNode
Command executed successfully
============= s102 jps ============
3657 Jps
Command executed successfully
============= s103 jps ============
3627 Jps
Command executed successfully
============= s104 jps ============
3598 Jps
Command executed successfully
[yinzhengjie@s101 ~]$ 

  That is how you start the NameNode by itself. If you want to start NameNodes in batch, you can use the hadoop-daemons.sh command instead; since my cluster has only a single NameNode, the effect is not visible here.

[yinzhengjie@s101 ~]$ hadoop-daemons.sh --hostnames  ` hdfs getconf -namenodes` start namenode
s101: starting namenode, logging to /soft/hadoop-2.7.3/logs/hadoop-yinzhengjie-namenode-s101.out
[yinzhengjie@s101 ~]$ xcall.sh jps
============= s101 jps ============
13395 Jps
13318 NameNode
Command executed successfully
============= s102 jps ============
3960 Jps
Command executed successfully
============= s103 jps ============
3930 Jps
Command executed successfully
============= s104 jps ============
3899 Jps
Command executed successfully
[yinzhengjie@s101 ~]$ 

2>. Starting the DataNode by itself:

[yinzhengjie@s101 ~]$ hadoop-daemon.sh start datanode
starting datanode, logging to /soft/hadoop-2.7.3/logs/hadoop-yinzhengjie-datanode-s101.out
[yinzhengjie@s101 ~]$ xcall.sh jps
============= s101 jps ============
12119 Jps
12045 DataNode
Command executed successfully
============= s102 jps ============
3779 Jps
Command executed successfully
============= s103 jps ============
3750 Jps
Command executed successfully
============= s104 jps ============
3719 Jps
Command executed successfully
[yinzhengjie@s101 ~]$ 

  That is how you start a DataNode by itself. To start DataNodes in batch, use the hadoop-daemons.sh command; since I have three worker nodes, the effect is quite obvious.

[yinzhengjie@s101 ~]$ xcall.sh jps
============= s101 jps ============
14482 Jps
Command executed successfully
============= s102 jps ============
4267 Jps
Command executed successfully
============= s103 jps ============
4238 Jps
Command executed successfully
============= s104 jps ============
4206 Jps
Command executed successfully
[yinzhengjie@s101 ~]$ hadoop-daemons.sh start datanode
s102: starting datanode, logging to /soft/hadoop-2.7.3/logs/hadoop-yinzhengjie-datanode-s102.out
s104: starting datanode, logging to /soft/hadoop-2.7.3/logs/hadoop-yinzhengjie-datanode-s104.out
s103: starting datanode, logging to /soft/hadoop-2.7.3/logs/hadoop-yinzhengjie-datanode-s103.out
[yinzhengjie@s101 ~]$ xcall.sh jps
============= s101 jps ============
14552 Jps
Command executed successfully
============= s102 jps ============
4386 Jps
4316 DataNode
Command executed successfully
============= s103 jps ============
4357 Jps
4287 DataNode
Command executed successfully
============= s104 jps ============
4325 Jps
4255 DataNode
Command executed successfully
[yinzhengjie@s101 ~]$ 

3>. Starting the SecondaryNameNode by itself:

[yinzhengjie@s101 ~]$ hadoop-daemon.sh --hostnames s101 start secondarynamenode
starting secondarynamenode, logging to /soft/hadoop-2.7.3/logs/hadoop-yinzhengjie-secondarynamenode-s101.out
[yinzhengjie@s101 ~]$ xcall.sh jps
============= s101 jps ============
15127 SecondaryNameNode
15179 Jps
Command executed successfully
============= s102 jps ============
4541 Jps
Command executed successfully
============= s103 jps ============
4513 Jps
Command executed successfully
============= s104 jps ============
4480 Jps
Command executed successfully
[yinzhengjie@s101 ~]$ 

  That is how you start the SecondaryNameNode by itself. To start one on each worker node in batch, you can again use the hadoop-daemons.sh command; with three nodes the effect is quite obvious.

[yinzhengjie@s101 ~]$ xcall.sh jps
============= s101 jps ============
17273 Jps
Command executed successfully
============= s102 jps ============
4993 Jps
Command executed successfully
============= s103 jps ============
4965 Jps
Command executed successfully
============= s104 jps ============
4929 Jps
Command executed successfully
[yinzhengjie@s101 ~]$ 
[yinzhengjie@s101 ~]$ 
[yinzhengjie@s101 ~]$ for i in `cat /soft/hadoop/etc/hadoop/slaves | grep -v ^#` ;do  hadoop-daemons.sh --hostnames $i start secondarynamenode ;done
s102: starting secondarynamenode, logging to /soft/hadoop-2.7.3/logs/hadoop-yinzhengjie-secondarynamenode-s102.out
s103: starting secondarynamenode, logging to /soft/hadoop-2.7.3/logs/hadoop-yinzhengjie-secondarynamenode-s103.out
s104: starting secondarynamenode, logging to /soft/hadoop-2.7.3/logs/hadoop-yinzhengjie-secondarynamenode-s104.out
[yinzhengjie@s101 ~]$ xcall.sh jps
============= s101 jps ============
17394 Jps
Command executed successfully
============= s102 jps ============
5089 Jps
5042 SecondaryNameNode
Command executed successfully
============= s103 jps ============
5061 Jps
5014 SecondaryNameNode
Command executed successfully
============= s104 jps ============
5026 Jps
4979 SecondaryNameNode
Command executed successfully
[yinzhengjie@s101 ~]$ 

 

III. Analysis of the start-yarn.sh script

[yinzhengjie@s101 ~]$ cat /soft/hadoop/sbin/start-yarn.sh | grep -v ^# | grep -v ^$
echo "starting yarn daemons"
bin=`dirname "${BASH_SOURCE-$0}"`
bin=`cd "$bin"; pwd`
DEFAULT_LIBEXEC_DIR="$bin"/../libexec
HADOOP_LIBEXEC_DIR=${HADOOP_LIBEXEC_DIR:-$DEFAULT_LIBEXEC_DIR}
. $HADOOP_LIBEXEC_DIR/yarn-config.sh
"$bin"/yarn-daemon.sh --config $YARN_CONF_DIR  start resourcemanager
"$bin"/yarn-daemons.sh --config $YARN_CONF_DIR  start nodemanager
[yinzhengjie@s101 ~]$ 

  Usage is much the same as above. Starting a single daemon looks like this:

[yinzhengjie@s101 ~]$ xcall.sh jps
============= s101 jps ============
18290 Jps
Command executed successfully
============= s102 jps ============
5314 Jps
Command executed successfully
============= s103 jps ============
5288 Jps
Command executed successfully
============= s104 jps ============
5249 Jps
Command executed successfully
[yinzhengjie@s101 ~]$ 
[yinzhengjie@s101 ~]$ 
[yinzhengjie@s101 ~]$ yarn-daemon.sh start  nodemanager
starting nodemanager, logging to /soft/hadoop-2.7.3/logs/yarn-yinzhengjie-nodemanager-s101.out
[yinzhengjie@s101 ~]$ xcall.sh jps
============= s101 jps ============
18344 NodeManager
18474 Jps
Command executed successfully
============= s102 jps ============
5337 Jps
Command executed successfully
============= s103 jps ============
5311 Jps
Command executed successfully
============= s104 jps ============
5273 Jps
Command executed successfully
[yinzhengjie@s101 ~]$ 
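The transcripts here only exercise the NodeManager. As the script dump shows, start-yarn.sh launches the ResourceManager through yarn-daemon.sh in the same way, so starting it by hand on s101 would look like this:

[yinzhengjie@s101 ~]$ yarn-daemon.sh start resourcemanager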

  And if you want to start NodeManagers in batch, it works like this:

[yinzhengjie@s101 ~]$ xcall.sh jps
============= s101 jps ============
18570 Jps
Command executed successfully
============= s102 jps ============
5383 Jps
Command executed successfully
============= s103 jps ============
5357 Jps
Command executed successfully
============= s104 jps ============
5319 Jps
Command executed successfully
[yinzhengjie@s101 ~]$ yarn-daemons.sh start  nodemanager
s102: starting nodemanager, logging to /soft/hadoop-2.7.3/logs/yarn-yinzhengjie-nodemanager-s102.out
s104: starting nodemanager, logging to /soft/hadoop-2.7.3/logs/yarn-yinzhengjie-nodemanager-s104.out
s103: starting nodemanager, logging to /soft/hadoop-2.7.3/logs/yarn-yinzhengjie-nodemanager-s103.out
[yinzhengjie@s101 ~]$ xcall.sh jps
============= s101 jps ============
18645 Jps
Command executed successfully
============= s102 jps ============
5562 Jps
5436 NodeManager
Command executed successfully
============= s103 jps ============
5536 Jps
5410 NodeManager
Command executed successfully
============= s104 jps ============
5498 Jps
5372 NodeManager
Command executed successfully
[yinzhengjie@s101 ~]$ 

 

IV. Analysis of the stop-all.sh script

[yinzhengjie@s101 ~]$ cat `which stop-all.sh` | grep -v ^#  | grep -v ^$
echo "This script is Deprecated. Instead use stop-dfs.sh and stop-yarn.sh"
bin=`dirname "${BASH_SOURCE-$0}"`
bin=`cd "$bin"; pwd`
DEFAULT_LIBEXEC_DIR="$bin"/../libexec
HADOOP_LIBEXEC_DIR=${HADOOP_LIBEXEC_DIR:-$DEFAULT_LIBEXEC_DIR}
. $HADOOP_LIBEXEC_DIR/hadoop-config.sh
if [ -f "${HADOOP_HDFS_HOME}"/sbin/stop-dfs.sh ]; then
  "${HADOOP_HDFS_HOME}"/sbin/stop-dfs.sh --config $HADOOP_CONF_DIR
fi
if [ -f "${HADOOP_HDFS_HOME}"/sbin/stop-yarn.sh ]; then
  "${HADOOP_HDFS_HOME}"/sbin/stop-yarn.sh --config $HADOOP_CONF_DIR
fi
[yinzhengjie@s101 ~]$ 

  Seeing the first line, echo "This script is Deprecated. Instead use stop-dfs.sh and stop-yarn.sh", you can probably guess what is going on: this is the mirror image of start-all.sh, with every start swapped for stop, and it too is deprecated in favor of "stop-dfs.sh" and "stop-yarn.sh".
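Likewise, every per-daemon start shown earlier has a matching stop subcommand, for example:

hadoop-daemon.sh stop namenode        # stop the local NameNode
hadoop-daemons.sh stop datanode       # stop the DataNodes on all slaves
yarn-daemon.sh stop nodemanager       # stop the local NodeManager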

 

V. Summary

  Putting all of the above together, we arrive at the following four identities:

1>. start-all.sh = start-dfs.sh + start-yarn.sh

2>. stop-all.sh = stop-dfs.sh + stop-yarn.sh

3>. hadoop-daemons.sh = hadoop-daemon.sh + slaves

4>. yarn-daemons.sh = yarn-daemon.sh + slaves (see the sketch below)
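The last two identities are quite literal. In Hadoop 2.7.x, each *-daemons.sh wrapper is essentially a one-line exec of slaves.sh, which ssh-es to every host listed in the slaves file and runs the single-host *-daemon.sh script there. The core lines (lightly abridged from the shipped scripts) look like this:

# hadoop-daemons.sh: fan hadoop-daemon.sh out over the hosts in the slaves file
exec "$bin/slaves.sh" --config $HADOOP_CONF_DIR cd "$HADOOP_PREFIX" \; "$bin/hadoop-daemon.sh" --config $HADOOP_CONF_DIR "$@"

# yarn-daemons.sh: fan yarn-daemon.sh out the same way
exec "$bin/slaves.sh" --config $YARN_CONF_DIR cd "$HADOOP_YARN_HOME" \; "$bin/yarn-daemon.sh" --config $YARN_CONF_DIR "$@"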
