As covered in the previous section, running hadoop namenode -format actually executes
/root/hadoop-2.7.0-bin/bin/hdfs namenode -format
Now let's walk through this script.
---
bin=`which $0`                      # full path of the invoked script
bin=`dirname ${bin}`                # drop the file name, keep its directory
bin=`cd "$bin" > /dev/null; pwd`    # canonicalize into an absolute path
Printing the variable gives:
bin=/root/hadoop-2.7.0-bin/bin
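Why the extra cd ... pwd step? If the script is started through a relative path, dirname alone would leave a relative directory; the subshell canonicalizes it into an absolute one. A minimal sketch of the intermediate values, assuming a relative invocation and the usual behavior of which with a slash-containing argument (the command below is hypothetical):

$ cd /root/hadoop-2.7.0-bin
$ ./bin/hdfs version
# inside the script:
#   $0                                = ./bin/hdfs
#   bin=`which $0`                    # -> ./bin/hdfs
#   bin=`dirname ${bin}`              # -> ./bin
#   bin=`cd "$bin" > /dev/null; pwd`  # -> /root/hadoop-2.7.0-bin/bin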
---
DEFAULT_LIBEXEC_DIR="$bin"/../libexec
Printing the variable gives (note the unresolved .., which is harmless):
DEFAULT_LIBEXEC_DIR=/root/hadoop-2.7.0-bin/bin/../libexec
---
cygwin=false
case "$(uname)" in
CYGWIN*) cygwin=true;;
esac
This branch is not taken on Linux, so we can skip it.
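On a Linux box uname prints Linux, so the CYGWIN* pattern never matches and cygwin stays false:

$ uname
Linux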
---
Next, another script is pulled in:
HADOOP_LIBEXEC_DIR=${HADOOP_LIBEXEC_DIR:-$DEFAULT_LIBEXEC_DIR}
. $HADOOP_LIBEXEC_DIR/hdfs-config.sh
What actually gets sourced is
/root/hadoop-2.7.0-bin/libexec/hdfs-config.sh
This script in turn simply calls yet another script. Which one? I'll leave that for you to explore :)
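Two shell idioms are worth noting here. ${HADOOP_LIBEXEC_DIR:-$DEFAULT_LIBEXEC_DIR} expands to the variable's value if it is set and non-empty, otherwise to the default; and the leading . (dot) sources the file in the current shell, so every variable hdfs-config.sh sets stays visible afterwards. A quick illustration of the fallback expansion (variable name and paths hypothetical):

$ unset LIBDIR
$ echo ${LIBDIR:-/fallback/libexec}
/fallback/libexec
$ LIBDIR=/custom/libexec
$ echo ${LIBDIR:-/fallback/libexec}
/custom/libexec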
---
Back to the hdfs script.
function print_usage(){
  echo "Usage: hdfs [--config confdir] [--loglevel loglevel] COMMAND"
  echo "       where COMMAND is one of:"
  echo "  dfs                  run a filesystem command on the file systems supported in Hadoop."
  echo "  classpath            prints the classpath"
  echo "  namenode -format     format the DFS filesystem"
  echo "  secondarynamenode    run the DFS secondary namenode"
  echo "  namenode             run the DFS namenode"
  echo "  journalnode          run the DFS journalnode"
  echo "  zkfc                 run the ZK Failover Controller daemon"
  echo "  datanode             run a DFS datanode"
  echo "  dfsadmin             run a DFS admin client"
  echo "  haadmin              run a DFS HA admin client"
  echo "  fsck                 run a DFS filesystem checking utility"
  echo "  balancer             run a cluster balancing utility"
  echo "  jmxget               get JMX exported values from NameNode or DataNode."
  echo "  mover                run a utility to move block replicas across"
  echo "                       storage types"
  echo "  oiv                  apply the offline fsimage viewer to an fsimage"
  echo "  oiv_legacy           apply the offline fsimage viewer to an legacy fsimage"
  echo "  oev                  apply the offline edits viewer to an edits file"
  echo "  fetchdt              fetch a delegation token from the NameNode"
  echo "  getconf              get config values from configuration"
  echo "  groups               get the groups which users belong to"
  echo "  snapshotDiff         diff two snapshots of a directory or diff the"
  echo "                       current directory contents with a snapshot"
  echo "  lsSnapshottableDir   list all snapshottable dirs owned by the current user"
  echo "                       Use -help to see options"
  echo "  portmap              run a portmap service"
  echo "  nfs3                 run an NFS version 3 gateway"
  echo "  cacheadmin           configure the HDFS cache"
  echo "  crypto               configure HDFS encryption zones"
  echo "  storagepolicies      list/get/set block storage policies"
  echo "  version              print the version"
  echo ""
  echo "Most commands print help when invoked w/o parameters."
  # There are also debug commands, but they don't show up in this listing.
}

if [ $# = 0 ]; then
  print_usage
  exit
fi
This part is simple: just a usage function, followed by a check that prints it and exits when no arguments are given.
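Since $# is the number of positional arguments, invoking the script with none of them simply prints the usage text and exits:

$ /root/hadoop-2.7.0-bin/bin/hdfs
Usage: hdfs [--config confdir] [--loglevel loglevel] COMMAND
       where COMMAND is one of:
...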
---
Now comes the crucial part: dispatching the command to run.
if [ "$COMMAND" = "namenode" ] ; then CLASS='org.apache.hadoop.hdfs.server.namenode.NameNode' HADOOP_OPTS="$HADOOP_OPTS $HADOOP_NAMENODE_OPTS"
At this point, printing HADOOP_OPTS gives:
HADOOP_OPTS= -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/root/hadoop-2.7.0-bin/logs -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/root/hadoop-2.7.0-bin -Dhadoop.id.str=root -Dhadoop.root.logger=INFO,console -Djava.library.path=/root/hadoop-2.7.0-bin/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhadoop.security.logger=INFO,RFAS -Dhdfs.audit.logger=INFO,NullAppender
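Where do the trailing -Dhadoop.security.logger=INFO,RFAS and -Dhdfs.audit.logger=INFO,NullAppender come from? They are carried in by $HADOOP_NAMENODE_OPTS, which the stock etc/hadoop/hadoop-env.sh sets along these lines (a sketch based on the 2.7 defaults; check your own hadoop-env.sh):

export HADOOP_NAMENODE_OPTS="-Dhadoop.security.logger=${HADOOP_SECURITY_LOGGER:-INFO,RFAS} -Dhdfs.audit.logger=${HDFS_AUDIT_LOGGER:-INFO,NullAppender} $HADOOP_NAMENODE_OPTS"

This is also the natural place to hang your own NameNode JVM flags, such as a larger heap.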
---
The remaining chunk is Cygwin-specific, so we ignore it.
---
export CLASSPATH=$CLASSPATH
HADOOP_OPTS="$HADOOP_OPTS -Dhadoop.security.logger=${HADOOP_SECURITY_LOGGER:-INFO,NullAppender}"
Plain assignment statements; not much to say.
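One subtlety, though: HADOOP_OPTS now contains -Dhadoop.security.logger twice, INFO,RFAS from hadoop-env.sh and INFO,NullAppender appended here. With HotSpot, the last occurrence of a duplicated -D flag wins, which you can check yourself (-XshowSettings is a HotSpot-specific flag):

$ java -Dhadoop.security.logger=INFO,RFAS \
       -Dhadoop.security.logger=INFO,NullAppender \
       -XshowSettings:properties -version 2>&1 | grep hadoop.security.logger
    hadoop.security.logger = INFO,NullAppender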
---
The if-else statement that follows ends up taking its final branch:
else
  # run it
  exec "$JAVA" -Dproc_$COMMAND $JAVA_HEAP_MAX $HADOOP_OPTS $CLASS "$@"
fi
Now the true picture is about to emerge. Print the statement being exec'ed:
/usr/java/jdk1.8.0_45/bin/java -Dproc_namenode -Xmx1000m -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/root/hadoop-2.7.0-bin/logs -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/root/hadoop-2.7.0-bin -Dhadoop.id.str=root -Dhadoop.root.logger=INFO,console -Djava.library.path=/root/hadoop-2.7.0-bin/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhadoop.security.logger=INFO,RFAS -Dhdfs.audit.logger=INFO,NullAppender -Dhadoop.security.logger=INFO,NullAppender org.apache.hadoop.hdfs.server.namenode.NameNode -format
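A couple of details: -Dproc_namenode serves as a marker that makes the daemon type easy to spot in ps output, and -Xmx1000m is the value of $JAVA_HEAP_MAX. If you want to capture this command line yourself without editing the script, shell tracing is one way; note that this really runs the command, so try it first with something harmless like version rather than namenode -format:

$ bash -x /root/hadoop-2.7.0-bin/bin/hdfs version 2>&1 | grep '+ exec'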
Nice. The mystery is finally unraveled: hdfs namenode -format boils down to one java invocation of org.apache.hadoop.hdfs.server.namenode.NameNode with the -format argument.
In the next section, we'll start analyzing the NameNode source code.