Hadoop Cluster - A Summary of Common Big-Data Operations Commands for HDFS Clusters
Author: Yin Zhengjie
Copyright notice: original work, reproduction prohibited! Violators will be held legally liable.
This post briefly touches on operations topics such as rolling the edit log, merging fsimage files, and directory space quotas. Without further ado, here are the commands, collected for easy future reference.
I. Viewing the hdfs help information
[hdfs@node101.yinzhengjie.org.cn ~]$ hdfs Usage: hdfs [--config confdir] COMMAND where COMMAND is one of: dfs run a filesystem command on the file systems supported in Hadoop. namenode -format format the DFS filesystem secondarynamenode run the DFS secondary namenode namenode run the DFS namenode journalnode run the DFS journalnode zkfc run the ZK Failover Controller daemon datanode run a DFS datanode dfsadmin run a DFS admin client diskbalancer Distributes data evenly among disks on a given node haadmin run a DFS HA admin client fsck run a DFS filesystem checking utility balancer run a cluster balancing utility jmxget get JMX exported values from NameNode or DataNode. mover run a utility to move block replicas across storage types oiv apply the offline fsimage viewer to an fsimage oiv_legacy apply the offline fsimage viewer to an legacy fsimage oev apply the offline edits viewer to an edits file fetchdt fetch a delegation token from the NameNode getconf get config values from configuration groups get the groups which users belong to snapshotDiff diff two snapshots of a directory or diff the current directory contents with a snapshot lsSnapshottableDir list all snapshottable dirs owned by the current user Use -help to see options portmap run a portmap service nfs3 run an NFS version 3 gateway cacheadmin configure the HDFS cache crypto configure HDFS encryption zones storagepolicies list/get/set block storage policies version print the version Most commands print help when invoked w/o parameters. [hdfs@node101.yinzhengjie.org.cn ~]$
As shown above, hdfs has many subcommands. For newcomers I recommend starting with dfs, which runs filesystem commands against the HDFS filesystem; these commands look almost identical to the Linux commands we already know. Let's take a look at how to use them.
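As a quick sanity check (a minimal sketch; the path is just an example), the two entry points below should produce identical listings, since hdfs dfs delegates to the same filesystem shell as hadoop fs:

hdfs dfs -ls /        #run the filesystem shell via the hdfs launcher
hadoop fs -ls /       #the same listing via the hadoop launcher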
II. Examples of using hdfs with dfs
In fact, running hdfs with the dfs subcommand invokes the hadoop fs command under the hood. If you don't believe it, see for yourself in the help output below:
[hdfs@node101.yinzhengjie.org.cn ~]$ hdfs dfs Usage: hadoop fs [generic options] [-appendToFile <localsrc> ... <dst>] [-cat [-ignoreCrc] <src> ...] [-checksum <src> ...] [-chgrp [-R] GROUP PATH...] [-chmod [-R] <MODE[,MODE]... | OCTALMODE> PATH...] [-chown [-R] [OWNER][:[GROUP]] PATH...] [-copyFromLocal [-f] [-p] [-l] <localsrc> ... <dst>] [-copyToLocal [-p] [-ignoreCrc] [-crc] <src> ... <localdst>] [-count [-q] [-h] [-v] [-x] <path> ...] [-cp [-f] [-p | -p[topax]] <src> ... <dst>] [-createSnapshot <snapshotDir> [<snapshotName>]] [-deleteSnapshot <snapshotDir> <snapshotName>] [-df [-h] [<path> ...]] [-du [-s] [-h] [-x] <path> ...] [-expunge] [-find <path> ... <expression> ...] [-get [-p] [-ignoreCrc] [-crc] <src> ... <localdst>] [-getfacl [-R] <path>] [-getfattr [-R] {-n name | -d} [-e en] <path>] [-getmerge [-nl] <src> <localdst>] [-help [cmd ...]] [-ls [-C] [-d] [-h] [-q] [-R] [-t] [-S] [-r] [-u] [<path> ...]] [-mkdir [-p] <path> ...] [-moveFromLocal <localsrc> ... <dst>] [-moveToLocal <src> <localdst>] [-mv <src> ... <dst>] [-put [-f] [-p] [-l] <localsrc> ... <dst>] [-renameSnapshot <snapshotDir> <oldName> <newName>] [-rm [-f] [-r|-R] [-skipTrash] <src> ...] [-rmdir [--ignore-fail-on-non-empty] <dir> ...] [-setfacl [-R] [{-b|-k} {-m|-x <acl_spec>} <path>]|[--set <acl_spec> <path>]] [-setfattr {-n name [-v value] | -x name} <path>] [-setrep [-R] [-w] <rep> <path> ...] [-stat [format] <path> ...] [-tail [-f] <file>] [-test -[defsz] <path>] [-text [-ignoreCrc] <src> ...] [-touchz <path> ...] [-usage [cmd ...]] Generic options supported are -conf <configuration file> specify an application configuration file -D <property=value> use value for given property -fs <local|namenode:port> specify a namenode -jt <local|resourcemanager:port> specify a ResourceManager -files <comma separated list of files> specify comma separated files to be copied to the map reduce cluster -libjars <comma separated list of jars> specify comma separated jar files to include in the classpath. -archives <comma separated list of archives> specify comma separated archives to be unarchived on the compute machines. The general command line syntax is bin/hadoop command [genericOptions] [commandOptions] [hdfs@node101.yinzhengjie.org.cn ~]$
1>. Viewing help for an hdfs dfs subcommand
[hdfs@node101.yinzhengjie.org.cn ~]$ hdfs dfs -help ls -ls [-C] [-d] [-h] [-q] [-R] [-t] [-S] [-r] [-u] [<path> ...] : List the contents that match the specified file pattern. If path is not specified, the contents of /user/<currentUser> will be listed. For a directory a list of its direct children is returned (unless -d option is specified). Directory entries are of the form: permissions - userId groupId sizeOfDirectory(in bytes) modificationDate(yyyy-MM-dd HH:mm) directoryName and file entries are of the form: permissions numberOfReplicas userId groupId sizeOfFile(in bytes) modificationDate(yyyy-MM-dd HH:mm) fileName -C Display the paths of files and directories only. -d Directories are listed as plain files. -h Formats the sizes of files in a human-readable fashion rather than a number of bytes. -q Print ? instead of non-printable characters. -R Recursively list the contents of directories. -t Sort files by modification time (most recent first). -S Sort files by size. -r Reverse the order of the sort. -u Use time of last access instead of modification for display and sorting. [hdfs@node101.yinzhengjie.org.cn ~]$
2>. Listing files that already exist in the HDFS filesystem
[hdfs@node101.yinzhengjie.org.cn ~]$ hdfs dfs -ls /
Found 5 items
drwxr-xr-x   - hbase hbase               0 2019-05-25 00:03 /hbase
drwxr-xr-x   - hdfs  supergroup          0 2019-05-22 19:17 /jobtracker
drwxr-xr-x   - hdfs  supergroup          0 2019-05-28 12:11 /system
drwxrwxrwt   - hdfs  supergroup          0 2019-05-23 13:37 /tmp
drwxrwxrwx   - hdfs  supergroup          0 2019-05-23 13:47 /user
[hdfs@node101.yinzhengjie.org.cn ~]$
3>. Creating a file in the HDFS filesystem
[hdfs@node101.yinzhengjie.org.cn ~]$ hdfs dfs -ls /user/yinzhengjie/data/
Found 1 items
drwxr-xr-x   - root supergroup          0 2019-05-22 19:46 /user/yinzhengjie/data/day001
[hdfs@node101.yinzhengjie.org.cn ~]$
[hdfs@node101.yinzhengjie.org.cn ~]$ hdfs dfs -touchz /user/yinzhengjie/data/1.txt
[hdfs@node101.yinzhengjie.org.cn ~]$
[hdfs@node101.yinzhengjie.org.cn ~]$ hdfs dfs -ls /user/yinzhengjie/data/
Found 2 items
-rw-r--r--   3 hdfs supergroup          0 2019-05-28 15:16 /user/yinzhengjie/data/1.txt
drwxr-xr-x   - root supergroup          0 2019-05-22 19:46 /user/yinzhengjie/data/day001
[hdfs@node101.yinzhengjie.org.cn ~]$
4>. Uploading a file to the root directory (while the upload is in progress, a temporary file with a "._COPYING_" suffix is created)
[yinzhengjie@s101 ~]$ hdfs dfs -ls /
Found 3 items
-rw-r--r--   3 yinzhengjie supergroup          0 2018-05-25 21:56 /1.txt
-rw-r--r--   3 yinzhengjie supergroup        517 2018-05-25 21:17 /xcall.sh
-rw-r--r--   3 yinzhengjie supergroup        700 2018-05-25 21:17 /xrsync.sh
[yinzhengjie@s101 ~]$ hdfs dfs -put hadoop-2.7.3.tar.gz /
[yinzhengjie@s101 ~]$ hdfs dfs -ls /
Found 4 items
-rw-r--r--   3 yinzhengjie supergroup          0 2018-05-25 21:56 /1.txt
-rw-r--r--   3 yinzhengjie supergroup  214092195 2018-05-25 21:59 /hadoop-2.7.3.tar.gz
-rw-r--r--   3 yinzhengjie supergroup        517 2018-05-25 21:17 /xcall.sh
-rw-r--r--   3 yinzhengjie supergroup        700 2018-05-25 21:17 /xrsync.sh
[yinzhengjie@s101 ~]$
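To see the temporary file for yourself, start a large upload in one terminal and list the target directory from a second terminal before the put finishes (a sketch; any sufficiently large local file will do, and the timing has to be quick enough to catch the in-flight copy):

# terminal 1: start a large upload
hdfs dfs -put hadoop-2.7.3.tar.gz /
# terminal 2: list the target while the put is still running
hdfs dfs -ls /        #should show /hadoop-2.7.3.tar.gz._COPYING_ until the upload completes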
5>. Downloading a file from the HDFS filesystem
[yinzhengjie@s101 ~]$ ll
total 0
drwxrwxr-x. 4 yinzhengjie yinzhengjie 35 May 25 19:08 hadoop
drwxrwxr-x. 2 yinzhengjie yinzhengjie 96 May 25 22:05 shell
[yinzhengjie@s101 ~]$ hdfs dfs -ls /
Found 4 items
-rw-r--r--   3 yinzhengjie supergroup          0 2018-05-25 21:56 /1.txt
-rw-r--r--   3 yinzhengjie supergroup  214092195 2018-05-25 21:59 /hadoop-2.7.3.tar.gz
-rw-r--r--   3 yinzhengjie supergroup        517 2018-05-25 21:17 /xcall.sh
-rw-r--r--   3 yinzhengjie supergroup        700 2018-05-25 21:17 /xrsync.sh
[yinzhengjie@s101 ~]$
[yinzhengjie@s101 ~]$ hdfs dfs -get /1.txt
[yinzhengjie@s101 ~]$ ll
total 0
-rw-r--r--. 1 yinzhengjie yinzhengjie  0 May 25 22:06 1.txt
drwxrwxr-x. 4 yinzhengjie yinzhengjie 35 May 25 19:08 hadoop
drwxrwxr-x. 2 yinzhengjie yinzhengjie 96 May 25 22:05 shell
[yinzhengjie@s101 ~]$
6>. Deleting a file in the HDFS filesystem
[yinzhengjie@s101 ~]$ hdfs dfs -ls /
Found 4 items
-rw-r--r--   3 yinzhengjie supergroup          0 2018-05-25 21:56 /1.txt
-rw-r--r--   3 yinzhengjie supergroup  214092195 2018-05-25 21:59 /hadoop-2.7.3.tar.gz
-rw-r--r--   3 yinzhengjie supergroup        517 2018-05-25 21:17 /xcall.sh
-rw-r--r--   3 yinzhengjie supergroup        700 2018-05-25 21:17 /xrsync.sh
[yinzhengjie@s101 ~]$ hdfs dfs -rm /1.txt
18/05/25 22:08:07 INFO fs.TrashPolicyDefault: Namenode trash configuration: Deletion interval = 0 minutes, Emptier interval = 0 minutes.
Deleted /1.txt
[yinzhengjie@s101 ~]$ hdfs dfs -ls /
Found 3 items
-rw-r--r--   3 yinzhengjie supergroup  214092195 2018-05-25 21:59 /hadoop-2.7.3.tar.gz
-rw-r--r--   3 yinzhengjie supergroup        517 2018-05-25 21:17 /xcall.sh
-rw-r--r--   3 yinzhengjie supergroup        700 2018-05-25 21:17 /xrsync.sh
[yinzhengjie@s101 ~]$
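Note the trash message in the output above (here the deletion interval is 0, so trash is effectively disabled). On clusters where fs.trash.interval is greater than 0, -rm only moves the file into the user's trash directory; per the -rm usage shown earlier, you can bypass the trash and delete immediately:

hdfs dfs -rm -skipTrash /1.txt        #delete right away instead of moving the file to trash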
7>. Viewing file contents in the HDFS filesystem
[yinzhengjie@s101 ~]$ hdfs dfs -ls /
Found 3 items
-rw-r--r--   3 yinzhengjie supergroup  214092195 2018-05-25 21:59 /hadoop-2.7.3.tar.gz
-rw-r--r--   3 yinzhengjie supergroup        517 2018-05-25 21:17 /xcall.sh
-rw-r--r--   3 yinzhengjie supergroup        700 2018-05-25 21:17 /xrsync.sh
[yinzhengjie@s101 ~]$ hdfs dfs -cat /xrsync.sh
#!/bin/bash
#@author :yinzhengjie
#blog:http://www.cnblogs.com/yinzhengjie
#EMAIL:y1053419035@qq.com

#Check whether the user passed any arguments
if [ $# -lt 1 ];then
        echo "Please pass in arguments";
        exit
fi

#Get the file path
file=$@

#Get the file name
filename=`basename $file`

#Get the parent directory
dirpath=`dirname $file`

#Get the full path
cd $dirpath
fullpath=`pwd -P`

#Sync the file to the DataNodes
for (( i=102;i<=104;i++ ))
do
    #Turn the terminal green
    tput setaf 2
    echo =========== s$i %file ===========
    #Restore the original terminal color (grayish white)
    tput setaf 7
    #Run the command remotely
    rsync -lr $filename `whoami`@s$i:$fullpath
    #Check whether the command succeeded
    if [ $? == 0 ];then
        echo "Command executed successfully"
    fi
done
[yinzhengjie@s101 ~]$
8>. Creating a directory in the HDFS filesystem
[yinzhengjie@s101 ~]$ hdfs dfs -ls /
Found 3 items
-rw-r--r--   3 yinzhengjie supergroup  214092195 2018-05-25 21:59 /hadoop-2.7.3.tar.gz
-rw-r--r--   3 yinzhengjie supergroup        517 2018-05-25 21:17 /xcall.sh
-rw-r--r--   3 yinzhengjie supergroup        700 2018-05-25 21:17 /xrsync.sh
[yinzhengjie@s101 ~]$
[yinzhengjie@s101 ~]$ hdfs dfs -mkdir /shell
[yinzhengjie@s101 ~]$ hdfs dfs -ls /
Found 4 items
-rw-r--r--   3 yinzhengjie supergroup  214092195 2018-05-25 21:59 /hadoop-2.7.3.tar.gz
drwxr-xr-x   - yinzhengjie supergroup          0 2018-05-25 22:12 /shell
-rw-r--r--   3 yinzhengjie supergroup        517 2018-05-25 21:17 /xcall.sh
-rw-r--r--   3 yinzhengjie supergroup        700 2018-05-25 21:17 /xrsync.sh
[yinzhengjie@s101 ~]$
9>. Renaming a file in the HDFS filesystem (you can also use this to move a file into a directory)
[yinzhengjie@s101 ~]$ hdfs dfs -ls /
Found 4 items
-rw-r--r--   3 yinzhengjie supergroup  214092195 2018-05-25 21:59 /hadoop-2.7.3.tar.gz
drwxr-xr-x   - yinzhengjie supergroup          0 2018-05-25 22:12 /shell
-rw-r--r--   3 yinzhengjie supergroup        517 2018-05-25 21:17 /xcall.sh
-rw-r--r--   3 yinzhengjie supergroup        700 2018-05-25 21:17 /xrsync.sh
[yinzhengjie@s101 ~]$
[yinzhengjie@s101 ~]$ hdfs dfs -mv /xcall.sh /call.sh
[yinzhengjie@s101 ~]$ hdfs dfs -ls /
Found 4 items
-rw-r--r--   3 yinzhengjie supergroup        517 2018-05-25 21:17 /call.sh
-rw-r--r--   3 yinzhengjie supergroup  214092195 2018-05-25 21:59 /hadoop-2.7.3.tar.gz
drwxr-xr-x   - yinzhengjie supergroup          0 2018-05-25 22:12 /shell
-rw-r--r--   3 yinzhengjie supergroup        700 2018-05-25 21:17 /xrsync.sh
[yinzhengjie@s101 ~]$
[yinzhengjie@s101 ~]$ hdfs dfs -ls /
Found 4 items
-rw-r--r--   3 yinzhengjie supergroup        517 2018-05-25 21:17 /call.sh
-rw-r--r--   3 yinzhengjie supergroup  214092195 2018-05-25 21:59 /hadoop-2.7.3.tar.gz
drwxr-xr-x   - yinzhengjie supergroup          0 2018-05-25 22:12 /shell
-rw-r--r--   3 yinzhengjie supergroup        700 2018-05-25 21:17 /xrsync.sh
[yinzhengjie@s101 ~]$ hdfs dfs -mv /call.sh /shell
[yinzhengjie@s101 ~]$ hdfs dfs -ls /
Found 3 items
-rw-r--r--   3 yinzhengjie supergroup  214092195 2018-05-25 21:59 /hadoop-2.7.3.tar.gz
drwxr-xr-x   - yinzhengjie supergroup          0 2018-05-25 22:19 /shell
-rw-r--r--   3 yinzhengjie supergroup        700 2018-05-25 21:17 /xrsync.sh
[yinzhengjie@s101 ~]$ hdfs dfs -ls /shell
Found 1 items
-rw-r--r--   3 yinzhengjie supergroup        517 2018-05-25 21:17 /shell/call.sh
[yinzhengjie@s101 ~]$
10>. Copying a file into a directory in the HDFS filesystem
[yinzhengjie@s101 ~]$ hdfs dfs -ls /shell
Found 1 items
-rw-r--r--   3 yinzhengjie supergroup        517 2018-05-25 21:17 /shell/call.sh
[yinzhengjie@s101 ~]$ hdfs dfs -ls /
Found 3 items
-rw-r--r--   3 yinzhengjie supergroup  214092195 2018-05-25 21:59 /hadoop-2.7.3.tar.gz
drwxr-xr-x   - yinzhengjie supergroup          0 2018-05-25 22:19 /shell
-rw-r--r--   3 yinzhengjie supergroup        700 2018-05-25 21:17 /xrsync.sh
[yinzhengjie@s101 ~]$ hdfs dfs -cp /xrsync.sh /shell
[yinzhengjie@s101 ~]$ hdfs dfs -ls /shell
Found 2 items
-rw-r--r--   3 yinzhengjie supergroup        517 2018-05-25 21:17 /shell/call.sh
-rw-r--r--   3 yinzhengjie supergroup        700 2018-05-25 22:21 /shell/xrsync.sh
[yinzhengjie@s101 ~]$
11>. Recursively deleting a directory
[yinzhengjie@s101 ~]$ hdfs dfs -ls /
Found 3 items
-rw-r--r--   3 yinzhengjie supergroup  214092195 2018-05-25 21:59 /hadoop-2.7.3.tar.gz
drwxr-xr-x   - yinzhengjie supergroup          0 2018-05-25 22:21 /shell
-rw-r--r--   3 yinzhengjie supergroup        700 2018-05-25 21:17 /xrsync.sh
[yinzhengjie@s101 ~]$ hdfs dfs -rmr /shell
rmr: DEPRECATED: Please use 'rm -r' instead.
18/05/25 22:22:31 INFO fs.TrashPolicyDefault: Namenode trash configuration: Deletion interval = 0 minutes, Emptier interval = 0 minutes.
Deleted /shell
[yinzhengjie@s101 ~]$ hdfs dfs -ls /
Found 2 items
-rw-r--r--   3 yinzhengjie supergroup  214092195 2018-05-25 21:59 /hadoop-2.7.3.tar.gz
-rw-r--r--   3 yinzhengjie supergroup        700 2018-05-25 21:17 /xrsync.sh
[yinzhengjie@s101 ~]$
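As the output itself warns, -rmr is deprecated. The equivalent modern form, using the -rm usage shown earlier, is:

hdfs dfs -rm -r /shell        #preferred replacement for the deprecated -rmr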
12>. Listing local files with a file:// URI (without a scheme, paths default to the HDFS filesystem)
[yinzhengjie@s101 ~]$ hdfs dfs -ls file:///home/yinzhengjie/
Found 9 items
-rw-------   1 yinzhengjie yinzhengjie        940 2018-05-25 19:17 file:///home/yinzhengjie/.bash_history
-rw-r--r--   1 yinzhengjie yinzhengjie         18 2015-11-19 21:02 file:///home/yinzhengjie/.bash_logout
-rw-r--r--   1 yinzhengjie yinzhengjie        193 2015-11-19 21:02 file:///home/yinzhengjie/.bash_profile
-rw-r--r--   1 yinzhengjie yinzhengjie        231 2015-11-19 21:02 file:///home/yinzhengjie/.bashrc
drwxrwxr-x   - yinzhengjie yinzhengjie         39 2018-05-25 09:14 file:///home/yinzhengjie/.oracle_jre_usage
drwx------   - yinzhengjie yinzhengjie         76 2018-05-25 19:20 file:///home/yinzhengjie/.ssh
-rw-r--r--   1 yinzhengjie yinzhengjie          0 2018-05-25 22:06 file:///home/yinzhengjie/1.txt
drwxrwxr-x   - yinzhengjie yinzhengjie         35 2018-05-25 19:08 file:///home/yinzhengjie/hadoop
drwxrwxr-x   - yinzhengjie yinzhengjie         96 2018-05-25 22:05 file:///home/yinzhengjie/shell
[yinzhengjie@s101 ~]$
[yinzhengjie@s101 ~]$ hdfs dfs -ls hdfs:/
Found 2 items
-rw-r--r--   3 yinzhengjie supergroup  214092195 2018-05-25 21:59 hdfs:///hadoop-2.7.3.tar.gz
-rw-r--r--   3 yinzhengjie supergroup        700 2018-05-25 21:17 hdfs:///xrsync.sh
[yinzhengjie@s101 ~]$
13>. Appending local file content to a file in the HDFS filesystem
[yinzhengjie@s101 ~]$ ll
total 390280
drwxrwxr-x. 3 yinzhengjie yinzhengjie        16 May 27 00:01 hadoop
drwxr-xr-x. 9 yinzhengjie yinzhengjie      4096 Aug 17  2016 hadoop-2.7.3
-rw-rw-r--. 1 yinzhengjie yinzhengjie 214092195 Aug 26  2016 hadoop-2.7.3.tar.gz
-rw-rw-r--. 1 yinzhengjie yinzhengjie 185540433 May 17  2017 jdk-8u131-linux-x64.tar.gz
-rwxrwxr-x. 1 yinzhengjie yinzhengjie       615 May 26 23:24 xcall.sh
-rwxrwxr-x. 1 yinzhengjie yinzhengjie       742 May 26 23:29 xrsync.sh
[yinzhengjie@s101 ~]$ hdfs dfs -ls /
Found 2 items
-rw-r--r--   3 yinzhengjie supergroup  214092195 2018-05-27 00:16 /hadoop-2.7.3.tar.gz
-rw-r--r--   3 yinzhengjie supergroup        615 2018-05-27 00:15 /xcall.sh
[yinzhengjie@s101 ~]$
[yinzhengjie@s101 ~]$ hdfs dfs -appendToFile xrsync.sh /xcall.sh
[yinzhengjie@s101 ~]$ hdfs dfs -ls /
Found 2 items
-rw-r--r--   3 yinzhengjie supergroup  214092195 2018-05-27 00:16 /hadoop-2.7.3.tar.gz
-rw-r--r--   3 yinzhengjie supergroup       1357 2018-05-27 01:28 /xcall.sh
[yinzhengjie@s101 ~]$
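Per the filesystem-shell documentation, the local source for -appendToFile can also be a dash, which reads from standard input; that makes appending ad-hoc text easy (a sketch):

echo "one more line" | hdfs dfs -appendToFile - /xcall.sh        #append stdin to an HDFS file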
14>. Formatting the NameNode (the run below starts hdfs namenode against a storage directory that was never formatted, so it fails to start; see the note after the log)
[root@yinzhengjie ~]# hdfs namenode 18/05/27 17:23:56 INFO namenode.NameNode: STARTUP_MSG: /************************************************************ STARTUP_MSG: Starting NameNode STARTUP_MSG: host = yinzhengjie/211.98.71.195 STARTUP_MSG: args = [] STARTUP_MSG: version = 2.7.3 STARTUP_MSG: classpath = /soft/hadoop-2.7.3/etc/hadoop:/soft/hadoop-2.7.3/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/stax-api-1.0-2.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/activation-1.1.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jackson-core-asl-1.9.13.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jackson-mapper-asl-1.9.13.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jackson-jaxrs-1.9.13.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jackson-xc-1.9.13.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jersey-server-1.9.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/asm-3.2.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/log4j-1.2.17.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jets3t-0.9.0.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/httpclient-4.2.5.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/httpcore-4.2.5.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/java-xmlbuilder-0.4.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-lang-2.6.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-configuration-1.6.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-digester-1.8.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/slf4j-api-1.7.10.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/avro-1.7.4.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/paranamer-2.3.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-compress-1.4.1.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/xz-1.0.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/gson-2.2.4.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/hadoop-auth-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/apacheds-i18n-2.0.0-M15.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/api-asn1-api-1.0.0-M20.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/api-util-1.0.0-M20.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/zookeeper-3.4.6.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/netty-3.6.2.Final.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/curator-framework-2.7.1.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/curator-client-2.7.1.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jsch-0.1.42.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/curator-recipes-2.7.1.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/htrace-core-3.1.0-incubating.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/junit-4.11.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/hamcrest-core-1.3.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/mockito-all-1.8.5.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/hadoop-annotations-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/guava-11.0.2.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jsr305-3.0.0.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-cli-1.2.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-math3-3.1.1.jar:/sof
t/hadoop-2.7.3/share/hadoop/common/lib/xmlenc-0.52.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-httpclient-3.1.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-logging-1.1.3.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-codec-1.4.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-io-2.4.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-net-3.1.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-collections-3.2.2.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/servlet-api-2.5.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jetty-6.1.26.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jetty-util-6.1.26.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jsp-api-2.1.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jersey-core-1.9.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jersey-json-1.9.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jettison-1.1.jar:/soft/hadoop-2.7.3/share/hadoop/common/hadoop-common-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/common/hadoop-common-2.7.3-tests.jar:/soft/hadoop-2.7.3/share/hadoop/common/hadoop-nfs-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/guava-11.0.2.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/jsr305-3.0.0.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-io-2.4.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/jackson-core-asl-1.9.13.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/jackson-mapper-asl-1.9.13.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/asm-3.2.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/htrace-core-3.1.0-incubating.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/netty-all-4.0.23.Final.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/xercesImpl-2.9.1.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/xml-apis-1.3.04.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/leveldbjni-all-1.8.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/hadoop-hdfs-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/hadoop-hdfs-2.7.3-tests.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/hadoop-hdfs-nfs-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/zookeeper-3.4.6-tests.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/commons-lang-2.6.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/guava-11.0.2.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jsr305-3.0.0.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/commons-cli-1.2.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/log4j-1.2.17.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/activation-1.1.jar:/soft/
hadoop-2.7.3/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/xz-1.0.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/servlet-api-2.5.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/commons-codec-1.4.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jetty-util-6.1.26.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-core-1.9.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-client-1.9.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jackson-core-asl-1.9.13.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jackson-mapper-asl-1.9.13.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jackson-jaxrs-1.9.13.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jackson-xc-1.9.13.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/guice-3.0.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/javax.inject-1.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/aopalliance-1.0.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/commons-io-2.4.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-server-1.9.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/asm-3.2.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-json-1.9.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jettison-1.1.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jaxb-impl-2.2.3-1.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/zookeeper-3.4.6.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/netty-3.6.2.Final.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/leveldbjni-all-1.8.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/commons-collections-3.2.2.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jetty-6.1.26.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-api-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-common-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-common-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-tests-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-client-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-sharedcachemanager-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-registry-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/jackson-core-asl-1.9.13.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.9.13.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/xz-1.0.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/hadoop-annotations-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/commons-io-2.4.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/asm-3.2.jar:/sof
t/hadoop-2.7.3/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/leveldbjni-all-1.8.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/guice-3.0.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/javax.inject-1.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/junit-4.11.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/hamcrest-core-1.3.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.7.3-tests.jar:/contrib/capacity-scheduler/*.jar STARTUP_MSG: build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r baa91f7c6bc9cb92be5982de4719c1c8af91ccff; compiled by 'root' on 2016-08-18T01:41Z STARTUP_MSG: java = 1.8.0_131 ************************************************************/ 18/05/27 17:23:56 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT] 18/05/27 17:23:56 INFO namenode.NameNode: createNameNode [] 18/05/27 17:23:56 INFO impl.MetricsConfig: loaded properties from hadoop-metrics2.properties 18/05/27 17:23:57 INFO impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s). 18/05/27 17:23:57 INFO impl.MetricsSystemImpl: NameNode metrics system started 18/05/27 17:23:57 INFO namenode.NameNode: fs.defaultFS is hdfs://localhost/ 18/05/27 17:23:57 INFO hdfs.DFSUtil: Starting Web-server for hdfs at: http://0.0.0.0:50070 18/05/27 17:23:57 INFO mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog 18/05/27 17:23:57 INFO server.AuthenticationFilter: Unable to initialize FileSignerSecretProvider, falling back to use random secrets. 
18/05/27 17:23:57 INFO http.HttpRequestLog: Http request log for http.requests.namenode is not defined 18/05/27 17:23:57 INFO http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter) 18/05/27 17:23:57 INFO http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs 18/05/27 17:23:57 INFO http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 18/05/27 17:23:57 INFO http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 18/05/27 17:23:57 INFO http.HttpServer2: Added filter 'org.apache.hadoop.hdfs.web.AuthFilter' (class=org.apache.hadoop.hdfs.web.AuthFilter) 18/05/27 17:23:57 INFO http.HttpServer2: addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/* 18/05/27 17:23:57 INFO http.HttpServer2: Jetty bound to port 50070 18/05/27 17:23:57 INFO mortbay.log: jetty-6.1.26 18/05/27 17:23:58 INFO mortbay.log: Started HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:50070 18/05/27 17:23:58 WARN namenode.FSNamesystem: Only one image storage directory (dfs.namenode.name.dir) configured. Beware of data loss due to lack of redundant storage directories! 18/05/27 17:23:58 WARN namenode.FSNamesystem: Only one namespace edits storage directory (dfs.namenode.edits.dir) configured. Beware of data loss due to lack of redundant storage directories! 18/05/27 17:23:58 INFO namenode.FSNamesystem: No KeyProvider found. 18/05/27 17:23:58 INFO namenode.FSNamesystem: fsLock is fair:true 18/05/27 17:23:58 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000 18/05/27 17:23:58 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true 18/05/27 17:23:58 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000 18/05/27 17:23:58 INFO blockmanagement.BlockManager: The block deletion will start around 2018 May 27 17:23:58 18/05/27 17:23:58 INFO util.GSet: Computing capacity for map BlocksMap 18/05/27 17:23:58 INFO util.GSet: VM type = 64-bit 18/05/27 17:23:58 INFO util.GSet: 2.0% max memory 889 MB = 17.8 MB 18/05/27 17:23:58 INFO util.GSet: capacity = 2^21 = 2097152 entries 18/05/27 17:23:58 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false 18/05/27 17:23:58 INFO blockmanagement.BlockManager: defaultReplication = 1 18/05/27 17:23:58 INFO blockmanagement.BlockManager: maxReplication = 512 18/05/27 17:23:58 INFO blockmanagement.BlockManager: minReplication = 1 18/05/27 17:23:58 INFO blockmanagement.BlockManager: maxReplicationStreams = 2 18/05/27 17:23:58 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000 18/05/27 17:23:58 INFO blockmanagement.BlockManager: encryptDataTransfer = false 18/05/27 17:23:58 INFO blockmanagement.BlockManager: maxNumBlocksToLog = 1000 18/05/27 17:23:58 INFO namenode.FSNamesystem: fsOwner = root (auth:SIMPLE) 18/05/27 17:23:58 INFO namenode.FSNamesystem: supergroup = supergroup 18/05/27 17:23:58 INFO namenode.FSNamesystem: isPermissionEnabled = true 18/05/27 17:23:58 INFO namenode.FSNamesystem: HA Enabled: false 18/05/27 17:23:58 INFO namenode.FSNamesystem: Append Enabled: true 18/05/27 17:23:58 INFO util.GSet: Computing capacity for map INodeMap 18/05/27 17:23:58 
INFO util.GSet: VM type = 64-bit 18/05/27 17:23:58 INFO util.GSet: 1.0% max memory 889 MB = 8.9 MB 18/05/27 17:23:58 INFO util.GSet: capacity = 2^20 = 1048576 entries 18/05/27 17:23:58 INFO namenode.FSDirectory: ACLs enabled? false 18/05/27 17:23:58 INFO namenode.FSDirectory: XAttrs enabled? true 18/05/27 17:23:58 INFO namenode.FSDirectory: Maximum size of an xattr: 16384 18/05/27 17:23:58 INFO namenode.NameNode: Caching file names occuring more than 10 times 18/05/27 17:23:58 INFO util.GSet: Computing capacity for map cachedBlocks 18/05/27 17:23:58 INFO util.GSet: VM type = 64-bit 18/05/27 17:23:58 INFO util.GSet: 0.25% max memory 889 MB = 2.2 MB 18/05/27 17:23:58 INFO util.GSet: capacity = 2^18 = 262144 entries 18/05/27 17:23:58 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033 18/05/27 17:23:58 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0 18/05/27 17:23:58 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000 18/05/27 17:23:58 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10 18/05/27 17:23:58 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10 18/05/27 17:23:58 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25 18/05/27 17:23:58 INFO namenode.FSNamesystem: Retry cache on namenode is enabled 18/05/27 17:23:58 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis 18/05/27 17:23:58 INFO util.GSet: Computing capacity for map NameNodeRetryCache 18/05/27 17:23:58 INFO util.GSet: VM type = 64-bit 18/05/27 17:23:58 INFO util.GSet: 0.029999999329447746% max memory 889 MB = 273.1 KB 18/05/27 17:23:58 INFO util.GSet: capacity = 2^15 = 32768 entries 18/05/27 17:23:58 WARN common.Storage: Storage directory /tmp/hadoop-root/dfs/name does not exist 18/05/27 17:23:58 WARN namenode.FSNamesystem: Encountered exception loading fsimage org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /tmp/hadoop-root/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible. at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:327) at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:215) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:975) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:681) at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:585) at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:645) at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:812) at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:796) at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1493) at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1559) 18/05/27 17:23:58 INFO mortbay.log: Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:50070 18/05/27 17:23:58 INFO impl.MetricsSystemImpl: Stopping NameNode metrics system... 18/05/27 17:23:58 INFO impl.MetricsSystemImpl: NameNode metrics system stopped. 18/05/27 17:23:58 INFO impl.MetricsSystemImpl: NameNode metrics system shutdown complete. 18/05/27 17:23:58 ERROR namenode.NameNode: Failed to start namenode. 
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /tmp/hadoop-root/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible. at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:327) at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:215) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:975) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:681) at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:585) at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:645) at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:812) at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:796) at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1493) at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1559) 18/05/27 17:23:58 INFO util.ExitUtil: Exiting with status 1 18/05/27 17:23:58 INFO namenode.NameNode: SHUTDOWN_MSG: /************************************************************ SHUTDOWN_MSG: Shutting down NameNode at yinzhengjie/211.98.71.195 ************************************************************/ [root@yinzhengjie ~]#
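The startup above fails because the storage directory /tmp/hadoop-root/dfs/name was never initialized. The actual format step, listed in the hdfs usage at the top of this post, is the one-liner below (a cautionary sketch: formatting wipes any existing namespace metadata, so only run it when bootstrapping a new cluster):

hdfs namenode -format        #initialize dfs.namenode.name.dir; destroys any existing HDFS namespace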
15>. Creating a snapshot (for more details on snapshots, see: https://www.cnblogs.com/yinzhengjie/p/9099529.html)
[root@yinzhengjie ~]# hdfs dfs -ls -R /
drwxr-xr-x   - root supergroup          0 2018-05-27 20:37 /data
drwxr-xr-x   - root supergroup          0 2018-05-27 18:24 /data/etc
-rw-r--r--   1 root supergroup          6 2018-05-27 18:25 /data/index.html
-rw-r--r--   1 root supergroup         12 2018-05-27 20:28 /data/name.txt
-rw-r--r--   1 root supergroup         11 2018-05-27 18:27 /data/yinzhengjie.sql
[root@yinzhengjie ~]#
[root@yinzhengjie ~]# echo "hello" > 1.txt
[root@yinzhengjie ~]#
[root@yinzhengjie ~]# echo "world" > 2.txt
[root@yinzhengjie ~]#
[root@yinzhengjie ~]# hdfs dfs -put 1.txt /data
[root@yinzhengjie ~]# hdfs dfs -put 2.txt /data/etc
[root@yinzhengjie ~]#
[root@yinzhengjie ~]# hdfs dfs -ls -R /
drwxr-xr-x   - root supergroup          0 2018-05-27 20:58 /data
-rw-r--r--   1 root supergroup          6 2018-05-27 20:58 /data/1.txt
drwxr-xr-x   - root supergroup          0 2018-05-27 20:58 /data/etc
-rw-r--r--   1 root supergroup          6 2018-05-27 20:58 /data/etc/2.txt
-rw-r--r--   1 root supergroup          6 2018-05-27 18:25 /data/index.html
-rw-r--r--   1 root supergroup         12 2018-05-27 20:28 /data/name.txt
-rw-r--r--   1 root supergroup         11 2018-05-27 18:27 /data/yinzhengjie.sql
[root@yinzhengjie ~]#
[root@yinzhengjie ~]# hdfs dfsadmin -allowSnapshot /data        #Enable the snapshot feature on the directory
Allowing snaphot on /data succeeded
[root@yinzhengjie ~]# hdfs dfs -createSnapshot /data firstSnapshot        #Create a snapshot named "firstSnapshot"
Created snapshot /data/.snapshot/firstSnapshot
[root@yinzhengjie ~]#
[root@yinzhengjie ~]# hdfs dfs -ls -R /data/.snapshot/firstSnapshot
-rw-r--r--   1 root supergroup          6 2018-05-27 20:58 /data/.snapshot/firstSnapshot/1.txt
drwxr-xr-x   - root supergroup          0 2018-05-27 20:58 /data/.snapshot/firstSnapshot/etc
-rw-r--r--   1 root supergroup          6 2018-05-27 20:58 /data/.snapshot/firstSnapshot/etc/2.txt
-rw-r--r--   1 root supergroup          6 2018-05-27 18:25 /data/.snapshot/firstSnapshot/index.html
-rw-r--r--   1 root supergroup         12 2018-05-27 20:28 /data/.snapshot/firstSnapshot/name.txt
-rw-r--r--   1 root supergroup         11 2018-05-27 18:27 /data/.snapshot/firstSnapshot/yinzhengjie.sql
[root@yinzhengjie ~]#
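Once a snapshot exists, the snapshotDiff subcommand listed in the hdfs usage at the top of this post can compare it against the live directory (a sketch; "." denotes the current directory state):

hdfs snapshotDiff /data firstSnapshot .        #show what changed between the snapshot and the current contents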
16>. Renaming a snapshot
[root@yinzhengjie ~]# hdfs dfs -ls /data/.snapshot/
Found 1 items
drwxr-xr-x   - root supergroup          0 2018-05-27 21:02 /data/.snapshot/firstSnapshot
[root@yinzhengjie ~]#
[root@yinzhengjie ~]# hdfs dfs -renameSnapshot /data firstSnapshot newSnapshot        #Rename the firstSnapshot snapshot of /data to newSnapshot
[root@yinzhengjie ~]# hdfs dfs -ls /data/.snapshot/
Found 1 items
drwxr-xr-x   - root supergroup          0 2018-05-27 21:02 /data/.snapshot/newSnapshot
[root@yinzhengjie ~]#
17>. Deleting a snapshot
[root@yinzhengjie ~]# hdfs dfs -ls /data/.snapshot/
Found 1 items
drwxr-xr-x   - root supergroup          0 2018-05-27 21:02 /data/.snapshot/newSnapshot
[root@yinzhengjie ~]#
[root@yinzhengjie ~]# hdfs dfs -deleteSnapshot /data newSnapshot
[root@yinzhengjie ~]#
[root@yinzhengjie ~]# hdfs dfs -ls /data/.snapshot/
[root@yinzhengjie ~]#
18>. Viewing the contents of a Hadoop SequenceFile
[yinzhengjie@s101 data]$ hdfs dfs -text file:///home/yinzhengjie/data/seq
18/06/01 06:32:32 INFO zlib.ZlibFactory: Successfully loaded & initialized native-zlib library
18/06/01 06:32:32 INFO compress.CodecPool: Got brand-new decompressor [.deflate]
yinzhengjie     18
[yinzhengjie@s101 data]$
19>. Using the df command to view available space
[hdfs@node101.yinzhengjie.org.cn ~]$ hdfs dfs -df
Filesystem                           Size        Used      Available  Use%
hdfs://yinzhengjie-hdfs-ha  1804514672640  4035805184  1800478867456    0%
[hdfs@node101.yinzhengjie.org.cn ~]$
[hdfs@node101.yinzhengjie.org.cn ~]$ hdfs dfs -df -h
Filesystem                   Size   Used  Available  Use%
hdfs://yinzhengjie-hdfs-ha  1.6 T  3.8 G      1.6 T    0%
[hdfs@node101.yinzhengjie.org.cn ~]$
20>. Lowering the replication factor
[hdfs@node105.yinzhengjie.org.cn ~]$ hdfs dfs -ls /user/yinzhengjie/data/
Found 2 items
-rw-r--r--   3 hdfs supergroup          0 2019-05-28 15:16 /user/yinzhengjie/data/1.txt
drwxr-xr-x   - root supergroup          0 2019-05-22 19:46 /user/yinzhengjie/data/day001
[hdfs@node105.yinzhengjie.org.cn ~]$
[hdfs@node105.yinzhengjie.org.cn ~]$ hdfs dfs -setrep -w 2 /user/yinzhengjie/data/1.txt
Replication 2 set: /user/yinzhengjie/data/1.txt
Waiting for /user/yinzhengjie/data/1.txt ... done
[hdfs@node105.yinzhengjie.org.cn ~]$
[hdfs@node105.yinzhengjie.org.cn ~]$ hdfs dfs -ls /user/yinzhengjie/data/
Found 2 items
-rw-r--r--   2 hdfs supergroup          0 2019-05-28 15:16 /user/yinzhengjie/data/1.txt
drwxr-xr-x   - root supergroup          0 2019-05-22 19:46 /user/yinzhengjie/data/day001
[hdfs@node105.yinzhengjie.org.cn ~]$
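The same command restores the original factor; -w simply waits until re-replication finishes. When the target is a directory, setrep applies to every file under it (in Hadoop 2.x the -R flag shown in the usage is accepted for backwards compatibility). A sketch:

hdfs dfs -setrep -w 3 /user/yinzhengjie/data/1.txt        #raise the replication factor back to 3 and wait
hdfs dfs -setrep 2 /user/yinzhengjie/data                 #apply factor 2 to all files under the directory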
21>. Using the du command to view used space
[hdfs@node101.yinzhengjie.org.cn ~]$ hdfs dfs -du /user/yinzhengjie/data/day001
1000000000  3000000000  /user/yinzhengjie/data/day001/test_input
1000000165  1000001650  /user/yinzhengjie/data/day001/test_output
24          72          /user/yinzhengjie/data/day001/ts_validate
[hdfs@node101.yinzhengjie.org.cn ~]$
[hdfs@node101.yinzhengjie.org.cn ~]$ hdfs dfs -du -h /user/yinzhengjie/data/day001
953.7 M  2.8 G    /user/yinzhengjie/data/day001/test_input
953.7 M  953.7 M  /user/yinzhengjie/data/day001/test_output
24       72       /user/yinzhengjie/data/day001/ts_validate
[hdfs@node101.yinzhengjie.org.cn ~]$
[hdfs@node101.yinzhengjie.org.cn ~]$ hdfs dfs -du -s -h /user/yinzhengjie/data/day001
1.9 G  3.7 G  /user/yinzhengjie/data/day001
[hdfs@node101.yinzhengjie.org.cn ~]$
III. Examples of using hdfs with getconf
1>. Getting the NameNode hostnames (there may be more than one)
[yinzhengjie@s101 ~]$ hdfs getconf hdfs getconf is utility for getting configuration information from the config file. hadoop getconf [-namenodes] gets list of namenodes in the cluster. [-secondaryNameNodes] gets list of secondary namenodes in the cluster. [-backupNodes] gets list of backup nodes in the cluster. [-includeFile] gets the include file path that defines the datanodes that can join the cluster. [-excludeFile] gets the exclude file path that defines the datanodes that need to decommissioned. [-nnRpcAddresses] gets the namenode rpc addresses [-confKey [key]] gets a specific key from the configuration [yinzhengjie@s101 ~]$ hdfs getconf -namenodes s101 [yinzhengjie@s101 ~]$
2>. Getting the HDFS minimum block size (the default is 1 MB, i.e. 1048576 bytes; any new value must be a multiple of 512, because HDFS checksums data in 512-byte chunks during transfer)
[yinzhengjie@s101 ~]$ hdfs getconf hdfs getconf is utility for getting configuration information from the config file. hadoop getconf [-namenodes] gets list of namenodes in the cluster. [-secondaryNameNodes] gets list of secondary namenodes in the cluster. [-backupNodes] gets list of backup nodes in the cluster. [-includeFile] gets the include file path that defines the datanodes that can join the cluster. [-excludeFile] gets the exclude file path that defines the datanodes that need to decommissioned. [-nnRpcAddresses] gets the namenode rpc addresses [-confKey [key]] gets a specific key from the configuration [yinzhengjie@s101 ~]$ hdfs getconf -confKey dfs.namenode.fs-limits.min-block-size 1048576 [yinzhengjie@s101 ~]$
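Because -D is one of the generic options accepted by the filesystem shell, the block size can be overridden for a single upload; the value must be a multiple of the 512-byte checksum chunk and at least dfs.namenode.fs-limits.min-block-size, as queried above. A sketch (bigfile.dat is a placeholder local file):

hdfs dfs -D dfs.blocksize=134217728 -put bigfile.dat /data        #write bigfile.dat with a 128 MB block size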
3>. Finding the NameNode RPC addresses
[root@node101.yinzhengjie.org.cn ~]# hdfs getconf -nnRpcAddresses
calculation101.aggrx:8022
calculation111.aggrx:8022
[root@node101.yinzhengjie.org.cn ~]#
IV. Examples of using hdfs with dfsadmin
1>. Viewing the hdfs dfsadmin help information
[hdfs@node101.yinzhengjie.org.cn ~]$ hdfs dfsadmin Usage: hdfs dfsadmin Note: Administrative commands can only be run as the HDFS superuser. [-report [-live] [-dead] [-decommissioning]] [-safemode <enter | leave | get | wait>] [-saveNamespace] [-rollEdits] [-restoreFailedStorage true|false|check] [-refreshNodes] [-setQuota <quota> <dirname>...<dirname>] [-clrQuota <dirname>...<dirname>] [-setSpaceQuota <quota> <dirname>...<dirname>] [-clrSpaceQuota <dirname>...<dirname>] [-finalizeUpgrade] [-rollingUpgrade [<query|prepare|finalize>]] [-refreshServiceAcl] [-refreshUserToGroupsMappings] [-refreshSuperUserGroupsConfiguration] [-refreshCallQueue] [-refresh <host:ipc_port> <key> [arg1..argn] [-reconfig <datanode|...> <host:ipc_port> <start|status|properties>] [-printTopology] [-refreshNamenodes datanode_host:ipc_port] [-deleteBlockPool datanode_host:ipc_port blockpoolId [force]] [-setBalancerBandwidth <bandwidth in bytes per second>] [-fetchImage <local directory>] [-allowSnapshot <snapshotDir>] [-disallowSnapshot <snapshotDir>] [-shutdownDatanode <datanode_host:ipc_port> [upgrade]] [-getDatanodeInfo <datanode_host:ipc_port>] [-metasave filename] [-triggerBlockReport [-incremental] <datanode_host:ipc_port>] [-listOpenFiles [-blockingDecommission] [-path <path>]] [-help [cmd]] Generic options supported are -conf <configuration file> specify an application configuration file -D <property=value> use value for given property -fs <local|namenode:port> specify a namenode -jt <local|resourcemanager:port> specify a ResourceManager -files <comma separated list of files> specify comma separated files to be copied to the map reduce cluster -libjars <comma separated list of jars> specify comma separated jar files to include in the classpath. -archives <comma separated list of archives> specify comma separated archives to be unarchived on the compute machines. The general command line syntax is bin/hadoop command [genericOptions] [commandOptions] [hdfs@node101.yinzhengjie.org.cn ~]$
2>. Viewing help for a specific command
[hdfs@node101.yinzhengjie.org.cn ~]$ hdfs dfsadmin -help rollEdits
-rollEdits:	Rolls the edit log.
[hdfs@node101.yinzhengjie.org.cn ~]$
3>. Manually rolling the edit log (for more details on edit-log rolling, see: https://www.cnblogs.com/yinzhengjie/p/9098092.html)
[hdfs@node101.yinzhengjie.org.cn ~]$ hdfs dfsadmin -rollEdits
Successfully rolled edit logs.
New segment starts at txid 82206
[hdfs@node101.yinzhengjie.org.cn ~]$
4>. Checking the current safe-mode state
[hdfs@node101.yinzhengjie.org.cn ~]$ hdfs dfsadmin -safemode get
Safe mode is OFF in node105.yinzhengjie.org.cn/10.1.2.105:8020
Safe mode is OFF in node101.yinzhengjie.org.cn/10.1.2.101:8020
[hdfs@node101.yinzhengjie.org.cn ~]$
5>. Entering safe mode
[hdfs@node101.yinzhengjie.org.cn ~]$ hdfs dfsadmin -safemode get
Safe mode is OFF in node105.yinzhengjie.org.cn/10.1.2.105:8020
Safe mode is OFF in node101.yinzhengjie.org.cn/10.1.2.101:8020
[hdfs@node101.yinzhengjie.org.cn ~]$
[hdfs@node101.yinzhengjie.org.cn ~]$ hdfs dfsadmin -safemode enter
Safe mode is ON in node105.yinzhengjie.org.cn/10.1.2.105:8020
Safe mode is ON in node101.yinzhengjie.org.cn/10.1.2.101:8020
[hdfs@node101.yinzhengjie.org.cn ~]$
[hdfs@node101.yinzhengjie.org.cn ~]$ hdfs dfsadmin -safemode get
Safe mode is ON in node105.yinzhengjie.org.cn/10.1.2.105:8020
Safe mode is ON in node101.yinzhengjie.org.cn/10.1.2.101:8020
[hdfs@node101.yinzhengjie.org.cn ~]$
6>. Leaving safe mode
[hdfs@node101.yinzhengjie.org.cn ~]$ hdfs dfsadmin -safemode get
Safe mode is ON in node105.yinzhengjie.org.cn/10.1.2.105:8020
Safe mode is ON in node101.yinzhengjie.org.cn/10.1.2.101:8020
[hdfs@node101.yinzhengjie.org.cn ~]$
[hdfs@node101.yinzhengjie.org.cn ~]$ hdfs dfsadmin -safemode leave
Safe mode is OFF in node105.yinzhengjie.org.cn/10.1.2.105:8020
Safe mode is OFF in node101.yinzhengjie.org.cn/10.1.2.101:8020
[hdfs@node101.yinzhengjie.org.cn ~]$
[hdfs@node101.yinzhengjie.org.cn ~]$ hdfs dfsadmin -safemode get
Safe mode is OFF in node105.yinzhengjie.org.cn/10.1.2.105:8020
Safe mode is OFF in node101.yinzhengjie.org.cn/10.1.2.101:8020
[hdfs@node101.yinzhengjie.org.cn ~]$
7>. The safe-mode wait option
[hdfs@node101.yinzhengjie.org.cn ~]$ hdfs dfsadmin -safemode
Usage: hdfs dfsadmin [-safemode enter | leave | get | wait]
[hdfs@node101.yinzhengjie.org.cn ~]$
[hdfs@node101.yinzhengjie.org.cn ~]$ hdfs dfsadmin -safemode wait
Safe mode is OFF in node105.yinzhengjie.org.cn/10.1.2.105:8020
Safe mode is OFF in node101.yinzhengjie.org.cn/10.1.2.101:8020
[hdfs@node101.yinzhengjie.org.cn ~]$
[hdfs@node101.yinzhengjie.org.cn ~]$ hdfs dfsadmin -safemode get
Safe mode is OFF in node105.yinzhengjie.org.cn/10.1.2.105:8020
Safe mode is OFF in node101.yinzhengjie.org.cn/10.1.2.101:8020
[hdfs@node101.yinzhengjie.org.cn ~]$
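Because wait blocks until the NameNode leaves safe mode, it works well as a guard at the top of maintenance scripts (a minimal sketch; start_jobs.sh is a hypothetical downstream script):

#!/bin/bash
# Block until the cluster is writable, then kick off downstream work.
hdfs dfsadmin -safemode wait
./start_jobs.sh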
8>. Checking the status of the HDFS cluster
[hdfs@node101.yinzhengjie.org.cn ~]$ hdfs dfsadmin -help report
-report [-live] [-dead] [-decommissioning]:
	Reports basic filesystem information and statistics.
	The dfs usage can be different from "du" usage, because it measures
	raw space used by replication, checksums, snapshots and etc. on all the DNs.
	Optional flags may be used to filter the list of displayed DNs.
[hdfs@node101.yinzhengjie.org.cn ~]$
[hdfs@node101.yinzhengjie.org.cn ~]$ hdfs dfsadmin -report
Configured Capacity: 1804514672640 (1.64 TB)          #Total HDFS capacity configured in this cluster
Present Capacity: 1804514672640 (1.64 TB)             #HDFS capacity currently present
DFS Remaining: 1800478859331 (1.64 TB)                #HDFS capacity remaining
DFS Used: 4035813309 (3.76 GB)                        #HDFS storage used, measured by file size
DFS Used%: 0.22%                                      #Same as above, expressed as a percentage
Under replicated blocks: 1                            #Whether there are any under-replicated blocks
Blocks with corrupt replicas: 0                       #Whether there are blocks with corrupt replicas
Missing blocks: 0                                     #Whether there are missing blocks
Missing blocks (with replication factor 1): 0         #Same as above

-------------------------------------------------
Live datanodes (4):                                   #How many DataNodes in the cluster are live and available

Name: 10.1.2.102:50010 (node102.yinzhengjie.org.cn)   #Host name or rack name
Hostname: node102.yinzhengjie.org.cn                  #Host name
Rack: /default                                        #Default rack
Decommission Status : Normal                          #Decommission status of this DataNode (Normal means in service)
Configured Capacity: 451128668160 (420.15 GB)         #Configured and used capacity of this DataNode
DFS Used: 1089761280 (1.01 GB)
Non DFS Used: 0 (0 B)
DFS Remaining: 450038906880 (419.13 GB)
DFS Used%: 0.24%
DFS Remaining%: 99.76%
Configured Cache Capacity: 1782579200 (1.66 GB)
Cache Used: 0 (0 B)                                   #Cache usage statistics (if configured)
Cache Remaining: 1782579200 (1.66 GB)
Cache Used%: 0.00%
Cache Remaining%: 100.00%
Xceivers: 2
Last contact: Tue May 28 14:55:25 CST 2019

Name: 10.1.2.103:50010 (node103.yinzhengjie.org.cn)
Hostname: node103.yinzhengjie.org.cn
Rack: /default
Decommission Status : Normal
Configured Capacity: 451128668160 (420.15 GB)
DFS Used: 1009278976 (962.52 MB)
Non DFS Used: 0 (0 B)
DFS Remaining: 450119389184 (419.21 GB)
DFS Used%: 0.22%
DFS Remaining%: 99.78%
Configured Cache Capacity: 1782579200 (1.66 GB)
Cache Used: 0 (0 B)
Cache Remaining: 1782579200 (1.66 GB)
Cache Used%: 0.00%
Cache Remaining%: 100.00%
Xceivers: 2
Last contact: Tue May 28 14:55:26 CST 2019

Name: 10.1.2.104:50010 (node104.yinzhengjie.org.cn)
Hostname: node104.yinzhengjie.org.cn
Rack: /default
Decommission Status : Normal
Configured Capacity: 451128668160 (420.15 GB)
DFS Used: 981598141 (936.12 MB)
Non DFS Used: 0 (0 B)
DFS Remaining: 450147070019 (419.23 GB)
DFS Used%: 0.22%
DFS Remaining%: 99.78%
Configured Cache Capacity: 1782579200 (1.66 GB)
Cache Used: 0 (0 B)
Cache Remaining: 1782579200 (1.66 GB)
Cache Used%: 0.00%
Cache Remaining%: 100.00%
Xceivers: 2
Last contact: Tue May 28 14:55:27 CST 2019

Name: 10.1.2.105:50010 (node105.yinzhengjie.org.cn)
Hostname: node105.yinzhengjie.org.cn
Rack: /default
Decommission Status : Normal
Configured Capacity: 451128668160 (420.15 GB)
DFS Used: 955174912 (910.93 MB)
Non DFS Used: 0 (0 B)
DFS Remaining: 450173493248 (419.26 GB)
DFS Used%: 0.21%
DFS Remaining%: 99.79%
Configured Cache Capacity: 942669824 (899 MB)
Cache Used: 0 (0 B)
Cache Remaining: 942669824 (899 MB)
Cache Used%: 0.00%
Cache Remaining%: 100.00%
Xceivers: 2
Last contact: Tue May 28 14:55:24 CST 2019

[hdfs@node101.yinzhengjie.org.cn ~]$
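The optional flags from the -report help above filter the DataNode list, which is convenient on large clusters (a sketch):

hdfs dfsadmin -report -live        #show only DataNodes that are alive
hdfs dfsadmin -report -dead        #show only DataNodes that are down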
9>. Name quotas (a limit on the total number of files and directories under a directory; a quota of 1 means the directory cannot hold any files, i.e. it must stay empty!)
[root@yinzhengjie ~]# ll
total 16
-rw-r--r--. 1 root root  6 May 27 17:41 index.html
-rw-r--r--. 1 root root 12 May 27 17:42 nginx.conf
-rw-r--r--. 1 root root 11 May 27 17:42 yinzhengjie.sql
-rw-r--r--. 1 root root  7 May 27 18:20 zabbix.conf
[root@yinzhengjie ~]#
[root@yinzhengjie ~]# hdfs dfs -ls /
[root@yinzhengjie ~]#
[root@yinzhengjie ~]# hdfs dfs -mkdir -p /data/etc
[root@yinzhengjie ~]# hdfs dfs -ls /
Found 1 items
drwxr-xr-x   - root supergroup          0 2018-05-27 18:24 /data
[root@yinzhengjie ~]#
[root@yinzhengjie ~]# hdfs dfsadmin -setQuota 3 /data
[root@yinzhengjie ~]#
[root@yinzhengjie ~]# hdfs dfs -put index.html /data
[root@yinzhengjie ~]# hdfs dfs -put yinzhengjie.sql /data
put: The NameSpace quota (directories and files) of directory /data is exceeded: quota=3 file count=4
[root@yinzhengjie ~]# hdfs dfs -ls /data
Found 2 items
drwxr-xr-x   - root supergroup          0 2018-05-27 18:24 /data/etc
-rw-r--r--   1 root supergroup          6 2018-05-27 18:25 /data/index.html
[root@yinzhengjie ~]# hdfs dfsadmin -setQuota 5 /data
[root@yinzhengjie ~]# hdfs dfs -put yinzhengjie.sql /data
[root@yinzhengjie ~]# hdfs dfs -ls /data
Found 3 items
drwxr-xr-x   - root supergroup          0 2018-05-27 18:24 /data/etc
-rw-r--r--   1 root supergroup          6 2018-05-27 18:25 /data/index.html
-rw-r--r--   1 root supergroup         11 2018-05-27 18:27 /data/yinzhengjie.sql
[root@yinzhengjie ~]#
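To inspect the quota you just set, the -count -q option from the dfs usage prints the quota columns for each path (a sketch; the column layout is from the Hadoop 2.x filesystem shell):

hdfs dfs -count -q /data        #columns: QUOTA REM_QUOTA SPACE_QUOTA REM_SPACE_QUOTA DIR_COUNT FILE_COUNT CONTENT_SIZE PATHNAME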
10>. Space quotas (the quota counts the total size of all files under a directory, replicas included, which gives the inequality: minimum space quota >= actual file size * replication factor)
[root@yinzhengjie ~]# ll
total 181196
-rw-r--r--. 1 root root 185540433 May 27 19:34 jdk-8u131-linux-x64.tar.gz
-rw-r--r--. 1 root root        12 May 27 19:38 name.txt
[root@yinzhengjie ~]#
[root@yinzhengjie ~]# hdfs dfs -ls -R /
drwxr-xr-x   - root supergroup          0 2018-05-27 20:27 /data
drwxr-xr-x   - root supergroup          0 2018-05-27 18:24 /data/etc
-rw-r--r--   1 root supergroup          6 2018-05-27 18:25 /data/index.html
-rw-r--r--   1 root supergroup         11 2018-05-27 18:27 /data/yinzhengjie.sql
[root@yinzhengjie ~]#
[root@yinzhengjie ~]# hdfs dfsadmin -setSpaceQuota 134217745 /data        #Set the space quota on /data to 128M; this test machine is pseudo-distributed with replication factor 1, so a 128M quota only has to cover a single copy of the data
[root@yinzhengjie ~]#
[root@yinzhengjie ~]# hdfs dfs -put name.txt /data        #Upload a small file into /data; it goes through normally
[root@yinzhengjie ~]#
[root@yinzhengjie ~]# hdfs dfs -ls -R /        #Confirm that the upload succeeded
drwxr-xr-x   - root supergroup          0 2018-05-27 20:28 /data
drwxr-xr-x   - root supergroup          0 2018-05-27 18:24 /data/etc
-rw-r--r--   1 root supergroup          6 2018-05-27 18:25 /data/index.html
-rw-r--r--   1 root supergroup         12 2018-05-27 20:28 /data/name.txt
-rw-r--r--   1 root supergroup         11 2018-05-27 18:27 /data/yinzhengjie.sql
[root@yinzhengjie ~]#
[root@yinzhengjie ~]# hdfs dfs -put jdk-8u131-linux-x64.tar.gz /data        #Uploading the second file pushes past the quota and fails with the following error!
18/05/27 20:29:40 WARN hdfs.DFSClient: DataStreamer Exception
org.apache.hadoop.hdfs.protocol.DSQuotaExceededException: The DiskSpace quota of /data is exceeded: quota = 134217745 B = 128.00 MB but diskspace consumed = 134217757 B = 128.00 MB
    at org.apache.hadoop.hdfs.server.namenode.DirectoryWithQuotaFeature.verifyStoragespaceQuota(DirectoryWithQuotaFeature.java:211)
    at org.apache.hadoop.hdfs.server.namenode.DirectoryWithQuotaFeature.verifyQuota(DirectoryWithQuotaFeature.java:239)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.verifyQuota(FSDirectory.java:878)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.updateCount(FSDirectory.java:707)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.updateCount(FSDirectory.java:666)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.addBlock(FSDirectory.java:491)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.saveAllocatedBlock(FSNamesystem.java:3573)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.storeAllocatedBlock(FSNamesystem.java:3157)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3038)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:725)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:492)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
    at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1458)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1251)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:448)
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.DSQuotaExceededException): The DiskSpace quota of /data is exceeded: quota = 134217745 B = 128.00 MB but diskspace consumed = 134217757 B = 128.00 MB
    at org.apache.hadoop.hdfs.server.namenode.DirectoryWithQuotaFeature.verifyStoragespaceQuota(DirectoryWithQuotaFeature.java:211)
    at org.apache.hadoop.hdfs.server.namenode.DirectoryWithQuotaFeature.verifyQuota(DirectoryWithQuotaFeature.java:239)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.verifyQuota(FSDirectory.java:878)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.updateCount(FSDirectory.java:707)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.updateCount(FSDirectory.java:666)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.addBlock(FSDirectory.java:491)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.saveAllocatedBlock(FSNamesystem.java:3573)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.storeAllocatedBlock(FSNamesystem.java:3157)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3038)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:725)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:492)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)
    at org.apache.hadoop.ipc.Client.call(Client.java:1475)
    at org.apache.hadoop.ipc.Client.call(Client.java:1412)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
    at com.sun.proxy.$Proxy10.addBlock(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:418)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
    at com.sun.proxy.$Proxy11.addBlock(Unknown Source)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1455)
    ... 2 more
put: The DiskSpace quota of /data is exceeded: quota = 134217745 B = 128.00 MB but diskspace consumed = 134217757 B = 128.00 MB
[root@yinzhengjie ~]#
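Note how the replication factor feeds into the sizing. On a real cluster with the default replication factor of 3, holding 128 MB of user data needs roughly three times the quota; a minimal sketch (402653184 is simply 128 MB x 3 in bytes):

hdfs dfsadmin -setSpaceQuota 402653184 /data        #384 MB of raw quota buys about 128 MB of user data at replication factor 3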
11>. Clear the quota settings
[root@yinzhengjie ~]# hdfs dfsadmin -clrSpaceQuota /data
[root@yinzhengjie ~]# echo $?
0
[root@yinzhengjie ~]#
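To double-check that the quota is really gone, hdfs dfs -count -q prints the quota columns for a path (an unset quota shows up as none/inf). A quick sketch:

hdfs dfs -count -q /data        #Columns: name quota, remaining name quota, space quota, remaining space quota, then the usual dir/file/size counts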
12>. Enable snapshots on a directory (snapshots are disabled by default)
[root@yinzhengjie ~]# hdfs dfsadmin -allowSnapShot /data
Allowing snaphot on /data succeeded
[root@yinzhengjie ~]#
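Once a directory is snapshottable, the snapshot subcommands (createSnapshot, deleteSnapshot, snapshotDiff, lsSnapshottableDir) apply to it. A sketch of the round trip, where the snapshot name s0 is made up for illustration:

hdfs dfs -createSnapshot /data s0        #Take a read-only snapshot of /data, reachable under /data/.snapshot/s0
hdfs lsSnapshottableDir                  #List the snapshottable directories owned by the current user
hdfs snapshotDiff /data s0 .             #Diff snapshot s0 against the live contents of /data ("." means the current state)
hdfs dfs -deleteSnapshot /data s0        #Drop the snapshot; all snapshots must be deleted before snapshots can be disabled again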
13>. Disable snapshots on a directory
[root@yinzhengjie ~]# hdfs dfsadmin -disallowSnapShot /data
Disallowing snaphot on /data succeeded
[root@yinzhengjie ~]#
14>. Get the service state of a given NameNode
[hdfs@node101.yinzhengjie.org.cn ~]$ hdfs haadmin -getServiceState namenode23
active
[hdfs@node101.yinzhengjie.org.cn ~]$
[hdfs@node101.yinzhengjie.org.cn ~]$ hdfs haadmin -getServiceState namenode31
standby
[hdfs@node101.yinzhengjie.org.cn ~]$
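haadmin can also swap the roles by hand. A sketch of a manual failover using the same service IDs as above; this really changes which NameNode is active, so only run it when that is what you want:

hdfs haadmin -failover namenode23 namenode31        #Make namenode31 active and demote namenode23 to standby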
15>. The dfsadmin -metasave command provides more information than dfsadmin -report. Use it to obtain various block-related details (e.g., total block count, blocks waiting for replication, blocks currently being replicated)
[hdfs@node101.yinzhengjie.org.cn ~]$ hdfs dfs -ls /
Found 5 items
drwxr-xr-x   - hbase hbase               0 2019-05-25 00:03 /hbase
drwxr-xr-x   - hdfs  supergroup          0 2019-05-22 19:17 /jobtracker
drwxr-xr-x   - hdfs  supergroup          0 2019-05-28 12:11 /system
drwxrwxrwt   - hdfs  supergroup          0 2019-05-23 13:37 /tmp
drwxrwxrwx   - hdfs  supergroup          0 2019-05-23 13:47 /user
[hdfs@node101.yinzhengjie.org.cn ~]$
[hdfs@node101.yinzhengjie.org.cn ~]$ hdfs dfsadmin -metasave /hbase        #The argument is an output file name; on success the command prints the lines below and creates a file with that name ("hbase") in each NameNode's log directory, here "/var/log/hadoop-hdfs/"
Created metasave file /hbase in the log directory of namenode node105.yinzhengjie.org.cn/10.1.2.105:8020
Created metasave file /hbase in the log directory of namenode node101.yinzhengjie.org.cn/10.1.2.101:8020
[hdfs@node101.yinzhengjie.org.cn ~]$
[hdfs@node101.yinzhengjie.org.cn ~]$ cat /var/log/hadoop-hdfs/hbase        #The output file contains the following block-related information
2027 files and directories, 60 blocks = 2087 total
Live Datanodes: 4
Dead Datanodes: 0
Metasave: Blocks waiting for replication: 0
Mis-replicated blocks that have been postponed:
Metasave: Blocks being replicated: 0
Metasave: Blocks 0 waiting deletion from 0 datanodes.
Metasave: Number of datanodes: 4
10.1.2.102:50010 /default IN 451128668160(420.15 GB) 1089761280(1.01 GB) 0.24% 450038906880(419.13 GB) 1782579200(1.66 GB) 0(0 B) 0.00% 1782579200(1.66 GB) Tue May 28 15:26:58 CST 2019
10.1.2.105:50010 /default IN 451128668160(420.15 GB) 955174912(910.93 MB) 0.21% 450173493248(419.26 GB) 942669824(899 MB) 0(0 B) 0.00% 942669824(899 MB) Tue May 28 15:26:59 CST 2019
10.1.2.103:50010 /default IN 451128668160(420.15 GB) 1009278976(962.52 MB) 0.22% 450119389184(419.21 GB) 1782579200(1.66 GB) 0(0 B) 0.00% 1782579200(1.66 GB) Tue May 28 15:26:58 CST 2019
10.1.2.104:50010 /default IN 451128668160(420.15 GB) 981590016(936.12 MB) 0.22% 450147078144(419.23 GB) 1782579200(1.66 GB) 0(0 B) 0.00% 1782579200(1.66 GB) Tue May 28 15:26:57 CST 2019
[hdfs@node101.yinzhengjie.org.cn ~]$
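For comparison, a sketch of the simpler report command; it prints a cluster-wide capacity summary plus one section per DataNode, but none of the block-level counters shown above:

hdfs dfsadmin -report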
5. Examples of using hdfs with fsck
1>. View HDFS filesystem information
[hdfs@node101.yinzhengjie.org.cn ~]$ hdfs fsck /
Connecting to namenode via http://node101.yinzhengjie.org.cn:50070/fsck?ugi=hdfs&path=%2F
FSCK started by hdfs (auth:SIMPLE) from /10.1.2.101 for path / at Thu May 23 14:32:41 CST 2019
.......................................
/user/yinzhengjie/data/day001/test_output/_partition.lst:  Under replicated BP-1230584423-10.1.2.101-1558513980919:blk_1073742006_1182. Target Replicas is 10 but found 4 live replica(s), 0 decommissioned replica(s), 0 decommissioning replica(s).
..................Status: HEALTHY                                                                              #Overall result of this block check on HDFS
 Total size:    2001318792 B (Total open files size: 498 B)                                                    #Total size of all files under the checked path
 Total dirs:    189                                                                                            #Number of directories under the checked path
 Total files:   57                                                                                             #Number of files under the checked path
 Total symlinks:                0 (Files currently being written: 7)                                           #Number of symbolic links under the checked path
 Total blocks (validated):      58 (avg. block size 34505496 B) (Total open file blocks (not validated): 6)    #Number of valid blocks under the checked path
 Minimally replicated blocks:   58 (100.0 %)                                                                   #Blocks that meet the minimum replication requirement
 Over-replicated blocks:        0 (0.0 %)                                                                      #Blocks whose live replica count exceeds the target replication
 Under-replicated blocks:       1 (1.7241379 %)                                                                #Blocks whose live replica count is below the target replication
 Mis-replicated blocks:         0 (0.0 %)                                                                      #Blocks that violate the block placement policy
 Default replication factor:    3                                                                              #Default replication factor (the block itself plus two extra copies)
 Average block replication:     2.3965516                                                                      #Average number of replicas per block, counting every copy; values well below the default factor of 3 point at under-replication, values above it suggest surplus replicas
 Corrupt blocks:                0                                                                              #Number of corrupt blocks; anything non-zero means blocks with no healthy replica left, i.e. lost data!
 Missing replicas:              6 (4.137931 %)                                                                 #Number of missing replicas
 Number of data-nodes:          4                                                                              #Number of DataNodes
 Number of racks:               1                                                                              #Number of racks
FSCK ended at Thu May 23 14:32:41 CST 2019 in 7 milliseconds

The filesystem under path '/' is HEALTHY
[hdfs@node101.yinzhengjie.org.cn ~]$
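When fsck is not HEALTHY, a few extra flags help narrow things down; a sketch (the path is whichever file fsck complained about):

hdfs fsck / -list-corruptfileblocks                          #Print the files that have corrupt blocks
hdfs fsck /path/to/suspect_file -files -blocks -locations    #Show each block of the file and the DataNodes holding it

fsck also takes -move (move corrupt files to /lost+found) and -delete (remove them outright); both change the filesystem, so use them with care.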
2>. Display HDFS block information with fsck
[hdfs@node101.yinzhengjie.org.cn ~]$ hdfs fsck / -files -blocks Connecting to namenode via http://node101.yinzhengjie.org.cn:50070/fsck?ugi=hdfs&files=1&blocks=1&path=%2F FSCK started by hdfs (auth:SIMPLE) from /10.1.2.101 for path / at Thu May 23 14:30:51 CST 2019 / <dir> /hbase <dir> /hbase/.tmp <dir> /hbase/MasterProcWALs <dir> /hbase/MasterProcWALs/state-00000000000000000002.log 30 bytes, 1 block(s): OK 0. BP-1230584423-10.1.2.101-1558513980919:blk_1073743172_2348 len=30 Live_repl=3 /hbase/WALs <dir> /hbase/WALs/node102.yinzhengjie.org.cn,60020,1558589829098 <dir> /hbase/WALs/node102.yinzhengjie.org.cn,60020,1558590692594 <dir> /hbase/WALs/node103.yinzhengjie.org.cn,60020,1558589826957 <dir> /hbase/WALs/node103.yinzhengjie.org.cn,60020,1558590692071 <dir> /hbase/WALs/node104.yinzhengjie.org.cn,60020,1558590690690 <dir> /hbase/WALs/node105.yinzhengjie.org.cn,60020,1558589830953 <dir> /hbase/WALs/node105.yinzhengjie.org.cn,60020,1558590695092 <dir> /hbase/data <dir> /hbase/data/default <dir> /hbase/data/hbase <dir> /hbase/data/hbase/meta <dir> /hbase/data/hbase/meta/.tabledesc <dir> /hbase/data/hbase/meta/.tabledesc/.tableinfo.0000000001 398 bytes, 1 block(s): OK 0. BP-1230584423-10.1.2.101-1558513980919:blk_1073743148_2324 len=398 Live_repl=3 /hbase/data/hbase/meta/.tmp <dir> /hbase/data/hbase/meta/1588230740 <dir> /hbase/data/hbase/meta/1588230740/.regioninfo 32 bytes, 1 block(s): OK 0. BP-1230584423-10.1.2.101-1558513980919:blk_1073743147_2323 len=32 Live_repl=3 /hbase/data/hbase/meta/1588230740/info <dir> /hbase/data/hbase/meta/1588230740/info/4502037817cf408da4c31f38632d386e 1389 bytes, 1 block(s): OK 0. BP-1230584423-10.1.2.101-1558513980919:blk_1073743185_2361 len=1389 Live_repl=3 /hbase/data/hbase/meta/1588230740/info/cc017533033a4b57904c694bf156d9a6 1529 bytes, 1 block(s): OK 0. BP-1230584423-10.1.2.101-1558513980919:blk_1073743168_2344 len=1529 Live_repl=3 /hbase/data/hbase/meta/1588230740/recovered.edits <dir> /hbase/data/hbase/meta/1588230740/recovered.edits/20.seqid 0 bytes, 0 block(s): OK /hbase/data/hbase/namespace <dir> /hbase/data/hbase/namespace/.tabledesc <dir> /hbase/data/hbase/namespace/.tabledesc/.tableinfo.0000000001 312 bytes, 1 block(s): OK 0. BP-1230584423-10.1.2.101-1558513980919:blk_1073743156_2332 len=312 Live_repl=3 /hbase/data/hbase/namespace/.tmp <dir> /hbase/data/hbase/namespace/27b26f72bc8dabfb5f8dae587ad9cda7 <dir> /hbase/data/hbase/namespace/27b26f72bc8dabfb5f8dae587ad9cda7/.regioninfo 42 bytes, 1 block(s): OK 0. BP-1230584423-10.1.2.101-1558513980919:blk_1073743157_2333 len=42 Live_repl=3 /hbase/data/hbase/namespace/27b26f72bc8dabfb5f8dae587ad9cda7/info <dir> /hbase/data/hbase/namespace/27b26f72bc8dabfb5f8dae587ad9cda7/info/2efca8e894a4419f9d6e86bb8c8c736b 1079 bytes, 1 block(s): OK 0. BP-1230584423-10.1.2.101-1558513980919:blk_1073743166_2342 len=1079 Live_repl=3 /hbase/data/hbase/namespace/27b26f72bc8dabfb5f8dae587ad9cda7/recovered.edits <dir> /hbase/data/hbase/namespace/27b26f72bc8dabfb5f8dae587ad9cda7/recovered.edits/11.seqid 0 bytes, 0 block(s): OK /hbase/hbase.id 42 bytes, 1 block(s): OK 0. BP-1230584423-10.1.2.101-1558513980919:blk_1073743146_2322 len=42 Live_repl=3 /hbase/hbase.version 7 bytes, 1 block(s): OK 0. BP-1230584423-10.1.2.101-1558513980919:blk_1073743145_2321 len=7 Live_repl=3 /hbase/oldWALs <dir> /jobtracker <dir> /jobtracker/jobsInfo <dir> /jobtracker/jobsInfo/job_201905221917_0001.info 1013 bytes, 1 block(s): OK 0. 
BP-1230584423-10.1.2.101-1558513980919:blk_1073743091_2267 len=1013 Live_repl=3 /jobtracker/jobsInfo/job_201905221917_0002.info 1013 bytes, 1 block(s): OK 0. BP-1230584423-10.1.2.101-1558513980919:blk_1073743117_2293 len=1013 Live_repl=3 /tmp <dir> /tmp/.cloudera_health_monitoring_canary_files <dir> /tmp/hbase-staging <dir> /tmp/hbase-staging/DONOTERASE <dir> /tmp/hive <dir> /tmp/hive/hive <dir> /tmp/hive/hive/2cd86efc-ec86-40ac-8472-d78c9e6b90a4 <dir> /tmp/hive/hive/2cd86efc-ec86-40ac-8472-d78c9e6b90a4/_tmp_space.db <dir> /tmp/hive/root <dir> /tmp/logs <dir> /tmp/logs/root <dir> /tmp/logs/root/logs <dir> /tmp/logs/root/logs/application_1558520562958_0001 <dir> /tmp/logs/root/logs/application_1558520562958_0001/node102.yinzhengjie.org.cn_8041 3197 bytes, 1 block(s): OK 0. BP-1230584423-10.1.2.101-1558513980919:blk_1073742001_1177 len=3197 Live_repl=3 /tmp/logs/root/logs/application_1558520562958_0001/node103.yinzhengjie.org.cn_8041 3197 bytes, 1 block(s): OK 0. BP-1230584423-10.1.2.101-1558513980919:blk_1073742002_1178 len=3197 Live_repl=3 /tmp/logs/root/logs/application_1558520562958_0001/node104.yinzhengjie.org.cn_8041 35641 bytes, 1 block(s): OK 0. BP-1230584423-10.1.2.101-1558513980919:blk_1073742003_1179 len=35641 Live_repl=3 /tmp/logs/root/logs/application_1558520562958_0002 <dir> /tmp/logs/root/logs/application_1558520562958_0002/node102.yinzhengjie.org.cn_8041 68827 bytes, 1 block(s): OK 0. BP-1230584423-10.1.2.101-1558513980919:blk_1073742034_1210 len=68827 Live_repl=3 /tmp/logs/root/logs/application_1558520562958_0002/node103.yinzhengjie.org.cn_8041 191005 bytes, 1 block(s): OK 0. BP-1230584423-10.1.2.101-1558513980919:blk_1073742037_1213 len=191005 Live_repl=3 /tmp/logs/root/logs/application_1558520562958_0002/node104.yinzhengjie.org.cn_8041 80248 bytes, 1 block(s): OK 0. BP-1230584423-10.1.2.101-1558513980919:blk_1073742036_1212 len=80248 Live_repl=3 /tmp/logs/root/logs/application_1558520562958_0002/node105.yinzhengjie.org.cn_8041 64631 bytes, 1 block(s): OK 0. BP-1230584423-10.1.2.101-1558513980919:blk_1073742035_1211 len=64631 Live_repl=3 /tmp/logs/root/logs/application_1558520562958_0003 <dir> /tmp/logs/root/logs/application_1558520562958_0003/node102.yinzhengjie.org.cn_8041 23256 bytes, 1 block(s): OK 0. BP-1230584423-10.1.2.101-1558513980919:blk_1073742056_1232 len=23256 Live_repl=3 /tmp/logs/root/logs/application_1558520562958_0003/node103.yinzhengjie.org.cn_8041 35498 bytes, 1 block(s): OK 0. BP-1230584423-10.1.2.101-1558513980919:blk_1073742055_1231 len=35498 Live_repl=3 /tmp/logs/root/logs/application_1558520562958_0003/node104.yinzhengjie.org.cn_8041 131199 bytes, 1 block(s): OK 0. BP-1230584423-10.1.2.101-1558513980919:blk_1073742057_1233 len=131199 Live_repl=3 /tmp/logs/root/logs/application_1558520562958_0003/node105.yinzhengjie.org.cn_8041 19428 bytes, 1 block(s): OK 0. BP-1230584423-10.1.2.101-1558513980919:blk_1073742054_1230 len=19428 Live_repl=3 /tmp/mapred <dir> /tmp/mapred/system <dir> /tmp/mapred/system/jobtracker.info 4 bytes, 1 block(s): OK 0. BP-1230584423-10.1.2.101-1558513980919:blk_1073741947_1123 len=4 Live_repl=3 /tmp/mapred/system/seq-000000000002 <dir> /tmp/mapred/system/seq-000000000002/jobtracker.info 4 bytes, 1 block(s): OK 0. 
BP-1230584423-10.1.2.101-1558513980919:blk_1073743173_2349 len=4 Live_repl=3 /user <dir> /user/history <dir> /user/history/done <dir> /user/history/done/2019 <dir> /user/history/done/2019/05 <dir> /user/history/done/2019/05/22 <dir> /user/history/done/2019/05/22/000000 <dir> /user/history/done/2019/05/22/000000/job_1558520562958_0001-1558525119627-root-TeraGen-1558525149229-2-0-SUCCEEDED-root.users.root-1558525126064.jhist 18715 bytes, 1 block(s): OK 0. BP-1230584423-10.1.2.101-1558513980919:blk_1073741999_1175 len=18715 Live_repl=3 /user/history/done/2019/05/22/000000/job_1558520562958_0001_conf.xml 153279 bytes, 1 block(s): OK 0. BP-1230584423-10.1.2.101-1558513980919:blk_1073742000_1176 len=153279 Live_repl=3 /user/history/done/2019/05/22/000000/job_1558520562958_0002-1558525279858-root-TeraSort-1558525343312-8-16-SUCCEEDED-root.users.root-1558525284386.jhist 102347 bytes, 1 block(s): OK 0. BP-1230584423-10.1.2.101-1558513980919:blk_1073742032_1208 len=102347 Live_repl=1 /user/history/done/2019/05/22/000000/job_1558520562958_0002_conf.xml 154575 bytes, 1 block(s): OK 0. BP-1230584423-10.1.2.101-1558513980919:blk_1073742033_1209 len=154575 Live_repl=1 /user/history/done/2019/05/22/000000/job_1558520562958_0003-1558525587458-root-TeraValidate-1558525623716-16-1-SUCCEEDED-root.users.root-1558525591653.jhist 71381 bytes, 1 block(s): OK 0. BP-1230584423-10.1.2.101-1558513980919:blk_1073742052_1228 len=71381 Live_repl=3 /user/history/done/2019/05/22/000000/job_1558520562958_0003_conf.xml 153701 bytes, 1 block(s): OK 0. BP-1230584423-10.1.2.101-1558513980919:blk_1073742053_1229 len=153701 Live_repl=3 /user/history/done_intermediate <dir> /user/history/done_intermediate/root <dir> /user/hive <dir> /user/hive/warehouse <dir> /user/hive/warehouse/page_view <dir> /user/hive/warehouse/page_view/PageViewData.csv 1584 bytes, 1 block(s): OK 0. 
BP-1230584423-10.1.2.101-1558513980919:blk_1073743063_2239 len=1584 Live_repl=3 /user/hue <dir> /user/hue/.Trash <dir> /user/hue/.Trash/190523130000 <dir> /user/hue/.Trash/190523130000/user <dir> /user/hue/.Trash/190523130000/user/hue <dir> /user/hue/.Trash/190523130000/user/hue/.cloudera_manager_hive_metastore_canary <dir> /user/hue/.Trash/190523130000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872 <dir> /user/hue/.Trash/190523130000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table <dir> /user/hue/.Trash/190523130000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table/p1=p0 <dir> /user/hue/.Trash/190523130000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table/p1=p0/p2=420 <dir> /user/hue/.Trash/190523130000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table/p1=p1 <dir> /user/hue/.Trash/190523130000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table/p1=p1/p2=421 <dir> /user/hue/.Trash/190523130000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558586450195 <dir> /user/hue/.Trash/190523130000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558586450195/p1=p0 <dir> /user/hue/.Trash/190523130000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558586450195/p1=p0/p2=420 <dir> /user/hue/.Trash/190523130000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558586450195/p1=p1 <dir> /user/hue/.Trash/190523130000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558586450195/p1=p1/p2=421 <dir> /user/hue/.Trash/190523130000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558586750132 <dir> /user/hue/.Trash/190523130000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558586750132/p1=p0 <dir> /user/hue/.Trash/190523130000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558586750132/p1=p0/p2=420 <dir> /user/hue/.Trash/190523130000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558586750132/p1=p1 <dir> /user/hue/.Trash/190523130000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558586750132/p1=p1/p2=421 <dir> /user/hue/.Trash/190523130000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558587050151 <dir> /user/hue/.Trash/190523130000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558587050151/p1=p0 <dir> /user/hue/.Trash/190523130000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558587050151/p1=p0/p2=420 <dir> 
/user/hue/.Trash/190523130000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558587050151/p1=p1 <dir> /user/hue/.Trash/190523130000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558587050151/p1=p1/p2=421 <dir> /user/hue/.Trash/190523130000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558587350133 <dir> /user/hue/.Trash/190523130000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558587350133/p1=p0 <dir> /user/hue/.Trash/190523130000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558587350133/p1=p0/p2=420 <dir> /user/hue/.Trash/190523130000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558587350133/p1=p1 <dir> /user/hue/.Trash/190523130000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558587350133/p1=p1/p2=421 <dir> /user/hue/.Trash/190523130000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df108721558586146864 <dir> /user/hue/.Trash/190523130000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df108721558586450229 <dir> /user/hue/.Trash/190523130000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df108721558586750160 <dir> /user/hue/.Trash/190523130000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df108721558587050181 <dir> /user/hue/.Trash/190523130000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df108721558587350162 <dir> /user/hue/.Trash/190523140000 <dir> /user/hue/.Trash/190523140000/user <dir> /user/hue/.Trash/190523140000/user/hue <dir> /user/hue/.Trash/190523140000/user/hue/.cloudera_manager_hive_metastore_canary <dir> /user/hue/.Trash/190523140000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872 <dir> /user/hue/.Trash/190523140000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table <dir> /user/hue/.Trash/190523140000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table/p1=p0 <dir> /user/hue/.Trash/190523140000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table/p1=p0/p2=420 <dir> /user/hue/.Trash/190523140000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table/p1=p1 <dir> /user/hue/.Trash/190523140000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table/p1=p1/p2=421 <dir> /user/hue/.Trash/190523140000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558589871630 <dir> /user/hue/.Trash/190523140000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558589871630/p1=p0 <dir> 
/user/hue/.Trash/190523140000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558589871630/p1=p0/p2=420 <dir> /user/hue/.Trash/190523140000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558589871630/p1=p1 <dir> /user/hue/.Trash/190523140000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558589871630/p1=p1/p2=421 <dir> /user/hue/.Trash/190523140000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558590170701 <dir> /user/hue/.Trash/190523140000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558590170701/p1=p0 <dir> /user/hue/.Trash/190523140000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558590170701/p1=p0/p2=420 <dir> /user/hue/.Trash/190523140000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558590170701/p1=p1 <dir> /user/hue/.Trash/190523140000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558590170701/p1=p1/p2=421 <dir> /user/hue/.Trash/190523140000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558590470694 <dir> /user/hue/.Trash/190523140000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558590470694/p1=p0 <dir> /user/hue/.Trash/190523140000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558590470694/p1=p0/p2=420 <dir> /user/hue/.Trash/190523140000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558590470694/p1=p1 <dir> /user/hue/.Trash/190523140000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558590470694/p1=p1/p2=421 <dir> /user/hue/.Trash/190523140000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558590772147 <dir> /user/hue/.Trash/190523140000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558590772147/p1=p0 <dir> /user/hue/.Trash/190523140000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558590772147/p1=p0/p2=420 <dir> /user/hue/.Trash/190523140000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558590772147/p1=p1 <dir> /user/hue/.Trash/190523140000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558590772147/p1=p1/p2=421 <dir> /user/hue/.Trash/190523140000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558591070813 <dir> /user/hue/.Trash/190523140000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558591070813/p1=p0 <dir> 
/user/hue/.Trash/190523140000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558591070813/p1=p0/p2=420 <dir> /user/hue/.Trash/190523140000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558591070813/p1=p1 <dir> /user/hue/.Trash/190523140000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558591070813/p1=p1/p2=421 <dir> /user/hue/.Trash/190523140000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df108721558587650383 <dir> /user/hue/.Trash/190523140000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df108721558589871818 <dir> /user/hue/.Trash/190523140000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df108721558590170843 <dir> /user/hue/.Trash/190523140000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df108721558590470827 <dir> /user/hue/.Trash/190523140000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df108721558590772354 <dir> /user/hue/.Trash/190523140000/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df108721558591070964 <dir> /user/hue/.Trash/Current <dir> /user/hue/.Trash/Current/user <dir> /user/hue/.Trash/Current/user/hue <dir> /user/hue/.Trash/Current/user/hue/.cloudera_manager_hive_metastore_canary <dir> /user/hue/.Trash/Current/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872 <dir> /user/hue/.Trash/Current/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table <dir> /user/hue/.Trash/Current/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table/p1=p0 <dir> /user/hue/.Trash/Current/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table/p1=p0/p2=420 <dir> /user/hue/.Trash/Current/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table/p1=p1 <dir> /user/hue/.Trash/Current/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table/p1=p1/p2=421 <dir> /user/hue/.Trash/Current/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558591670694 <dir> /user/hue/.Trash/Current/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558591670694/p1=p0 <dir> /user/hue/.Trash/Current/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558591670694/p1=p0/p2=420 <dir> /user/hue/.Trash/Current/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558591670694/p1=p1 <dir> /user/hue/.Trash/Current/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558591670694/p1=p1/p2=421 <dir> /user/hue/.Trash/Current/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558591970721 <dir> 
/user/hue/.Trash/Current/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558591970721/p1=p0 <dir> /user/hue/.Trash/Current/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558591970721/p1=p0/p2=420 <dir> /user/hue/.Trash/Current/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558591970721/p1=p1 <dir> /user/hue/.Trash/Current/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558591970721/p1=p1/p2=421 <dir> /user/hue/.Trash/Current/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558592270766 <dir> /user/hue/.Trash/Current/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558592270766/p1=p0 <dir> /user/hue/.Trash/Current/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558592270766/p1=p0/p2=420 <dir> /user/hue/.Trash/Current/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558592270766/p1=p1 <dir> /user/hue/.Trash/Current/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558592270766/p1=p1/p2=421 <dir> /user/hue/.Trash/Current/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558592570728 <dir> /user/hue/.Trash/Current/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558592570728/p1=p0 <dir> /user/hue/.Trash/Current/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558592570728/p1=p0/p2=420 <dir> /user/hue/.Trash/Current/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558592570728/p1=p1 <dir> /user/hue/.Trash/Current/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558592570728/p1=p1/p2=421 <dir> /user/hue/.Trash/Current/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558592870675 <dir> /user/hue/.Trash/Current/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558592870675/p1=p0 <dir> /user/hue/.Trash/Current/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558592870675/p1=p0/p2=420 <dir> /user/hue/.Trash/Current/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558592870675/p1=p1 <dir> /user/hue/.Trash/Current/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df10872/cm_test_table1558592870675/p1=p1/p2=421 <dir> /user/hue/.Trash/Current/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df108721558591370839 <dir> /user/hue/.Trash/Current/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df108721558591670884 <dir> 
/user/hue/.Trash/Current/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df108721558591970864 <dir> /user/hue/.Trash/Current/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df108721558592270913 <dir> /user/hue/.Trash/Current/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df108721558592570869 <dir> /user/hue/.Trash/Current/user/hue/.cloudera_manager_hive_metastore_canary/hive_HIVEMETASTORE_4e4cb72650ff61877f815b077df108721558592870812 <dir> /user/hue/.cloudera_manager_hive_metastore_canary <dir> /user/impala <dir> /user/root <dir> /user/root/.staging <dir> /user/yinzhengjie <dir> /user/yinzhengjie/data <dir> /user/yinzhengjie/data/day001 <dir> /user/yinzhengjie/data/day001/test_input <dir> /user/yinzhengjie/data/day001/test_input/_SUCCESS 0 bytes, 0 block(s): OK /user/yinzhengjie/data/day001/test_input/part-m-00000 500000000 bytes, 4 block(s): OK 0. BP-1230584423-10.1.2.101-1558513980919:blk_1073741990_1166 len=134217728 Live_repl=3 1. BP-1230584423-10.1.2.101-1558513980919:blk_1073741992_1168 len=134217728 Live_repl=3 2. BP-1230584423-10.1.2.101-1558513980919:blk_1073741994_1170 len=134217728 Live_repl=3 3. BP-1230584423-10.1.2.101-1558513980919:blk_1073741996_1172 len=97346816 Live_repl=3 /user/yinzhengjie/data/day001/test_input/part-m-00001 500000000 bytes, 4 block(s): OK 0. BP-1230584423-10.1.2.101-1558513980919:blk_1073741989_1165 len=134217728 Live_repl=3 1. BP-1230584423-10.1.2.101-1558513980919:blk_1073741991_1167 len=134217728 Live_repl=3 2. BP-1230584423-10.1.2.101-1558513980919:blk_1073741993_1169 len=134217728 Live_repl=3 3. BP-1230584423-10.1.2.101-1558513980919:blk_1073741995_1171 len=97346816 Live_repl=3 /user/yinzhengjie/data/day001/test_output <dir> /user/yinzhengjie/data/day001/test_output/_SUCCESS 0 bytes, 0 block(s): OK /user/yinzhengjie/data/day001/test_output/_partition.lst 165 bytes, 1 block(s): Under replicated BP-1230584423-10.1.2.101-1558513980919:blk_1073742006_1182. Target Replicas is 10 but found 4 live replica(s), 0 decommissioned replica(s), 0 decommissioning replica(s). 0. BP-1230584423-10.1.2.101-1558513980919:blk_1073742006_1182 len=165 Live_repl=4 /user/yinzhengjie/data/day001/test_output/part-r-00000 62307000 bytes, 1 block(s): OK 0. BP-1230584423-10.1.2.101-1558513980919:blk_1073742015_1191 len=62307000 Live_repl=1 /user/yinzhengjie/data/day001/test_output/part-r-00001 62782700 bytes, 1 block(s): OK 0. BP-1230584423-10.1.2.101-1558513980919:blk_1073742016_1192 len=62782700 Live_repl=1 /user/yinzhengjie/data/day001/test_output/part-r-00002 61993900 bytes, 1 block(s): OK 0. BP-1230584423-10.1.2.101-1558513980919:blk_1073742017_1193 len=61993900 Live_repl=1 /user/yinzhengjie/data/day001/test_output/part-r-00003 63217700 bytes, 1 block(s): OK 0. BP-1230584423-10.1.2.101-1558513980919:blk_1073742019_1195 len=63217700 Live_repl=1 /user/yinzhengjie/data/day001/test_output/part-r-00004 62628600 bytes, 1 block(s): OK 0. BP-1230584423-10.1.2.101-1558513980919:blk_1073742018_1194 len=62628600 Live_repl=1 /user/yinzhengjie/data/day001/test_output/part-r-00005 62884100 bytes, 1 block(s): OK 0. BP-1230584423-10.1.2.101-1558513980919:blk_1073742020_1196 len=62884100 Live_repl=1 /user/yinzhengjie/data/day001/test_output/part-r-00006 63079700 bytes, 1 block(s): OK 0. BP-1230584423-10.1.2.101-1558513980919:blk_1073742021_1197 len=63079700 Live_repl=1 /user/yinzhengjie/data/day001/test_output/part-r-00007 61421800 bytes, 1 block(s): OK 0. 
BP-1230584423-10.1.2.101-1558513980919:blk_1073742022_1198 len=61421800 Live_repl=1 /user/yinzhengjie/data/day001/test_output/part-r-00008 61319800 bytes, 1 block(s): OK 0. BP-1230584423-10.1.2.101-1558513980919:blk_1073742023_1199 len=61319800 Live_repl=1 /user/yinzhengjie/data/day001/test_output/part-r-00009 61467300 bytes, 1 block(s): OK 0. BP-1230584423-10.1.2.101-1558513980919:blk_1073742025_1201 len=61467300 Live_repl=1 /user/yinzhengjie/data/day001/test_output/part-r-00010 62823400 bytes, 1 block(s): OK 0. BP-1230584423-10.1.2.101-1558513980919:blk_1073742024_1200 len=62823400 Live_repl=1 /user/yinzhengjie/data/day001/test_output/part-r-00011 63392200 bytes, 1 block(s): OK 0. BP-1230584423-10.1.2.101-1558513980919:blk_1073742026_1202 len=63392200 Live_repl=1 /user/yinzhengjie/data/day001/test_output/part-r-00012 62889200 bytes, 1 block(s): OK 0. BP-1230584423-10.1.2.101-1558513980919:blk_1073742027_1203 len=62889200 Live_repl=1 /user/yinzhengjie/data/day001/test_output/part-r-00013 62953000 bytes, 1 block(s): OK 0. BP-1230584423-10.1.2.101-1558513980919:blk_1073742028_1204 len=62953000 Live_repl=1 /user/yinzhengjie/data/day001/test_output/part-r-00014 62072800 bytes, 1 block(s): OK 0. BP-1230584423-10.1.2.101-1558513980919:blk_1073742029_1205 len=62072800 Live_repl=1 /user/yinzhengjie/data/day001/test_output/part-r-00015 62766800 bytes, 1 block(s): OK 0. BP-1230584423-10.1.2.101-1558513980919:blk_1073742030_1206 len=62766800 Live_repl=1 /user/yinzhengjie/data/day001/ts_validate <dir> /user/yinzhengjie/data/day001/ts_validate/_SUCCESS 0 bytes, 0 block(s): OK /user/yinzhengjie/data/day001/ts_validate/part-r-00000 24 bytes, 1 block(s): OK 0. BP-1230584423-10.1.2.101-1558513980919:blk_1073742050_1226 len=24 Live_repl=3 Status: HEALTHY Total size: 2001318792 B (Total open files size: 498 B) Total dirs: 189 Total files: 57 Total symlinks: 0 (Files currently being written: 7) Total blocks (validated): 58 (avg. block size 34505496 B) (Total open file blocks (not validated): 6) Minimally replicated blocks: 58 (100.0 %) Over-replicated blocks: 0 (0.0 %) Under-replicated blocks: 1 (1.7241379 %) Mis-replicated blocks: 0 (0.0 %) Default replication factor: 3 Average block replication: 2.3965516 Corrupt blocks: 0 Missing replicas: 6 (4.137931 %) Number of data-nodes: 4 Number of racks: 1 FSCK ended at Thu May 23 14:30:51 CST 2019 in 8 milliseconds The filesystem under path '/' is HEALTHY [hdfs@node101.yinzhengjie.org.cn ~]$ [hdfs@node101.yinzhengjie.org.cn ~]$
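The only complaint in the output above is the under-replicated _partition.lst, whose target replication of 10 cannot be met by 4 DataNodes. One way to clear the warning is to lower that file's replication factor to something the cluster can actually hold, e.g.:

hdfs dfs -setrep -w 3 /user/yinzhengjie/data/day001/test_output/_partition.lst        #-w waits until the new replication factor is actually reached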
6. Examples of using hdfs with oiv
1>. View the hdfs oiv help
[yinzhengjie@s101 ~]$ hdfs oiv
Usage: bin/hdfs oiv [OPTIONS] -i INPUTFILE -o OUTPUTFILE
Offline Image Viewer
View a Hadoop fsimage INPUTFILE using the specified PROCESSOR,
saving the results in OUTPUTFILE.

The oiv utility will attempt to parse correctly formed image files
and will abort fail with mal-formed image files.

The tool works offline and does not require a running cluster in
order to process an image file.

The following image processors are available:
  * XML: This processor creates an XML document with all elements of
    the fsimage enumerated, suitable for further analysis by XML tools.
  * FileDistribution: This processor analyzes the file size
    distribution in the image.
    -maxSize specifies the range [0, maxSize] of file sizes to be
    analyzed (128GB by default).
    -step defines the granularity of the distribution. (2MB by default)
  * Web: Run a viewer to expose read-only WebHDFS API.
    -addr specifies the address to listen. (localhost:5978 by default)
  * Delimited (experimental): Generate a text file with all of the elements common
    to both inodes and inodes-under-construction, separated by a
    delimiter. The default delimiter is \t, though this may be
    changed via the -delimiter argument.

Required command line arguments:
-i,--inputFile <arg>   FSImage file to process.

Optional command line arguments:
-o,--outputFile <arg>  Name of output file. If the specified
                       file exists, it will be overwritten.
                       (output to stdout by default)
-p,--processor <arg>   Select which type of processor to apply
                       against image file. (XML|FileDistribution|Web|Delimited)
                       (Web by default)
-delimiter <arg>       Delimiting string to use with Delimited processor.
-t,--temp <arg>        Use temporary dir to cache intermediate result to generate
                       Delimited outputs. If not set, Delimited processor constructs
                       the namespace in memory before outputting text.
-h,--help              Display usage information and exit

[yinzhengjie@s101 ~]$
2>. Use the oiv command to inspect a Hadoop fsimage file
[yinzhengjie@s101 ~]$ ll
total 0
drwxrwxr-x. 4 yinzhengjie yinzhengjie 35 May 25 19:08 hadoop
drwxrwxr-x. 2 yinzhengjie yinzhengjie 96 May 25 22:05 shell
[yinzhengjie@s101 ~]$
[yinzhengjie@s101 ~]$ ls -lh /home/yinzhengjie/hadoop/dfs/name/current/ | grep fsimage
-rw-rw-r--. 1 yinzhengjie yinzhengjie 1.2K May 27 06:02 fsimage_0000000000000000767
-rw-rw-r--. 1 yinzhengjie yinzhengjie   62 May 27 06:02 fsimage_0000000000000000767.md5
-rw-rw-r--. 1 yinzhengjie yinzhengjie 1.4K May 27 07:58 fsimage_0000000000000000932
-rw-rw-r--. 1 yinzhengjie yinzhengjie   62 May 27 07:58 fsimage_0000000000000000932.md5
[yinzhengjie@s101 ~]$ hdfs oiv -i ./hadoop/dfs/name/current/fsimage_0000000000000000767 -o yinzhengjie.xml -p XML
[yinzhengjie@s101 ~]$ ll
total 8
drwxrwxr-x. 4 yinzhengjie yinzhengjie   35 May 25 19:08 hadoop
drwxrwxr-x. 2 yinzhengjie yinzhengjie   96 May 25 22:05 shell
-rw-rw-r--. 1 yinzhengjie yinzhengjie 4934 May 27 08:10 yinzhengjie.xml
[yinzhengjie@s101 ~]$
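XML is only one of the processors listed in the help above. FileDistribution, for instance, buckets the files in the image by size, which is a quick way to spot a small-file problem; a sketch against the same fsimage (the output file name is arbitrary):

hdfs oiv -p FileDistribution -i ./hadoop/dfs/name/current/fsimage_0000000000000000767 -o fsimage_distribution.txt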
7. Examples of using hdfs with oev
1>. View the hdfs oev help
[yinzhengjie@s101 ~]$ hdfs oev
Usage: bin/hdfs oev [OPTIONS] -i INPUT_FILE -o OUTPUT_FILE
Offline edits viewer
Parse a Hadoop edits log file INPUT_FILE and save results
in OUTPUT_FILE.

Required command line arguments:
-i,--inputFile <arg>   edits file to process, xml (case insensitive) extension means XML format, any other filename means binary format
-o,--outputFile <arg>  Name of output file. If the specified file exists, it will be overwritten, format of the file is determined by -p option

Optional command line arguments:
-p,--processor <arg>   Select which type of processor to apply against image file, currently supported processors are: binary (native binary format that Hadoop uses), xml (default, XML format), stats (prints statistics about edits file)
-h,--help              Display usage information and exit
-f,--fix-txids         Renumber the transaction IDs in the input, so that there are no gaps or invalid transaction IDs.
-r,--recover           When reading binary edit logs, use recovery mode. This will give you the chance to skip corrupt parts of the edit log.
-v,--verbose           More verbose output, prints the input and output filenames, for processors that write to a file, also output to screen. On large image files this will dramatically increase processing time (default is false).

Generic options supported are
-conf <configuration file>                       specify an application configuration file
-D <property=value>                              use value for given property
-fs <local|namenode:port>                        specify a namenode
-jt <local|resourcemanager:port>                 specify a ResourceManager
-files <comma separated list of files>           specify comma separated files to be copied to the map reduce cluster
-libjars <comma separated list of jars>          specify comma separated jar files to include in the classpath.
-archives <comma separated list of archives>     specify comma separated archives to be unarchived on the compute machines.

The general command line syntax is
bin/hadoop command [genericOptions] [commandOptions]

[yinzhengjie@s101 ~]$
2>. Use the oev command to inspect a Hadoop edits log file
[yinzhengjie@s101 ~]$ ls -lh /home/yinzhengjie/hadoop/dfs/name/current/ | grep edits | tail -5
-rw-rw-r--. 1 yinzhengjie yinzhengjie   42 May 27 08:33 edits_0000000000000001001-0000000000000001002
-rw-rw-r--. 1 yinzhengjie yinzhengjie   42 May 27 08:34 edits_0000000000000001003-0000000000000001004
-rw-rw-r--. 1 yinzhengjie yinzhengjie   42 May 27 08:35 edits_0000000000000001005-0000000000000001006
-rw-rw-r--. 1 yinzhengjie yinzhengjie   42 May 27 08:36 edits_0000000000000001007-0000000000000001008
-rw-rw-r--. 1 yinzhengjie yinzhengjie 1.0M May 27 08:36 edits_inprogress_0000000000000001009
[yinzhengjie@s101 ~]$
[yinzhengjie@s101 ~]$ ll
total 8
drwxrwxr-x. 4 yinzhengjie yinzhengjie   35 May 25 19:08 hadoop
drwxrwxr-x. 2 yinzhengjie yinzhengjie   96 May 25 22:05 shell
-rw-rw-r--. 1 yinzhengjie yinzhengjie 4934 May 27 08:10 yinzhengjie.xml
[yinzhengjie@s101 ~]$
[yinzhengjie@s101 ~]$ hdfs oev -i ./hadoop/dfs/name/current/edits_0000000000000001007-0000000000000001008 -o edits.xml -p XML
[yinzhengjie@s101 ~]$ ll
total 12
-rw-rw-r--. 1 yinzhengjie yinzhengjie  315 May 27 08:39 edits.xml
drwxrwxr-x. 4 yinzhengjie yinzhengjie   35 May 25 19:08 hadoop
drwxrwxr-x. 2 yinzhengjie yinzhengjie   96 May 25 22:05 shell
-rw-rw-r--. 1 yinzhengjie yinzhengjie 4934 May 27 08:10 yinzhengjie.xml
[yinzhengjie@s101 ~]$
[yinzhengjie@s101 ~]$ cat edits.xml
<?xml version="1.0" encoding="UTF-8"?>
<EDITS>
  <EDITS_VERSION>-63</EDITS_VERSION>
  <RECORD>
    <OPCODE>OP_START_LOG_SEGMENT</OPCODE>
    <DATA>
      <TXID>1007</TXID>
    </DATA>
  </RECORD>
  <RECORD>
    <OPCODE>OP_END_LOG_SEGMENT</OPCODE>
    <DATA>
      <TXID>1008</TXID>
    </DATA>
  </RECORD>
</EDITS>
[yinzhengjie@s101 ~]$
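Besides xml, the help above lists a stats processor, which counts how many records of each opcode an edits file contains; a quick way to see what kind of write traffic the NameNode has handled. A sketch against the same edits segment:

hdfs oev -p stats -i ./hadoop/dfs/name/current/edits_0000000000000001007-0000000000000001008 -o edits.stats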
8. Introduction to the hadoop command
As mentioned above, "hadoop fs" is equivalent to "hdfs dfs", but the hadoop command also supports some subcommands that the hdfs command does not. A few examples:
1>. Check the local installation of native compression libraries
[yinzhengjie@s101 ~]$ hadoop checknative
18/05/27 04:40:13 WARN bzip2.Bzip2Factory: Failed to load/initialize native-bzip2 library system-native, will use pure-Java version
18/05/27 04:40:13 INFO zlib.ZlibFactory: Successfully loaded & initialized native-zlib library
Native library checking:
hadoop:  true /soft/hadoop-2.7.3/lib/native/libhadoop.so.1.0.0
zlib:    true /lib64/libz.so.1
snappy:  false
lz4:     true revision:99
bzip2:   false
openssl: false Cannot load libcrypto.so (libcrypto.so: cannot open shared object file: No such file or directory)!
[yinzhengjie@s101 ~]$
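checknative is one such hadoop-only subcommand; two more worth knowing are distcp (parallel inter-cluster copy) and archive (pack many small files into a HAR file). A sketch with made-up cluster addresses and paths:

hadoop distcp hdfs://nn1:8020/src hdfs://nn2:8020/dst                                  #Copy /src from one cluster to another using a MapReduce job
hadoop archive -archiveName data.har -p /user/yinzhengjie data /user/yinzhengjie/har   #Archive /user/yinzhengjie/data into data.har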
2>. Format the NameNode
[root@yinzhengjie ~]# hadoop namenode -format DEPRECATED: Use of this script to execute hdfs command is deprecated. Instead use the hdfs command for it. 18/05/27 17:24:29 INFO namenode.NameNode: STARTUP_MSG: /************************************************************ STARTUP_MSG: Starting NameNode STARTUP_MSG: host = yinzhengjie/211.98.71.195 STARTUP_MSG: args = [-format] STARTUP_MSG: version = 2.7.3 STARTUP_MSG: classpath = /soft/hadoop-2.7.3/etc/hadoop:/soft/hadoop-2.7.3/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/stax-api-1.0-2.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/activation-1.1.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jackson-core-asl-1.9.13.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jackson-mapper-asl-1.9.13.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jackson-jaxrs-1.9.13.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jackson-xc-1.9.13.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jersey-server-1.9.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/asm-3.2.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/log4j-1.2.17.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jets3t-0.9.0.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/httpclient-4.2.5.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/httpcore-4.2.5.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/java-xmlbuilder-0.4.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-lang-2.6.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-configuration-1.6.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-digester-1.8.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/slf4j-api-1.7.10.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/avro-1.7.4.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/paranamer-2.3.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-compress-1.4.1.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/xz-1.0.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/gson-2.2.4.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/hadoop-auth-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/apacheds-i18n-2.0.0-M15.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/api-asn1-api-1.0.0-M20.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/api-util-1.0.0-M20.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/zookeeper-3.4.6.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/netty-3.6.2.Final.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/curator-framework-2.7.1.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/curator-client-2.7.1.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jsch-0.1.42.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/curator-recipes-2.7.1.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/htrace-core-3.1.0-incubating.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/junit-4.11.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/hamcrest-core-1.3.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/mockito-all-1.8.5.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/hadoop-annotations-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/guava-11.0.2.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jsr305-3.0.0.jar:/soft/hado
op-2.7.3/share/hadoop/common/lib/commons-cli-1.2.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-math3-3.1.1.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/xmlenc-0.52.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-httpclient-3.1.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-logging-1.1.3.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-codec-1.4.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-io-2.4.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-net-3.1.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-collections-3.2.2.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/servlet-api-2.5.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jetty-6.1.26.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jetty-util-6.1.26.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jsp-api-2.1.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jersey-core-1.9.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jersey-json-1.9.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jettison-1.1.jar:/soft/hadoop-2.7.3/share/hadoop/common/hadoop-common-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/common/hadoop-common-2.7.3-tests.jar:/soft/hadoop-2.7.3/share/hadoop/common/hadoop-nfs-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/guava-11.0.2.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/jsr305-3.0.0.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-io-2.4.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/jackson-core-asl-1.9.13.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/jackson-mapper-asl-1.9.13.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/asm-3.2.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/htrace-core-3.1.0-incubating.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/netty-all-4.0.23.Final.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/xercesImpl-2.9.1.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/xml-apis-1.3.04.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/leveldbjni-all-1.8.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/hadoop-hdfs-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/hadoop-hdfs-2.7.3-tests.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/hadoop-hdfs-nfs-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/zookeeper-3.4.6-tests.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/commons-lang-2.6.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/guava-11.0.2.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jsr305-3.0.0.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/commons-cli-1.2.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/log4j-1.2.17.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/s
oft/hadoop-2.7.3/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/activation-1.1.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/xz-1.0.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/servlet-api-2.5.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/commons-codec-1.4.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jetty-util-6.1.26.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-core-1.9.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-client-1.9.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jackson-core-asl-1.9.13.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jackson-mapper-asl-1.9.13.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jackson-jaxrs-1.9.13.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jackson-xc-1.9.13.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/guice-3.0.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/javax.inject-1.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/aopalliance-1.0.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/commons-io-2.4.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-server-1.9.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/asm-3.2.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-json-1.9.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jettison-1.1.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jaxb-impl-2.2.3-1.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/zookeeper-3.4.6.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/netty-3.6.2.Final.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/leveldbjni-all-1.8.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/commons-collections-3.2.2.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jetty-6.1.26.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-api-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-common-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-common-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-tests-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-client-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-sharedcachemanager-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-registry-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/jackson-core-asl-1.9.13.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.9.13.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/xz-1.0.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/hadoop-annotations-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/commons-io-2.4.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/soft/
hadoop-2.7.3/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/asm-3.2.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/leveldbjni-all-1.8.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/guice-3.0.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/javax.inject-1.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/junit-4.11.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/hamcrest-core-1.3.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.7.3-tests.jar:/contrib/capacity-scheduler/*.jar:/contrib/capacity-scheduler/*.jar
STARTUP_MSG: build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r baa91f7c6bc9cb92be5982de4719c1c8af91ccff; compiled by 'root' on 2016-08-18T01:41Z
STARTUP_MSG: java = 1.8.0_131
************************************************************/
18/05/27 17:24:29 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
18/05/27 17:24:29 INFO namenode.NameNode: createNameNode [-format]
Formatting using clusterid: CID-36f63542-a60c-46d0-8df1-f8fa32730764
18/05/27 17:24:30 INFO namenode.FSNamesystem: No KeyProvider found.
18/05/27 17:24:30 INFO namenode.FSNamesystem: fsLock is fair:true
18/05/27 17:24:30 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
18/05/27 17:24:30 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
18/05/27 17:24:30 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
18/05/27 17:24:30 INFO blockmanagement.BlockManager: The block deletion will start around 2018 May 27 17:24:30
18/05/27 17:24:30 INFO util.GSet: Computing capacity for map BlocksMap
18/05/27 17:24:30 INFO util.GSet: VM type = 64-bit
18/05/27 17:24:30 INFO util.GSet: 2.0% max memory 889 MB = 17.8 MB
18/05/27 17:24:30 INFO util.GSet: capacity = 2^21 = 2097152 entries
18/05/27 17:24:30 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
18/05/27 17:24:30 INFO blockmanagement.BlockManager: defaultReplication = 1
18/05/27 17:24:30 INFO blockmanagement.BlockManager: maxReplication = 512
18/05/27 17:24:30 INFO blockmanagement.BlockManager: minReplication = 1
18/05/27 17:24:30 INFO blockmanagement.BlockManager: maxReplicationStreams = 2
18/05/27 17:24:30 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
18/05/27 17:24:30 INFO blockmanagement.BlockManager: encryptDataTransfer = false
18/05/27 17:24:30 INFO blockmanagement.BlockManager: maxNumBlocksToLog = 1000
18/05/27 17:24:30 INFO namenode.FSNamesystem: fsOwner = root (auth:SIMPLE)
18/05/27 17:24:30 INFO namenode.FSNamesystem: supergroup = supergroup
18/05/27 17:24:30 INFO namenode.FSNamesystem: isPermissionEnabled = true
18/05/27 17:24:30 INFO namenode.FSNamesystem: HA Enabled: false
18/05/27 17:24:30 INFO namenode.FSNamesystem: Append Enabled: true
18/05/27 17:24:30 INFO util.GSet: Computing capacity for map INodeMap
18/05/27 17:24:30 INFO util.GSet: VM type = 64-bit
18/05/27 17:24:30 INFO util.GSet: 1.0% max memory 889 MB = 8.9 MB
18/05/27 17:24:30 INFO util.GSet: capacity = 2^20 = 1048576 entries
18/05/27 17:24:30 INFO namenode.FSDirectory: ACLs enabled? false
18/05/27 17:24:30 INFO namenode.FSDirectory: XAttrs enabled? true
18/05/27 17:24:30 INFO namenode.FSDirectory: Maximum size of an xattr: 16384
18/05/27 17:24:30 INFO namenode.NameNode: Caching file names occuring more than 10 times
18/05/27 17:24:30 INFO util.GSet: Computing capacity for map cachedBlocks
18/05/27 17:24:30 INFO util.GSet: VM type = 64-bit
18/05/27 17:24:30 INFO util.GSet: 0.25% max memory 889 MB = 2.2 MB
18/05/27 17:24:30 INFO util.GSet: capacity = 2^18 = 262144 entries
18/05/27 17:24:30 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
18/05/27 17:24:30 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
18/05/27 17:24:30 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000
18/05/27 17:24:30 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
18/05/27 17:24:30 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
18/05/27 17:24:30 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
18/05/27 17:24:30 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
18/05/27 17:24:30 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
18/05/27 17:24:30 INFO util.GSet: Computing capacity for map NameNodeRetryCache
18/05/27 17:24:30 INFO util.GSet: VM type = 64-bit
18/05/27 17:24:30 INFO util.GSet: 0.029999999329447746% max memory 889 MB = 273.1 KB
18/05/27 17:24:30 INFO util.GSet: capacity = 2^15 = 32768 entries
18/05/27 17:24:30 INFO namenode.FSImage: Allocated new BlockPoolId: BP-430965362-211.98.71.195-1527467070404
18/05/27 17:24:30 INFO common.Storage: Storage directory /tmp/hadoop-root/dfs/name has been successfully formatted.
18/05/27 17:24:30 INFO namenode.FSImageFormatProtobuf: Saving image file /tmp/hadoop-root/dfs/name/current/fsimage.ckpt_0000000000000000000 using no compression
18/05/27 17:24:30 INFO namenode.FSImageFormatProtobuf: Image file /tmp/hadoop-root/dfs/name/current/fsimage.ckpt_0000000000000000000 of size 351 bytes saved in 0 seconds.
18/05/27 17:24:30 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
18/05/27 17:24:30 INFO util.ExitUtil: Exiting with status 0
18/05/27 17:24:30 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at yinzhengjie/211.98.71.195
************************************************************/
[root@yinzhengjie ~]#
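Operationally, the tail of this output is what matters: a new BlockPoolId was allocated, the metadata directory /tmp/hadoop-root/dfs/name was successfully formatted, an initial fsimage checkpoint was saved, and the process exited with status 0. As a quick sanity check you can look inside the newly created current/ directory; this is just a sketch, assuming the default metadata location shown in the log above:
[root@yinzhengjie ~]# ls /tmp/hadoop-root/dfs/name/current
# A freshly formatted NameNode directory should contain files such as VERSION, seen_txid,
# fsimage_0000000000000000000 and its .md5 checksum file.
[root@yinzhengjie ~]# cat /tmp/hadoop-root/dfs/name/current/VERSION
# The clusterID line here should match the CID-... value printed during the format.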
3>.Running a custom jar
[yinzhengjie@s101 data]$ hadoop jar YinzhengjieMapReduce-1.0-SNAPSHOT.jar cn.org.yinzhengjie.mapreduce.wordcount.WordCountApp /world.txt /out
18/06/13 17:31:45 WARN mapreduce.JobResourceUploader: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
18/06/13 17:31:45 INFO input.FileInputFormat: Total input paths to process : 1
18/06/13 17:31:45 INFO mapreduce.JobSubmitter: number of splits:1
18/06/13 17:31:45 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1528935621892_0001
18/06/13 17:31:46 INFO impl.YarnClientImpl: Submitted application application_1528935621892_0001
18/06/13 17:31:46 INFO mapreduce.Job: The url to track the job: http://s101:8088/proxy/application_1528935621892_0001/
18/06/13 17:31:46 INFO mapreduce.Job: Running job: job_1528935621892_0001
18/06/13 17:31:54 INFO mapreduce.Job: Job job_1528935621892_0001 running in uber mode : false
18/06/13 17:31:54 INFO mapreduce.Job: map 0% reduce 0%
18/06/13 17:32:00 INFO mapreduce.Job: map 100% reduce 0%
18/06/13 17:32:07 INFO mapreduce.Job: map 100% reduce 100%
18/06/13 17:32:08 INFO mapreduce.Job: Job job_1528935621892_0001 completed successfully
18/06/13 17:32:08 INFO mapreduce.Job: Counters: 49
    File System Counters
        FILE: Number of bytes read=1081
        FILE: Number of bytes written=244043
        FILE: Number of read operations=0
        FILE: Number of large read operations=0
        FILE: Number of write operations=0
        HDFS: Number of bytes read=644
        HDFS: Number of bytes written=613
        HDFS: Number of read operations=6
        HDFS: Number of large read operations=0
        HDFS: Number of write operations=2
    Job Counters
        Launched map tasks=1
        Launched reduce tasks=1
        Data-local map tasks=1
        Total time spent by all maps in occupied slots (ms)=3382
        Total time spent by all reduces in occupied slots (ms)=4417
        Total time spent by all map tasks (ms)=3382
        Total time spent by all reduce tasks (ms)=4417
        Total vcore-milliseconds taken by all map tasks=3382
        Total vcore-milliseconds taken by all reduce tasks=4417
        Total megabyte-milliseconds taken by all map tasks=3463168
        Total megabyte-milliseconds taken by all reduce tasks=4523008
    Map-Reduce Framework
        Map input records=1
        Map output records=87
        Map output bytes=901
        Map output materialized bytes=1081
        Input split bytes=91
        Combine input records=0
        Combine output records=0
        Reduce input groups=67
        Reduce shuffle bytes=1081
        Reduce input records=87
        Reduce output records=67
        Spilled Records=174
        Shuffled Maps =1
        Failed Shuffles=0
        Merged Map outputs=1
        GC time elapsed (ms)=205
        CPU time spent (ms)=1570
        Physical memory (bytes) snapshot=363290624
        Virtual memory (bytes) snapshot=4190236672
        Total committed heap usage (bytes)=211574784
    Shuffle Errors
        BAD_ID=0
        CONNECTION=0
        IO_ERROR=0
        WRONG_LENGTH=0
        WRONG_MAP=0
        WRONG_REDUCE=0
    File Input Format Counters
        Bytes Read=553
    File Output Format Counters
        Bytes Written=613
[yinzhengjie@s101 data]$
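Two things are worth noting in this output. First, the WARN on the opening line is Hadoop telling you that the driver class does not implement the Tool interface and is not launched via ToolRunner, so generic options such as -D property=value are not parsed; the job still runs fine, it just cannot accept those options. Second, once the job reports "completed successfully", the word-count results land in the /out directory passed on the command line. A minimal way to inspect them, assuming the default output file naming for a single-reducer job:
[yinzhengjie@s101 data]$ hdfs dfs -ls /out
# Expect a _SUCCESS marker plus one part file per reducer, e.g. part-r-00000.
[yinzhengjie@s101 data]$ hdfs dfs -cat /out/part-r-00000
# Each line is "word<TAB>count"; Reduce output records=67 in the counters above
# means there should be 67 distinct words in the result.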
For more commands related to "hadoop fs", see my notes: http://www.javashuo.com/article/p-muxtevlc-cx.html