In this tutorial you will learn the basic shell operations of HDFS.

Hadoop's HDFS operations use a very simple syntax:

```
hadoop fs xxx
```

i.e., every command is introduced by `hadoop fs`.
Anyone with a bit of Linux experience knows that to see a command's usage, you can simply type the command by itself and the system will print its usage documentation. For example, let's look at the information for hadoop fs:
```
[root@h133 ~]# hadoop fs
Usage: hadoop fs [generic options]
	[-appendToFile <localsrc> ... <dst>]
	[-cat [-ignoreCrc] <src> ...]
	[-checksum <src> ...]
	[-chgrp [-R] GROUP PATH...]
	[-chmod [-R] <MODE[,MODE]... | OCTALMODE> PATH...]
	[-chown [-R] [OWNER][:[GROUP]] PATH...]
	[-copyFromLocal [-f] [-p] [-l] <localsrc> ... <dst>]
	[-copyToLocal [-p] [-ignoreCrc] [-crc] <src> ... <localdst>]
	[-count [-q] [-h] <path> ...]
	[-cp [-f] [-p | -p[topax]] <src> ... <dst>]
	[-createSnapshot <snapshotDir> [<snapshotName>]]
	[-deleteSnapshot <snapshotDir> <snapshotName>]
	[-df [-h] [<path> ...]]
	[-du [-s] [-h] <path> ...]
	[-expunge]
	[-find <path> ... <expression> ...]
	[-get [-p] [-ignoreCrc] [-crc] <src> ... <localdst>]
	[-getfacl [-R] <path>]
	[-getfattr [-R] {-n name | -d} [-e en] <path>]
	[-getmerge [-nl] <src> <localdst>]
	[-help [cmd ...]]
	[-ls [-d] [-h] [-R] [<path> ...]]
	[-mkdir [-p] <path> ...]
	[-moveFromLocal <localsrc> ... <dst>]
	[-moveToLocal <src> <localdst>]
	[-mv <src> ... <dst>]
	[-put [-f] [-p] [-l] <localsrc> ... <dst>]
	[-renameSnapshot <snapshotDir> <oldName> <newName>]
	[-rm [-f] [-r|-R] [-skipTrash] <src> ...]
	[-rmdir [--ignore-fail-on-non-empty] <dir> ...]
	[-setfacl [-R] [{-b|-k} {-m|-x <acl_spec>} <path>]|[--set <acl_spec> <path>]]
	[-setfattr {-n name [-v value] | -x name} <path>]
	[-setrep [-R] [-w] <rep> <path> ...]
	[-stat [format] <path> ...]
	[-tail [-f] <file>]
	[-test -[defsz] <path>]
	[-text [-ignoreCrc] <src> ...]
	[-touchz <path> ...]
	[-truncate [-w] <length> <path> ...]
	[-usage [cmd ...]]

Generic options supported are
-conf <configuration file>     specify an application configuration file
-D <property=value>            use value for given property
-fs <local|namenode:port>      specify a namenode
-jt <local|resourcemanager:port>    specify a ResourceManager
-files <comma separated list of files>    specify comma separated files to be copied to the map reduce cluster
-libjars <comma separated list of jars>    specify comma separated jar files to include in the classpath.
-archives <comma separated list of archives>    specify comma separated archives to be unarchived on the compute machines.

The general command line syntax is
bin/hadoop command [genericOptions] [commandOptions]
```
For more detailed information, use the help command:
```
[root@h133 ~]# hadoop fs -help
```
This command builds on the previous output by explaining every option in detail.
1. -help: view the detailed documentation of a command

```
# hadoop fs -help
```
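As the `-help [cmd ...]` usage line above suggests, you can also request the documentation of just one command. A minimal sketch:

```
# Show the detailed documentation for the -ls command only.
hadoop fs -help ls
```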
2. -ls: list directory contents; add the -R option to list recursively
```
[root@h133 ~]# hadoop fs -ls /user
Found 1 items
drwxr-xr-x   - root supergroup          0 2019-01-03 02:03 /user/zhaoyi
[root@h133 ~]# hadoop fs -ls -R /user
drwxr-xr-x   - root supergroup          0 2019-01-03 02:03 /user/zhaoyi
drwxr-xr-x   - root supergroup          0 2019-01-03 02:05 /user/zhaoyi/input
-rw-r--r--   3 root supergroup  212046774 2019-01-03 02:05 /user/zhaoyi/input/hadoop-2.7.2.tar.gz
-rw-r--r--   3 root supergroup         17 2019-01-03 02:05 /user/zhaoyi/input/zhaoyi.txt
```
3. -mkdir: create a directory; add the -p option to create parent directories recursively
```
[root@h133 ~]# hadoop fs -mkdir -p /user/yuanyong/test
[root@h133 ~]# hadoop fs -ls /user
Found 2 items
drwxr-xr-x   - root supergroup          0 2019-01-03 03:36 /user/yuanyong
drwxr-xr-x   - root supergroup          0 2019-01-03 02:03 /user/zhaoyi
```
4. -moveFromLocal: move a file from the local file system to HDFS
```
[root@h133 ~]# touch new.file
[root@h133 ~]# ls
1  anaconda-ks.cfg  new.file
[root@h133 ~]# hadoop fs -moveFromLocal new.file /user/yuanyong/test
[root@h133 ~]# ls
1  anaconda-ks.cfg
[root@h133 ~]# hadoop fs -ls -R /user/yuanyong/test
-rw-r--r--   3 root supergroup          0 2019-01-03 03:40 /user/yuanyong/test/new.file
```
Here we created a local file new.file and moved it to the /user/yuanyong/test path with this command. Listing both the local directory and the HDFS directory confirms that the file was moved.
5. -moveToLocal: move a file from HDFS to the local file system
```
[root@h133 ~]# hadoop fs -moveToLocal /user/yuanyong/test/new.file ./
moveToLocal: Option '-moveToLocal' is not implemented yet.
```
This command is not implemented yet. To achieve the same result, simply download the file and then delete it from HDFS.
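A minimal sketch of that workaround, reusing the new.file path from above:

```
# Download the file to the current local directory...
hadoop fs -get /user/yuanyong/test/new.file ./
# ...then remove it from HDFS to complete the "move".
hadoop fs -rm /user/yuanyong/test/new.file
```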
6. -appendToFile: append the contents of a local file to a file in HDFS
```
[root@h133 ~]# touch something.txt
[root@h133 ~]# vi something.txt
[root@h133 ~]# cat something.txt
this is append info.
[root@h133 ~]# hadoop fs -appendToFile something.txt /user/yuanyong/test/new.file
[root@h133 ~]# hadoop fs -cat /user/yuanyong/test/new.file
this is append info.
```
7. -cat: display the contents of a file
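For example, printing the file we appended to above; as the usage shows, -cat also accepts multiple paths and prints them one after another:

```
# Print the contents of an HDFS file to stdout.
hadoop fs -cat /user/yuanyong/test/new.file
```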
8. -tail: display the end of a file, much like the Linux tail command
```
[root@h133 ~]# hadoop fs -tail -f /user/yuanyong/test/new.file
this is append info.
```
9. -chgrp/-chmod/-chown: behave the same as their Linux counterparts.
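A minimal sketch; the group hadoopgrp and user hadoopusr are hypothetical names, so substitute ones that exist on your cluster:

```
# Recursively change the group of a directory tree (hypothetical group).
hadoop fs -chgrp -R hadoopgrp /user/yuanyong/test
# Change permissions, using the same octal syntax as Linux chmod.
hadoop fs -chmod 755 /user/yuanyong/test/new.file
# Change owner and group in one step (hypothetical owner).
hadoop fs -chown hadoopusr:hadoopgrp /user/yuanyong/test/new.file
```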
10. -copyFromLocal: used the same way as -moveFromLocal, except that it copies.
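A minimal sketch, assuming the local something.txt created earlier:

```
# Copy a local file into HDFS; unlike -moveFromLocal, the local copy is kept.
hadoop fs -copyFromLocal something.txt /user/yuanyong/test
```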
11. -copyToLocal: used the same way as -moveToLocal, except that it copies. Unlike -moveToLocal, this one actually runs.
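For example, reusing the path from above:

```
# Copy an HDFS file to the current local directory; the HDFS copy is kept.
hadoop fs -copyToLocal /user/yuanyong/test/new.file ./
```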
12. -cp: copy from one HDFS path to another HDFS path
```
[root@h133 ~]# hadoop fs -cp /user/yuanyong/test/new.file /user
[root@h133 ~]# hadoop fs -cat /user/new.file
this is append info.
```
13. -mv: move from one HDFS path to another HDFS path; used the same way as -cp.
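A minimal sketch (renamed.file is a hypothetical target name):

```
# Move (rename) a file entirely within HDFS.
hadoop fs -mv /user/new.file /user/yuanyong/test/renamed.file
```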
As you can see, many of these commands are thin wrappers over their Linux equivalents, sometimes with identical behavior. What you need to keep straight is simply the context: whether a path refers to the local file system or to HDFS.
14. -getmerge: merge HDFS files and download them as a single local file (Get all the files in the directories that match the source file pattern and merge and sort them to only one file on local fs.).
```
[root@h133 ~]# hadoop fs -ls -R /user/zhaoyi/input
-rw-r--r--   3 root supergroup         29 2019-01-03 06:12 /user/zhaoyi/input/a.txt
-rw-r--r--   3 root supergroup         21 2019-01-03 06:12 /user/zhaoyi/input/b.txt
-rw-r--r--   3 root supergroup         24 2019-01-03 06:12 /user/zhaoyi/input/c.txt
[root@h133 ~]# hadoop fs -getmerge /user/zhaoyi/ abc
[root@h133 ~]# hadoop fs -getmerge /user/zhaoyi/input abc
[root@h133 ~]# cat abc
this is a text file content.
this is b file text.
this is c file content.
```
Before testing, put a few files in the input directory; here I placed three. When merging, I named the merged local file abc. The file contents are arbitrary.
If you specify a directory followed by a *, the command merges all files under that directory, including those in its subdirectories.
```
[root@h133 ~]# hadoop fs -put c.txt /user/zhaoyi/input/input2
[root@h133 ~]# hadoop fs -getmerge /user/zhaoyi/input/* abc
[root@h133 ~]# cat abc
this is a text file content.
this is b file text.
this is c file content.
this is c file content.
[root@h133 ~]# hadoop fs -getmerge /user/zhaoyi/input abc
[root@h133 ~]# cat abc
this is a text file content.
this is b file text.
this is c file content.
```
15. -put: upload a file from the local file system to HDFS.
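We already used it in the -getmerge example above; a minimal sketch:

```
# Upload a local file into an HDFS directory; -f overwrites an existing target.
hadoop fs -put -f something.txt /user/yuanyong/test
```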
16. -rm: delete files
```
-rm [-f] [-r|-R] [-skipTrash] <src> ... :
  Delete all files that match the specified file pattern. Equivalent to the Unix
  command "rm <src>"

  -skipTrash  option bypasses trash, if enabled, and immediately deletes <src>
  -f          If the file does not exist, do not display a diagnostic message or
              modify the exit status to reflect an error.
  -[rR]       Recursively deletes directories
```
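For example, a sketch reusing paths from earlier steps:

```
# Delete a single file; with trash enabled it is moved to the trash first.
hadoop fs -rm /user/new.file
# Delete a directory tree recursively, bypassing the trash entirely.
hadoop fs -rm -r -skipTrash /user/yuanyong/test
```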
17. -rmdir: delete an empty directory
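A minimal sketch (/user/empty is a hypothetical directory; -rmdir fails if the directory is not empty):

```
hadoop fs -mkdir /user/empty
hadoop fs -rmdir /user/empty
```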
18. -df: report the file system's free-space statistics, consistent with the Linux command.
```
[root@h133 ~]# hadoop fs -df -h
Filesystem         Size   Used  Available  Use%
hdfs://h133:8020  51.0 G  180 K     44.3 G    0%
```
The -h option formats sizes in human-readable units.
19. -du: report directory usage, consistent with the Linux command.
```
[root@h133 ~]# hadoop fs -du /
140  /user
```
The -h option can be used here as well.
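For example, a sketch combining -s (one summary line per argument) with -h:

```
# Human-readable, summarized usage of the /user tree.
hadoop fs -du -s -h /user
```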
20. -count: count the directories, files, and bytes under a given path. The output columns are DIR_COUNT, FILE_COUNT, CONTENT_SIZE, and PATHNAME.
```
[root@h133 ~]# hadoop fs -count /user/zhaoyi/input
           2            4                 98 /user/zhaoyi/input
```
21. -setrep: set the replication factor of a file in HDFS
```
[root@h133 ~]# hadoop fs -setrep 2 /user/zhaoyi/input/a.txt
Replication 2 set: /user/zhaoyi/input/a.txt
```
You can now check this file's replication factor through the web UI; it has been set to 2.
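You can also verify it from the shell with -stat, whose %r format specifier prints the replication factor (a sketch based on the `-stat [format]` usage shown earlier):

```
# Should print 2 after the -setrep call above.
hadoop fs -stat %r /user/zhaoyi/input/a.txt
```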