Routine Maintenance of a Hadoop Cluster

1. Back up the namenode metadata

The metadata on the namenode is critical: if it is lost or corrupted, the whole filesystem becomes unusable. Back up the metadata regularly, preferably to an off-site location.

1) Copy the metadata to a remote site

(1) The following script copies the metadata held by the secondary namenode into a directory named after the current timestamp, then sends it to another machine with scp.

#!/bin/bash 
# Snapshot the secondary namenode's current metadata into a directory
# named after the current hour, then ship it to the backup host.
dirname=/mnt/tmphadoop/dfs/namesecondary/current/$(date +%y%m%d%H)
if [ ! -d "${dirname}" ]
then 
    mkdir "${dirname}"
    cp /mnt/tmphadoop/dfs/namesecondary/current/* "${dirname}"
fi 
# Send the snapshot off-host, then remove the local copy.
scp -r "${dirname}" slave1:/mnt/namenode_backup/ 
rm -r "${dirname}"

(2) Configure crontab to run this job on a schedule (here: at 00:00, 08:00, 14:00, and 20:00 every day):
0 0,8,14,20 * * * bash /mnt/scripts/namenode_backup_script.sh

2) On the remote site, start a local namenode daemon and try to load the backed-up files, to confirm that the backup was made correctly.
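
A minimal sketch of this verification on the backup host, assuming Hadoop 1.x property names (the snapshot directory below stands for whichever backup you want to test):

# In a scratch configuration, point:
#   dfs.name.dir      at an empty throwaway directory
#   fs.checkpoint.dir at /mnt/namenode_backup/<snapshot dir>
# then start a namenode that imports the checkpoint; it will refuse to
# start if the backed-up image or edit log is corrupt.
hadoop namenode -importCheckpoint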

2. Data backup

Do not rely on HDFS alone for important data; keep separate backups, and note the following points:
(1) Back up off-site whenever possible.
(2) If you use distcp to back up to another HDFS cluster, do not run the same Hadoop version on both clusters, so that a bug in Hadoop itself cannot corrupt both copies; see the sketch below.
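
As a sketch (host names, ports, and paths are placeholders): between clusters running the same version, distcp can copy from hdfs:// to hdfs://; between different versions it must read over the version-independent HFTP interface, and the command should run on the destination cluster:

# HFTP reads go through the source namenode's HTTP port (50070 by default):
hadoop distcp hftp://src-namenode:50070/user/important \
              hdfs://dst-namenode:8020/backup/important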

3. Filesystem checks

Regularly run HDFS's fsck tool over the entire filesystem to proactively look for missing or corrupt blocks. Running it once a day is recommended.

[jediael@master ~]$ hadoop fsck / 
 …… output omitted (errors, if any, are printed here; otherwise only dots appear, one dot per file) …… 
 .........Status: HEALTHY 
  Total size:    14466494870 B 
  Total dirs:    502 
  Total files:   1592 (Files currently being written: 2) 
  Total blocks (validated):      1725 (avg. block size 8386373 B) 
  Minimally replicated blocks:   1725 (100.0 %) 
  Over-replicated blocks:        0 (0.0 %) 
  Under-replicated blocks:       648 (37.565216 %) 
  Mis-replicated blocks:         0 (0.0 %) 
  Default replication factor:    2 
  Average block replication:     2.0 
  Corrupt blocks:                0 
  Missing replicas:              760 (22.028986 %) 
  Number of data-nodes:          2 
  Number of racks:               1 
 FSCK ended at Sun Mar 01 20:17:57 CST 2015 in 608 milliseconds 
    
 The filesystem under path '/' is HEALTHY 
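
To make the daily run automatic, a crontab entry along these lines (paths hypothetical) keeps a dated report for later inspection:

# Run fsck nightly at 01:00; note that % must be escaped in crontab.
0 1 * * * /usr/bin/hadoop fsck / > /mnt/logs/fsck_$(date +\%Y\%m\%d).log 2>&1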
 

(1) If dfs.replication in hdfs-site.xml is set to 3 while there are in fact only 2 datanodes, errors like the following appear when fsck runs:
/hbase/Mar0109_webpage/59ad1be6884739c29d0624d1d31a56d9/il/43e6cd4dc61b49e2a57adf0c63921c09: Under replicated blk_-4711857142889323098_6221. Target Replicas is 3 but found 2 replica(s).

Note: dfs.replication was originally 3; after one datanode was taken offline, dfs.replication was changed to 2, but files created earlier still carry a replication factor of 3. This produces the error above and accounts for "Under-replicated blocks: 648 (37.565216 %)".
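
One way to clear these warnings, sketched below, is to rewrite the replication factor recorded on the existing files so that it matches the new setting:

# Recursively set the per-file replication factor to 2; -w waits until
# the blocks actually reach the target replication before returning.
hadoop fs -setrep -w 2 -R /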

(2) The fsck tool can also be used to check which blocks make up a file, where those blocks are located, and so on:

[jediael@master conf]$ hadoop fsck /hbase/Feb2621_webpage/c23aa183c7cb86af27f15d4c2aee2795/s/30bee5fb620b4cd184412c69f70d24a7 -files -blocks -racks 
FSCK started by jediael from /10.171.29.191 for path /hbase/Feb2621_webpage/c23aa183c7cb86af27f15d4c2aee2795/s/30bee5fb620b4cd184412c69f70d24a7 at Sun Mar 01 20:39:35 CST 2015 
/hbase/Feb2621_webpage/c23aa183c7cb86af27f15d4c2aee2795/s/30bee5fb620b4cd184412c69f70d24a7 21507169 bytes, 1 block(s):  Under replicated blk_7117944555454804881_3655. Target Replicas is 3 but found 2 replica(s). 
 0. blk_7117944555454804881_3655 len=21507169 repl=2 [/default-rack/10.171.94.155:50010, /default-rack/10.251.0.197:50010] 
    
 Status: HEALTHY 
  Total size:    21507169 B 
  Total dirs:    0 
  Total files:   1 
  Total blocks (validated):      1 (avg. block size 21507169 B) 
  Minimally replicated blocks:   1 (100.0 %) 
  Over-replicated blocks:        0 (0.0 %) 
  Under-replicated blocks:       1 (100.0 %) 
  Mis-replicated blocks:         0 (0.0 %) 
  Default replication factor:    2 
  Average block replication:     2.0 
  Corrupt blocks:                0 
  Missing replicas:              1 (50.0 %) 
  Number of data-nodes:          2 
  Number of racks:               1 
 FSCK ended at Sun Mar 01 20:39:35 CST 2015 in 0 milliseconds 
    
   
 The filesystem under path '/hbase/Feb2621_webpage/c23aa183c7cb86af27f15d4c2aee2795/s/30bee5fb620b4cd184412c69f70d24a7' is HEALTHY

The usage of this command is as follows:

[jediael@master ~]$ hadoop fsck -files 
 Usage: DFSck <path> [-move | -delete | -openforwrite] [-files [-blocks [-locations | -racks]]] 
         <path>  start checking from this path 
         -move   move corrupted files to /lost+found 
         -delete delete corrupted files 
         -files  print out files being checked 
         -openforwrite   print out files opened for write 
         -blocks print out block report 
         -locations      print out locations for every block 
         -racks  print out network topology for data-node locations 
                 By default fsck ignores files opened for write, use -openforwrite to report such files. They are usually  tagged CORRUPT or HEALTHY depending on their block allocation status 
 Generic options supported are 
 -conf <configuration file>     specify an application configuration file 
-D <property=value>            use value for given property 
 -fs <local|namenode:port>      specify a namenode 
 -jt <local|jobtracker:port>    specify a job tracker 
 -files <comma separated list of files>    specify comma separated files to be copied to the map reduce cluster 
 -libjars <comma separated list of jars>    specify comma separated jar files to include in the classpath. 
 -archives <comma separated list of archives>    specify comma separated archives to be unarchived on the compute machines. 
    
The general command line syntax is 
bin/hadoop command [genericOptions] [commandOptions]
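
For example, the -move and -delete options listed above act on any corrupt files the check finds:

# Move corrupt files to /lost+found on HDFS:
hadoop fsck / -move
# Or delete them outright (irreversible):
hadoop fsck / -delete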

For a detailed explanation, see "Hadoop: The Definitive Guide", p. 376.

