HDFS Disk Storage Policies and Reserved Space Configuration

1. HDFS Disk Storage Policies

I. Tagging local storage directories with storage types

  • The data directory is tagged DISK, which backs the HOT policy;
  • The data1 directory is tagged ARCHIVE, which backs the COLD policy;
<property>
    <name>dfs.datanode.data.dir</name>
    <value>[DISK]/opt/beh/data/namenode/dfs/data,[ARCHIVE]/opt/beh/data/namenode/dfs/data1</value>
</property>
  • Restart HDFS
$ stop-dfs.sh
$ start-dfs.sh
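After the restart, you can sanity-check that the DataNode configuration now carries the storage-type tags (hdfs getconf prints the value loaded from hdfs-site.xml):

```shell
# Print the effective value of dfs.datanode.data.dir;
# it should show the [DISK] and [ARCHIVE] tags configured above.
hdfs getconf -confKey dfs.datanode.data.dir
```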

II. Setting the storage policy on HDFS directories

  • List the available HDFS storage policies
$ hdfs storagepolicies -listPolicies
Block Storage Policies:
        BlockStoragePolicy{COLD:2, storageTypes=[ARCHIVE], creationFallbacks=[], replicationFallbacks=[]}
        BlockStoragePolicy{WARM:5, storageTypes=[DISK, ARCHIVE], creationFallbacks=[DISK, ARCHIVE], replicationFallbacks=[DISK, ARCHIVE]}
        BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}
        BlockStoragePolicy{ONE_SSD:10, storageTypes=[SSD, DISK], creationFallbacks=[SSD, DISK], replicationFallbacks=[SSD, DISK]}
        BlockStoragePolicy{ALL_SSD:12, storageTypes=[SSD], creationFallbacks=[DISK], replicationFallbacks=[DISK]}
        BlockStoragePolicy{LAZY_PERSIST:15, storageTypes=[RAM_DISK, DISK], creationFallbacks=[DISK], replicationFallbacks=[DISK]}
  • Create two HDFS directories
$ hadoop fs -mkdir /Cold_data  
$ hadoop fs -mkdir /Hot_data
  • Set a storage policy on each HDFS directory
$ hdfs storagepolicies -setStoragePolicy -path hdfs://breath:9000/Cold_data -policy COLD
Set storage policy COLD on hdfs://breath:9000/Cold_data
$ hdfs storagepolicies -setStoragePolicy -path hdfs://breath:9000/Hot_data -policy HOT
Set storage policy HOT on hdfs://breath:9000/Hot_data
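Note that a storage policy only governs where newly written blocks are placed; blocks that already exist under the directory are not relocated automatically. On Hadoop 2.6 and later, the Mover tool can migrate existing replicas so that they satisfy the policy (a sketch using the directory from this example):

```shell
# Scan /Cold_data and move its existing block replicas onto
# volumes that match the directory's storage policy (COLD -> ARCHIVE).
hdfs mover -p /Cold_data
```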
  • Verify that the storage policy of each directory is correct
$ hdfs storagepolicies -getStoragePolicy -path /Cold_data
The storage policy of /Cold_data:
BlockStoragePolicy{COLD:2, storageTypes=[ARCHIVE], creationFallbacks=[], replicationFallbacks=[]}
$ hdfs storagepolicies -getStoragePolicy -path /Hot_data 
The storage policy of /Hot_data:
BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}

III. Storage test

  • Check the size of the storage directories before uploading any file
$ cd /opt/beh/data/namenode/dfs
$ du -sh *
38M     data
16K     data1
30M     name
14M     namesecondary
  • Generate a 1000M file
$  dd if=/dev/zero of=test.txt bs=1000M count=1
 
1+0 records in
1+0 records out
1048576000 bytes (1.0 GB) copied, 3.11214 s, 337 MB/s
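As a sanity check, the byte count reported by dd matches bs=1000M, since dd's M suffix means MiB (1024 * 1024 bytes):

```shell
# 1000M in dd's binary units: 1000 * 1024 * 1024 bytes
echo $((1000 * 1024 * 1024))
```

This prints 1048576000, the figure shown in the dd output above.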
  • Upload the generated file to the /Cold_data directory
$ hadoop fs -put test.txt /Cold_data
  • Check the storage directory sizes after the upload
$ du -sh *
38M     data
1008M   data1
30M     name
14M     namesecondary

IV. Test results

The uploaded file was stored entirely under the data1 directory.

Because /Cold_data on HDFS is assigned the COLD policy, which maps to the ARCHIVE-tagged data1 directory in hdfs-site.xml, the file landed where expected and the test achieved its goal.

2. HDFS Reserved Space Configuration

I. Modifying the parameters

  • Edit the hdfs-site.xml configuration file and add/adjust the following parameters
<property>
    <name>dfs.datanode.du.reserved</name>
    <value>32212254720</value>
</property>

<property>
    <name>dfs.datanode.data.dir</name>
    <value>[ARCHIVE]/opt/beh/data/namenode/dfs/data</value>
</property>
  • Notes

Setting dfs.datanode.du.reserved to 32212254720 reserves 30G of space for non-DFS use;

dfs.datanode.data.dir was changed to keep only a single local storage directory.
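The reserved value is given in bytes; 30G expressed in bytes is exactly the figure configured above:

```shell
# dfs.datanode.du.reserved is in bytes: 30 GiB = 30 * 1024^3
echo $((30 * 1024 * 1024 * 1024))
```

This prints 32212254720.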

  • Restart HDFS

$ stop-dfs.sh
$ start-dfs.sh

II. Uploading files

  • Check the disk space
$ df -h   
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root   46G   14G   32G   31% /
devtmpfs                 7.8G     0  7.8G    0% /dev
tmpfs                    7.8G     0  7.8G    0% /dev/shm
tmpfs                    7.8G  8.5M  7.8G    1% /run
tmpfs                    7.8G     0  7.8G    0% /sys/fs/cgroup
/dev/vda1                497M  125M  373M   25% /boot
tmpfs                    1.6G     0  1.6G    0% /run/user/0
tmpfs                    1.6G     0  1.6G    0% /run/user/1000
  • Upload files to HDFS, one 2G file at a time
$ hadoop fs -put test1.txt /Cold_data/test1.txt 
$ hadoop fs -put test1.txt /Cold_data/test2.txt 
...
$ hadoop fs -put test1.txt /Cold_data/test7.txt
$ hadoop fs -put test1.txt /Cold_data/test8.txt
16/11/12 16:30:54 INFO hdfs.DFSClient: Exception in createBlockOutputStream
java.io.EOFException: Premature EOF: no length prefix available
        at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2239)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1451)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1373)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:600)
16/11/12 16:30:54 INFO hdfs.DFSClient: Abandoning BP-456596110-192.168.134.129-1450512233024:blk_1073744076_3254
16/11/12 16:30:54 INFO hdfs.DFSClient: Excluding datanode DatanodeInfoWithStorage[10.10.1.31:50010,DS-01c3c362-44f4-46eb-a8d8-57d2c2d5f196,ARCHIVE]
16/11/12 16:30:54 WARN hdfs.DFSClient: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /Cold_data/test8.txt._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1).  There are 1 datanode(s) running and 1 node(s) are excluded in this operation.
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1541)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3289)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:668)
        at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.addBlock(AuthorizationProviderProxyClientProtocol.java:212)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:483)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1060)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2044)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2040)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2038)

        at org.apache.hadoop.ipc.Client.call(Client.java:1468)
        at org.apache.hadoop.ipc.Client.call(Client.java:1399)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
        at com.sun.proxy.$Proxy9.addBlock(Unknown Source)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:399)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
        at com.sun.proxy.$Proxy10.addBlock(Unknown Source)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1544)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1361)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:600)
put: File /Cold_data/test8.txt._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1).  There are 1 datanode(s) running and 1 node(s) are excluded in this operation.
  • Analysis

At this point the data directory /opt/beh/data/namenode/dfs is sized as follows:

$ cd /opt/beh/data/namenode/dfs
$ du -sh *
15G     data
12K     data1
34M     name
19M     namesecondary
  • Check the disk space at this point
$ df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root   46G   27G   19G   59% /
devtmpfs                 7.8G     0  7.8G    0% /dev
tmpfs                    7.8G     0  7.8G    0% /dev/shm
tmpfs                    7.8G  8.5M  7.8G    1% /run
tmpfs                    7.8G     0  7.8G    0% /sys/fs/cgroup
/dev/vda1                497M  125M  373M   25% /boot
tmpfs                    1.6G     0  1.6G    0% /run/user/0
tmpfs                    1.6G     0  1.6G    0% /run/user/1000

III. Summary


  1. The error indicates that the reserved-space setting took effect. However, the df output shows that the free space remaining on the local filesystem is not equal to the reserved space configured for HDFS;
  2. HDFS counts the usable storage of a data directory as the total capacity of the disk that directory lives on (here the / filesystem, 46G), not as the free space of the directory itself.
  • How the actual remaining HDFS space is computed:

Total capacity of the disk holding the data directory (46G) - total space already used by HDFS (15G) = 31G

With the reserved space set to 30G, HDFS therefore has only 1G of usable space left, so uploading another 2G file produces the error above.

Because this test used storage on the / filesystem directly, other non-HDFS data also occupies part of the disk. When HDFS data directories map one-to-one to dedicated disks, HDFS stops writing new data to a disk once its remaining free space is roughly equal to the configured reserved value.
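Following the reasoning above, the numbers from this test line up (a rough sketch in whole gigabytes; HDFS itself accounts in bytes):

```shell
# Disk capacity seen by HDFS, minus space HDFS already uses,
# minus the reserved space (all in GB, taken from this test).
capacity=46
dfs_used=15
reserved=30
echo $((capacity - dfs_used - reserved))
```

This prints 1, the single gigabyte left for HDFS, which is why the next 2G upload fails.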
