We configured HDFS as the deep storage for druid.io, but submitting an index task failed.
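For context, the deep-storage settings for a Druid deployment of this era live in common.runtime.properties; a minimal sketch is shown below. The storage directory matches the HDFS location in the log, while the extension coordinate and version are assumptions for a typical 0.8.x setup (this coordinate mechanism is what populates the extensions-repo directory mentioned later):

    # common.runtime.properties (extension version is an assumption; adjust to your deployment)
    druid.extensions.coordinates=["io.druid.extensions:druid-hdfs-storage:0.8.3"]
    druid.storage.type=hdfs
    druid.storage.storageDirectory=hdfs://tt1.masiah.test/tmp/druid/RemoteStorage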
The error was as follows:
2016-03-25T01:57:15,917 INFO [task-runner-0] io.druid.storage.hdfs.HdfsDataSegmentPusher - Copying segment[wikipedia_2013-08-31T00:00:00.000Z_2013-09-01T00:00:00.000Z_2016-03-25T01:57:07.729Z] to HDFS at location[hdfs://tt1.masiah.test/tmp/druid/RemoteStorage/wikipedia/20130831T000000.000Z_20130901T000000.000Z/2016-03-25T01_57_07.729Z/0]
2016-03-25T01:57:15,919 WARN [task-runner-0] io.druid.indexing.common.index.YeOldePlumberSchool - Failed to merge and upload
java.io.IOException: No FileSystem for scheme: hdfs
    at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2304) ~[?:?]
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2311) ~[?:?]
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:90) ~[?:?]
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2350) ~[?:?]
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2332) ~[?:?]
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:369) ~[?:?]
    at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296) ~[?:?]
    at io.druid.storage.hdfs.HdfsDataSegmentPusher.push(HdfsDataSegmentPusher.java:83) ~[?:?]
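The "No FileSystem for scheme: hdfs" exception is thrown by Hadoop's FileSystem.getFileSystemClass when no implementation class can be found for the hdfs:// scheme; that implementation, org.apache.hadoop.hdfs.DistributedFileSystem, normally ships in hadoop-hdfs-<version>.jar. A minimal standalone reproduction, assuming a Hadoop 2.x client on the classpath:

    import java.net.URI;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;

    public class HdfsSchemeCheck
    {
      public static void main(String[] args) throws Exception
      {
        Configuration conf = new Configuration();
        // hdfs:// resolves to org.apache.hadoop.hdfs.DistributedFileSystem, shipped in
        // hadoop-hdfs-<version>.jar. If that jar is missing or corrupt on the classpath,
        // the call below fails with java.io.IOException: No FileSystem for scheme: hdfs.
        FileSystem fs = FileSystem.get(new URI("hdfs://tt1.masiah.test"), conf);
        System.out.println("Resolved filesystem: " + fs.getClass().getName());
      }
    }

If this small program fails the same way outside Druid, the problem lies in the Hadoop jars themselves rather than in the Druid configuration.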
The root cause is that when the index task starts, the middleManager node (if the overlord runs in local mode, no separate middleManager node needs to be configured; the overlord provides the middleManager functionality internally) fails while loading the HDFS jar.
The fix is to replace the jar. First, stop the middleManager node.
Locate the cached artifacts inside your Druid installation and run rm -rf extensions-repo/org/apache/hadoop/hadoop-hdfs/* to delete everything under that directory. Then restart the middleManager node; on startup it re-fetches extensions-repo/org/apache/hadoop/hadoop-hdfs/2.3.0/hadoop-hdfs-2.3.0.jar. A corrupt copy of this jar caused the problem, so re-downloading it is the fix, as shown in the sketch below.
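Put together, the procedure looks like this; the installation path and the restart command are illustrative assumptions based on a common 0.8.x-era deployment, so adapt them to your own setup:

    cd /opt/druid    # your Druid installation root (path is illustrative)
    # 1. Stop the running middleManager process first.
    # 2. Remove the locally cached (corrupt) hadoop-hdfs artifacts.
    rm -rf extensions-repo/org/apache/hadoop/hadoop-hdfs/*
    # 3. Restart the middleManager; on startup it re-downloads
    #    extensions-repo/org/apache/hadoop/hadoop-hdfs/2.3.0/hadoop-hdfs-2.3.0.jar.
    java -cp "config/_common:config/middleManager:lib/*" io.druid.cli.Main server middleManager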
After restarting, the problem was gone.