1. How does Spark load LZO-compressed files?
2. Compare computing over an LZO file loaded via textFile versus LzoTextInputFormat, and the effect on the number of running tasks
a. Make sure an lzo.index file has been generated in the folder that holds the .lzo file
(run the indexer on the LZO-compressed file to produce the lzo.index file; only with the index can the map stage split the file:
hadoop jar ${HADOOP_HOME}/lib/hadoop-lzo.jar com.hadoop.compression.lzo.DistributedLzoIndexer /wh/source/)
b. When the file is processed with LzoTextInputFormat, tasks are assigned according to the number of blocks, as expected
Check the number of blocks in the file (a programmatic version of this check is sketched after the fsck output):
[tech@dx2 ~]$ hdfs fsck /wh/source/hotel.2017-08-07.txt_10.10.10.10_20170807.lzo
Connecting to namenode via http://nn1.zdp.ol:50070
FSCK started by bwtech (auth:SIMPLE) from /10.10.10.10 for path /wh/source/hotel.2017-08-07.txt_10.10.16.105_20170807.lzo at Tue Aug 08 15:27:52 CST 2017
.Status: HEALTHY
 Total size:                    2892666412 B
 Total dirs:                    0
 Total files:                   1
 Total symlinks:                0
 Total blocks (validated):      11 (avg. block size 262969673 B)
 Minimally replicated blocks:   11 (100.0 %)
 Over-replicated blocks:        0 (0.0 %)
 Under-replicated blocks:       0 (0.0 %)
 Mis-replicated blocks:         0 (0.0 %)
 Default replication factor:    3
 Average block replication:     3.0
 Corrupt blocks:                0
 Missing replicas:              0 (0.0 %)
 Number of data-nodes:          21
 Number of racks:               2
FSCK ended at Tue Aug 08 15:27:52 CST 2017 in 3 milliseconds
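Both checks (the lzo.index file from step a and the block count shown by fsck) can also be done with the Hadoop FileSystem API. The following is a minimal sketch, not part of the original post; the object name CheckLzoFile is illustrative, and it assumes the HDFS client configuration (core-site.xml / hdfs-site.xml) is on the classpath:

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}

object CheckLzoFile {
  def main(args: Array[String]): Unit = {
    val lzoPath = new Path("/wh/source/hotel.2017-08-07.txt_10.10.10.10_20170807.lzo")
    // DistributedLzoIndexer writes the index next to the data file, with ".index" appended
    val indexPath = new Path(lzoPath.toString + ".index")

    val fs = FileSystem.get(new Configuration()) // uses fs.defaultFS from the cluster config
    println(s"index file present: ${fs.exists(indexPath)}")

    // block count for the file, same figure that fsck reports (11 here)
    val status = fs.getFileStatus(lzoPath)
    val blocks = fs.getFileBlockLocations(status, 0, status.getLen)
    println(s"HDFS block count: ${blocks.length}")
  }
}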
The Spark source code is available at https://github.com/chocolateBlack/LearningSpark/blob/master/src/main/scala-2.11/SparkLzoFile.scala
import com.hadoop.mapreduce.LzoTextInputFormat
import org.apache.hadoop.io.{Text, LongWritable}
import org.apache.spark.{SparkContext, SparkConf}

object SparkLzoFile {
  def main(args: Array[String]) {
    val conf = new SparkConf().setAppName("Spark_Lzo_File")
    val sc = new SparkContext(conf)

    // path of the input file
    val filePath = "/wh/source/hotel.2017-08-07.txt_10.10.10.10_20170807.lzo"

    // load the file with textFile
    val textFile = sc.textFile(filePath)

    // load the same file with LzoTextInputFormat
    val lzoFile = sc.newAPIHadoopFile[LongWritable, Text, LzoTextInputFormat](filePath)

    println(textFile.partitions.length) // number of partitions: 1
    println(lzoFile.partitions.length)  // number of partitions: 11

    // run word count both ways and compare the running tasks in the web UI
    lzoFile.map(_._2.toString).flatMap(x => x.split("-")).map((_, 1)).reduceByKey(_ + _).collect
    textFile.flatMap(x => x.split("\t")).map((_, 1)).reduceByKey(_ + _).collect
  }
}
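A side note that is not in the original program, offered as a common workaround: when the file is loaded with plain textFile (or has no index), the whole .lzo file lands in a single partition and the first stage runs as one task. Repartitioning after the load restores parallelism for the downstream stages, although the decompression itself still happens in a single task. A minimal spark-shell style sketch, reusing sc and filePath from the program above and the block count of 11 from fsck:

// assumes the same sc and filePath as in SparkLzoFile above
val singlePartition = sc.textFile(filePath)
println(singlePartition.partitions.length) // 1: the .lzo file is read as one non-splittable stream

// shuffle the data into 11 partitions so the word count stage runs with 11 tasks instead of 1
val widened = singlePartition.repartition(11)
widened.flatMap(_.split("\t")).map((_, 1)).reduceByKey(_ + _).collect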