Creating an RDD from a collection
def parallelize[T](seq: Seq[T], numSlices: Int = defaultParallelism)(implicit arg0: ClassTag[T]): RDD[T]
Creates an RDD from a Seq collection.
Parameter 1: the Seq collection (required).
Parameter 2: the number of partitions; defaults to the number of CPU cores allocated to the Application.
- scala> var rdd = sc.parallelize(1 to 10)
- rdd: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[2] at parallelize at <console>:21
-
- scala> rdd.collect
- res3: Array[Int] = Array(1, 2, 3, 4, 5, 6, 7, 8, 9, 10)
-
- scala> rdd.partitions.size
- res4: Int = 15
-
- // Create the RDD with 3 partitions
- scala> var rdd2 = sc.parallelize(1 to 10,3)
- rdd2: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[3] at parallelize at <console>:21
-
- scala> rdd2.collect
- res5: Array[Int] = Array(1, 2, 3, 4, 5, 6, 7, 8, 9, 10)
-
- scala> rdd2.partitions.size
- res6: Int = 3
-
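To inspect how the elements were distributed across the 3 partitions, glom can collect each partition into its own array. A minimal sketch (the exact split depends on how Spark slices the range):
- scala> rdd2.glom().collect
- // expected: one inner array per partition, roughly Array(Array(1, 2, 3), Array(4, 5, 6), Array(7, 8, 9, 10))
-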
def makeRDD[T](seq: Seq[T], numSlices: Int = defaultParallelism)(implicit arg0: ClassTag[T]): RDD[T]
This usage is exactly the same as parallelize, as the sketch below shows.
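A minimal sketch, equivalent to the rdd2 example above:
- scala> var rdd3 = sc.makeRDD(1 to 10, 3)
- scala> rdd3.partitions.size
- // expected: 3
-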
def makeRDD[T](seq: Seq[(T, Seq[String])])(implicit arg0: ClassTag[T]): RDD[T]
This variant lets you specify the preferredLocations of each partition.
- scala> var collect = Seq((1 to 10,Seq("slave007.lxw1234.com","slave002.lxw1234.com")),
- (11 to 15,Seq("slave013.lxw1234.com","slave015.lxw1234.com")))
- collect: Seq[(scala.collection.immutable.Range.Inclusive, Seq[String])] = List((Range(1, 2, 3, 4, 5, 6, 7, 8, 9, 10),
- List(slave007.lxw1234.com, slave002.lxw1234.com)), (Range(11, 12, 13, 14, 15),List(slave013.lxw1234.com, slave015.lxw1234.com)))
-
- scala> var rdd = sc.makeRDD(collect)
- rdd: org.apache.spark.rdd.RDD[scala.collection.immutable.Range.Inclusive] = ParallelCollectionRDD[6] at makeRDD at <console>:23
-
- scala> rdd.partitions.size
- res33: Int = 2
-
- scala> rdd.preferredLocations(rdd.partitions(0))
- res34: Seq[String] = List(slave007.lxw1234.com, slave002.lxw1234.com)
-
- scala> rdd.preferredLocations(rdd.partitions(1))
- res35: Seq[String] = List(slave013.lxw1234.com, slave015.lxw1234.com)
-
Specifying preferred locations for the partitions helps the scheduler optimize subsequent task placement.
Creating an RDD from external storage
- // Create from an HDFS file
- scala> var rdd = sc.textFile("hdfs:///tmp/lxw1234/1.txt")
- rdd: org.apache.spark.rdd.RDD[String] = MapPartitionsRDD[26] at textFile at <console>:21
-
- scala> rdd.count
- res48: Long = 4
-
- // Create from a local file
- scala> var rdd = sc.textFile("file:///etc/hadoop/conf/core-site.xml")
- rdd: org.apache.spark.rdd.RDD[String] = MapPartitionsRDD[28] at textFile at <console>:21
-
- scala> rdd.count
- res49: Long = 97
-
Note that a local file path used here must exist on both the Driver and the Executors.
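textFile also takes an optional minPartitions argument to request a minimum number of partitions. A minimal sketch, reusing the hypothetical HDFS path from above:
- scala> var rdd = sc.textFile("hdfs:///tmp/lxw1234/1.txt", 4)
- scala> rdd.partitions.size
- // at least 4, depending on the input splits
-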
Other SparkContext methods for creating RDDs from external storage include (see the sketch after this list):
hadoopFile
sequenceFile
objectFile
newAPIHadoopFile
hadoopRDD
newAPIHadoopRDD
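As a minimal sketch of two of these (the paths are hypothetical), objectFile reads back an RDD previously written with saveAsObjectFile, and sequenceFile reads a Hadoop SequenceFile, here assumed to hold String keys and Int values:
- scala> sc.parallelize(1 to 10).saveAsObjectFile("hdfs:///tmp/lxw1234/obj")
- scala> var objRDD = sc.objectFile[Int]("hdfs:///tmp/lxw1234/obj")
-
- scala> var seqRDD = sc.sequenceFile[String, Int]("hdfs:///tmp/lxw1234/seq")
-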
For example, creating an RDD from HBase:
- scala> import org.apache.hadoop.hbase.{HBaseConfiguration, HTableDescriptor, TableName}
- import org.apache.hadoop.hbase.{HBaseConfiguration, HTableDescriptor, TableName}
-
- scala> import org.apache.hadoop.hbase.mapreduce.TableInputFormat
- import org.apache.hadoop.hbase.mapreduce.TableInputFormat
-
- scala> import org.apache.hadoop.hbase.client.HBaseAdmin
- import org.apache.hadoop.hbase.client.HBaseAdmin
-
- scala> val conf = HBaseConfiguration.create()
- scala> conf.set(TableInputFormat.INPUT_TABLE,"lxw1234")
- scala> var hbaseRDD = sc.newAPIHadoopRDD(
- conf,classOf[org.apache.hadoop.hbase.mapreduce.TableInputFormat],classOf[org.apache.hadoop.hbase.io.ImmutableBytesWritable],classOf[org.apache.hadoop.hbase.client.Result])
-
- scala> hbaseRDD.count
- res52: Long = 1
-
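hbaseRDD contains (ImmutableBytesWritable, Result) pairs; a minimal sketch of pulling the row keys out of the Result objects (cell values could be read similarly with Result.getValue):
- scala> import org.apache.hadoop.hbase.util.Bytes
- scala> hbaseRDD.map { case (_, result) => Bytes.toString(result.getRow) }.collect
- // returns the row keys of the scanned table as Strings
-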