One needs to create a schema and attach it to the RDD.
Assuming val spark is the product of a SparkSession.builder...
import org.apache.spark._
import org.apache.spark.sql._
import org.apache.spark.sql.types._
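For context, such a spark value might be built roughly as follows (the appName and local[*] master are illustrative assumptions, not part of the original setup):
val spark = SparkSession.builder()
  .appName("rddToDataFrameExample")   // hypothetical application name
  .master("local[*]")                 // assumption: a local session
  .getOrCreate()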
/* Let's gin up some sample data.
 * As RDDs and DataFrames can have columns of differing types, let's make our
 * sample data a three-wide, two-tall rectangle of mixed types:
 * a column of Strings, a column of Longs, and a column of Doubles.
 */
val arrayOfArrayOfAnys = Array.ofDim[Any](2,3)
arrayOfArrayOfAnys(0)(0)="aString"
arrayOfArrayOfAnys(0)(1)=0L
arrayOfArrayOfAnys(0)(2)=3.14159
arrayOfArrayOfAnys(1)(0)="bString"
arrayOfArrayOfAnys(1)(1)=9876543210L
arrayOfArrayOfAnys(1)(2)=2.71828
/* The way to convert anything that looks rectangular
 * (Array[Array[String]], Array[Array[Any]], Array[Row], ...) into an RDD is to
 * throw it into sparkContext.parallelize.
 * http://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.SparkContext shows
 * the parallelize definition as
 *   def parallelize[T](seq: Seq[T], numSlices: Int = defaultParallelism)
 * so in our case the arrayOfArrayOfAnys is treated as a sequence of Array[Any]s.
 * We'll leave numSlices at the defaultParallelism, as there is no particular cause to change it.
 */
val rddOfArrayOfArrayOfAnys=spark.sparkContext.parallelize(arrayOfArrayOfAnys)
/* We'll be using sqlContext.createDataFrame to add a schema to our RDD.
 * The RDD that goes into createDataFrame must be an RDD[Row], which is not what we happen to have.
 * To convert anything one tall and several wide into a Row, use Row.fromSeq(thatThing.toSeq).
 * As we have an RDD[somethingWeDontWant], we can map each of the RDD rows into the desired Row type.
 */
val rddOfRows=rddOfArrayOfArrayOfAnys.map(f=>
  Row.fromSeq(f.toSeq)
)
/* Now to construct our schema. This needs to be a StructType with one StructField per column in our dataframe.
 * https://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.sql.types.StructField shows the definition as
 *   case class StructField(name: String, dataType: DataType, nullable: Boolean = true, metadata: Metadata = Metadata.empty)
 * We'll leave the two default values in place for each of the columns:
 *   nullability as true,
 *   metadata as Metadata.empty.
 */
val schema = StructType(
  StructField("colOfStrings", StringType) ::
  StructField("colOfLongs"  , LongType  ) ::
  StructField("colOfDoubles", DoubleType) ::
  Nil
)
val df=spark.sqlContext.createDataFrame(rddOfRows,schema)
df.show
/* df.show displays:
 * +------------+----------+------------+
 * |colOfStrings|colOfLongs|colOfDoubles|
 * +------------+----------+------------+
 * |     aString|         0|     3.14159|
 * |     bString|9876543210|     2.71828|
 * +------------+----------+------------+
 */
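To confirm the schema really got attached with the expected types, printSchema can be run against the result:
df.printSchema
/*
 * root
 *  |-- colOfStrings: string (nullable = true)
 *  |-- colOfLongs: long (nullable = true)
 *  |-- colOfDoubles: double (nullable = true)
 */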
Same steps, but with fewer val declarations:
val arrayOfArrayOfAnys=Array(
  Array("aString", 0L         , 3.14159),
  Array("bString", 9876543210L, 2.71828)
)
val rddOfRows=spark.sparkContext.parallelize(arrayOfArrayOfAnys).map(f=>Row.fromSeq(f.toSeq))
/* If one knows the datatypes in advance, for instance from JDBC queries against the RDBMS column metadata,
 * consider constructing the schema from an Array[StructField]. This allows looping over
 * the columns, with a match statement supplying the appropriate SQL datatype as the second
 * StructField argument. (A sketch of that looping approach follows after this example.)
 */
val sf=new Array[StructField](3)
sf(0)=StructField("colOfStrings",StringType)
sf(1)=StructField("colOfLongs" ,LongType )
sf(2)=StructField("colOfDoubles",DoubleType)
val df=spark.sqlContext.createDataFrame(rddOfRows,StructType(sf.toList))
df.show
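As a sketch of that looping-with-a-match approach: assume, purely for illustration, that the column names and their JDBC type names have already been pulled out of the metadata as (name, typeName) pairs. The Array[StructField] can then be built in one pass:
// Hypothetical column metadata, e.g. gathered beforehand from ResultSetMetaData.
val columnMeta = Array(("colOfStrings","VARCHAR"), ("colOfLongs","BIGINT"), ("colOfDoubles","DOUBLE"))
val sfFromMeta = columnMeta.map{ case (name, jdbcType) =>
  val sparkType = jdbcType match {
    case "VARCHAR" => StringType
    case "BIGINT"  => LongType
    case "DOUBLE"  => DoubleType
    case _         => StringType   // fall back to String for anything unrecognised
  }
  StructField(name, sparkType)
}
val dfFromMeta = spark.sqlContext.createDataFrame(rddOfRows, StructType(sfFromMeta))
dfFromMeta.show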