[scala] Select Specific Columns from Spark DataFrame

Let's say our parent DataFrame has 'n' columns.

From it we can create 'x' child DataFrames (let's consider 2 in our case).

The columns for a child DataFrame can be chosen as desired from any of the parent DataFrame's columns.

Consider a source with 10 columns that we want to split into 2 DataFrames, each containing columns referenced from the parent DataFrame.

The columns for a child DataFrame are chosen using the select DataFrame API:

val parentDF = spark.read.format("csv").load("/path/to/file.csv")
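
Note that the named columns (col1, col2, ...) used in the next example assume the CSV was read with a header row; without one, Spark assigns the default names _c0, _c1, ... that the later examples rely on. A minimal sketch of both variants, with a hypothetical file path:

// With a header row, column names come from the file's first line
val withHeaderDF = spark.read.option("header", "true").csv("/path/to/file.csv")

// Without a header, Spark assigns default names _c0, _c1, ...
val noHeaderDF = spark.read.csv("/path/to/file.csv")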

// select returns a new DataFrame; show() returns Unit, so don't assign its result
val child1_DF = parentDF.select("col1", "col2", "col3", "col9", "col10")
child1_DF.show()

val child2_DF = parentDF.select("col5", "col6", "col7", "col8", "col1", "col2")
child2_DF.show()

Notice that the column counts of the child DataFrames can differ from each other, and neither can exceed the parent DataFrame's column count.
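
As a quick check of that, assuming the 10-column source above:

parentDF.columns.length  // 10
child1_DF.columns.length // 5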

We can also refer to columns without using their real names, via the positional default names (_c0, _c1, ...) that Spark assigns when the CSV is read without a header.

First import the Spark implicits, which enable the $-notation for referencing columns (used further below):

import spark.implicits._
import org.apache.spark.sql.functions._

val child3_DF = parentDF.select("_c0", "_c1", "_c2", "_c8", "_c9")
child3_DF.show()
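
With the implicits in scope, the same selection can also be written using the $-notation; a small sketch (the val name is just for illustration):

val child3_alt = parentDF.select($"_c0", $"_c1", $"_c2", $"_c8", $"_c9")
child3_alt.show()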

We can also select columns based on a condition. Let's say we want only the even-numbered columns in the child DataFrame; by even we mean even-indexed columns, with the index starting from 0.

val parentColumns = parentDF.columns.toList
// parentColumns: List[String] = List(_c0, _c1, _c2, _c3, _c4, _c5, _c6, _c7, _c8, _c9)

val evenParentColumns = parentColumns.zipWithIndex.filter(_._2 % 2 == 0).map(_._1)
// evenParentColumns: List[String] = List(_c0, _c2, _c4, _c6, _c8)

Now feed these columns into select on the parentDF. Note that this overload of select takes a first column name followed by varargs (select(col: String, cols: String*)), so we pass the head of the list and splat the tail with :_*.

val child4_DF = parentDF.select(evenParentColumns.head, evenParentColumns.tail: _*)
child4_DF.show()

This shows the even-indexed columns from the parent DataFrame:


+-----------+----+----+---+---+
|        _c0| _c2| _c4|_c6|_c8|
+-----------+----+----+---+---+
|ITE00100554|TMAX|null|  E|  1|
| TE00100554|TMIN|null|  E|  4|
|GM000010962|PRCP|null|  E|  7|
+-----------+----+----+---+---+

So now we are left with only the even-indexed columns in the child DataFrame.
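
The same idea generalizes to any predicate on the column names, using the Column-based overload of select (col comes from the functions import above; the predicate here is just an example):

// Keep every column except _c4
val keptColumns = parentDF.columns.filterNot(_ == "_c4").map(col)
val child6_DF = parentDF.select(keptColumns: _*)
child6_DF.show()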

Similarly, we can apply other operations to DataFrame columns inside select, as shown below:

val child5_DF = parentDF.select($"_c0", $"_c8" + 1)
child5_DF.show()
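
Other common column operations work the same way inside select; a brief sketch (the alias names are just for illustration):

val child7_DF = parentDF.select(
  $"_c0".alias("id"),                             // rename a column
  ($"_c8".cast("int") + 1).alias("c8_plus_one"),  // cast, then arithmetic
  upper($"_c1").alias("c1_upper")                 // string function from sql.functions
)
child7_DF.show()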

As shown above, there are many ways to select columns from a DataFrame.
