[apache-spark] How to find count of Null and Nan values for each column in a PySpark dataframe efficiently?

import numpy as np

# Example DataFrame: 'id2' contains both None (null) and NaN values
df = spark.createDataFrame(
    [(1, 1, None), (1, 2, float(5)), (1, 3, np.nan), (1, 4, None),
     (1, 5, float(10)), (1, 6, float('nan')), (1, 6, float('nan'))],
    ('session', 'timestamp1', 'id2'))

Expected output

dataframe with count of nan/null for each column
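For the example data above, that would look something like (id2 has 2 nulls and 3 NaNs):

+-------+----------+---+
|session|timestamp1|id2|
+-------+----------+---+
|      0|         0|  5|
+-------+----------+---+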

Note: The previous questions I found on Stack Overflow only check for null, not NaN. That's why I have created a new question.

I know I can use the isnull() function in Spark to find the number of null values in a Spark column, but how do I find NaN values in a Spark DataFrame?

This question is related to: apache-spark, pyspark, apache-spark-sql, pyspark-sql

The answer is
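One common single-pass approach (a sketch, not part of the answers below) counts NaNs and nulls for every column in a single job; it assumes the checked columns are numeric, as in the example DataFrame, since isnan() is only defined for numeric types:

import pyspark.sql.functions as F

# when() yields the column name for missing rows and null otherwise;
# count() counts only the non-null results, i.e. the missing rows
df.select([
    F.count(F.when(F.isnan(c) | F.col(c).isNull(), c)).alias(c)
    for c in df.columns
]).show()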


An alternative to the already provided ways is to simply filter on the column, like so:

import pyspark.sql.functions as F

df = df.where(F.col('columnNameHere').isNull())

This has the added benefit that you don't have to add another column to do the filtering, and it's quick on larger datasets.
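With the same import, counting the missing values in one numeric column (here the example column 'id2') becomes a plain filter-and-count:

import pyspark.sql.functions as F

# Count rows where 'id2' is null OR NaN; isnan() applies to float/double
n_missing = df.where(F.col('id2').isNull() | F.isnan('id2')).count()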


For null values in a PySpark DataFrame (a NaN-aware variant is sketched after the output below):

Dict_Null = {col:df.filter(df[col].isNull()).count() for col in df.columns}
Dict_Null

# The output is a dict where each key is a column name and the value
# is the number of null values in that column

{'#': 0,
 'Name': 0,
 'Type 1': 0,
 'Type 2': 386,
 'Total': 0,
 'HP': 0,
 'Attack': 0,
 'Defense': 0,
 'Sp_Atk': 0,
 'Sp_Def': 0,
 'Speed': 0,
 'Generation': 0,
 'Legendary': 0}
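A NaN-aware variant of the same dictionary (a sketch; isnan() is restricted here to the float/double columns where it is defined, and note that each .count() launches a separate Spark job):

import pyspark.sql.functions as F

numeric_cols = [c for c, t in df.dtypes if t in ('float', 'double')]
Dict_Missing = {c: df.filter(F.col(c).isNull() | F.isnan(c)).count()
                for c in numeric_cols}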

I prefer this solution:

from pyspark.sql.functions import count

df = spark.table(selected_table).filter(condition)

counter = df.count()

df = df.select([(counter - count(c)).alias(c) for c in df.columns])
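Note that count() skips nulls but not NaN, which is an ordinary non-null double value, so this variant reports nulls only. Applied to the example DataFrame from the question (a sketch with no filter condition):

from pyspark.sql.functions import count

counter = df.count()  # 7 rows in the example DataFrame
df.select([(counter - count(c)).alias(c) for c in df.columns]).show()
# +-------+----------+---+
# |session|timestamp1|id2|
# +-------+----------+---+
# |      0|         0|  2|
# +-------+----------+---+
# 'id2' reports only the two Nones; the three NaNs count as non-null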

To make sure it does not fail for string, date, and timestamp columns:

import pyspark.sql.functions as F

def count_missings(spark_df, sort=True):
    """
    Counts the number of nulls and NaNs in each column, skipping
    timestamp, string, and date columns (where isnan is not applicable)
    """
    df = spark_df.select([
        F.count(F.when(F.isnan(c) | F.isnull(c), c)).alias(c)
        for (c, c_type) in spark_df.dtypes
        if c_type not in ('timestamp', 'string', 'date')
    ]).toPandas()

    if len(df) == 0:
        print("There are no missing values!")
        return None

    if sort:
        return df.rename(index={0: 'count'}).T.sort_values("count", ascending=False)

    return df
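If the excluded column types should still be checked for plain nulls, a possible variant (a sketch, not part of the answer above) applies the NaN test only where it is defined:

import pyspark.sql.functions as F

def count_missings_all(spark_df):
    """Count nulls in every column; add the NaN check only for float/double."""
    exprs = []
    for c, c_type in spark_df.dtypes:
        cond = (F.isnan(c) | F.isnull(c)) if c_type in ('double', 'float') \
            else F.isnull(c)
        exprs.append(F.count(F.when(cond, c)).alias(c))
    return spark_df.select(exprs)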

If you want to see the columns sorted based on the number of NaNs and nulls in descending order:

count_missings(spark_df)

# | Col_A | 10 |
# | Col_C | 2  |
# | Col_B | 1  | 

If you don't want ordering and want to see them as a single row:

count_missings(spark_df, False)
# | Col_A | Col_B | Col_C |
# |  10   |   1   |   2   |

Here is my one-liner, where 'c' is the name of the column:

import pyspark.sql.functions as F

df.select('c').withColumn('isNull_c', F.col('c').isNull()).where('isNull_c = True').count()
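The intermediate column isn't strictly needed; an equivalent count can be taken directly from a filter:

import pyspark.sql.functions as F

df.where(F.col('c').isNull()).count()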
