Filtering a pyspark dataframe using isin by exclusion

I am trying to get all rows within a dataframe where a column's value is not in a given list (i.e. filtering by exclusion).

As an example:

df = sqlContext.createDataFrame(
    [('1', 'a'), ('2', 'b'), ('3', 'b'), ('4', 'c'), ('5', 'd')],
    schema=('id', 'bar'))

I get the DataFrame:

+---+---+
| id|bar|
+---+---+
|  1|  a|
|  2|  b|
|  3|  b|
|  4|  c|
|  5|  d|
+---+---+

I want to exclude the rows where bar is 'a' or 'b'.

Using an SQL expression string, it would be:

df.filter('bar not in ("a","b")').show()

Is there a way of doing it without using the string for the SQL expression, or excluding one item at a time?

Edit:

I am likely to have a list, ['a','b'], of the excluded values that I would like to use.

Tags: python, apache-spark, pyspark, pyspark-sql

Answers:


Here's a gotcha for those with their headspace in Pandas who are moving to PySpark.

import pandas as pd
from pyspark import SparkConf, SparkContext
from pyspark.sql import SQLContext

spark_conf = SparkConf().setMaster("local").setAppName("MyAppName")
sc = SparkContext(conf=spark_conf)
sqlContext = SQLContext(sc)

records = [
    {"colour": "red"},
    {"colour": "blue"},
    {"colour": None},
]

pandas_df = pd.DataFrame.from_dict(records)
pyspark_df = sqlContext.createDataFrame(records)
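
(As an aside: on Spark 2.0 and later the same setup is usually written with SparkSession, the unified entry point. A minimal sketch of the equivalent:)

from pyspark.sql import SparkSession

# SparkSession replaces the SparkConf/SparkContext/SQLContext dance above.
spark = SparkSession.builder.master("local").appName("MyAppName").getOrCreate()
pyspark_df = spark.createDataFrame(records)  # records as defined above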

So if we wanted the rows that are not red:

pandas_df[~pandas_df["colour"].isin(["red"])]

As expected in Pandas: both the blue row and the None row come back, since None is simply not in ["red"].

Looking good. Now the same filter on our PySpark DataFrame:

pyspark_df.filter(~pyspark_df["colour"].isin(["red"])).collect()

Not what I expected: only the blue row comes back, and the None row has vanished.

After some digging, I found https://issues.apache.org/jira/browse/SPARK-20617: isin follows SQL's three-valued logic, so a null compared against the list yields null rather than false, and filter() drops those rows. To include nothingness in our results:

pyspark_df.filter(~pyspark_df["colour"].isin(["red"]) | pyspark_df["colour"].isNull()).show()

much ado about nothing
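
To make the gotcha concrete, here is roughly what the two filters return on the records above (a sketch; row order is not guaranteed):

# Plain negated isin: the None row silently disappears, because
# NULL IN ('red') evaluates to NULL, NOT NULL is still NULL, and
# filter() keeps only rows where the predicate is true.
pyspark_df.filter(~pyspark_df["colour"].isin(["red"])).collect()
# -> [Row(colour='blue')]

# With the explicit isNull() check, the None row comes back.
pyspark_df.filter(
    ~pyspark_df["colour"].isin(["red"]) | pyspark_df["colour"].isNull()
).collect()
# -> [Row(colour='blue'), Row(colour=None)]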


df.filter((df.bar != 'a') & (df.bar != 'b'))
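
On the question's df this gives the expected result (sketched below), but the same null caveat applies: a row where bar is null would be dropped, since null != 'a' evaluates to null.

df.filter((df.bar != 'a') & (df.bar != 'b')).show()
# +---+---+
# | id|bar|
# +---+---+
# |  4|  c|
# |  5|  d|
# +---+---+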

It could also be done like this:

from pyspark.sql.functions import col

df.filter(col('bar').isin(['a','b']) == False).show()
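
The == False comparison works, but the bitwise NOT operator ~ is the more idiomatic PySpark negation, and it takes the exclusion list from the question's Edit directly. A minimal sketch:

from pyspark.sql.functions import col

exclude = ['a', 'b']
df.filter(~col('bar').isin(exclude)).show()
# Keeps rows 4 (c) and 5 (d); remember the null caveat above if bar can be null.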
