[python] pandas GroupBy columns with NaN (missing) values

I have a DataFrame with many missing values in columns that I wish to group by:

import pandas as pd
import numpy as np
df = pd.DataFrame({'a': ['1', '2', '3'], 'b': ['4', np.NaN, '6']})

In [4]: df.groupby('b').groups
Out[4]: {'4': [0], '6': [2]}

See that Pandas has dropped the rows with NaN target values. (I want to include these rows!)

Since I need to do many such operations (many columns have missing values) and use more complicated functions than just medians (typically random forests), I want to avoid writing overly complicated code.

Any suggestions? Should I write a function for this or is there a simple solution?


The answer is


pandas >= 1.1

From pandas 1.1 you have better control over this behavior: NA values are now allowed in the grouper using dropna=False:

pd.__version__
# '1.1.0.dev0+2004.g8d10bfb6f'

# Example from the docs
df

   a    b  c
0  1  2.0  3
1  1  NaN  4
2  2  1.0  3
3  1  2.0  2

# without NA (the default)
df.groupby('b').sum()

     a  c
b        
1.0  2  3
2.0  2  5
# with NA
df.groupby('b', dropna=False).sum()

     a  c
b        
1.0  2  3
2.0  2  5
NaN  1  4
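
For reference, here is a self-contained version of the docs example above (the DataFrame construction is inferred from the printed frame, so treat it as an illustrative sketch):

import pandas as pd
import numpy as np

# reconstructed to match the frame printed above
df = pd.DataFrame({'a': [1, 1, 2, 1],
                   'b': [2.0, np.nan, 1.0, 2.0],
                   'c': [3, 4, 3, 2]})

df.groupby('b').sum()                 # NaN group dropped (default)
df.groupby('b', dropna=False).sum()   # NaN group kept (pandas >= 1.1)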

This is mentioned in the Missing Data section of the docs:

NA groups in GroupBy are automatically excluded. This behavior is consistent with R.

One workaround is to use a placeholder before doing the groupby (e.g. -1):

In [11]: df.fillna(-1)
Out[11]: 
   a   b
0  1   4
1  2  -1
2  3   6

In [12]: df.fillna(-1).groupby('b').sum()
Out[12]: 
    a
b    
-1  2
4   1
6   3

That said, this feels like a pretty awful hack... perhaps there should be an option to include NaN in groupby (see this github issue, which uses the same placeholder hack).
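
If you do go the placeholder route, a slightly safer variant is to fill only the grouping column and map the placeholder back to NaN in the result index afterwards. A minimal sketch, assuming 'b' is numeric and -1 never occurs as a real value in it:

import pandas as pd
import numpy as np

df = pd.DataFrame({'a': [1, 2, 3], 'b': [4.0, np.nan, 6.0]})

# fill only the key column, so NaNs elsewhere are untouched
out = df.assign(b=df['b'].fillna(-1)).groupby('b').sum()

# turn the placeholder back into NaN in the result index
out = out.rename(index={-1: np.nan})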

However, as described in another answer, "from pandas 1.1 you have better control over this behavior: NA values are now allowed in the grouper using dropna=False".


One small point about Andy Hayden's solution: it doesn't work (anymore?) because np.nan == np.nan yields False, so the replace function doesn't actually do anything.

What worked for me was this:

df['b'] = df['b'].apply(lambda x: x if not np.isnan(x) else -1)

(At least that's the behavior for Pandas 0.19.2. Sorry to add it as a different answer, I do not have enough reputation to comment.)
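
To illustrate the equality pitfall and this workaround on a small example (a sketch assuming 'b' holds floats, since np.isnan raises a TypeError on strings):

import pandas as pd
import numpy as np

np.nan == np.nan        # False: NaN never compares equal to itself

df = pd.DataFrame({'a': [1, 2, 3], 'b': [4.0, np.nan, 6.0]})
df['b'] = df['b'].apply(lambda x: x if not np.isnan(x) else -1)
df.groupby('b').sum()   # the formerly-NaN row now shows up under -1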


I am not able to add a comment to M. Kiewisch since I do not have enough reputation points (I only have 41 but need more than 50 to comment).

Anyway, I just want to point out that M. Kiewisch's solution does not work as is and may need more tweaking. Consider for example

>>> df = pd.DataFrame({'a': [1, 2, 3, 5], 'b': [4, np.NaN, 6, 4]})
>>> df
   a    b
0  1  4.0
1  2  NaN
2  3  6.0
3  5  4.0
>>> df.groupby(['b']).sum()
     a
b
4.0  6
6.0  3
>>> df.astype(str).groupby(['b']).sum()
      a
b
4.0  15
6.0   3
nan   2

which shows that for group b=4.0, the corresponding value is 15 instead of 6. Here it is just concatenating 1 and 5 as strings instead of adding them as numbers.
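
If you do want the string trick, casting only the key column avoids this problem, because the value columns stay numeric (a sketch, not part of the original answer):

import pandas as pd
import numpy as np

df = pd.DataFrame({'a': [1, 2, 3, 5], 'b': [4, np.NaN, 6, 4]})

# group on a stringified copy of 'b'; 'a' stays numeric, so sum() adds numbers
df.groupby(df['b'].astype(str))['a'].sum()
# b
# 4.0    6
# 6.0    3
# nan    2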


Ancient topic, but in case someone still stumbles over this: another workaround is to convert to string via .astype(str) before grouping. That will preserve the NaNs.

df = pd.DataFrame({'a': ['1', '2', '3'], 'b': ['4', np.NaN, '6']})
df['b'] = df['b'].astype(str)
df.groupby(['b']).sum()
    a
b   
4   1
6   3
nan 2

I answered this already, but for some reason the answer was converted to a comment. Nevertheless, this is the most efficient solution:

Not being able to include (and propagate) NaNs in groups is quite aggravating. Citing R is not convincing, as this behavior is inconsistent with a lot of other things. Anyway, the dummy hack is also pretty bad. However, the size (which includes NaNs) and the count (which ignores NaNs) of a group will differ if there are NaNs.

dfgrouped = df.groupby(['b']).a.agg(['sum','size','count'])

dfgrouped['sum'][dfgrouped['size']!=dfgrouped['count']] = None

When these differ, you can set the value back to None for the result of the aggregation function for that group.
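
Put together on a small example, using .loc to avoid the chained-assignment warning (a sketch; the column names follow the question):

import pandas as pd
import numpy as np

df = pd.DataFrame({'a': [1, 5, np.nan, 7], 'b': [4, 4, 6, 6]})

dfgrouped = df.groupby(['b']).a.agg(['sum', 'size', 'count'])

# size counts NaNs, count does not; where they differ, the group had NaNs in 'a'
dfgrouped.loc[dfgrouped['size'] != dfgrouped['count'], 'sum'] = None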


All answers provided thus far result in potentially dangerous behavior, as it is quite possible you select a dummy value that is actually part of the dataset. This becomes increasingly likely as you create groups with many attributes. Simply put, the approach doesn't always generalize well.

A less hacky solution is to use pd.drop_duplicates() to create a unique index of value combinations, each with its own ID, and then group on that ID. It is more verbose but does get the job done:

def safe_groupby(df, group_cols, agg_dict):
    # set name of group col to unique value
    group_id = 'group_id'
    while group_id in df.columns:
        group_id += 'x'
    # get final order of columns
    agg_col_order = (group_cols + list(agg_dict.keys()))
    # create unique index of grouped values
    group_idx = df[group_cols].drop_duplicates()
    group_idx[group_id] = np.arange(group_idx.shape[0])
    # merge unique index on dataframe
    df = df.merge(group_idx, on=group_cols)
    # group dataframe on group id and aggregate values
    df_agg = df.groupby(group_id, as_index=True)\
               .agg(agg_dict)
    # merge grouped value index to results of aggregation
    df_agg = group_idx.set_index(group_id).join(df_agg)
    # rename index
    df_agg.index.name = None
    # return reordered columns
    return df_agg[agg_col_order]

Note that you can now simply do the following:

from collections import OrderedDict

data_block = [np.tile([None, 'A'], 3),
              np.repeat(['B', 'C'], 3),
              [1] * (2 * 3)]

col_names = ['col_a', 'col_b', 'value']

test_df = pd.DataFrame(data_block, index=col_names).T

grouped_df = safe_groupby(test_df, ['col_a', 'col_b'],
                          OrderedDict([('value', 'sum')]))

This will return the correct result without having to worry about overwriting real data that is mistaken for a dummy value.
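
On pandas >= 1.1 the built-in option covers the same case without the helper (a sketch; the astype call is only needed because the .T construction above leaves every column as object dtype):

result = (test_df.astype({'value': int})
                 .groupby(['col_a', 'col_b'], dropna=False)['value']
                 .sum())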

