# Counting the number of non-NaN elements in a numpy ndarray in Python


I need to calculate the number of non-NaN elements in a numpy ndarray matrix. How would one efficiently do this in Python? Here is my simple code for achieving this:

```python
import numpy as np

def numberOfNonNans(data):
    count = 0
    for i in data:
        if not np.isnan(i):
            count += 1
    return count
```

Is there a built-in function for this in numpy? Efficiency is important because I'm doing Big Data analysis.

Thanks for any help!

This question is tagged with `python` `numpy` `matrix` `nan`

~ Asked on 2014-02-14 11:26:25

### The Best Answer is


```python
np.count_nonzero(~np.isnan(data))
```

`~` inverts the boolean array returned by `np.isnan`.

`np.count_nonzero` counts values that are not 0/False. `.sum()` gives the same result, but `count_nonzero` states the intent more clearly.
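A minimal sketch of the two equivalent forms on a small made-up array (the array contents are illustrative, not from the question):

```python
import numpy as np

# Small array with two NaN entries.
data = np.array([1.0, np.nan, 3.0, np.nan, 5.0])

mask = ~np.isnan(data)         # True where the value is not NaN
print(np.count_nonzero(mask))  # counts True entries -> 3
print(mask.sum())              # booleans sum as 0/1 -> 3
```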

Testing speed:

```python
In [23]: data = np.random.random((10000,10000))

In [24]: data[[np.random.random_integers(0,10000, 100)],:][:, [np.random.random_integers(0,99, 100)]] = np.nan

In [25]: %timeit data.size - np.count_nonzero(np.isnan(data))
1 loops, best of 3: 309 ms per loop

In [26]: %timeit np.count_nonzero(~np.isnan(data))
1 loops, best of 3: 345 ms per loop

In [27]: %timeit data.size - np.isnan(data).sum()
1 loops, best of 3: 339 ms per loop
```

`data.size - np.count_nonzero(np.isnan(data))` seems to be marginally the fastest here; other data might give different relative speeds.
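The subtraction variant can be sketched on a small made-up matrix (the array here is illustrative only): count the NaNs once, then subtract from the total element count.

```python
import numpy as np

data = np.array([[1.0, np.nan],
                 [3.0, 4.0]])

# Total elements minus NaN count gives the non-NaN count.
non_nan = data.size - np.count_nonzero(np.isnan(data))
print(non_nan)  # -> 3
```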

~ Answered on 2014-02-14 11:29:54


# Quick-to-write alternative

Even though it is not the fastest choice, if performance is not an issue you can use:

`sum(~np.isnan(data))`.
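One caveat worth noting (not raised in the original answer): Python's built-in `sum` iterates over the first axis, so on a 2-D array it returns per-column counts rather than a single total, while `np.sum` counts every element. A small sketch with an illustrative array:

```python
import numpy as np

data = np.array([[1.0, np.nan],
                 [3.0, 4.0]])

# Built-in sum iterates over rows, summing boolean arrays
# elementwise -> per-column counts of non-NaN values.
print(sum(~np.isnan(data)))     # -> [2 1]

# np.sum reduces over all elements -> one grand total.
print(np.sum(~np.isnan(data)))  # -> 3
```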

## Performance:

```python
In [7]: %timeit data.size - np.count_nonzero(np.isnan(data))
10 loops, best of 3: 67.5 ms per loop

In [8]: %timeit sum(~np.isnan(data))
10 loops, best of 3: 154 ms per loop

In [9]: %timeit np.sum(~np.isnan(data))
10 loops, best of 3: 140 ms per loop
```

~ Answered on 2017-05-03 09:24:13