Tags: python, pandas, dataframe, mean

How to get the mean for a whole dataframe instead of columns?


How do I get the mean for all of the values (except for NaN) in a pandas dataframe?

pd.DataFrame.mean() only gives the means for each column (or for each row, when setting axis=1), but I want the mean over the whole DataFrame. And df.mean().mean() isn't really a wise option (see below).

Note that in my specific real case the dataframe has a large MultiIndex, which complicates things further. In situations where that does not matter, @EdChum's answer may be the more straightforward choice, and preferable even if it is not the fastest.

Example code

import numpy as np
import pandas as pd

data1 = np.arange(16).reshape(4, 4)
df = pd.DataFrame(data=data1)

df.mean()
0    6.0
1    7.0
2    8.0
3    9.0
dtype: float64

df.mean().mean()
7.5

np.arange(16).mean()
7.5

This works, but if I mask parts of the df (which in reality is a correlation matrix with hundreds of rows and columns, half of which is by its nature filled with redundant data), it gets funny:

triang = np.triu_indices(4)
data2 = np.arange(4., 20.).reshape(4, 4)
data2[triang] = np.nan
df2 = pd.DataFrame(data=data2)

df2.mean().mean()
15.0

But the actual mean of the remaining values, (8. + 12. + 13. + 16. + 17. + 18.) / 6, is 14.

How can I best get the "real" mean, short of writing some kind of loop that does the above by hand?
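
Just for illustration, the kind of manual loop I mean (and would rather avoid) might look roughly like this, using df2 from above:

total, count = 0.0, 0
for col in df2.columns:
    for val in df2[col]:        # iterate over the cells of each column
        if not np.isnan(val):   # skip the masked (NaN) entries
            total += val
            count += 1
total / count  # 14.0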


Solution

  • You can use numpy.nanmean:

    triang = np.triu_indices(4)
    data2 = np.arange(4., 20.).reshape(4, 4)
    data2[triang] = np.nan
    df2 = pd.DataFrame(data=data2)
    
    res = np.nanmean(df2)  # 14.0
    

    Also possible via stack, as described by @EdChum, but slower:

    # enlarge the frame so the timing difference becomes visible
    df2 = pd.concat([df2] * 100000)

    %timeit np.nanmean(df2)              # 14.0 ms
    %timeit df2.stack().dropna().mean()  # 55.7 ms
    
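    For reference, a rough sketch of what the stack-based variant does step by step (it reshapes the frame into a Series and keeps only the valid cells):

    flat = df2.stack()    # reshape into a Series; most pandas versions already drop NaN here
    flat = flat.dropna()  # explicit safeguard, matching the timing line above
    flat.mean()           # 14.0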

    If your data is numeric only, you can also remove the Pandas overhead altogether.
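
    A minimal sketch of what that could look like, assuming the frame really holds nothing but numeric data:

    arr = df2.to_numpy()   # or df2.values on older pandas versions
    np.nanmean(arr)        # 14.0, same result without the DataFrame wrapper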