
How to disable calculating with nans while pandas resample().mean() and resample().sum()?


I need to calculate the annual mean from monthly data. If there is a nan value in my monthly data, I want the whole year to be nan as well.

This is my code so far:

station_data = pd.read_csv(station_data_files[0], sep=';', header=0)
station_data = station_data.replace(-999, np.nan)
station_data = station_data.set_index("MESS_DATUM_BEGINN")  # column holding the start dates
station_data.index = pd.to_datetime(station_data.index)     # resample() needs a DatetimeIndex

station_data_anual = pd.DataFrame()
station_data_anual["Y_TT"] = station_data["MO_TT"].resample("A").mean()
station_data_anual["Y_RR"] = station_data["MO_RR"].resample("A").sum()

The problem is that both aggregations silently ignore the NaNs. As a result, the station_data_anual["Y_RR"] values are too low, and for years in which every monthly value is NaN, the sum comes out as 0.
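The behaviour can be reproduced with a tiny made-up series (hypothetical data, not the station file): by default, `resample().sum()` skips NaNs, and a year consisting only of NaNs sums to 0 rather than NaN.

```python
import numpy as np
import pandas as pd

# Two years of monthly data: 2004 has one valid value and eleven NaNs,
# 2005 is entirely NaN.
idx = pd.date_range("2004-01-01", periods=24, freq="MS")
s = pd.Series([66.3] + [np.nan] * 23, index=idx)

annual = s.resample("A").sum()
print(annual)
# 2004 sums to 66.3 (NaNs skipped); the all-NaN year 2005 sums to 0.0
```

This is exactly the 66.3 and 0.0 pattern visible in the output table below.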

Note: there are some questions similar to mine, but they didn't help me.

Some clarifications:

Input Data:

station_data
Out[235]: 
                   STATIONS_ID MESS_DATUM_ENDE  QN_4  ...  MO_RR  MX_RS  eor
MESS_DATUM_BEGINN                                     ...                   
1981-01-01               403.0      1981-01-31  10.0  ...   51.5   10.0  eor
1981-02-01               403.0      1981-02-28  10.0  ...   23.8    5.4  eor
1981-03-01               403.0      1981-03-31  10.0  ...  116.5   28.0  eor
1981-04-01               403.0      1981-04-30  10.0  ...   24.1    9.5  eor
1981-05-01               403.0      1981-05-31  10.0  ...   29.4    8.4  eor
                       ...             ...   ...  ...    ...    ...  ...
2010-08-01               403.0      2010-08-31  10.0  ...    NaN   29.1  eor
2010-09-01               403.0      2010-09-30  10.0  ...    NaN   29.8  eor
2010-10-01               403.0      2010-10-31  10.0  ...    NaN    5.5  eor
2010-11-01               403.0      2010-11-30  10.0  ...    NaN   17.5  eor
2010-12-01               403.0      2010-12-31  10.0  ...    NaN    8.2  eor

[360 rows x 16 columns]

Have a closer look at 2004:

station_data["MO_RR"][276:288]
Out[242]: 
MESS_DATUM_BEGINN
2004-01-01    66.3
2004-02-01     NaN
2004-03-01     NaN
2004-04-01     NaN
2004-05-01     NaN
2004-06-01     NaN
2004-07-01     NaN
2004-08-01     NaN
2004-09-01     NaN
2004-10-01     NaN
2004-11-01     NaN
2004-12-01     NaN
Name: MO_RR, dtype: float64

Output Data:

station_data_anual
Out[238]: 
                        Y_TT   Y_RR
MESS_DATUM_BEGINN                  
...
2003-12-31          9.866667  430.5
2004-12-31          9.620833   66.3
2005-12-31          9.665833    0.0
2006-12-31         10.158333    0.0
2007-12-31         10.555000    0.0
2008-12-31         10.361667    0.0
2009-12-31          9.587500    0.0
2010-12-31          8.207500    0.0

My result has to look like:

                        Y_TT   Y_RR
MESS_DATUM_BEGINN                  
...
2003-12-31          9.866667  430.5
2004-12-31          9.620833    nan    # getting nan instead of 66.3 is especially important
2005-12-31          9.665833    nan
2006-12-31         10.158333    nan
2007-12-31         10.555000    nan
2008-12-31         10.361667    nan
2009-12-31          9.587500    nan
2010-12-31          8.207500    nan

Solution

  • I have never used resampling much, and there may be better solutions out there that simply skip a group based on a condition, but a very simple approach is to pass a custom mean function to resample:

    def very_mean(array_like):
        # Return NaN if any monthly value is missing; otherwise the ordinary mean.
        if array_like.isnull().any():
            return np.nan
        return array_like.mean()
    
    station_data_anual["Y_TT"] = station_data["MO_TT"].resample("A").apply(very_mean)
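The rainfall sum can be handled the same way. A slightly shorter variant (a sketch on made-up data, not the station file) is to let `Series.sum(skipna=False)` do the NaN propagation inside `apply`:

```python
import numpy as np
import pandas as pd

# Hypothetical monthly rainfall: 2004 has one valid month and eleven NaNs,
# 2005 has twelve valid months of 10.0 each.
idx = pd.date_range("2004-01-01", periods=24, freq="MS")
monthly = pd.Series([66.3] + [np.nan] * 11 + [10.0] * 12, index=idx)

# skipna=False makes any NaN in a year turn that year's total into NaN.
annual_sum = monthly.resample("A").apply(lambda x: x.sum(skipna=False))
print(annual_sum)
# 2004-12-31      NaN   (incomplete year -> NaN instead of 66.3)
# 2005-12-31    120.0   (complete year -> ordinary total)
```

If every year is expected to have twelve monthly values, `resample("A").sum(min_count=12)` should give the same result without a custom function, since `min_count` turns any year with fewer than twelve valid values into NaN.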