I am currently using xarray to make probability maps. I want to use a statistical assessment like a "counting" exercise: for all data points in NEU, count how many times both variables jointly exceed their threshold, i.e. the 1st percentile of the precipitation data and the 99th percentile of the temperature data. The probability (P) of joint occurrence is then simply the number of joint exceedances divided by the number of data points in the dataset.
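In other words, for each gridpoint with N timesteps:

P = (number of timesteps where both exceedances occur) / N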
<xarray.Dataset>
Dimensions: (latitude: 88, longitude: 200, time: 6348)
Coordinates:
* latitude (latitude) float64 49.62 49.88 50.12 50.38 ... 70.88 71.12 71.38
* longitude (longitude) float64 -9.875 -9.625 -9.375 ... 39.38 39.62 39.88
* time (time) datetime64[ns] 1950-06-01 1950-06-02 ... 2018-08-31
Data variables:
rr (time, latitude, longitude) float32 dask.array<chunksize=(6348, 88, 200), meta=np.ndarray>
tx (time, latitude, longitude) float32 dask.array<chunksize=(6348, 88, 200), meta=np.ndarray>
Ellipsis float64 0.0
I want to calculate the percentile of both precipitation and temperature for each gridpoint, which basically means I want to repeat the function below for every gridpoint.
Neu_Percentile = np.nanpercentile(NEU.rr[:, 0, 0], 1)
Can anyone help me out with this problem? I also tried to use xr.apply_ufunc, but unfortunately it didn't work out well.
I'm not sure how you want to process the quantiles, but here is a version which you may be able to adapt.
Also, I chose to keep the dataset structure when computing the quantiles, as it shows how to retrieve the values of the outliers, if that is ever relevant (and it is one step away from retrieving the values of valid data points, which likely is).
import numpy as np
import xarray as xr

# Synthetic dataset standing in for NEU
coords = ("time", "latitude", "longitude")
sizes = (500, 80, 120)
ds = xr.Dataset(
    coords={c: np.arange(s) for c, s in zip(coords, sizes)},
    data_vars=dict(
        precipitation=(coords, np.random.randn(*sizes)),
        temperature=(coords, np.random.randn(*sizes)),
    ),
)
View of the data:
<xarray.Dataset>
Dimensions: (latitude: 80, longitude: 120, time: 500)
Coordinates:
* time (time) int64 0 1 2 3 ... 496 497 498 499
* latitude (latitude) int64 0 1 2 3 ... 76 77 78 79
* longitude (longitude) int64 0 1 2 3 ... 117 118 119
Data variables:
precipitation (time, latitude, longitude) float64 -1.673 ... -0.3323
temperature (time, latitude, longitude) float64 -0.331 ... -0.03728
# Compute the 0.1 and 0.9 quantiles of each variable over the analysis dims
qt_dims = ("latitude", "longitude")
qt_values = (0.1, 0.9)
ds_qt = ds.quantile(qt_values, dim=qt_dims)
The result is a Dataset in which the analysis dimensions ("latitude", "longitude") have been reduced away and a new "quantile" dimension has been added:
<xarray.Dataset>
Dimensions: (quantile: 2, time: 500)
Coordinates:
* time (time) int64 0 1 2 3 ... 496 497 498 499
* quantile (quantile) float64 0.1 0.9
Data variables:
precipitation (quantile, time) float64 -1.305 ... 1.264
temperature (quantile, time) float64 -1.267 ... 1.254
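Note that with qt_dims = ("latitude", "longitude") the thresholds vary in time rather than per gridpoint. If, as in the question, you want one threshold per gridpoint computed over the whole time series, a minimal sketch (an adaptation, not what the rest of this answer uses) would reduce over "time" instead:

# One threshold pair per gridpoint, mirroring the question's percentiles
ds_qt_grid = ds.quantile((0.01, 0.99), dim="time")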
For the locations of outliers (edit: using np.logical_and, which is more readable than the & operator):
da_outliers_loc = np.logical_and(
    ds.precipitation > ds_qt.precipitation.sel(quantile=qt_values[0]),
    ds.temperature > ds_qt.temperature.sel(quantile=qt_values[1]),
)
The output is a boolean DataArray:
<xarray.DataArray (time: 500, latitude: 80, longitude: 120)>
array([[[False, ...]]])
Coordinates:
* time (time) int64 0 1 2 3 4 ... 496 497 498 499
* latitude (latitude) int64 0 1 2 3 4 ... 75 76 77 78 79
* longitude (longitude) int64 0 1 2 3 ... 116 117 118 119
And if the values themselves are ever relevant:
ds_outliers = ds.where(
    (ds.precipitation > ds_qt.precipitation.sel(quantile=qt_values[0]))
    & (ds.temperature > ds_qt.temperature.sel(quantile=qt_values[1]))
)
outliers_count = da_outliers_loc.sum(dim=qt_dims)
Finally, here is the resulting DataArray, with only a time dimension and, as values, the number of outliers at each timestamp.
<xarray.DataArray (time: 500)>
array([857, ...])
Coordinates:
* time (time) int64 0 1 2 3 4 ... 495 496 497 498 499
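To get back to the probability map from the question (per-gridpoint thresholds computed over time, then the fraction of timestamps with a joint exceedance), here is a minimal sketch, assuming that "exceeding" the 1st percentile of precipitation means falling below it (flip the comparison if not):

rr_thr = ds.precipitation.quantile(0.01, dim="time")  # per-gridpoint 1st percentile
tx_thr = ds.temperature.quantile(0.99, dim="time")    # per-gridpoint 99th percentile
joint = np.logical_and(ds.precipitation < rr_thr, ds.temperature > tx_thr)
prob_map = joint.mean(dim="time")  # P = joint exceedances / number of timesteps

The resulting prob_map has only ("latitude", "longitude") dimensions, i.e. one probability per gridpoint.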