I have a rollapply function that does something very simple, but over a million data points this simple function is quite slow. I would like to know if it is possible to tell rollapply how to transition from one window to the next, rather than only defining the function that is applied to each window.
Concretely, I am applying a rolling window for basic statistical anomaly detection.
The function passed to rollapply:
minmax <- function(x) { max(x) - min(x) }
invoked by:
mclapply(data[, eval(vars), with=F],
         function(x) rollapply(x, width=winSize, FUN=minmax, fill=NA),
         mc.cores=8)
where data is an 8-column data.table and winSize is 300.
This call takes about 2 minutes on 8 cores, and it is one of the major bottlenecks in the overall computation. However, I can imagine keeping the window elements sorted (by value and index) and then doing O(log n) comparisons each time the window slides.
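For illustration, something along these lines is what I have in mind, using a monotonic deque that updates the running max/min incrementally rather than a fully sorted structure (rolling_range is just a hypothetical sketch; it is right-aligned rather than centered like rollapply's default fill, and a plain R loop like this would probably need to be ported to Rcpp to actually be fast):

rolling_range <- function(x, width) {
  n <- length(x)
  out <- rep(NA_real_, n)
  maxq <- integer(0)   # indices of candidate maxima, values decreasing
  minq <- integer(0)   # indices of candidate minima, values increasing
  for (i in seq_len(n)) {
    # drop indices that have fallen out of the window [i - width + 1, i]
    while (length(maxq) && maxq[1] <= i - width) maxq <- maxq[-1]
    while (length(minq) && minq[1] <= i - width) minq <- minq[-1]
    # pop dominated candidates from the back, then push the new index
    while (length(maxq) && x[maxq[length(maxq)]] <= x[i]) maxq <- maxq[-length(maxq)]
    while (length(minq) && x[minq[length(minq)]] >= x[i]) minq <- minq[-length(minq)]
    maxq <- c(maxq, i)
    minq <- c(minq, i)
    # front of each deque holds the current window max / min
    if (i >= width) out[i] <- x[maxq[1]] - x[minq[1]]
  }
  out
}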
However, I often see posts suggesting moving away from for loops and using lapply instead. What is the next logical step to optimize this further?
Not sure if/how this would apply in the mclapply environment, but you can gain a little speedup by employing zoo's optimized rollmax function. Since there is no complementary rollmin, you'll need to adapt: because min(x) is -max(-x), the window range max(x) - min(x) can be computed as rollmax(x) + rollmax(-x).
minmax <- function(x) max(x) - min(x)
aa <- runif(1e4)
identical(
  zoo::rollapply(aa, width=100, FUN=minmax, fill=NA),
  zoo::rollmax(aa, k=100, fill=NA) + zoo::rollmax(-aa, k=100, fill=NA)
)
# [1] TRUE
microbenchmark::microbenchmark(
  minmax = zoo::rollapply(aa, width=100, FUN=minmax, fill=NA),
  dblmax = zoo::rollmax(aa, k=100, fill=NA) + zoo::rollmax(-aa, k=100, fill=NA)
)
# Unit: milliseconds
#    expr     min      lq     mean   median      uq      max neval
#  minmax 70.7426 76.0469 84.81481 77.99565 81.8047 148.8431   100
#  dblmax 15.6755 17.4501 19.09820 17.93665 18.8650  52.4849   100
(The improvement will depend on the window size, so your results might vary, but I think using the optimized zoo::rollmax function will almost always outperform calling a UDF on each window.)
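For completeness, here is a sketch of how that might slot into your original call, assuming the data, vars and winSize objects from the question (untested on your data, so treat it as a starting point):

library(data.table)
library(parallel)
library(zoo)

# rollmax(x) + rollmax(-x) gives the rolling max minus the rolling min
mclapply(data[, eval(vars), with=F],
         function(x) rollmax(x, k=winSize, fill=NA) + rollmax(-x, k=winSize, fill=NA),
         mc.cores=8)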