On Windows, I have difficulties using library(doParallel), which seems to crash after a few glmnet calls.
So I am trying to use the future package (https://github.com/HenrikBengtsson/future) with glmnet, but I am unsure of the best way to proceed.
Here is a simple (non-parallelized) example:
library(glmnet)
X = matrix(rnorm(1e4 * 200), 1e4, 200)
Y = rnorm(1e4)
system.time(cv.glmnet(X, Y))
user system elapsed
3.42 0.22 3.67
How can I use futures to make use of all 4 cores on my local machine (no distributed cluster, as in "executing glmnet in parallel in R")?
Thanks!
This seems to work. At least the user time is a lot lower, although the system (and elapsed) time increased due to parallel overhead.
library("doFuture")
registerDoFuture()
plan(multiprocess, workers = 4L)
system.time(cv.glmnet(X, Y, parallel = TRUE))
user system elapsed
0.46 0.17 5.59
versus
system.time(cv.glmnet(X, Y))
user system elapsed
2.33 0.05 2.39
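Note that the elapsed time is actually higher in the parallel run: for a problem this size, launching the workers and copying X to each of them likely costs more than the per-fold savings. Also, newer versions of future deprecate multiprocess in favor of naming the backend explicitly; here is a minimal sketch of the same setup using multisession (assuming a current doFuture/future installation):

library("doFuture")  # also attaches future and foreach
library("glmnet")
registerDoFuture()                              # use futures as the foreach backend
plan(multisession, workers = availableCores())  # background R sessions; works on Windows
system.time(cv.glmnet(X, Y, parallel = TRUE))
plan(sequential)                                # shut down the workers when done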