I am using my university's cluster to perform some computations. This is the code I run:
library(parallel)  # makeCluster(), detectCores(), stopCluster()
library(future)    # plan()
library(furrr)     # future_map()
library(abess)     # abess()

cl <- makeCluster(detectCores())
plan(cluster, workers = cl)
selected_a <- future_map(dat, ~ abess(as.matrix(.x[, names(X)]),
                                      .x[, "errs"],
                                      support.size = 10))
stopCluster(cl)
It returns the following error:
terminate called after throwing an instance of 'std::bad_alloc'
  what():  std::bad_alloc
Error in unserialize(node$con) :
  ClusterFuture () failed to receive results from cluster SOCKnode #1 (on 'localhost').
  The reason reported was 'error reading from connection'.
Post-mortem diagnostic: The total size of the 8 globals exported is 1.18 MiB.
The three largest globals are '...furrr_chunk_args' (605.80 KiB of class 'list'),
'X' (581.48 KiB of class 'numeric') and 'as.matrix' (11.45 KiB of class 'function').
The code runs fine on my local machine. Also, when I change future_map to map in the code above, everything works both on my local machine and on the cluster. How can I solve this?
This could arise from requesting less RAM than R needs. Try increasing the amount of --mem or --mem-per-cpu in the job submission. It looks like the requested memory is sufficient for map, which runs in a single R process, but not for future_map: plan(cluster) starts one R worker process per reported core, and each worker receives its own copy of the exported globals, so the memory footprint is roughly multiplied by the number of workers. Note also that on a shared cluster node detectCores() reports the physical cores of the whole machine, which can be far more cores (and hence more workers, and more memory) than your job was actually allocated.
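If increasing the allocation is not enough, a complementary fix is to size the cluster to what the job was actually granted rather than to the physical core count. Below is a minimal sketch, assuming the scheduler is Slurm (as the --mem / --mem-per-cpu flags suggest); it uses availableCores() from the parallelly package, which, unlike detectCores(), respects scheduler allocations such as SLURM_CPUS_PER_TASK:

library(parallel)
library(future)
library(furrr)
library(parallelly)  # availableCores() honours Slurm/PBS/SGE allocations

# Start only as many R workers as cores actually allocated to the job,
# so the requested memory is shared among fewer copies of the globals.
cl <- makeCluster(availableCores())
plan(cluster, workers = cl)
# ... run future_map() as before ...
stopCluster(cl)

With plan(multisession, workers = availableCores()) the explicit makeCluster()/stopCluster() pair can be dropped entirely, since future manages the worker processes itself.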