Tags: r, foreach, parallel-processing, parallel.foreach, doparallel

How to split a dataframe for parallel processing and then recombine the results?


I'm looking to split up a dataframe for parallel processing in order to speed up the processing time.

What I have so far (broken code):

library(tidyverse)
library(iterators)
library(doParallel)
library(foreach)

data_split <- split(iris, iris$Species)
data_iter <- iter(data_split)

cl <- makeCluster(3)
registerDoParallel(cl)

foreach(
  data=data_iter,
  i = data_iter,    # note: a second loop variable bound to the same iterator
  .combine = dplyr::bind_rows
) %dopar% {
  test <- lm(Petal.Length ~ Sepal.Length, i)
  test.lm <- broom::augment(test)
  
  return(dplyr::bind_rows(test.lm))  # bind_rows() of a single data frame is a no-op
}

stopCluster(cl)
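For reference, the foreach attempt above does run once a single loop variable is used. A minimal sketch of that fix (the duplicate `i = data_iter` argument is the main problem, and since `broom::augment()` already returns a data frame, no inner `bind_rows()` is needed):

```r
library(doParallel)
library(foreach)

data_split <- split(iris, iris$Species)

cl <- makeCluster(3)
registerDoParallel(cl)

testdat <- foreach(
  d = data_split,               # one loop variable; a plain list works as well as iter()
  .combine = dplyr::bind_rows
) %dopar% {
  test <- lm(Petal.Length ~ Sepal.Length, d)
  broom::augment(test)          # last value of the block is what .combine receives
}

stopCluster(cl)
```

Each worker fits one species' model and returns the augmented data frame, so `testdat` ends up with all 150 rows.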

Maybe an lapply within the foreach?

out <- foreach(it = data_iter,
               .combine = dplyr::bind_rows,
               .multicombine = TRUE,
               .noexport = ls()
) %dopar% {
  print(str(it, max.level = 1))
  out <- lapply(it, function(x) {  # note: lapply over a data frame loops over columns, not groups
    test <- lm(Petal.Length ~ Sepal.Length, subset(iris, iris$Species == iris$Species[[x]]))
    test.lm <- broom::augment(test)
  })
}
print(bind_rows(out))

What I'm looking to do:

test1 <- lm(Petal.Length ~ Sepal.Length, subset(iris, iris$Species == levels(iris$Species)[1]))
test.lm1 <- broom::augment(test1)

test2 <- lm(Petal.Length ~ Sepal.Length, subset(iris, iris$Species == levels(iris$Species)[2]))
test.lm2 <- broom::augment(test2)

test3 <- lm(Petal.Length ~ Sepal.Length, subset(iris, iris$Species == levels(iris$Species)[3]))
test.lm3 <- broom::augment(test3)

testdat <- bind_rows(test.lm1, test.lm2, test.lm3)
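The three manual fits above are exactly the split / apply / combine pattern, which looks like this when written sequentially (a sketch; parallelizing it only changes how the middle step is executed):

```r
library(dplyr)

# Split by group, fit one model per group, recombine -- the sequential
# version of what the parallel code should do.
data_split <- split(iris, iris$Species)

fits <- lapply(data_split, function(d) {
  test <- lm(Petal.Length ~ Sepal.Length, d)
  broom::augment(test)
})

testdat <- bind_rows(fits)
```

With three species of 50 rows each, `testdat` has 150 rows of augmented model output.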

Solution

  • I found my answer with the furrr package:

    library(furrr)
    
    plan(cluster, workers = 3)
    
    data_split <- split(iris, iris$Species)
    
    testdat <- furrr::future_map_dfr(data_split, function(.data){
      test <- lm(Petal.Length ~ Sepal.Length, .data)
      broom::augment(test)
    })
    
    plan(sequential)  # switch back to sequential processing
    
    testdat
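A small variation, if you'd rather not think about a cluster at all: future's `multisession` plan spawns and reuses background R sessions. A sketch, giving the same result as the cluster plan:

```r
library(furrr)
library(future)

plan(multisession, workers = 3)   # background R sessions instead of an explicit cluster

data_split <- split(iris, iris$Species)

# future_map_dfr maps over the list in parallel and row-binds the results
testdat <- furrr::future_map_dfr(data_split, function(.data) {
  broom::augment(lm(Petal.Length ~ Sepal.Length, .data))
})

plan(sequential)                  # release the workers
```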