I currently have an R script that does parallel processing within a loop using foreach, but it runs on a single server with 32 cores. Because of my data size, I am looking for R packages that can distribute the computation across multiple Windows servers and still work with foreach for parallelization.
I would really appreciate your help!
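For reference, the single-server setup described might look roughly like this (a minimal sketch using foreach with doParallel; the core count and per-iteration work are illustrative):

```r
library(foreach)
library(doParallel)

# Use all 32 cores on the local machine
cl <- makeCluster(32)
registerDoParallel(cl)

results <- foreach(i = 1:1000, .combine = rbind) %dopar% {
  # placeholder for the real per-iteration work
  data.frame(i = i, value = sqrt(i))
}

stopCluster(cl)
```

The question is how to go beyond `makeCluster(32)` on one box and spread the iterations over several machines.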
SparkR is the answer. From "Announcing SparkR: R on Apache Spark":
SparkR, an R package initially developed at the AMPLab, provides an R frontend to Apache Spark and using Spark’s distributed computation engine allows us to run large scale data analysis from the R shell.
Also see SparkR (R on Spark).
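As a sketch of what a distributed loop could look like in SparkR (the master URL is a placeholder for your own cluster, and `spark.lapply` requires Spark 2.0 or later):

```r
library(SparkR)

# Connect to an existing standalone Spark cluster
# (replace the master URL with your own)
sparkR.session(master = "spark://master-host:7077",
               appName = "distributed-loop")

# spark.lapply distributes the list elements across the cluster's
# executors, similar in spirit to a parallel lapply/foreach
results <- spark.lapply(1:1000, function(i) {
  sqrt(i)  # placeholder for the real per-iteration work
})

sparkR.session.stop()
```

Note that unlike foreach, the function passed to `spark.lapply` runs on remote workers, so it must be self-contained (load any packages it needs inside the function).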
To get started you need to set up a Spark cluster. This web page should help. If you are not using Mesos or YARN as your cluster manager, the relevant Spark documentation is the standalone-mode guide. Once you have Spark set up, see Wendy Yu's tutorial on SparkR. She also shows how to integrate H2O with Spark, which is referred to as 'Sparkling Water'.
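On Windows servers, the standalone master and workers can be started directly with `spark-class` rather than the Unix `sbin` scripts (a sketch; `master-host` is a placeholder for your master machine's hostname):

```
:: On the machine that will act as the master
bin\spark-class org.apache.spark.deploy.master.Master

:: On each Windows worker server, pointing at the master's URL
bin\spark-class org.apache.spark.deploy.worker.Worker spark://master-host:7077
```

Once the workers register with the master, the `spark://master-host:7077` URL is what you pass to `sparkR.session()` from R.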