I am trying to understand how Apache Spark works behind the scenes. After coding a little in Spark, I am fairly sure that it implements RDDs as RMI Remote objects, doesn't it?
In this way, it can modify them inside transformations such as map, flatMap, and so on. Objects that are not part of an RDD are simply serialized and sent to a worker during execution.
In the example below, lines and tokens will be treated as remote objects, while the string toFind will simply be serialized and copied to the workers.
import org.apache.spark.rdd.RDD

// sc is the SparkContext (provided automatically in spark-shell)
val lines: RDD[String] = sc.textFile("large_file.txt")
val toFind = "Some cool string"
val tokens =
  lines.flatMap(_ split " ")
       .filter(_.contains(toFind))
Am I wrong? I googled a little, but I have not found any reference to how Spark RDDs are implemented internally.
You are on the right track about serialization: Spark serializes the closures you pass to transformations, together with any variables they capture (such as toFind), and ships them to the workers, where they are invoked against each partition of the RDD. It does not, however, use Java RMI, and RDDs are not remote objects that get modified in place.
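To make that concrete, here is a minimal sketch using plain JVM serialization and no Spark APIs at all; the names ClosureShippingSketch and ContainsPredicate are purely illustrative and not part of Spark. It mimics what conceptually happens when a task is shipped: the predicate passed to filter, together with the string it captures, is serialized on the "driver", deserialized on a "worker", and applied to that worker's local partition of the data.

import java.io.{ByteArrayInputStream, ByteArrayOutputStream, ObjectInputStream, ObjectOutputStream}

// Stand-in for the closure passed to filter: a serializable value that
// carries the captured string with it. Case classes are Serializable by default.
case class ContainsPredicate(toFind: String) extends (String => Boolean) {
  def apply(line: String): Boolean = line.contains(toFind)
}

object ClosureShippingSketch {
  def main(args: Array[String]): Unit = {
    val predicate: String => Boolean = ContainsPredicate("Some cool string")

    // "Driver" side: turn the closure and its captured state into bytes.
    val buffer = new ByteArrayOutputStream()
    val out = new ObjectOutputStream(buffer)
    out.writeObject(predicate)
    out.close()

    // "Worker" side: rebuild the closure and run it over a local partition.
    val in = new ObjectInputStream(new ByteArrayInputStream(buffer.toByteArray))
    val shipped = in.readObject().asInstanceOf[String => Boolean]

    val partition = Seq("Some cool string indeed", "nothing to see here")
    println(partition.filter(shipped)) // List(Some cool string indeed)
  }
}

Spark's actual machinery is more involved (closure cleaning, a pluggable serializer, task scheduling), but the principle is the same: code and the values it captures travel to the data, not the other way around, and the RDD itself never behaves like a mutable remote object.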