
Is a DAG created when we perform operations on DataFrames?


I have seen a DAG get generated whenever we perform an operation on an RDD, but what happens when we perform operations on a DataFrame?

When executing multiple operations on a DataFrame, are they lazily evaluated just like RDD operations?

When does the Catalyst optimizer come into the picture?

I am somewhat confused about these points. If anyone can shed some light on these topics, it would be of great help.

Thanks


Solution

  • Every operation on a Dataset (continuous processing mode notwithstanding) is translated into a sequence of operations on internal RDDs, so the concept of a DAG is by all means applicable.

    By extension, execution is primarily lazy, though as always there are exceptions, and these are more common in the Dataset API than in the pure RDD API (see the first sketch below).

    Finally, Catalyst is responsible for transforming Dataset API calls into a logical plan, an optimized logical plan and a physical execution plan, and then for generating the code that the tasks will execute (see the second sketch below).
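
A minimal sketch of the first two points, assuming a local Spark 2.x/3.x session (the app name and column expressions are arbitrary): the DataFrame transformations build up a plan without launching any job, the Dataset is backed by an internal RDD whose lineage (the DAG) can be printed with `toDebugString`, and only an action such as `count()` actually runs the job.

```scala
import org.apache.spark.sql.SparkSession

object DagDemo {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .master("local[*]")
      .appName("dag-demo") // arbitrary name for this sketch
      .getOrCreate()
    import spark.implicits._

    // Transformations only: no job is launched yet, nothing appears in the Spark UI.
    val df = spark.range(0, 1000000)
      .filter($"id" % 2 === 0)
      .withColumn("squared", $"id" * $"id")

    // The Dataset is ultimately executed as operations on an internal RDD;
    // its lineage (the DAG) can be inspected just like for a plain RDD.
    println(df.queryExecution.toRdd.toDebugString)

    // Only an action triggers execution of the job over that DAG.
    println(df.count())

    spark.stop()
  }
}
```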
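
A second sketch, under the same assumptions (local session, made-up column names), showing where Catalyst fits in: `explain(true)` prints the parsed logical, analyzed logical, optimized logical and physical plans that Catalyst produces before code generation.

```scala
import org.apache.spark.sql.SparkSession

object CatalystDemo {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .master("local[*]")
      .appName("catalyst-demo") // arbitrary name for this sketch
      .getOrCreate()
    import spark.implicits._

    val people = Seq(("alice", 34), ("bob", 45), ("carol", 29)).toDF("name", "age")
    val adults = people.filter($"age" >= 30).select($"name")

    // Prints the == Parsed Logical Plan ==, == Analyzed Logical Plan ==,
    // == Optimized Logical Plan == and == Physical Plan == sections.
    adults.explain(true)

    spark.stop()
  }
}
```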