
Should the number of executor cores for Apache Spark be set to 1 in YARN mode?


My question: Is it true that when running Apache Spark applications on YARN (with deploy-mode as either client or cluster), executor-cores should always be set to 1?

I am running an application that processes millions of records on a cluster with 200 data nodes, each having 14 cores. It runs perfectly when I use 2 executor-cores and 150 executors on YARN, but one of the cluster admins is asking me to use 1 executor-core. He is adamant that Spark on YARN should be used with 1 executor core, because otherwise it will steal resources from other users. He points me to this page in the Apache docs, which says the default value of executor-cores is 1 for YARN.

https://spark.apache.org/docs/latest/configuration.html

So, is it true we should use only 1 for executor-cores?

If the executors use 1 core, aren't they single-threaded?

Kind regards,


Solution

  • When we run a Spark application using a cluster manager like YARN, several daemons run in the background, such as the NameNode, Secondary NameNode, DataNodes, and YARN's ResourceManager and NodeManagers. So, while specifying num-executors, we need to make sure we leave aside enough cores (~1 core per node) for these daemons to run smoothly.

    The ApplicationMaster is responsible for negotiating resources from the ResourceManager and working with the NodeManagers to execute and monitor containers and their resource consumption. If we are running Spark on YARN, we also need to budget in the resources the AM needs.

    Example
    **Cluster config:**
    200 nodes
    14 cores per node

    Leave 1 core per node for Hadoop/YARN daemons => cores available per node = 14 - 1 = 13

    Total cores available in cluster = 13 x 200 = 2600

    Let’s assign 5 cores per executor => --executor-cores = 5 (for good HDFS throughput)

    Number of available executors = (total cores / cores per executor) = 2600 / 5 = 520

    Leaving 1 executor for the ApplicationMaster => --num-executors = 519
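    The arithmetic above can be sketched as a small helper so you can rerun it for your own cluster (the function name and defaults are illustrative, not part of any Spark API):

    ```python
    def size_executors(nodes, cores_per_node, cores_per_executor=5, daemon_cores=1):
        """Rough executor sizing for Spark on YARN, per the worked example above."""
        # Leave ~1 core per node for Hadoop/YARN daemons.
        usable_cores_per_node = cores_per_node - daemon_cores
        total_cores = usable_cores_per_node * nodes
        # How many executors of the chosen width fit in the cluster.
        total_executors = total_cores // cores_per_executor
        # Reserve one executor's worth of resources for the ApplicationMaster.
        num_executors = total_executors - 1
        return num_executors, cores_per_executor

    # 200 nodes x 14 cores, as in the example:
    print(size_executors(200, 14))  # -> (519, 5)
    ```
    
    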

    Please note: this is just a sample recommended configuration; you may wish to revise it based on the performance of your application.

    Also, a better practice is to monitor node resources while you execute your job; this gives a better picture of resource utilisation in your cluster.
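    For completeness, a submission using the sample numbers above might look like this. The jar name is a placeholder, and --executor-memory is an illustrative figure not derived in this answer; it must be sized against your cluster's YARN container limits:

    ```shell
    # Sample spark-submit using the sizing worked out above (values are illustrative).
    spark-submit \
      --master yarn \
      --deploy-mode cluster \
      --num-executors 519 \
      --executor-cores 5 \
      --executor-memory 10g \
      your_app.jar
    ```
    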