apache-spark, spark-streaming, cloudera-cdh, job-scheduling

Submitting a Spark Job to a Scheduler Pool


I am running a Spark Streaming job in cluster mode, and I have created a pool with 200 GB of memory (CDH). I want to run my Spark Streaming job in that pool, so I tried setting

sc.setLocalProperty("spark.scheduler.pool", "pool")

in the code, but it is not working. Setting spark.scheduler.pool also seems to have no effect in Spark Streaming; whenever I run the job it goes into the default pool. What could be the issue? Is there any configuration I can add while submitting the job?
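
Roughly, my driver code looks like this (a simplified sketch; the pool name "pool", the batch interval, and the socket source are just placeholders):

    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}

    object StreamingPoolExample {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf().setAppName("streaming-pool-example")
        val ssc = new StreamingContext(conf, Seconds(10))

        // Attempt to run the streaming job in the named scheduler pool
        ssc.sparkContext.setLocalProperty("spark.scheduler.pool", "pool")

        val lines = ssc.socketTextStream("localhost", 9999)
        lines.count().print()

        ssc.start()
        ssc.awaitTermination()
      }
    }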


Solution

  • On YARN, you can add --conf spark.yarn.queue="que_name" to the spark-submit command. The job will then use that particular queue and its resources only, as in the example below.
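
    For example, a submission could look roughly like this (the deploy mode, class name, jar, and the queue name "que_name" are placeholders). Note that a CDH resource pool is exposed to Spark as a YARN queue, which is a different thing from Spark's in-application fair scheduler pools controlled by spark.scheduler.pool:

        spark-submit \
          --master yarn \
          --deploy-mode cluster \
          --conf spark.yarn.queue="que_name" \
          --class com.example.StreamingJob \
          my-streaming-job.jar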