I have Spark running on Mesos in cluster mode.
When I submit a job, the driver runs on a randomly chosen slave node. Is there a way to specify which node the driver runs on, for example by the IP of the slave I want it to run on? Can I use `spark.driver.host`?
Thanks.
No. In cluster mode, that is, when you pass the `--deploy-mode cluster` option to the `./bin/spark-submit` command, the driver is launched on a node chosen by the cluster, not one you pick. You should not set `spark.driver.host` or `spark.driver.port`, as these are set programmatically by `SparkContext`.
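For reference, here is a minimal sketch of a cluster-mode submission on Mesos. It assumes a MesosClusterDispatcher is reachable at `dispatcher-host:7077` (7077 is the dispatcher's default port); the hostname, class name, and jar URL are placeholders:

```bash
# Submit in cluster mode: the driver is launched inside the cluster,
# on a node chosen by the Mesos dispatcher -- not on the machine
# running spark-submit, and not on a node you select yourself.
./bin/spark-submit \
  --master mesos://dispatcher-host:7077 \
  --deploy-mode cluster \
  --class com.example.MyApp \
  http://repo.example.com/jars/my-app.jar
```

Note that in Mesos cluster mode the application jar must be at a location the cluster nodes can fetch (e.g., HTTP or HDFS), since the driver does not run on the submitting machine.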