I have a Spark standalone cluster with no other job scheduler installed. Can spark-submit be used as a job scheduler for both Spark and non-Spark jobs (e.g. a Scala jar not written for Spark and not using RDDs)?
Based on my testing, spark-submit can be used to submit non-Spark jobs, and those jobs run successfully. But here are my questions: do the resource options

--driver-cores
--driver-memory
--executor-memory
--total-executor-cores

also apply to a non-Spark job? In other words, is it correct that spark-submit can maintain a FIFO queue of Spark and non-Spark jobs, but does not manage the resources of the non-Spark jobs? Thanks!
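For reference, this is the kind of invocation I mean — a sketch only; the master URL, class name, and jar path are placeholders, not from my actual setup:

```shell
# Hypothetical submission of a plain (non-Spark) Scala jar to a standalone
# master. In cluster deploy mode the master queues the submission and
# launches only a driver process for it; --driver-cores / --driver-memory
# describe the resources that driver needs before it can start.
spark-submit \
  --master spark://master-host:7077 \
  --deploy-mode cluster \
  --driver-cores 2 \
  --driver-memory 1g \
  --class com.example.PlainJob \
  /path/to/plain-job.jar
```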
I figured it out after much testing. Yes, Spark standalone can act as a job scheduler for both Spark and non-Spark jobs. For a non-Spark job, spark-submit only creates a driver, no executors. Submitted jobs are queued and each one runs once the resource constraints, specified in its spark-submit command, are met.
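To illustrate, the kind of plain Scala entry point that can be submitted this way is just an ordinary main method — the object name and the work it does here are made up for illustration, and nothing in it depends on Spark:

```scala
// A plain Scala job with no Spark dependency. When submitted with
// spark-submit in cluster deploy mode, this main method runs inside the
// single driver JVM that the standalone master launches; no executors
// are ever requested because no SparkContext is created.
object PlainJob {
  def main(args: Array[String]): Unit = {
    // Ordinary JVM work: sum the integers 1..n for an n taken from args.
    val n = if (args.nonEmpty) args(0).toInt else 10
    val total = (1 to n).sum
    println(s"sum(1..$n) = $total")
  }
}
```

Package this into a jar with a build tool such as sbt and pass the object's fully qualified name via --class; the master treats it like any other driver when deciding whether enough cores and memory are free to run it.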