I'm getting the following exception when I'm trying to submit a Spark application to a Mesos cluster:
/home/knoldus/application/spark-2.2.0-rc4/conf/spark-env.sh: line 40: export: `/usr/local/lib/libmesos.so': not a valid identifier
/home/knoldus/application/spark-2.2.0-rc4/conf/spark-env.sh: line 41: export: `hdfs://spark-2.2.0-bin-hadoop2.7.tgz': not a valid identifier
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
17/09/30 14:17:31 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/09/30 14:17:31 WARN Utils: Your hostname, knoldus resolves to a loopback address: 127.0.1.1; using 192.168.0.111 instead (on interface wlp6s0)
17/09/30 14:17:31 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
Failed to load native Mesos library from
java.lang.UnsatisfiedLinkError: Expecting an absolute path of the library:
at java.lang.Runtime.load0(Runtime.java:806)
at java.lang.System.load(System.java:1086)
at org.apache.mesos.MesosNativeLibrary.load(MesosNativeLibrary.java:159)
at org.apache.mesos.MesosNativeLibrary.load(MesosNativeLibrary.java:188)
at org.apache.mesos.MesosSchedulerDriver.<clinit>(MesosSchedulerDriver.java:61)
at org.apache.spark.scheduler.cluster.mesos.MesosSchedulerUtils$class.createSchedulerDriver(MesosSchedulerUtils.scala:104)
at org.apache.spark.scheduler.cluster.mesos.MesosCoarseGrainedSchedulerBackend.createSchedulerDriver(MesosCoarseGrainedSchedulerBackend.scala:49)
at org.apache.spark.scheduler.cluster.mesos.MesosCoarseGrainedSchedulerBackend.start(MesosCoarseGrainedSchedulerBackend.scala:170)
at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:173)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:509)
at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2509)
at org.apache.spark.sql.SparkSession$Builder$$anonfun$6.apply(SparkSession.scala:909)
at org.apache.spark.sql.SparkSession$Builder$$anonfun$6.apply(SparkSession.scala:901)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:901)
at org.apache.spark.repl.Main$.createSparkSession(Main.scala:103)
... 47 elided
I have built Spark using:
./build/mvn -Pmesos -DskipTests clean package
I have set the following properties in spark-env.sh:
export MESOS_NATIVE_JAVA_LIBRARY= /usr/local/lib/libmesos.so
export SPARK_EXECUTOR_URI= hdfs://spark-2.2.0-bin-hadoop2.7.tgz
And in spark-defaults.conf:
spark.executor.uri hdfs://spark-2.2.0-bin-hadoop2.7.tgz
I have resolved the issue. The problem was the space after the equals sign in the export lines:
export MESOS_NATIVE_JAVA_LIBRARY= /usr/local/lib/libmesos.so
export SPARK_EXECUTOR_URI= hdfs://spark-2.2.0-bin-hadoop2.7.tgz
For example, in

export foo = bar

the shell interprets this as a request to export three names: foo, =, and bar. Since = isn't a valid variable name, the command fails. The variable name, the equals sign, and the value must not be separated by spaces for them to be processed as a single assignment and export.
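Both failures from the log can be reproduced in an interactive Bash session:

$ export foo = bar
bash: export: `=': not a valid identifier
$ export MESOS_NATIVE_JAVA_LIBRARY= /usr/local/lib/libmesos.so
bash: export: `/usr/local/lib/libmesos.so': not a valid identifier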
Remove the spaces.
export MESOS_NATIVE_JAVA_LIBRARY=/usr/local/lib/libmesos.so
export SPARK_EXECUTOR_URI=hdfs://spark-2.2.0-bin-hadoop2.7.tgz
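After the fix, sourcing the file should complete without errors and the variables should hold the expected values (a quick sanity check, assuming a Bash shell and the paths from the question):

$ source /home/knoldus/application/spark-2.2.0-rc4/conf/spark-env.sh
$ echo "$MESOS_NATIVE_JAVA_LIBRARY"
/usr/local/lib/libmesos.so
$ echo "$SPARK_EXECUTOR_URI"
hdfs://spark-2.2.0-bin-hadoop2.7.tgz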