I am trying to load data from a CSV file into Spark and then save it to Elasticsearch, but I am having trouble saving my RDD collection to Elasticsearch using Spark. This error is raised when I submit the job:
Exception in thread "main" java.lang.NoClassDefFoundError: org/elasticsearch/spark/rdd/api/java/JavaEsSpark
But my dependencies should be correct, since the project compiles with Maven...
My pom.xml is here : http://pastebin.com/b71KL903 .
The error is raised when I reach this line:
JavaEsSpark.saveToEs(javaRDD, "index/logements");
Rest of my code is here: http://pastebin.com/8yuJB68A
I have already searched for this problem but didn't find anything: https://discuss.elastic.co/t/problem-between-spark-and-elasticsearch/51942 .
https://github.com/elastic/elasticsearch-hadoop/issues/713 .
https://github.com/elastic/elasticsearch-hadoop/issues/585 .
I just learnt that the "ClassNotFoundException" appears because Spark shuts down its job classloader immediately when an exception occurs, so any other classes that still need to be loaded will fail, hiding the initial error.
But I don't know how to proceed. I submitted my job in verbose mode but didn't see anything else: http://pastebin.com/j6zmyjFr
Thanks for your further help :)
Spark has a driver process and executor processes. Executors run on different nodes from the driver node. Spark computes the RDD graph in stages, depending on the transformations, and these stages consist of tasks that are executed on the executors. So you need to pass the dependent jars to both the executors and the driver if you are using library methods to compute the RDD.
You should pass the dependent jars with the --jars option of spark-submit:
spark-submit --jars $JARS \
--driver-class-path $JARS_COLON_SEP \
--class $CLASS_NAME $APP_JAR
In your case it would be:
spark-submit --jars elasticsearch-hadoop-2.3.2.jar \
--master local[4] \
--driver-class-path elasticsearch-hadoop-2.3.2.jar \
--class "SimpleApp" target/simple-project-1.0.jar
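As an alternative to passing jars on the command line, you can build an uber (fat) jar so the elasticsearch-hadoop classes are packaged inside your application jar. A minimal sketch using the Maven Shade plugin — the plugin version and the main class name "SimpleApp" are assumptions, adjust them to your pom:

```xml
<!-- Sketch: goes inside <build><plugins> in pom.xml; version is an assumption -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <version>2.4.3</version>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>shade</goal>
      </goals>
      <configuration>
        <transformers>
          <!-- Set the Main-Class manifest entry; "SimpleApp" matches the
               --class used in spark-submit above -->
          <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
            <mainClass>SimpleApp</mainClass>
          </transformer>
        </transformers>
      </configuration>
    </execution>
  </executions>
</plugin>
```

If you go this route, mark the spark-core dependency as <scope>provided</scope> so Spark's own classes are not bundled into the jar; with a shaded jar, spark-submit no longer needs --jars or --driver-class-path for elasticsearch-hadoop.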