
Set up SnappyData with custom Spark and Scala 2.11


I have read through the documentation but can't find answers to the following questions:

  • I would prefer to set up SnappyData against an already running Spark cluster (i.e. add a jar so I can use SnappyContext). Or is it mandatory to use the bundled Spark? If the former is possible, please advise: SPARK_HOME seems to be set at runtime by the launchers.

  • Where should JAVA_HOME be defined? For now I have set it in bin/spark-class on all Snappy server nodes.

  • Can SnappyData be built with Scala 2.11?

Appreciated, Saif


Solution

  • Right now we don't have support for running Snappy with stock Spark, but we are working towards it. For now you can use the Snappy version of Spark (see the sketch below).

    For Q2, you can set JAVA_HOME on the command line before starting the Snappy servers.

    We have not tested Snappy with Scala 2.11, so we are not sure what issues may arise.
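
    As a minimal sketch of the first point: when an application runs on the Snappy build of Spark, a SnappyContext can be created on top of an ordinary SparkContext and then used like a SQLContext. The application name, the `local[*]` master, and the sample data below are placeholders, and the `SnappyContext(sc)` factory is assumed to be available only in the Snappy distribution of Spark, not in stock Spark.

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SnappyContext

object SnappyQuickStart {
  def main(args: Array[String]): Unit = {
    // Placeholder master; a cluster master URL for the Snappy-bundled Spark
    // would be used the same way.
    val conf = new SparkConf()
      .setAppName("SnappyQuickStart")
      .setMaster("local[*]")
    val sc = new SparkContext(conf)

    // Obtain a SnappyContext from the SparkContext (assumed factory provided
    // by the Snappy build of Spark; not present in stock Spark).
    val snc = SnappyContext(sc)

    // From here snc behaves like a SQLContext, e.g. building and showing a DataFrame.
    val df = snc.createDataFrame(Seq((1, "a"), (2, "b"))).toDF("id", "name")
    df.show()

    sc.stop()
  }
}
```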