Tags: r, apache-spark, h2o, azure-databricks, sparklyr

Start H2O context on Databricks with rsparkling


Problem

I want to use H2O's Sparkling Water on multi-node clusters in Azure Databricks, both interactively through RStudio and in jobs through R notebooks. I can start an H2O cluster and a Sparkling Water context in rocker/verse:4.0.3, databricksruntime/rbase:latest, and databricksruntime/standard Docker containers on my local machine, but so far not on a Databricks cluster. It looks like a classic classpath problem:

Error : java.lang.ClassNotFoundException: ai.h2o.sparkling.H2OConf
    at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:419)
    at com.databricks.backend.daemon.driver.ClassLoaders$LibraryClassLoader.loadClass(ClassLoaders.scala:151)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:352)
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:264)
    at sparklyr.StreamHandler.handleMethodCall(stream.scala:106)
    at sparklyr.StreamHandler.read(stream.scala:61)
    at sparklyr.BackendHandler.$anonfun$channelRead0$1(handler.scala:58)
    at scala.util.control.Breaks.breakable(Breaks.scala:42)
    at sparklyr.BackendHandler.channelRead0(handler.scala:39)
    at sparklyr.BackendHandler.channelRead0(handler.scala:14)
    at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:99)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
    at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
    at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:321)
    at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:295)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
    at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
    at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163)
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
    at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
    at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
    at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    at java.lang.Thread.run(Thread.java:748)
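
For reference, the R call that triggers this trace is simply the context creation; sparklyr's JVM backend cannot find the Sparkling Water classes:

    library(rsparkling)
    h2oConf <- H2OConf()   # throws: java.lang.ClassNotFoundException: ai.h2o.sparkling.H2OConf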

What I've Tried

Setup: single-node Azure Databricks cluster, runtime 7.6 ML (includes Apache Spark 3.0.1, Scala 2.12) with a "Standard_F4s" driver. (My use case is multi-node, but I was trying to keep things simple.)

  • Setting options(), e.g., options(rsparkling.sparklingwater.version = "2.3.11") or options(rsparkling.sparklingwater.version = "3.0.1")

  • Setting config, e.g.,

      conf$`sparklyr.shell.jars` <- c("/databricks/spark/R/lib/h2o/java/h2o.jar") 
    

or passing the jar directly: sc <- sparklyr::spark_connect(method = "databricks", version = "3.0.1", config = conf, jars = c("/databricks/spark/R/lib/h2o/java/h2o.jar")), also trying "~/R/x86_64-pc-linux-gnu-library/3.6/h2o/java/h2o.jar" and "~/R/x86_64-pc-linux-gnu-library/3.6/rsparkling/java/sparkling_water_assembly.jar" as the .jar location in Databricks RStudio (a consolidated sketch follows below)
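
Put together, a typical attempt looked like this (same paths as above; none of these variants resolved the ClassNotFoundException):

    library(sparklyr)
    conf <- sparklyr::spark_config()
    conf$`sparklyr.shell.jars` <- c("/databricks/spark/R/lib/h2o/java/h2o.jar")
    sc <- sparklyr::spark_connect(method = "databricks", version = "3.0.1",
                                  config = conf,
                                  jars = c("/databricks/spark/R/lib/h2o/java/h2o.jar"))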

The Sparkling Water download page says to select Spark 3.0.2 for Sparkling Water 3.32.1.1-1-3.0, but Spark 3.0.2 is not available as a Databricks cluster runtime, so I chose 3.0.1 as in the rest of my approach.

Calling the H2O context the old way also fails, since current RSparkling no longer provides h2o_context():

Error in h2o_context(sc) : could not find function "h2o_context"
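
As far as I can tell, h2o_context() comes from the old CRAN releases of rsparkling; the RSparkling builds shipped with Sparkling Water 3.x replace it with the H2OContext workflow used in the solution below:

    # old CRAN rsparkling API (no longer available):
    # hc <- h2o_context(sc)

    # current RSparkling API:
    h2oConf <- H2OConf()
    hc <- H2OContext.getOrCreate(h2oConf)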

Dockerfile that works on local machine

# get the base image (https://hub.docker.com/r/databricksruntime/standard; https://github.com/databricks/containers/blob/master/ubuntu/standard/Dockerfile)
FROM databricksruntime/standard

# not needed if using `FROM databricksruntime/r-base:latest` at top
ENV DEBIAN_FRONTEND noninteractive

# install system dependencies
RUN . /etc/environment \
  # install Linux dependencies here
  && apt-get update \
  && apt-get install libcurl4-openssl-dev libxml2-dev libssl-dev -y \
  # not needed if using `FROM databricksruntime/r-base:latest` at top
  && apt-get install r-base -y

# install specific R packages
RUN R -e 'install.packages(c("httr", "xml2"))'
# sparklyr and Spark
RUN R -e 'install.packages(c("sparklyr"))'
# h2o
# RSparkling 3.32.0.5-1-3.0 requires H2O of version 3.32.0.5.
RUN R -e 'install.packages(c("statmod", "RCurl"))'
RUN R -e 'install.packages("h2o", type = "source", repos = "http://h2o-release.s3.amazonaws.com/h2o/rel-zermelo/5/R")'
# rsparkling
# RSparkling 3.32.0.5-1-3.0 is built for 3.0.
RUN R -e 'install.packages("rsparkling", type = "source", repos = "http://h2o-release.s3.amazonaws.com/sparkling-water/spark-3.0/3.32.0.5-1-3.0/R")'

# connect to H2O cluster with Sparkling Water context
RUN R -e 'library(sparklyr); \
  sparklyr::spark_install("3.0.1", hadoop_version = "3.2"); \
  Sys.setenv(SPARK_HOME = "~/spark/spark-3.0.1-bin-hadoop3.2"); \
  library(rsparkling); \
  sc <- sparklyr::spark_connect(method = "databricks", version = "3.0.1"); \
  sparklyr::spark_version(sc); \
  h2oConf <- H2OConf(); \
  hc <- H2OContext.getOrCreate(h2oConf)'


Solution

  • In my case, I needed to install a "Library" to my Databricks workspace, cluster, or job. I could either upload it or just have Databricks fetch it from Maven coordinates.

    In Databricks Workspace:

    1. click Home icon
    2. click "Shared" > "Create" > "Library"
    3. click "Maven" (as "Library Source")
    4. click "Search packages" link next to "Coordinates" box
    5. click dropdown box and choose "Maven Central"
    6. enter ai.h2o.sparkling-water-package into the "Query" box
    7. choose recent "Artifact Id" with "Release" that matches your rsparkling version, for me ai.h2o:sparkling-water-package_2.12:3.32.0.5-1-3.0
    8. click "Select" under "Options"
    9. click "Create" to create the Library
      • thankfully, this required no changes to my Databricks R Notebook when run as a Databricks job

    For reference, here is the full notebook code:
    # install specific R packages
    install.packages(c("httr", "xml2"))
    
    # sparklyr and Spark
    install.packages(c("sparklyr"))
    
    # h2o
    # RSparkling 3.32.0.5-1-3.0 requires H2O of version 3.32.0.5.
    install.packages(c("statmod", "RCurl"))
    install.packages("h2o", type = "source", repos = "http://h2o-release.s3.amazonaws.com/h2o/rel-zermelo/5/R")
    
    # rsparkling
    # RSparkling 3.32.0.5-1-3.0 is built for 3.0.
    install.packages("rsparkling", type = "source", repos = "http://h2o-release.s3.amazonaws.com/sparkling-water/spark-3.0/3.32.0.5-1-3.0/R")
    # connect to H2O cluster with Sparkling Water context
    
    library(sparklyr)
    sparklyr::spark_install("3.0.1", hadoop_version = "3.2")
    Sys.setenv(SPARK_HOME = "~/spark/spark-3.0.1-bin-hadoop3.2")
    sparklyr::spark_default_version()
    library(rsparkling)
     
    SparkR::sparkR.session()
    sc <- sparklyr::spark_connect(method = "databricks", version = "3.0.1")
    sparklyr::spark_version(sc)
    
    # the next commands will not work without adding the
    # https://mvnrepository.com/artifact/ai.h2o/sparkling-water-package_2.12/3.32.0.5-1-3.0
    # package as a "Library" on the Databricks cluster
    h2oConf <- H2OConf()
    hc <- H2OContext.getOrCreate(h2oConf)
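
    A quick way to sanity-check the fix is sparklyr's generic JVM constructor call; once the Maven library is attached, the class from the original error should resolve (this check is my own addition, not from the H2O docs):

    # should return a Java object reference instead of throwing ClassNotFoundException
    sparklyr::invoke_new(sc, "ai.h2o.sparkling.H2OConf")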