Tags: apache-spark, mahout, mahout-recommender

Why is mahout spark-itemsimilarity giving an error when I run it?



I get the following stack trace when I run


./mahout spark-itemsimilarity --input input-file --output /output_dir --master spark://url_to_master --filter1 purchase --filter2 view --itemIDColumn 2 --rowIDColumn 0 --filterColumn 1

in a Linux terminal.
I cloned the Mahout project from GitHub (branch spark-1.2) and ran
mvn install
in the Mahout source directory, and then
cd mahout/bin/
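
For reference, the full sequence was roughly the following (the clone URL is the standard Apache GitHub mirror; the input path and master URL are the same placeholders shown above):

git clone -b spark-1.2 https://github.com/apache/mahout.git
cd mahout
mvn install
cd bin
./mahout spark-itemsimilarity --input input-file --output /output_dir \
    --master spark://url_to_master --filter1 purchase --filter2 view \
    --itemIDColumn 2 --rowIDColumn 0 --filterColumn 1

That run fails with the stack trace below.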

java.lang.NoClassDefFoundError: com/google/common/collect/HashBiMap
    at org.apache.mahout.sparkbindings.io.MahoutKryoRegistrator.registerClasses(MahoutKryoRegistrator.scala:39)
    at org.apache.spark.serializer.KryoSerializer$$anonfun$newKryo$4.apply(KryoSerializer.scala:104)
    at org.apache.spark.serializer.KryoSerializer$$anonfun$newKryo$4.apply(KryoSerializer.scala:104)
    at scala.Option.foreach(Option.scala:236)
    at org.apache.spark.serializer.KryoSerializer.newKryo(KryoSerializer.scala:104)
    at org.apache.spark.serializer.KryoSerializerInstance.<init>(KryoSerializer.scala:159)
    at org.apache.spark.serializer.KryoSerializer.newInstance(KryoSerializer.scala:121)
    at org.apache.spark.broadcast.TorrentBroadcast$.unBlockifyObject(TorrentBroadcast.scala:214)
    at org.apache.spark.broadcast.TorrentBroadcast$$anonfun$readBroadcastBlock$1.apply(TorrentBroadcast.scala:177)
    at org.apache.spark.util.Utils$.tryOrIOException(Utils.scala:1090)
    at org.apache.spark.broadcast.TorrentBroadcast.readBroadcastBlock(TorrentBroadcast.scala:164)
    at org.apache.spark.broadcast.TorrentBroadcast._value$lzycompute(TorrentBroadcast.scala:64)
    at org.apache.spark.broadcast.TorrentBroadcast._value(TorrentBroadcast.scala:64)
    at org.apache.spark.broadcast.TorrentBroadcast.getValue(TorrentBroadcast.scala:87)
    at org.apache.spark.broadcast.Broadcast.value(Broadcast.scala:70)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:61)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
    at org.apache.spark.scheduler.Task.run(Task.scala:56)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:200)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.ClassNotFoundException: com.google.common.collect.HashBiMap
    at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
    ... 22 more

Please help! Thanks.


Solution

  • Mahout 0.10.0 supports Spark 1.1.1 or lower. If you build from source and change the Spark version number in the main pom at mahout/pom.xml, you can build for Spark 1.2, but you will have to use the workaround described below. The jar with "dependency-reduced" in its name will be in mahout/spark/target. A Spark 1.2 branch is being worked on, so the fix above will not be needed; it is maybe a week away from being ready to try.

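    The pom change itself is just the Spark version. As a minimal sketch, assuming mahout/pom.xml exposes the version in a <spark.version> property (check your checkout for the exact element name and current value):

    # Sketch only: property name and version numbers may differ in your checkout.
    cd mahout
    sed -i 's|<spark.version>1.1.1</spark.version>|<spark.version>1.2.1</spark.version>|' pom.xml
    mvn clean install -DskipTests

    After the build, the dependency-reduced jar should appear under mahout/spark/target as noted above.
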
    There is a bug in Spark 1.2 onward; I'm not sure whether it has been fixed in 1.3.

    See it here: https://issues.apache.org/jira/browse/SPARK-6069

    What worked for me was to put the jar containing Guava (it will be called mahout-spark_2.10-0.11.0-SNAPSHOT-dependency-reduced.jar or something like that) on all workers and then pass that location to the Mahout job using:

    spark-itemsimilarity -D:spark.executor.extraClassPath=/path/to/mahout/spark/target/mahout-spark_2.10-0.11-dependency-reduced.jar
    

    The path must contain the jar on all workers.
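
    One way to distribute it is a simple scp loop; the worker hostnames and destination directory below are placeholders for whatever your cluster uses:

    # Hypothetical hostnames and paths -- adjust for your cluster.
    JAR=/path/to/mahout/spark/target/mahout-spark_2.10-0.11.0-SNAPSHOT-dependency-reduced.jar
    for host in worker1 worker2 worker3; do
        scp "$JAR" "$host:/path/to/mahout/spark/target/"
    done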

    The code workaround will go into the spark-1.2 branch in the next week or so, which will make the -D:spark.executor.extraClassPath=/path/to/mahout... flag unnecessary.
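
    Putting it together, the original command with the workaround added would look roughly like this (master URL, input path, and jar path are the same placeholders used above):

    ./mahout spark-itemsimilarity --input input-file --output /output_dir \
        --master spark://url_to_master --filter1 purchase --filter2 view \
        --itemIDColumn 2 --rowIDColumn 0 --filterColumn 1 \
        -D:spark.executor.extraClassPath=/path/to/mahout/spark/target/mahout-spark_2.10-0.11.0-SNAPSHOT-dependency-reduced.jar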