Tags: hadoop, hive, google-cloud-dataproc, hcatalog, kylin

java.lang.NoSuchMethodError: org.apache.hive.common.util.ShutdownHookManager.addShutdownHook


I'm trying to build a cube in Kylin with Spark as the engine type. The cluster runs the following:

OS image: 1.0-debian9

Apache Spark 2.4.4 (changed from 1.6.2)

Apache Hadoop 2.7.4

Apache Hive 1.2.1

I'm getting this error while building a cube:

java.lang.NoSuchMethodError: org.apache.hive.common.util.ShutdownHookManager.addShutdownHook(Ljava/lang/Runnable;)V
    at org.apache.hive.hcatalog.common.HiveClientCache.createShutdownHook(HiveClientCache.java:221)
    at org.apache.hive.hcatalog.common.HiveClientCache.<init>(HiveClientCache.java:153)
    at org.apache.hive.hcatalog.common.HiveClientCache.<init>(HiveClientCache.java:97)
    at org.apache.hive.hcatalog.common.HCatUtil.getHiveMetastoreClient(HCatUtil.java:553)
    at org.apache.hive.hcatalog.mapreduce.InitializeInput.getInputJobInfo(InitializeInput.java:104)
    at org.apache.hive.hcatalog.mapreduce.InitializeInput.setInput(InitializeInput.java:88)
    at org.apache.hive.hcatalog.mapreduce.HCatInputFormat.setInput(HCatInputFormat.java:95)
    at org.apache.hive.hcatalog.mapreduce.HCatInputFormat.setInput(HCatInputFormat.java:51)
    at org.apache.kylin.source.hive.HiveMRInput$HiveTableInputFormat.configureJob(HiveMRInput.java:80)
    at org.apache.kylin.engine.mr.steps.FactDistinctColumnsJob.setupMapper(FactDistinctColumnsJob.java:126)
    at org.apache.kylin.engine.mr.steps.FactDistinctColumnsJob.run(FactDistinctColumnsJob.java:104)
    at org.apache.kylin.engine.mr.common.MapReduceExecutable.doWork(MapReduceExecutable.java:131)
    at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:167)
    at org.apache.kylin.job.execution.DefaultChainedExecutable.doWork(DefaultChainedExecutable.java:71)
    at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:167)
    at org.apache.kylin.job.impl.threadpool.DefaultScheduler$JobRunner.run(DefaultScheduler.java:114)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)

I checked the Hive and Hadoop library directories for redundant jars and found two versions of every jar, for example hive-common-1.2.1.jar and hive-common.jar.

I tried moving each of them, one at a time, to a different location and then resuming the cube build, but I got the same error each time. Any help on this would be greatly appreciated.
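A `NoSuchMethodError` at runtime usually means the JVM resolved the class from a different (older) jar than the one the calling code was compiled against. One way to see which jar actually "won" is to ask the class for its code source. The sketch below is a hypothetical helper (the class name `FindJar` is mine, not from Kylin or Hive); on the cluster you would pass `org.apache.hive.common.util.ShutdownHookManager` as the argument with the Hive jars on the classpath. The default argument, `java.lang.String`, is only there so the program runs standalone; JDK bootstrap classes report a null code source.

```java
// Prints the jar (code source) a class was actually loaded from.
// Usage on the cluster (hypothetical):
//   java -cp "$(hive --auxpath):." FindJar org.apache.hive.common.util.ShutdownHookManager
public class FindJar {
    public static void main(String[] args) throws ClassNotFoundException {
        String name = args.length > 0 ? args[0] : "java.lang.String";
        Class<?> c = Class.forName(name);
        java.security.CodeSource src = c.getProtectionDomain().getCodeSource();
        // Bootstrap-classpath classes (e.g. java.lang.String) have no code source.
        System.out.println(name + " loaded from: "
                + (src != null ? src.getLocation() : "bootstrap classpath"));
    }
}
```

If this points at the unversioned hive-common.jar (or some other copy) rather than the one you expect, that duplicate is the jar the classloader is actually using.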


Solution

  • I changed the Hive version to 2.1.0 and it worked for me. I settled on this version by checking the Kylin download page and looking at which Hive versions other cloud platforms, such as AWS EMR and Microsoft Azure HDInsight, ship with the Kylin 2.6.4 release.

    Thanks @Igor Dvorzhak for your valuable suggestions.
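After upgrading, you can confirm the fix without running a full cube build by checking via reflection that the class on the classpath actually exposes the method the stack trace wanted, `addShutdownHook(Runnable)`. The checker below is a generic sketch (the class name `CheckMethod` is mine); its default arguments use `java.lang.Runtime.addShutdownHook` purely so it runs standalone. On the cluster you would pass `org.apache.hive.common.util.ShutdownHookManager addShutdownHook` instead.

```java
import java.lang.reflect.Method;

// Lists all overloads of a method on a class, so you can verify the
// expected signature is present in the jar the classloader picked up.
public class CheckMethod {
    public static void main(String[] args) throws ClassNotFoundException {
        // On the cluster (hypothetical usage):
        //   java -cp <hive jars>:. CheckMethod \
        //       org.apache.hive.common.util.ShutdownHookManager addShutdownHook
        String className  = args.length > 0 ? args[0] : "java.lang.Runtime";
        String methodName = args.length > 1 ? args[1] : "addShutdownHook";

        Class<?> c = Class.forName(className);
        boolean found = false;
        for (Method m : c.getDeclaredMethods()) {
            if (m.getName().equals(methodName)) {
                System.out.println("Found: " + m);  // full signature, incl. parameter types
                found = true;
            }
        }
        if (!found) {
            System.out.println(methodName + " not found in " + className);
        }
    }
}
```

If no overload taking a single `Runnable` is listed for `ShutdownHookManager`, the classpath is still resolving to the old Hive jar.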