I have the following in my build.gradle:
configurations {
    runtime.exclude group: 'org.apache.spark'
    runtime.exclude group: 'org.apache.hadoop'
}
and for some reason this also excludes all Hadoop/Spark code from the test classpath. If I comment out this configuration the tests pass fine; otherwise I get all sorts of java.lang.NoClassDefFoundError: org/apache/hadoop/hdfs/MiniDFSCluster$Builder errors.
I tried to use this:
test {
    classpath += configurations.compile
}
No luck.
What am I missing here?
In Gradle scoping, the test configurations inherit from the main ones: testRuntime extends runtime (and testCompile extends compile). Your test classpath is therefore excluding the minicluster dependency because runtime excludes it.
See the configuration inheritance diagram for the Java plugin in the Gradle documentation.
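If you want to see this inheritance in your own build, a small throwaway task like the one below (the name printTestScopes is just an illustration) prints the full chain of configurations that testRuntime extends; runtime should show up in the output, which is why the excludes leak into your tests:

task printTestScopes {
    doLast {
        // getHierarchy() returns testRuntime itself plus every configuration it
        // extends, directly or transitively -- 'runtime' appearing here is the culprit
        configurations.testRuntime.hierarchy.each { println it.name }
    }
}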
Instead of adding a global exclude to the runtime configuration, you might want to make the Spark dependencies compileOnly-scoped (compileOnly has been available since Gradle 2.12), for example:
dependencies {
    compileOnly 'org.apache.spark:spark-core_2.11:2.1.0'
    testCompile 'org.apache.hadoop:hadoop-minicluster:2.7.2'
}
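One caveat with this approach: compileOnly dependencies are not put on the test classpath, so if your test code itself references Spark classes you have to declare them again in a test scope. A minimal sketch, assuming the same artifact coordinates as above:

dependencies {
    compileOnly 'org.apache.spark:spark-core_2.11:2.1.0'
    // tests compile and run against Spark, so declare it explicitly for the test scope
    testCompile 'org.apache.spark:spark-core_2.11:2.1.0'
    testCompile 'org.apache.hadoop:hadoop-minicluster:2.7.2'
}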
More information about Gradle dependency scoping is available in the Gradle manual.
Alternatively, you could add another configuration that inherits from runtime, add the exclusions to that, and use it as the basis of your shadow jar. This can be helpful if you want the option of building the jar with or without the Spark dependencies bundled in. Your tests will use the configuration without exclusions, but the jar you package won't include the Spark dependencies.
configurations {
    sparkConfiguration {
        extendsFrom runtime
        exclude group: 'org.apache.hadoop'
        exclude group: 'org.apache.spark'
    }
}
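For the ShadowJar task type below to resolve, the Shadow plugin has to be applied; a minimal sketch (the plugin version shown is just an example, pick whatever your Gradle version supports):

plugins {
    id 'com.github.johnrengelman.shadow' version '1.2.4'
}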
task sparkExcludedJar(type: com.github.jengelman.gradle.plugins.shadow.tasks.ShadowJar) {
    group = "Shadow"
    // bundle only the dependencies resolved through sparkConfiguration,
    // i.e. everything from runtime minus the excluded Spark/Hadoop groups
    configurations = [project.configurations.sparkConfiguration]
    classifier = 'sparkExcluded'
}
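With the plugin applied, running gradle sparkExcludedJar (or ./gradlew sparkExcludedJar if you use the wrapper) should produce an artifact with the sparkExcluded classifier that bundles the runtime dependencies minus the Spark and Hadoop groups, while the regular test task keeps its full, unexcluded classpath.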