I am having trouble setting compatible library versions for the whole project. The build.sbt file is the following:
name := "YourProjectName"
version := "1.0"
scalaVersion := "2.12.16"
scalacOptions ++= Seq("-deprecation")
lazy val courseId = settingKey[String]("Course ID")
courseId := "e8VseYIYEeWxQQoymFg8zQ"
resolvers += Resolver.sonatypeRepo("releases")
libraryDependencies ++= Seq(
"org.scala-sbt" % "sbt" % "1.1.6",
"org.apache.spark" %% "spark-core" % "3.4.1",
"org.apache.spark" %% "spark-sql" % "3.4.1",
"org.apache.commons" % "commons-lang3" % "3.12.0", // Apache Commons Lang
"jline" % "jline" % "2.14.6"
)
libraryDependencies ++= Seq(
"org.slf4j" % "slf4j-api" % "1.7.32",
"org.apache.logging.log4j" % "log4j-core" % "2.17.1"
)
The error when I reload the changes in sbt and then build and run the project in IntelliJ IDEA is as follows:
Exception in thread "main" java.lang.NoSuchMethodError: org.apache.logging.slf4j.Log4jLoggerFactory.<init>(Lorg/apache/logging/slf4j/Log4jMarkerFactory;)V
at org.apache.logging.slf4j.SLF4JServiceProvider.initialize(SLF4JServiceProvider.java:54)
at org.slf4j.LoggerFactory.bind(LoggerFactory.java:183)
at org.slf4j.LoggerFactory.performInitialization(LoggerFactory.java:170)
at org.slf4j.LoggerFactory.getProvider(LoggerFactory.java:455)
at org.slf4j.LoggerFactory.getILoggerFactory(LoggerFactory.java:441)
at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:390)
at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:416)
at org.apache.spark.network.util.JavaUtils.<clinit>(JavaUtils.java:44)
at org.apache.spark.internal.config.ConfigHelpers$.byteFromString(ConfigBuilder.scala:67)
at org.apache.spark.internal.config.ConfigBuilder.$anonfun$bytesConf$1(ConfigBuilder.scala:261)
at org.apache.spark.internal.config.ConfigBuilder.$anonfun$bytesConf$1$adapted(ConfigBuilder.scala:261)
at org.apache.spark.internal.config.TypedConfigBuilder.$anonfun$transform$1(ConfigBuilder.scala:101)
at org.apache.spark.internal.config.TypedConfigBuilder.createWithDefault(ConfigBuilder.scala:146)
at org.apache.spark.internal.config.package$.<init>(package.scala:378)
at org.apache.spark.internal.config.package$.<clinit>(package.scala)
at org.apache.spark.SparkConf$.<init>(SparkConf.scala:656)
at org.apache.spark.SparkConf$.<clinit>(SparkConf.scala)
at org.apache.spark.SparkConf.set(SparkConf.scala:94)
at org.apache.spark.SparkConf.set(SparkConf.scala:83)
at org.apache.spark.SparkConf.setAppName(SparkConf.scala:120)
at wikipedia.WikipediaRanking$.<init>(WikipediaRanking.scala:15)
at wikipedia.WikipediaRanking$.<clinit>(WikipediaRanking.scala)
at wikipedia.WikipediaRanking.main(WikipediaRanking.scala)
Process finished with exit code 1
Running sbt dependencyTree in cmd.exe produces a log like this:
C:\Users\Enrique>sbt dependencyTree
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; support was removed in 8.0
[info] Loading settings from idea.sbt ...
[info] Loading global plugins from C:\Users\Enrique\.sbt\1.0\plugins
[info] Loading project definition from C:\Users\Enrique\project
[info] Set current project to enrique (in build file:/C:/Users/Enrique/)
[error] Not a valid command: dependencyTree
[error] Not a valid project ID: dependencyTree
[error] Expected ':'
[error] Not a valid key: dependencyTree (similar: dependencyOverrides, sbtDependency, dependencyResolution)
[error] dependencyTree
[error]
The main code is a Spark application that ranks programming languages by how often they are mentioned in Wikipedia articles. It uses RDDs for distributed processing and leverages Spark's parallel processing capabilities. It reads like this:
package wikipedia
import org.apache.spark.SparkConf
import org.apache.spark.SparkContext
import org.apache.spark.rdd.RDD
case class WikipediaArticle(title: String, text: String)
object WikipediaRanking {
val langs = List(
"JavaScript", "Java", "PHP", "Python", "C#", "C++", "Ruby", "CSS",
"Objective-C", "Perl", "Scala", "Haskell", "MATLAB", "Clojure", "Groovy")
val conf: SparkConf = new SparkConf().setAppName("wikipedia").setMaster("local[*]")
val sc: SparkContext = new SparkContext(conf)
sc.setLogLevel("WARN")
// Hint: use a combination of `sc.textFile`, `WikipediaData.filePath` and `WikipediaData.parse`
val wikiRdd: RDD[WikipediaArticle] = sc.textFile(WikipediaData.filePath).map(l => WikipediaData.parse(l)).cache()
/** Returns the number of articles on which the language `lang` occurs.
* Hint1: consider using method `aggregate` on RDD[T].
* Hint2: should you count the "Java" language when you see "JavaScript"?
* Hint3: the only whitespaces are blanks " "
* Hint4: no need to search in the title :)
*/
def occurrencesOfLang(lang: String, rdd: RDD[WikipediaArticle]): Int = {
rdd.aggregate(0)((sum, article) => sum + isFound(article, lang), _+_)
}
def isFound(article: WikipediaArticle, lang: String): Int = if(article.text.split(" ").contains(lang)) 1 else 0
/* (1) Use `occurrencesOfLang` to compute the ranking of the languages
* (`val langs`) by determining the number of Wikipedia articles that
* mention each language at least once. Don't forget to sort the
* languages by their occurrence, in decreasing order!
*
* Note: this operation is long-running. It can potentially run for
* several seconds.
*/
def rankLangs(langs: List[String], rdd: RDD[WikipediaArticle]): List[(String, Int)] = {
val ranks = langs.map(lang => (lang, occurrencesOfLang(lang, rdd)))
//for{ lang <- langs; occ = occurrencesOfLang(lang, rdd) if occ != 0} yield (lang, occ)
ranks.sortBy(_._2).reverse
}
/* Compute an inverted index of the set of articles, mapping each language
* to the Wikipedia pages in which it occurs.
*/
def makeIndex(langs: List[String], rdd: RDD[WikipediaArticle]): RDD[(String, Iterable[WikipediaArticle])] = {
val list = rdd.flatMap(article => for( lang <- langs if isFound(article, lang) == 1) yield (lang, article))
list.groupByKey()
}
/* (2) Compute the language ranking again, but now using the inverted index. Can you notice
* a performance improvement?
*
* Note: this operation is long-running. It can potentially run for
* several seconds.
*/
def rankLangsUsingIndex(index: RDD[(String, Iterable[WikipediaArticle])]): List[(String, Int)] = {
val ranks = index.mapValues(_.size).collect().toList.sortBy(-_._2)
ranks
}
/* (3) Use `reduceByKey` so that the computation of the index and the ranking are combined.
* Can you notice an improvement in performance compared to measuring *both* the computation of the index
* and the computation of the ranking? If so, can you think of a reason?
*
* Note: this operation is long-running. It can potentially run for
* several seconds.
*/
def rankLangsReduceByKey(langs: List[String], rdd: RDD[WikipediaArticle]): List[(String, Int)] = {
val list = rdd.flatMap(article => for( lang <- langs if isFound(article, lang) == 1) yield (lang, 1))
list.reduceByKey(_+_).collect().toList.sortBy(_._2).reverse
}
def main(args: Array[String]) {
/* Languages ranked according to (1) */
val langsRanked: List[(String, Int)] = timed("Part 1: naive ranking", rankLangs(langs, wikiRdd))
langsRanked.foreach(println)
/* An inverted index mapping languages to wikipedia pages on which they appear */
def index: RDD[(String, Iterable[WikipediaArticle])] = makeIndex(langs, wikiRdd)
/* Languages ranked according to (2), using the inverted index */
val langsRanked2: List[(String, Int)] = timed("Part 2: ranking using inverted index", rankLangsUsingIndex(index))
langsRanked2.foreach(println)
/* Languages ranked according to (3) */
val langsRanked3: List[(String, Int)] = timed("Part 3: ranking using reduceByKey", rankLangsReduceByKey(langs, wikiRdd))
langsRanked3.foreach(println)
/* Output the speed of each ranking */
println(timing)
sc.stop()
}
val timing = new StringBuffer
def timed[T](label: String, code: => T): T = {
val start = System.currentTimeMillis()
val result = code
val stop = System.currentTimeMillis()
timing.append(s"Processing $label took ${stop - start} ms.\n")
result
}
}
I tried to look up on Google which versions of log4j and slf4j are compatible. Apart from that, I clicked on the "Coursera_Scala_Spark" line for the full sbt error log details, and also ran sbt dependencyTree to check the dependency structure. I also visited this page ( https://index.scala-lang.org/apache/logging-log4j-scala ), but that solution doesn't seem to suit my project's compilation.
Edit 1: I changed log4j to a newer version and removed the slf4j dependencies as redundant. But the project still can't run, and the error log keeps asking for slf4j dependencies.
libraryDependencies ++= Seq(
"org.apache.logging.log4j" % "log4j-api" % "2.15.0",
"org.apache.logging.log4j" % "log4j-core" % "2.15.0"
)
The problem is caused by the following libraries:

| library | version | released |
|---|---|---|
| sbt | 1.1.6 | May 28, 2018 |
| spark-core | 3.4.1 | June 23, 2023 |
| spark-sql | 3.4.1 | June 23, 2023 |

As you can see, the version selected for spark is about 5 years newer than the one selected for sbt.
Each library has its own transitive dependencies, so again we have two libraries whose releases are about 5 years apart.

According to semantic versioning, given a version number MAJOR.MINOR.PATCH, you increment the:

- MAJOR version when you make incompatible API changes
- MINOR version when you add functionality in a backward compatible manner
- PATCH version when you make backward compatible bug fixes

Additional labels for pre-release and build metadata are available as extensions to the MAJOR.MINOR.PATCH format.

Since both libraries pull in log4j 2.x.y, selecting the higher version should work, but it could be that the two releases are not binary compatible. I guess this is the case.
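One thing you can try, whichever sbt/spark combination you end up with, is forcing a single consistent logging stack so that log4j-api, log4j-core and the slf4j binding all come from the same release. The snippet below is only a sketch: the exact version numbers are assumptions (roughly what Spark 3.4.x is reported to ship with), so check what Spark actually resolves before pinning them.
// Hypothetical sketch: pin one consistent slf4j/log4j stack via dependencyOverrides.
// The versions are assumptions; verify them against what Spark 3.4.1 resolves.
dependencyOverrides ++= Seq(
  "org.slf4j" % "slf4j-api" % "2.0.6",
  "org.apache.logging.log4j" % "log4j-api" % "2.20.0",
  "org.apache.logging.log4j" % "log4j-core" % "2.20.0",
  "org.apache.logging.log4j" % "log4j-slf4j2-impl" % "2.20.0"
)
A NoSuchMethodError like the one in your stack trace is typical of a log4j/slf4j bridge from one release being mixed with log4j-core from another, which is exactly the kind of mismatch an override like this is meant to prevent.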
What I'm wondering is whether you really need sbt as a library dependency. If you remove it, the project should be able to compile; a sketch of a trimmed build is shown below.

You can try upgrading and downgrading sbt and spark to see if some combination of versions works. Since I don't know anything about your project, I can't simply tell you to remove sbt from your dependencies, because I'm not sure whether you are actually using something from sbt or why you added that dependency in the first place.

Keep in mind that sbt depends on Scala 2.12, so upgrading to Scala 2.13 could bring more problems.
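For reference, here is a minimal sketch of what the build could look like without the sbt dependency, assuming nothing in your code imports sbt classes. Spark already brings its own slf4j/log4j binding, so the explicit logging dependencies are dropped as well; treat it as a starting point rather than a guaranteed fix.
// Hypothetical trimmed build.sbt: the sbt library dependency and the explicit
// slf4j/log4j entries are removed; everything else is kept as in your build.
name := "YourProjectName"
version := "1.0"
scalaVersion := "2.12.16"
scalacOptions ++= Seq("-deprecation")
lazy val courseId = settingKey[String]("Course ID")
courseId := "e8VseYIYEeWxQQoymFg8zQ"
resolvers += Resolver.sonatypeRepo("releases")
libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % "3.4.1",
  "org.apache.spark" %% "spark-sql" % "3.4.1",
  "org.apache.commons" % "commons-lang3" % "3.12.0", // Apache Commons Lang
  "jline" % "jline" % "2.14.6"
)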
I quickly tried the following combinations locally and got these errors:
[error] * org.scala-lang.modules:scala-parser-combinators_2.12:2.1.1 (early-semver) is selected over 1.1.2
[error] +- org.apache.spark:spark-catalyst_2.12:3.4.1 (depends on 2.1.1)
[error] +- org.scala-sbt:zinc-compile-core_2.12:1.9.2 (depends on 1.1.2)
[error] * org.scala-lang.modules:scala-parser-combinators_2.12:2.1.1 (early-semver) is selected over 1.1.2
[error] +- org.apache.spark:spark-catalyst_2.12:3.4.1 (depends on 2.1.1)
[error] +- org.scala-sbt:zinc-compile-core_2.12:1.8.1 (depends on 1.1.2)
[error] * org.scala-lang.modules:scala-xml_2.12:2.1.0 (early-semver) is selected over {1.3.0, 1.2.0, 1.0.6}
[error] +- org.apache.spark:spark-core_2.12:3.4.1 (depends on 2.1.0)
[error] +- org.scala-lang:scala-compiler:2.12.17 (depends on 2.1.0)
[error] +- org.scala-sbt:testing_2.12:1.7.3 (depends on 1.3.0)
[error] +- org.scala-sbt:sbinary_2.12:0.5.1 (depends on 1.0.6)
[error] +- org.scala-sbt:main_2.12:1.7.3 (depends on 1.3.0)
[error] +- org.scala-sbt:librarymanagement-core_2.12:1.7.1 (depends on 1.2.0)
[error] +- io.get-coursier:lm-coursier-shaded_2.12:2.0.12 (depends on 1.3.0)
Regarding the error you got when sbt dependencyTree was executed: remember that before sbt 1.4 you have to add the plugin in the project/plugins.sbt file like this:
addSbtPlugin("net.virtual-void" % "sbt-dependency-graph" % "0.10.0-RC1")
Since sbt 1.4, you add the plugin with the following line instead:
addDependencyTreePlugin
Once you do that, you will be able to execute the command.
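As a quick check of the setup (assuming sbt 1.4 or newer), the line goes into project/plugins.sbt, where the project/ folder sits next to your build.sbt:
// project/plugins.sbt (hypothetical sketch for sbt >= 1.4)
addDependencyTreePlugin
Also note that sbt has to be launched from the directory containing build.sbt; your log shows it loading a project from C:\Users\Enrique, which suggests it may have been started from the home directory rather than the project folder.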