Tags: scala, dataframe, apache-spark, rdd, apache-spark-dataset

Task not serializable exception when converting a Dataset to an RDD


I have a Dataset which looks like this:

dataset.show(10)

+-----------+
|   features|
+-----------+
|[14.378858]|
|[14.388442]|
|[14.384361]|
|[14.386358]|
|[14.390068]|
|[14.423256]|
|[14.425567]|
|[14.434074]|
|[14.437667]|
|[14.445997]|
+-----------+
only showing top 10 rows

But when I try to convert this Dataset into an RDD using .rdd, like below:

val myRDD = dataset.rdd

I get the following exception:

Task not serializable: java.io.NotSerializableException: scala.runtime.LazyRef
Serialization stack:
    - object not serializable (class: scala.runtime.LazyRef, value: LazyRef thunk)
    - element of array (index: 2)
    - array (class [Ljava.lang.Object;, size 3)
    - field (class: java.lang.invoke.SerializedLambda, name: capturedArgs, type: class [Ljava.lang.Object;)
    - object (class java.lang.invoke.SerializedLambda, SerializedLambda[capturingClass=class org.apache.spark.sql.catalyst.expressions.ScalaUDF, functionalInterfaceMethod=scala/Function1.apply:(Ljava/lang/Object;)Ljava/lang/Object;, implementation=invokeStatic org/apache/spark/sql/catalyst/expressions/ScalaUDF.$anonfun$f$2:(Lscala/Function1;Lorg/apache/spark/sql/catalyst/expressions/Expression;Lscala/runtime/LazyRef;Lorg/apache/spark/sql/catalyst/InternalRow;)Ljava/lang/Object;, instantiatedMethodType=(Lorg/apache/spark/sql/catalyst/InternalRow;)Ljava/lang/Object;, numCaptured=3])
    - writeReplace data (class: java.lang.invoke.SerializedLambda)

How do I fix this?


Solution

  • java.io.NotSerializableException: scala.runtime.LazyRef
    

    This clearly indicates a runtime version mismatch. You have not mentioned your Spark version...

    This is a Scala version issue: downgrade to Scala 2.11 and it should work. You can confirm the mismatch and fix your build as shown in the sketches below.

    See the version table at https://mvnrepository.com/artifact/org.apache.spark/spark-core and change your Scala version appropriately.
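
    As a quick check, here is a minimal sketch (the object name VersionCheck is a placeholder) that prints the Scala version your code runs on next to the Spark version of the cluster. If the Scala binary versions disagree (e.g. code compiled with 2.12 running against a Spark built for 2.11), you get serialization failures like the LazyRef one above, because scala.runtime.LazyRef only exists in the 2.12 encoding of lazy vals:

    import org.apache.spark.sql.SparkSession

    object VersionCheck {
      def main(args: Array[String]): Unit = {
        // assumption: a local run, just for the check
        val spark = SparkSession.builder()
          .appName("version-check")
          .master("local[*]")
          .getOrCreate()

        // spark.version is the running Spark release;
        // versionString is the Scala runtime your code was started with
        println(s"Spark version: ${spark.version}")
        println(s"Scala version: ${scala.util.Properties.versionString}")

        spark.stop()
      }
    }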

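    If you build with sbt, a minimal build.sbt sketch looks like the following; the version numbers are illustrative, so pick yours from the table linked above:

    // The Scala binary version must match the one your Spark
    // distribution was built with (e.g. the default Spark 2.4.5
    // distribution is built for Scala 2.11).
    scalaVersion := "2.11.12"

    libraryDependencies ++= Seq(
      // %% appends the Scala binary suffix (_2.11) to the artifact
      // name, so the Spark jars and your code agree on the Scala version.
      "org.apache.spark" %% "spark-core" % "2.4.5" % "provided",
      "org.apache.spark" %% "spark-sql"  % "2.4.5" % "provided"
    )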