
Apache Beam Issue with Spark Runner while using Kafka IO


I am trying to test KafkaIO in an Apache Beam pipeline with the Spark runner. The code works fine with the direct runner.

However, adding the following line makes the job fail:

options.setRunner(SparkRunner.class);
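For context, a minimal sketch of the kind of pipeline involved. The topic name, bootstrap servers, and main class are placeholders I chose for illustration, not details from the question:

```java
import org.apache.beam.runners.spark.SparkRunner;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.kafka.KafkaIO;
import org.apache.beam.sdk.options.PipelineOptions;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.kafka.common.serialization.StringDeserializer;

public class KafkaReadPipeline {
    public static void main(String[] args) {
        PipelineOptions options = PipelineOptionsFactory.fromArgs(args).create();
        // Switching from the direct runner to the Spark runner is the
        // single change that triggers the StackOverflowError below.
        options.setRunner(SparkRunner.class);

        Pipeline p = Pipeline.create(options);
        p.apply(KafkaIO.<String, String>read()
                .withBootstrapServers("localhost:9092") // placeholder
                .withTopic("test-topic")                // placeholder
                .withKeyDeserializer(StringDeserializer.class)
                .withValueDeserializer(StringDeserializer.class)
                .withoutMetadata());
        p.run().waitUntilFinish();
    }
}
```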

Error:

ERROR org.apache.spark.executor.Executor: Exception in task 0.0 in stage 2.0 (TID 0)
java.lang.StackOverflowError
    at java.base/java.io.ObjectInputStream$BlockDataInputStream.readByte(ObjectInputStream.java:3307)
    at java.base/java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2135)
    at java.base/java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1668)
    at java.base/java.io.ObjectInputStream.readObject(ObjectInputStream.java:482)
    at java.base/java.io.ObjectInputStream.readObject(ObjectInputStream.java:440)
    at scala.collection.immutable.List$SerializationProxy.readObject(List.scala:488)
    at jdk.internal.reflect.GeneratedMethodAccessor24.invoke(Unknown Source)

Versions I am using:

<beam.version>2.33.0</beam.version>
<spark.version>3.1.2</spark.version>
<kafka.version>3.0.0</kafka.version>

Solution

  • The issue is resolved by adding the JVM argument -Xss2M, which raises the thread stack size to 2 MiB.

    This link helped me solve the issue: https://github.com/eclipse-openj9/openj9/issues/10370
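The flag helps because -Xss sets the JVM thread stack size, and Java deserialization of deeply nested structures (the scala.collection.immutable.List serialization proxy in the trace above) recurses once per nesting level, so a larger stack accommodates a deeper object graph. The effect can be reproduced with plain Java threads, which accept an explicit stack size; the sizes below are illustrative:

```java
public class StackSizeDemo {
    static int depth = 0;

    // Unbounded recursion: runs until the thread's stack is exhausted.
    static void recurse() {
        depth++;
        recurse();
    }

    // Measure how deep we can recurse on a thread with the given stack size.
    static int maxDepth(long stackBytes) throws InterruptedException {
        depth = 0;
        Thread t = new Thread(null, () -> {
            try {
                recurse();
            } catch (StackOverflowError e) {
                // Expected: the stack ran out at the current depth.
            }
        }, "probe", stackBytes);
        t.start();
        t.join();
        return depth;
    }

    public static void main(String[] args) throws InterruptedException {
        int small = maxDepth(512 * 1024);      // ~512 KiB stack
        int large = maxDepth(2 * 1024 * 1024); // ~2 MiB stack, like -Xss2M
        // A larger stack permits deeper recursion, which is why -Xss2M
        // lets the deserializer finish where the default stack overflows.
        System.out.println(large > small);
    }
}
```

Note that on a real cluster the flag must reach the executors as well as the driver; with spark-submit that would be --driver-java-options "-Xss2M" for the driver and --conf spark.executor.extraJavaOptions=-Xss2M for the executors.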