Tags: scala, akka, akka-stream, reactive-kafka, akka-dispatcher

Akka Streams Reactive Kafka - OutOfMemoryError under high load


I am running an Akka Streams Reactive Kafka application that needs to stay functional under heavy load. After running for around 10 minutes, the application goes down with an OutOfMemoryError. Debugging the heap dump showed that akka.dispatch.Dispatcher is taking ~5 GB of memory. Below are my configuration and code.

Akka version: 2.4.18

Reactive Kafka version: 2.4.18

1. application.conf:

consumer {
  num-consumers = "2"
  c1 {
    bootstrap-servers = "localhost:9092"
    bootstrap-servers = ${?KAFKA_CONSUMER_ENDPOINT1}
    groupId = "testakkagroup1"
    subscription-topic = "test"
    subscription-topic = ${?SUBSCRIPTION_TOPIC1}
    message-type = "UserEventMessage"
    poll-interval = 100ms
    poll-timeout = 50ms
    stop-timeout = 30s
    close-timeout = 20s
    commit-timeout = 15s
    wakeup-timeout = 10s
    use-dispatcher = "akka.kafka.default-dispatcher"
    kafka-clients {
      enable.auto.commit = true
    }
  }
}
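
For reference, a minimal sketch of how such a block can be read with Typesafe Config (an assumption about what the Settings helper used in the code below does; its implementation isn't shown here):

import com.typesafe.config.ConfigFactory

// Load application.conf and read the c1 consumer block.
val c1 = ConfigFactory.load().getConfig("consumer.c1")
val bootstrapServers = c1.getString("bootstrap-servers")
val groupId          = c1.getString("groupId")
val topics           = c1.getString("subscription-topic").split(',').toList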

2. Launch command:

java -Xmx6g \
-Dcom.sun.management.jmxremote.port=27019 \
-Dcom.sun.management.jmxremote.authenticate=false \
-Dcom.sun.management.jmxremote.ssl=false \
-Djava.rmi.server.hostname=localhost \
-Dzookeeper.host=$ZK_HOST \
-Dzookeeper.port=$ZK_PORT \
-jar ./target/scala-2.11/test-assembly-1.0.jar   

3. Source and Sink actors:

import akka.actor.{Actor, ActorLogging, ActorRef, Props}
import akka.kafka.{ConsumerSettings, Subscriptions}
import akka.kafka.scaladsl.Consumer
import akka.pattern.ask
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.Sink
import akka.util.Timeout
import org.apache.kafka.clients.consumer.ConsumerConfig
import org.apache.kafka.common.serialization.{ByteArrayDeserializer, StringDeserializer}
import scala.concurrent.duration._

class EventStream extends Actor with ActorLogging {

  implicit val actorSystem = context.system
  implicit val timeout: Timeout = Timeout(10.seconds)
  implicit val materializer = ActorMaterializer()
  import context.dispatcher // execution context for the Future callback below
  val settings = Settings(actorSystem).KafkaConsumers

  override def receive: Receive = {
    case StartUserEvent(id) =>
      startStreamConsumer(consumerConfig("EventMessage"+".c"+id))
  }

  def startStreamConsumer(config: Map[String, String]) = {
    val consumerSource = createConsumerSource(config)

    val consumerSink = createConsumerSink()

    // actorA, actorB, and actorC are created elsewhere in the application
    val messageProcessor = startMessageProcessor(actorA, actorB, actorC)

    log.info("Starting The UserEventStream processing")

    val future = consumerSource.map { message =>
      val m = s"${message.record.value()}"
      messageProcessor ? m
    }.runWith(consumerSink)

    future.onComplete {
      case _ => actorSystem.stop(messageProcessor)
    }
  }

  def startMessageProcessor(actorA: ActorRef, actorB: ActorRef, actorC: ActorRef) = {
    actorSystem.actorOf(Props(classOf[MessageProcessor], actorA, actorB, actorC))  
  }

  def createConsumerSource(config: Map[String, String]) = {
    val kafkaMBAddress = config("bootstrap-servers")
    val groupID = config("groupId")
    val topicSubscription = config("subscription-topic").split(',').toList
    println(s"Subscriptiontopics $topicSubscription")

    val consumerSettings = ConsumerSettings(actorSystem, new ByteArrayDeserializer, new StringDeserializer)
      .withBootstrapServers(kafkaMBAddress)
      .withGroupId(groupID)
      .withProperty(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest")
      .withProperty(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG,"true")

    Consumer.committableSource(consumerSettings, Subscriptions.topics(topicSubscription:_*))
  }

  def createConsumerSink() = {
    Sink.foreach(println)
  }
}    

In this case actorA, actorB, and actorC do the business-logic processing and database interaction. Is there anything I am missing in handling the Akka Reactive Kafka consumers, such as commit, error, or throttling configuration? Looking at the heap dump, my guess is that the messages are piling up.


Solution

  • One thing I would change is the following:

    val future = consumerSource.map { message =>
      val m = s"${message.record.value()}"
      messageProcessor ? m
    }.runWith(consumerSink)
    

    In the above code, you're using ask to send messages to the messageProcessor actor and expect replies, but for ask to function as a backpressure mechanism, you need to use it with mapAsync (more information is in the documentation). With plain map, each ask returns a Future that the stream never waits on, so the source keeps pulling from Kafka at full speed while the uncompleted futures and the messages queued in the processor's mailbox accumulate on the heap, which would explain the OutOfMemoryError. Something like the following:

    val future =
      consumerSource
        .mapAsync(parallelism = 5) { message =>
          val m = s"${message.record.value()}"
          messageProcessor ? m
        }
        .runWith(consumerSink)
    

    Adjust the level of parallelism as needed.
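
    Note also that for ask-based backpressure to work, the target actor must reply to every message; otherwise each mapAsync slot stays occupied until the ask times out (10 seconds with your Timeout), and the failed Future will then fail the stream under the default supervision strategy. Here is a minimal sketch of the shape MessageProcessor would need, with the business logic elided; the body is an assumption, since the actor's internals aren't shown in the question:

    import akka.Done
    import akka.actor.{Actor, ActorRef}

    class MessageProcessor(actorA: ActorRef, actorB: ActorRef, actorC: ActorRef) extends Actor {
      override def receive: Receive = {
        case m: String =>
          // ... forward to actorA/actorB/actorC, do the database work ...
          sender() ! Done // reply so the ask future completes and frees a mapAsync slot
      }
    }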