apache-kafka · kafka-producer-api

Is this possible? producer batch.size * max.request.size > broker max.message.bytes


The average message size is small, but individual message sizes vary.

  • Average message size: 1 KB
  • 1 MB messages arrive at an arbitrary rate, so the producer's max.request.size = 1 MB
  • Broker's max.message.bytes = 2 MB

My questions.

  1. To avoid produce-size errors, does the user have to set batch.size ≤ 2 (so that batch.size * max.request.size stays within the broker's max.message.bytes)?
  2. Or does the producer library decide the batch size automatically to avoid the error, even if the user sets a large batch.size?

Thanks.


Solution

  • Below are the definitions of the related configs in question.

    Producer config

    batch.size: The producer will attempt to batch records until the accumulated batch reaches batch.size before sending it to Kafka (assuming the batch fills before linger.ms expires). Default: 16384 bytes.

    max.request.size: The maximum size of a request in bytes. This setting limits the number of record batches the producer sends in a single request, to avoid sending huge requests, and it is also effectively a cap on the maximum record batch size. Default: 1048576 bytes (1 MB).
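    The following is a minimal sketch (not from the question) of how these two producer settings are used from Java; the broker address and topic name are placeholders. Records that fit within max.request.size are buffered into batches of roughly batch.size, while a record larger than max.request.size is rejected by the client with a RecordTooLargeException.

```java
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.KafkaException;
import org.apache.kafka.common.serialization.ByteArraySerializer;

import java.util.Properties;
import java.util.concurrent.ExecutionException;

public class ProducerSizeLimitsExample {
    public static void main(String[] args) throws InterruptedException {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");  // assumed broker address
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class.getName());
        props.put(ProducerConfig.BATCH_SIZE_CONFIG, 16384);          // default: target batch size per partition
        props.put(ProducerConfig.LINGER_MS_CONFIG, 5);               // wait up to 5 ms for a batch to fill
        props.put(ProducerConfig.MAX_REQUEST_SIZE_CONFIG, 1048576);  // default: 1 MB cap per request/batch

        try (KafkaProducer<byte[], byte[]> producer = new KafkaProducer<>(props)) {
            // A ~1 KB record is buffered into a batch and sent normally.
            producer.send(new ProducerRecord<>("example-topic", new byte[1024]));

            // A record bigger than max.request.size is rejected on the client side;
            // depending on the client version the error surfaces synchronously or
            // through the returned future as a RecordTooLargeException.
            try {
                producer.send(new ProducerRecord<>("example-topic", new byte[2 * 1024 * 1024])).get();
            } catch (KafkaException | ExecutionException e) {
                System.out.println("Oversized record rejected: " + e);
            }
        }
    }
}
```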

    Broker config

    message.max.bytes: The largest record batch size allowed by Kafka. Default: 1000012 bytes.

    replica.fetch.max.bytes: The number of bytes a follower broker attempts to fetch per partition when replicating. It must be large enough for the largest messages so that they can be replicated correctly within the cluster.
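    Broker-wide message.max.bytes and replica.fetch.max.bytes are set in the broker's server.properties. As an aside, max.message.bytes can also be raised for a single topic through the Admin API; the sketch below assumes a recent Kafka clients library, and the topic name and broker address are placeholders.

```java
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.AlterConfigOp;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

import java.util.Collection;
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;

public class RaiseTopicMaxMessageBytes {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker address

        try (Admin admin = Admin.create(props)) {
            // Topic-level max.message.bytes overrides the broker-wide message.max.bytes for this topic.
            ConfigResource topic = new ConfigResource(ConfigResource.Type.TOPIC, "example-topic");
            AlterConfigOp setLimit = new AlterConfigOp(
                    new ConfigEntry("max.message.bytes", String.valueOf(2 * 1024 * 1024)), // 2 MB
                    AlterConfigOp.OpType.SET);

            Map<ConfigResource, Collection<AlterConfigOp>> updates = new HashMap<>();
            updates.put(topic, Collections.singletonList(setLimit));
            admin.incrementalAlterConfigs(updates).all().get();

            // replica.fetch.max.bytes is broker-only and still has to be raised in server.properties.
        }
    }
}
```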

    To answer your questions

    1. To avoid producer send errors, you don't need to set batch.size to 2 MB, as that would delay the transmission of your small messages. You can keep batch.size in line with the average message size and with how much you want to batch.

    2. If you don't specify batch.size, it takes the default value, which is 16384 bytes.

    So basically, you will have to configure the producer's max.request.size >= 2 MB and the broker's message.max.bytes and replica.fetch.max.bytes >= 2 MB, as sketched below.
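    Putting that recommendation together, a producer-side sketch could look like this; the 32 KB batch.size, the topic name, and the broker address are illustrative choices, not values from the question.

```java
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.ByteArraySerializer;

import java.util.Properties;

public class LargeMessageProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");   // assumed broker address
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class.getName());

        // Cap on a single request / record batch: at least as large as the biggest expected message.
        props.put(ProducerConfig.MAX_REQUEST_SIZE_CONFIG, 2 * 1024 * 1024);     // 2 MB
        // Keep batching tuned to the ~1 KB average; no need to inflate it to 2 MB.
        props.put(ProducerConfig.BATCH_SIZE_CONFIG, 32 * 1024);                 // e.g. 32 KB

        // Broker side (server.properties), not settable from the producer:
        //   message.max.bytes=2097152
        //   replica.fetch.max.bytes=2097152

        try (KafkaProducer<byte[], byte[]> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("example-topic", new byte[1024 * 1024])); // occasional ~1 MB message
        }
    }
}
```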