I sometimes get the error shutting down ExecutorService, followed by Consumer stopped. The Kafka event is then processed again, which causes duplication, because the event had already been processed up to the point of storing the data. I have fixed the duplication with idempotence, but is there a way to prevent this shutdown in the first place? These are the properties of my ExponentialBackOffPolicy and retry policy:
backOffPolicy.setMaxInterval(60000);
backOffPolicy.setMultiplier(2.0);
backOffPolicy.setInitialInterval(1000);
simpleRetryPolicy.setMaxAttempts(60);
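For reference, a minimal sketch of how these policies are typically wired into a RetryTemplate for the listener container factory (variable and method names are illustrative, not my exact configuration):

// Minimal sketch (class/bean boilerplate omitted); uses
// org.springframework.retry.support.RetryTemplate,
// org.springframework.retry.backoff.ExponentialBackOffPolicy,
// org.springframework.retry.policy.SimpleRetryPolicy
private RetryTemplate retryTemplate() {
    RetryTemplate retryTemplate = new RetryTemplate();

    ExponentialBackOffPolicy backOffPolicy = new ExponentialBackOffPolicy();
    backOffPolicy.setInitialInterval(1000);
    backOffPolicy.setMultiplier(2.0);
    backOffPolicy.setMaxInterval(60000);
    retryTemplate.setBackOffPolicy(backOffPolicy);

    SimpleRetryPolicy simpleRetryPolicy = new SimpleRetryPolicy();
    simpleRetryPolicy.setMaxAttempts(60);
    retryTemplate.setRetryPolicy(simpleRetryPolicy);

    return retryTemplate;
}

// Blocking retry on the consumer thread via the listener adapter:
// containerFactory.setRetryTemplate(retryTemplate());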
This happens because you have a large retry interval and the consumer thread is blocked for the whole back-off; retry in the listener adapter is deprecated now that the error handlers support back off and exception classification. The error handlers are also able to exit the retry whenever the container is stopped.
Use a suitably configured DefaultErrorHandler (2.8 and later) or a SeekToCurrentErrorHandler for earlier versions (versions before 2.7.10 are no longer supported for OSS users).
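A minimal sketch of the 2.8+ approach, assuming you define your own ConcurrentKafkaListenerContainerFactory bean (bean names and generic types are placeholders, not your actual configuration):

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.listener.DefaultErrorHandler;
import org.springframework.kafka.support.ExponentialBackOffWithMaxRetries;

@Configuration
public class KafkaErrorHandlerConfig {

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory(
            ConsumerFactory<String, String> consumerFactory) {

        ConcurrentKafkaListenerContainerFactory<String, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory);

        // Same shape as the RetryTemplate in the question:
        // 1s initial interval, x2 multiplier, 60s cap, 59 retries (= 60 attempts).
        ExponentialBackOffWithMaxRetries backOff = new ExponentialBackOffWithMaxRetries(59);
        backOff.setInitialInterval(1000L);
        backOff.setMultiplier(2.0);
        backOff.setMaxInterval(60000L);

        // DefaultErrorHandler (2.8+) applies the back-off with exception
        // classification and abandons the wait if the container is stopped.
        factory.setCommonErrorHandler(new DefaultErrorHandler(backOff));
        return factory;
    }
}

For 2.7.x, the equivalent is factory.setErrorHandler(new SeekToCurrentErrorHandler(backOff)) from org.springframework.kafka.listener.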