apache-flink, flink-streaming

Chaining Flink Sinks


Background

  • I am new to Flink and come from an Apache Storm background
  • Working on developing a lossless gRPC sink

Crux

  • A finite number of retries will be made, based on the error code returned by the gRPC endpoint
  • After the retries are exhausted, the data will be flushed to a Kafka queue for offline processing
  • The decision to retry will be based on the returned error code (see the sketch after this list)
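
To make the intended behaviour concrete, below is a minimal sketch of a single sink that retries the gRPC call on retryable status codes and flushes the record to Kafka once the retries are exhausted. MyRecord, GrpcEndpointClient, the endpoint address, the topic name, and the set of retryable status codes are hypothetical placeholders, not part of the original question.

    import java.util.Properties;

    import io.grpc.Status;
    import io.grpc.StatusRuntimeException;

    import org.apache.flink.configuration.Configuration;
    import org.apache.flink.streaming.api.functions.sink.RichSinkFunction;

    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    // MyRecord and GrpcEndpointClient are hypothetical placeholders for your own types.
    public class GrpcWithKafkaFallbackSink extends RichSinkFunction<MyRecord> {

        private static final int MAX_RETRIES = 3;            // finite number of retries

        private transient GrpcEndpointClient grpcClient;     // hypothetical blocking gRPC wrapper
        private transient KafkaProducer<String, String> fallbackProducer;

        @Override
        public void open(Configuration parameters) {
            grpcClient = new GrpcEndpointClient("grpc-host:9090");   // assumed endpoint
            Properties props = new Properties();
            props.put("bootstrap.servers", "kafka:9092");            // assumed brokers
            props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            fallbackProducer = new KafkaProducer<>(props);
        }

        @Override
        public void invoke(MyRecord record, Context context) {
            for (int attempt = 0; attempt <= MAX_RETRIES; attempt++) {
                try {
                    grpcClient.send(record);          // hypothetical blocking call
                    return;                           // success, nothing else to do
                } catch (StatusRuntimeException e) {
                    if (!isRetryable(e.getStatus()) || attempt == MAX_RETRIES) {
                        break;                        // give up, fall through to Kafka
                    }
                }
            }
            // flush the failed record to Kafka for offline processing
            fallbackProducer.send(new ProducerRecord<>("grpc-failures", record.key(), record.asJson()));
        }

        // decision to retry is based on the returned gRPC status code (example set only)
        private boolean isRetryable(Status status) {
            switch (status.getCode()) {
                case UNAVAILABLE:
                case DEADLINE_EXCEEDED:
                case RESOURCE_EXHAUSTED:
                    return true;
                default:
                    return false;
            }
        }

        @Override
        public void close() {
            if (fallbackProducer != null) {
                fallbackProducer.close();
            }
        }
    }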

Problem

Is it possible to chain another sink so that the response (successful or error) is also available downstream for any customized processing?


Solution

  • The answer is as per the comment by Dominik Wosiński:

    It's not possible in general. You will have to work around that, either by providing both functionalities in a single sink or by using some existing functions like AsyncIO to write to gRPC and then sink the failures to Kafka, but that may be harder if you need any strong guarantees.
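
To illustrate the second workaround, here is a minimal sketch of the AsyncIO variant under the same assumptions: each gRPC response (success or error) is wrapped into a result element, and only the failures are routed to a Kafka sink further downstream. MyRecord, GrpcResult, GrpcEndpointClient, and the sendAsync call are hypothetical placeholders.

    import java.util.Collections;
    import java.util.concurrent.TimeUnit;

    import org.apache.flink.configuration.Configuration;
    import org.apache.flink.streaming.api.datastream.AsyncDataStream;
    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.functions.async.ResultFuture;
    import org.apache.flink.streaming.api.functions.async.RichAsyncFunction;

    // MyRecord, GrpcResult, and GrpcEndpointClient are hypothetical placeholders.
    public class GrpcAsyncCall extends RichAsyncFunction<MyRecord, GrpcResult> {

        private transient GrpcEndpointClient grpcClient;   // hypothetical non-blocking gRPC wrapper

        @Override
        public void open(Configuration parameters) {
            grpcClient = new GrpcEndpointClient("grpc-host:9090");   // assumed endpoint
        }

        @Override
        public void asyncInvoke(MyRecord record, ResultFuture<GrpcResult> resultFuture) {
            // sendAsync is assumed to return a CompletableFuture<GrpcResult>
            grpcClient.sendAsync(record).whenComplete((result, error) -> {
                if (error != null) {
                    // emit a failure element instead of failing the job,
                    // so the error is available downstream
                    resultFuture.complete(Collections.singleton(GrpcResult.failure(record, error)));
                } else {
                    resultFuture.complete(Collections.singleton(result));
                }
            });
        }

        // Wiring: call gRPC via AsyncIO with a 5 s timeout and 100 in-flight requests,
        // then filter the failed responses and write them to Kafka, e.g.:
        //   DataStream<GrpcResult> results = AsyncDataStream.unorderedWait(
        //           records, new GrpcAsyncCall(), 5, TimeUnit.SECONDS, 100);
        //   results.filter(r -> !r.isSuccess()).map(GrpcResult::asJson).sinkTo(kafkaFailureSink);
    }

As the quoted comment warns, this only makes failures visible as regular stream elements; stronger end-to-end guarantees (e.g. exactly-once delivery to Kafka) would need additional work on top of Flink's checkpointing and a transactional Kafka sink.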