apache-flink, debezium, apache-iceberg

Flink: Interrupted while waiting for data to be acknowledged by pipeline


I was doing a POC of Flink CDC + Iceberg. I followed this Debezium tutorial to send CDC events to Kafka: https://debezium.io/documentation/reference/1.4/tutorial.html. My Flink job was working fine and writing data to the Hive table for inserts, but when I fired an update/delete query against the MySQL table, I started getting the error below in my Flink job. I have also attached the output of the retract stream.

Update query - UPDATE customers SET first_name='Anne Marie' WHERE id=1004;

1> (true,1001,Sally,Thomas,[email protected])
1> (true,1002,George,Bailey,[email protected])
1> (true,1003,Edward,Walker,[email protected])
1> (true,1004,Anne,Kretchmar,[email protected])
1> (true,1005,Sarah,Thompson,[email protected])
1> (false,1004,Anne,Kretchmar,[email protected])
1> (true,1004,Anne Marie,Kretchmar,[email protected])
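(Note how the UPDATE arrives as a pair in the last two lines: the false record retracts the old row for id 1004, and the true record adds the updated one.)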

Error stack trace

15:27:42.163 [Source: TableSourceScan(table=[[default_catalog, default_database, topic_customers]], fields=[id, first_name, last_name, email]) -> SinkConversionToTuple2 -> (Map -> Map -> IcebergStreamWriter, Sink: Print to Std. Out) (3/4)] ERROR org.apache.flink.streaming.runtime.tasks.StreamTask - Error during disposal of stream operator.
java.io.InterruptedIOException: Interrupted while waiting for data to be acknowledged by pipeline
    at org.apache.hadoop.hdfs.DataStreamer.waitForAckedSeqno(DataStreamer.java:886) ~[hadoop-hdfs-client-2.10.1.jar:?]
    at org.apache.hadoop.hdfs.DFSOutputStream.flushInternal(DFSOutputStream.java:749) ~[hadoop-hdfs-client-2.10.1.jar:?]
    at org.apache.hadoop.hdfs.DFSOutputStream.closeImpl(DFSOutputStream.java:859) ~[hadoop-hdfs-client-2.10.1.jar:?]
    at org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:818) ~[hadoop-hdfs-client-2.10.1.jar:?]
    at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72) ~[hadoop-common-2.10.1.jar:?]
    at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:106) ~[hadoop-common-2.10.1.jar:?]
    at org.apache.iceberg.shaded.org.apache.parquet.hadoop.util.HadoopPositionOutputStream.close(HadoopPositionOutputStream.java:64) ~[iceberg-flink-runtime-0.11.0.jar:?]
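The "Error during disposal of stream operator" wrapper suggests this InterruptedIOException is likely a secondary symptom: the task is being interrupted while closing its HDFS output after an earlier failure, rather than the interruption itself being the root cause.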

Here’s my code. topic_customers is a Kafka dynamic table that listens to the CDC events.
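The DDL for topic_customers is not included above. A minimal sketch of what it might look like, assuming Flink's debezium-json format and the customers topic name from the Debezium tutorial (the bootstrap servers and startup mode are placeholder assumptions):

tEnv.executeSql(
    "CREATE TABLE topic_customers (" +
    "  id BIGINT," +
    "  first_name STRING," +
    "  last_name STRING," +
    "  email STRING" +
    ") WITH (" +
    "  'connector' = 'kafka'," +
    "  'topic' = 'dbserver1.inventory.customers'," +
    "  'properties.bootstrap.servers' = 'localhost:9092'," +
    "  'scan.startup.mode' = 'earliest-offset'," +
    "  'format' = 'debezium-json'" +
    ")");

The job itself: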

Table out = tEnv.sqlQuery("select * from topic_customers");
DataStream<Tuple2<Boolean, Row>> dsRow = tEnv.toRetractStream(out, Row.class);
// keeps only the payload row (f1), dropping the retract flag (f0)
DataStream<Row> dsRow2 = dsRow.map((MapFunction<Tuple2<Boolean, Row>, Row>) x -> x.f1);
TableLoader tableLoader = TableLoader.fromCatalog(catalogLoader, tableIdentifier);
FlinkSink.forRow(dsRow2, TableSchema.builder()
        .field("id", DataTypes.BIGINT())
        .field("first_name", DataTypes.STRING())
        .field("last_name", DataTypes.STRING())
        .field("email", DataTypes.STRING())
        .build())
        .tableLoader(tableLoader)
        //.overwrite(true)
        .equalityFieldColumns(Collections.singletonList("id"))
        .build();
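Note that the map above discards the retract flag (f0), so a retraction or delete reaches the sink looking like an ordinary row. A minimal sketch of a mapping that keeps the change kind instead, using Row.setKind (whether equality deletes then work end to end still depends on the table spec — see the solution below):

// requires org.apache.flink.types.RowKind
DataStream<Row> dsRow2 = dsRow.map((MapFunction<Tuple2<Boolean, Row>, Row>) t -> {
    Row row = t.f1;
    // f0 == true -> accumulate (insert/upsert); f0 == false -> retract (delete)
    row.setKind(t.f0 ? RowKind.INSERT : RowKind.DELETE);
    return row;
});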

Solution

  • I fixed the issue by moving to the Iceberg v2 spec, which adds support for row-level deletes. You can refer to this PR: https://github.com/apache/iceberg/pull/2410 (a table-creation sketch follows below).
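For reference, a minimal sketch of creating the table on the v2 spec through the Iceberg catalog API. "format-version" is the standard Iceberg table property, but whether it is accepted at creation time depends on your Iceberg build (the linked PR tracks v2 support); catalog here is assumed to be an org.apache.iceberg.catalog.Catalog such as HiveCatalog:

// Schema matching the customers table from the Debezium tutorial
Schema schema = new Schema(
    Types.NestedField.required(1, "id", Types.LongType.get()),
    Types.NestedField.optional(2, "first_name", Types.StringType.get()),
    Types.NestedField.optional(3, "last_name", Types.StringType.get()),
    Types.NestedField.optional(4, "email", Types.StringType.get()));

// Ask for the v2 table spec (row-level deletes) at creation time
catalog.createTable(
    TableIdentifier.of("default", "customers"),
    schema,
    PartitionSpec.unpartitioned(),
    Collections.singletonMap("format-version", "2"));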