I have set up a simple Kafka Connect process to connect to and detect changes in an Oracle CDB/PDB environment.
I have set up all components successfully with no errors: tables are created, users can query, topics get created, etc. However, I'm facing an issue with the CDC process where "New records are not populating my table-specific topic".
There is an entry for this issue in the Confluent troubleshooting guide here: https://docs.confluent.io/kafka-connect-oracle-cdc/current/troubleshooting.html#new-records-are-not-populating-my-table-specific-topic
But when reading it I'm unsure, as it can be interpreted in multiple ways depending on how you look at it:
New records are not populating my table-specific topic
The existing schema (of the table-specific topic?) may not be compatible with the redo log topic (an incompatible redo schema, or an incompatible redo topic itself?). Removing the schema (the table-specific or the redo log schema?) or using a different redo log topic may fix this issue (a different redo topic? why?)
So far I've had no luck getting my process to detect the changes. I'm looking for some help to fully understand the solution above from Confluent.
In our case the cause was the absence of the redo.log.consumer.bootstrap.servers setting. Setting the redo log topic name via redo.log.topic.name was also important.
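For illustration, here is a minimal connector config sketch with those two settings in place. This is only a sketch: the host names, credentials, topic name, and table regex below are placeholder assumptions, not values from our actual setup.

```
# Sketch of an Oracle CDC source connector config (standalone .properties form).
# All host names, credentials, and the table regex are placeholders.
name=oracle-cdc-source
connector.class=io.confluent.connect.oracle.cdc.OracleCdcSourceConnector
oracle.server=oracle-host.example.com
oracle.port=1521
oracle.sid=ORCLCDB
oracle.pdb.name=ORCLPDB1
oracle.username=C##CDC_USER
oracle.password=change-me
start.from=snapshot
table.inclusion.regex=ORCLPDB1\\.MYSCHEMA\\.MYTABLE

# The two settings that mattered in our case: where redo log entries
# are written, and which brokers the redo log consumer reads them from.
redo.log.topic.name=oracle-redo-log-topic
redo.log.consumer.bootstrap.servers=localhost:9092
```

With these two settings present, the connector's internal consumer can locate and read the redo topic, and CDC changes start flowing into the table topics.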
Assumption: it seems that in 'snapshot' mode the connector first brings the initial data into the table topics and then starts pulling the redo log, writing relevant entries to the 'redo' topic. In parallel, as a separate task, it starts a consumer that reads from the redo topic, and it is that consumer task that actually writes the CDC changes to the table topics. That's why the redo.log.consumer.* settings are relevant to configure.