Tags: apache-flink, flink-streaming, flink-sql

Using the Flink SQL Client to submit a SQL query: how do I restore from a checkpoint or savepoint?


I have started a local Flink cluster using ./start-cluster.sh

I have started a local SQL Client using ./sql-client.sh

I am able to submit SQL statements in the Flink SQL terminal. I have run SET 'state.checkpoints.dir' = 'file:///tmp/flink-savepoints-directory-from-set'; and I can see the checkpoint folder being created and updated while the SQL job is running. (The SQL job reads from a Kafka topic, does some joins, and writes to another topic.)
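For context, a comparable SQL Client session might look roughly like the sketch below; the checkpoint interval setting and the table and topic names are assumptions for illustration, not values from the question, and the Kafka-backed tables are assumed to have been defined earlier with CREATE TABLE.

    -- Checkpointing configuration (the interval here is an assumption; only the directory was set above)
    SET 'execution.checkpointing.interval' = '10s';
    SET 'state.checkpoints.dir' = 'file:///tmp/flink-savepoints-directory-from-set';

    -- Hypothetical streaming job: read from one Kafka-backed table, join, write to another
    INSERT INTO enriched_orders
    SELECT o.order_id, o.amount, c.customer_name
    FROM orders AS o
    JOIN customers AS c ON o.customer_id = c.customer_id;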

When I cancel the job from the Flink UI and submit the SQL again, the job does not restore from state. (I am basing this on the fact that the final sink emits the same messages on every restart; it is as if the job reads the source topic from the beginning again.) I have not shut down the Flink cluster or the Kafka cluster.

I have two questions:

  1. How do I get the SQL query to restore from state?

  2. Is there a way to use the flink run -s ... command to submit the SQL query directly, instead of packaging it as a jar?


Solution

  • This is documented at https://nightlies.apache.org/flink/flink-docs-stable/docs/dev/table/sqlclient/#start-a-sql-job-from-a-savepoint

    SET 'execution.savepoint.path' = '/tmp/flink-savepoints/savepoint-cca7bc-bb1e257f0dab';
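
    For completeness, a rough end-to-end restore flow might look like the following; the job ID, savepoint path, and query are placeholders rather than values from the question, and setting the savepoint path in the SQL Client plays the same role as the -s flag of flink run does for packaged jar jobs.

    # Stop the running job with a savepoint; the job ID is shown in the Flink UI or by ./bin/flink list
    ./bin/flink stop --savepointPath /tmp/flink-savepoints <job-id>

    -- Back in the SQL Client: point the next submission at that savepoint, then rerun the same query
    SET 'execution.savepoint.path' = '/tmp/flink-savepoints/savepoint-<id>';
    INSERT INTO enriched_orders SELECT ...;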