PySpark Structured Streaming: how do I stop or restart a job if it fails? I am new to this, so I want to understand:

1. How does checkpointing help when the job restarts?
2. Do we need to call any method or pass an offset manually when we restart?
3. How do I stop a running job gracefully when I have code-logic changes to deploy?