Tags: python, pyspark, aws-glue, exit, aws-glue-spark

End/exit a glue job programmatically


I am using Glue bookmarking to process data. My job is scheduled every day, but can also be launched manually. Since I use bookmarks, the Glue job can sometimes start with no new data to process, so the DataFrame it reads is empty. In this case, I want to end the job cleanly, because it has nothing to do. I tried:

import sys

if df.rdd.isEmpty():
    job.commit()
    sys.exit(0)

However, my job terminates in error with `SystemExit: 0`.

How can I end the job with a success status?


Solution

  • After some testing, I discovered from @Glyph's answer that:

    os._exit() terminates immediately at the C level and does not perform any of the normal tear-downs of the interpreter.

    Which is exactly what I was looking for. The final solution is:

    import os

    if df.rdd.isEmpty():
        job.commit()
        # os._exit requires an explicit status code; 0 signals success
        os._exit(0)
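
Why `sys.exit(0)` fails while `os._exit(0)` succeeds can be demonstrated outside Glue entirely: `sys.exit` raises a `SystemExit` exception, which any enclosing handler (such as the wrapper Glue runs your script in) can intercept and report as a failure, whereas `os._exit` terminates the process at the C level with no exception and no interpreter teardown. The sketch below runs each behavior in a subprocess to show the difference; the inline scripts are illustrative, not Glue code:

```python
import subprocess
import sys

# Case 1: sys.exit raises SystemExit, so an enclosing try/except
# sees it -- mirroring how a job wrapper can catch it and mark the
# run as failed.
catcher = (
    "import sys\n"
    "try:\n"
    "    sys.exit(0)\n"
    "except SystemExit as e:\n"
    "    print('caught SystemExit:', e.code)\n"
)
out = subprocess.run([sys.executable, "-c", catcher],
                     capture_output=True, text=True)
print(out.stdout.strip())   # the wrapper intercepted the exit

# Case 2: os._exit(0) terminates immediately at the C level;
# nothing in Python can intercept it, and the process exit code is 0.
bypasser = (
    "import os\n"
    "try:\n"
    "    os._exit(0)\n"
    "except SystemExit:\n"
    "    print('never reached')\n"
)
out2 = subprocess.run([sys.executable, "-c", bypasser],
                      capture_output=True, text=True)
print(repr(out2.stdout), out2.returncode)   # no output, exit code 0
```

Note that `os._exit` skips `atexit` handlers, buffered-output flushing, and other interpreter cleanup, which is why calling `job.commit()` *before* it is essential.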