Tags: java, spring, spring-boot, spring-batch, spring-batch-tasklet

Spring Batch - Deleting metadata post job completion throws error - Incorrect result size: expected 1, actual 0


I am using Spring Batch with an H2 job repository. The application runs every minute (for my testing), and I do not need to keep the batch metadata once a job completes. Hence, I wrote a custom JobExecutionListener to clear out the tables after every job run:

@Slf4j
@Component
public class CustomJobExecutionListener implements JobExecutionListener {

  // Empty parameter map for the DELETE statements below
  private final Map<String, Object> emptyMap = new HashMap<>();

  @Autowired
  @Qualifier("jdbcTemplateH2")
  private NamedParameterJdbcTemplate namedParameterJdbcTemplate;

  @Override
  public void afterJob(JobExecution jobExecution) {
    JobExecutionListener.super.afterJob(jobExecution);
    log.info("Job completed, will delete table metadata---");

    // Delete child tables first to respect the foreign key constraints
    namedParameterJdbcTemplate.update("DELETE FROM BATCH_STEP_EXECUTION_CONTEXT", emptyMap);
    namedParameterJdbcTemplate.update("DELETE FROM BATCH_STEP_EXECUTION", emptyMap);
    namedParameterJdbcTemplate.update("DELETE FROM BATCH_JOB_EXECUTION_CONTEXT", emptyMap);
    namedParameterJdbcTemplate.update("DELETE FROM BATCH_JOB_EXECUTION_PARAMS", emptyMap);
    namedParameterJdbcTemplate.update("DELETE FROM BATCH_JOB_EXECUTION", emptyMap);
    namedParameterJdbcTemplate.update("DELETE FROM BATCH_JOB_INSTANCE", emptyMap);
    log.info("---------------------Metadata from all tables deleted successfully---------------------------");
  }
}

This works fine and the table data is deleted; however, I see the following error in the logs after it is executed:

  Executing prepared SQL statement [SELECT VERSION\nFROM BATCH_JOB_EXECUTION\nWHERE JOB_EXECUTION_ID=?\n]","logger_name":"org.springframework.jdbc.core.JdbcTemplate","thread_name":"scheduling-1","level":"DEBUG","level_value":10000}
{"@timestamp":"2024-09-18T21:13:25.8614103+05:30","@version":"1","message":"Initiating transaction rollback","logger_name":"org.springframework.orm.jpa.JpaTransactionManager","thread_name":"scheduling-1","level":"DEBUG","level_value":10000}
{"@timestamp":"2024-09-18T21:13:25.8614103+05:30","@version":"1","message":"Rolling back JPA transaction on EntityManager [SessionImpl(1440235680<open>)]","logger_name":"org.springframework.orm.jpa.JpaTransactionManager","thread_name":"scheduling-1","level":"DEBUG","level_value":10000}
{"@timestamp":"2024-09-18T21:13:25.8614103+05:30","@version":"1","message":"Closing JPA EntityManager [SessionImpl(1440235680<open>)] after transaction","logger_name":"org.springframework.orm.jpa.JpaTransactionManager","thread_name":"scheduling-1","level":"DEBUG","level_value":10000}
{"@timestamp":"2024-09-18T21:13:25.8614103+05:30","@version":"1","message":"Job: [FlowJob: [name=learnSpringBatch]] failed unexpectedly and fatally with the following parameters: [{'JobID':'{value=1153310559084400, type=class java.lang.String, identifying=true}'}]","logger_name":"org.springframework.batch.core.launch.support.SimpleJobLauncher","thread_name":"scheduling-1","level":"INFO","level_value":20000,"stack_trace":"org.springframework.dao.EmptyResultDataAccessException: Incorrect result size: expected 1, actual 0

I debugged the source code and found that the afterJob callback is invoked, which calls update in SimpleJobRepository, which in turn issues the SELECT VERSION query seen in the logs.

Since the tables were already cleared out in the CustomJobListener, the error itself is expected. But the log message "Job: [FlowJob: [name=learnSpringBatch]] failed unexpectedly and fatally ... org.springframework.dao.EmptyResultDataAccessException: Incorrect result size: expected 1, actual 0" makes me wonder whether I am doing this correctly.

Can the error be ignored?

Am I doing things the correct way?


Solution

  • There is a final update of the job execution in the job repository after the listener's afterJob callback is invoked, in order to persist any status updates made in the listener.

    But since you deleted everything in the listener (just before that repository update), the job execution is no longer found. I don't know if this is a design issue or if your use case is simply not suited to a job listener. I believe it would be better to clean up the metadata outside the scope of the job (i.e. after the job launcher/operator returns the job execution).
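For example, the cleanup can be moved into the component that launches the job, so it only runs after the launcher has returned and the final repository update has already happened. This is a minimal sketch, not a verified implementation: the class name, the scheduling setup, and the job bean name `learnSpringBatchJob` are assumptions; only the DELETE statements and the `jdbcTemplateH2` qualifier come from the question.

```java
import java.util.Collections;
import java.util.Map;

import org.springframework.batch.core.BatchStatus;
import org.springframework.batch.core.Job;
import org.springframework.batch.core.JobExecution;
import org.springframework.batch.core.JobParameters;
import org.springframework.batch.core.JobParametersBuilder;
import org.springframework.batch.core.launch.JobLauncher;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.jdbc.core.namedparam.NamedParameterJdbcTemplate;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

// Hypothetical scheduler component: launches the job, then purges the metadata.
@Component
public class ScheduledJobRunner {

    @Autowired
    private JobLauncher jobLauncher;

    @Autowired
    private Job learnSpringBatchJob; // assumed bean name for the job in the question

    @Autowired
    @Qualifier("jdbcTemplateH2")
    private NamedParameterJdbcTemplate jdbcTemplate;

    @Scheduled(fixedRate = 60_000) // every minute, as in the question
    public void runAndCleanUp() throws Exception {
        JobParameters params = new JobParametersBuilder()
                .addString("JobID", String.valueOf(System.nanoTime()))
                .toJobParameters();

        // With a synchronous launcher, run() returns only after the job
        // (including its final repository update) has completed.
        JobExecution execution = jobLauncher.run(learnSpringBatchJob, params);

        if (execution.getStatus() == BatchStatus.COMPLETED) {
            Map<String, Object> emptyMap = Collections.emptyMap();
            // Child tables first, to respect the foreign key constraints.
            jdbcTemplate.update("DELETE FROM BATCH_STEP_EXECUTION_CONTEXT", emptyMap);
            jdbcTemplate.update("DELETE FROM BATCH_STEP_EXECUTION", emptyMap);
            jdbcTemplate.update("DELETE FROM BATCH_JOB_EXECUTION_CONTEXT", emptyMap);
            jdbcTemplate.update("DELETE FROM BATCH_JOB_EXECUTION_PARAMS", emptyMap);
            jdbcTemplate.update("DELETE FROM BATCH_JOB_EXECUTION", emptyMap);
            jdbcTemplate.update("DELETE FROM BATCH_JOB_INSTANCE", emptyMap);
        }
    }
}
```

Note that if you use an asynchronous task executor in the launcher, run() returns before the job finishes, so the cleanup would then need to wait for the execution to reach a terminal status first.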