Tags: pentaho, kettle, sql-loader, pentaho-data-integration

Exception when running two Pentaho Kettle sessions simultaneously to load two different CSV files into two different tables using sqlldr


I am getting the exception below in the console when I call two different transformations to load two different sets of CSV files into two different tables. There is nothing in common between the two tasks. I execute kitchen.bat from two separate consoles to run these transformations.

Having tested this scenario multiple times, one of the two will usually fail when they run together, though not always. Running them one at a time always succeeds without error. What is causing this exception?
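
For reference, the two console invocations look roughly like this. This is a sketch only: the job file paths and log level are placeholders, and it assumes each transformation is wrapped in its own .kjb job file, since kitchen.bat runs jobs.

    REM Console 1 - loads the first set of CSV files
    kitchen.bat /file:C:\etl\load_table_a.kjb /level:Basic

    REM Console 2 - loads the second set of CSV files
    kitchen.bat /file:C:\etl\load_table_b.kjb /level:Basic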

tasklist: 2019/10/04 14:27:51 - SOME_STEP_NAME.0 - ERROR>SQL*Loader-951: Error calling once/load initialization
tasklist: 2019/10/04 14:27:51 - SOME_STEP_NAME.0 - ERROR>ORA-00604: error occurred at recursive SQL level 1
tasklist: 2019/10/04 14:27:51 - SOME_STEP_NAME.0 - ERROR>ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired
tasklist: 2019/10/04 14:27:51 - SOME_STEP_NAME.0 - ERROR (version 5.4.0.1-130, build 1 from 2015-06-14_12-34-55 by buildguy) : Error in step, asking everyone to stop because of:
tasklist: 2019/10/04 14:27:51 - SOME_STEP_NAME.0 - IO exception occured: The pipe has been ended
tasklist: 2019/10/04 14:27:51 - SOME_STEP_NAME.0 - The pipe has been ended
tasklist: 2019/10/04 14:27:51 - SOME_STEP_NAME.0 - ERROR (version 5.4.0.1-130, build 1 from 2015-06-14_12-34-55 by buildguy) : Error while closing output
tasklist: 2019/10/04 14:27:51 - SOME_STEP_NAME.0 - ERROR (version 5.4.0.1-130, build 1 from 2015-06-14_12-34-55 by buildguy) : java.io.IOException: The pipe is being closed
tasklist: 2019/10/04 14:27:51 - SOME_STEP_NAME.0 -                at java.io.FileOutputStream.writeBytes(Native Method)
tasklist: 2019/10/04 14:27:51 - SOME_STEP_NAME.0 -                at java.io.FileOutputStream.write(FileOutputStream.java:345)
tasklist: 2019/10/04 14:27:51 - SOME_STEP_NAME.0 -                at java.io.BufferedOutputStream.write(BufferedOutputStream.java:122)
tasklist: 2019/10/04 14:27:51 - SOME_STEP_NAME.0 -                at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:221)
tasklist: 2019/10/04 14:27:51 - SOME_STEP_NAME.0 -                at sun.nio.cs.StreamEncoder.implClose(StreamEncoder.java:316)
tasklist: 2019/10/04 14:27:51 - SOME_STEP_NAME.0 -                at sun.nio.cs.StreamEncoder.close(StreamEncoder.java:149)
tasklist: 2019/10/04 14:27:51 - SOME_STEP_NAME.0 -                at java.io.OutputStreamWriter.close(OutputStreamWriter.java:233)
tasklist: 2019/10/04 14:27:51 - SOME_STEP_NAME.0 -                at java.io.BufferedWriter.close(BufferedWriter.java:266)
tasklist: 2019/10/04 14:27:51 - SOME_STEP_NAME.0 -                at org.pentaho.di.trans.steps.orabulkloader.OraBulkDataOutput.close(OraBulkDataOutput.java:95)
tasklist: 2019/10/04 14:27:51 - SOME_STEP_NAME.0 -                at org.pentaho.di.trans.steps.orabulkloader.OraBulkLoader.dispose(OraBulkLoader.java:598)
tasklist: 2019/10/04 14:27:51 - SOME_STEP_NAME.0 -                at org.pentaho.di.trans.step.RunThread.run(RunThread.java:96)
tasklist: 2019/10/04 14:27:51 - SOME_STEP_NAME.0 -                at java.lang.Thread.run(Thread.java:745)
tasklist: 2019/10/04 14:27:51 - SOME_FILE_INPUT.0 - Finished processing (I=10058, O=0, R=5, W=10056, U=0, E=0)
tasklist: 2019/10/04 14:27:51 - SOME_TRANSFORMATION_NAME - ERROR (version 5.4.0.1-130, build 1 from 2015-06-14_12-34-55 by buildguy) : Errors detected!
tasklist: 2019/10/04 14:27:51 - SOME_STEP_NAME.0 - Exit Value of sqlldr: 1
tasklist: 2019/10/04 14:27:51 - SOME_STEP_NAME.0 - Finished processing (I=0, O=54, R=55, W=54, U=0, E=1)
tasklist: 2019/10/04 14:27:51 - SOME_TRANSFORMATION_NAME - Transformation detected one or more steps with errors.
tasklist: 2019/10/04 14:27:51 - SOME_TRANSFORMATION_NAME - Transformation is killing the other steps!
tasklist: 2019/10/04 14:27:51 - SOME_TRANSFORMATION_NAME - ERROR (version 5.4.0.1-130, build 1 from 2015-06-14_12-34-55 by buildguy) : Errors detected!

Solution

  • Using a unique control file in each instance of sqlldr solved the problem.

    Parallel execution of sqlldr from different jobs with the same control file caused one instance of sqlldr to overwrite the control file data previously written by the other instance, which led to the errors and the locking. Pointing each job at its own control file removes the shared on-disk state, as sketched below.
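
Below is a minimal sketch of what the two resulting sqlldr invocations look like once each transformation's Oracle Bulk Loader step points at its own control file. The connect string, paths, and table names are placeholders, and the separate log and bad files are an added assumption (to keep all per-run files distinct), not something stated in the original fix:

    REM Transformation 1 - dedicated control, log, and bad files
    sqlldr userid=etl_user/etl_pass@ORCL control=C:\etl\load_a.ctl log=C:\etl\load_a.log bad=C:\etl\load_a.bad

    REM Transformation 2 - its own files, so neither run can overwrite the other's control file
    sqlldr userid=etl_user/etl_pass@ORCL control=C:\etl\load_b.ctl log=C:\etl\load_b.log bad=C:\etl\load_b.bad

Each control file then describes only its own CSV-to-table mapping, for example (column and file names are illustrative):

    LOAD DATA
    INFILE 'C:\etl\table_a.csv'
    APPEND
    INTO TABLE TABLE_A
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
    (COL1, COL2, COL3)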