So I have the following three jobs defined:
/* ----------------- JOB_A ----------------- */
insert_job: JOB_A job_type: CMD
command: ${BatchScripts}/JOB_A.ksh
machine: xyz
owner: abc@xyz
permission: mx
date_conditions: 1
run_calendar: 13BUSDAY
start_times: "00:10"
std_out_file: /autotmp/JOB_A.std
std_err_file: /autotmp/JOB_A.err
alarm_if_fail: 1
profile: /export/home/abc/.profile_autosys
/* ----------------- JOB_B ----------------- */
insert_job: JOB_B job_type: CMD
command: ${BatchScripts}/JOB_B.ksh
machine: xyz
owner: abc@xyz
permission: mx
date_conditions: 1
run_calendar: 13BUSDAY
start_times: "00:15"
condition: s(JOB_A)
std_out_file: /autotmp/JOB_B.std
std_err_file: /autotmp/JOB_B.err
alarm_if_fail: 1
profile: /export/home/abc/.profile_autosys
/* ----------------- JOB_C ----------------- */
insert_job: JOB_C job_type: CMD
command: ${BatchScripts}/JOB_C.ksh
machine: xyz
owner: abc@xyz
permission: mx
date_conditions: 1
run_calendar: 13BUSDAY
start_times: "00:45"
condition: s(JOB_B)
std_out_file: /autotmp/JOB_C.std
std_err_file: /autotmp/JOB_C.err
alarm_if_fail: 1
profile: /export/home/abc/.profile_autosys
They run, and when I check their status, I see this:
Job Name            Last Start           Last End             ST Run/Ntry   Pri/Xit
___________________ ____________________ ____________________ __ __________ _______
JOB_A               05/18/2016 00:10:03  05/18/2016 00:46:22  SU 76659457/1 0
JOB_B               05/18/2016 00:46:24  05/18/2016 00:48:19  SU 76660708/1 0
JOB_C               05/18/2016 00:45:03  05/18/2016 00:45:07  SU 76660477/1 0
Now, we are having a problem with JOB_C: it is not running properly, and we traced it back to the fact that it is running earlier than it should. In other words, as you can see from the start/end times above, JOB_C started (and finished) before JOB_B even started.
This confuses me, since JOB_C has the condition s(JOB_B).
What could be causing this behaviour? JOB_B waited for JOB_A as it should have and ran fine; JOB_C, however, appears to be a bit impatient.
This has happened on several nights, but it does not happen every night (perhaps one night in three fails in the manner above).
The only explanation I can guess at is that, since JOB_B has not started yet at 00:45, JOB_C is seeing the SU status left over from JOB_B's previous run?
However, that does not quite make sense either, since JOB_B is set to start at 00:15. Shouldn't it change to AC status first, and then wait for JOB_A based on its condition?
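The next time it happens, I can pull JOB_B's previous-run report and JOB_C's detail report to confirm which status the condition actually saw, roughly like this (assuming the standard autorep client options are available on that machine):
# report for JOB_B's previous run; a negative -r value is relative to the latest run
autorep -J JOB_B -r -1
# detail report showing the events around JOB_C's 00:45 start
autorep -J JOB_C -d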
[edit] Version is: CA Workload Automation Agent for LINUX (Intel) 32-bit, Version R11.3, Service Pack 2, Maintenance Level 0, Build 508 [/edit]
You are correct: Job C is starting prematurely because at 00:45 Job B is still showing SU status from its previous run, so the s(JOB_B) condition is already satisfied. Job B, by contrast, waits for Job A because Job A's status is RU (running) when Job B's start time fires at 00:15, so s(JOB_A) is not yet met.
Job B does not change to AC status because it is not inside a box that would activate it.
My suggestion is to place Jobs A, B, and C inside a box scheduled to start at 00:10 and remove Job A's start_times. Job A would then start immediately when the box starts at 00:10, and Jobs B and C would change to AC status as soon as the box starts, which prevents the problem you are encountering because their conditions can no longer be satisfied by a stale SU from the previous night.
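A minimal JIL sketch of that change might look like the following. BOX_ABC is a hypothetical box name of my choosing; keep your own owner, permission, and calendar conventions.
/* new box that drives the whole stream */
insert_job: BOX_ABC job_type: BOX
owner: abc@xyz
permission: mx
date_conditions: 1
run_calendar: 13BUSDAY
start_times: "00:10"
alarm_if_fail: 1
/* attach JOB_A to the box and drop its own date scheduling,
   so it starts as soon as the box starts at 00:10 */
update_job: JOB_A
box_name: BOX_ABC
date_conditions: 0
start_times:
/* attach JOB_B and JOB_C; when the box starts they move to AC
   status, so s(JOB_A) and s(JOB_B) can only be satisfied by
   runs inside the current box execution */
update_job: JOB_B
box_name: BOX_ABC
update_job: JOB_C
box_name: BOX_ABC
If your version does not let you clear start_times by leaving the attribute empty in an update_job statement, deleting and re-inserting JOB_A with the revised definition achieves the same result. Optionally, you can also drop the start_times and date_conditions from JOB_B and JOB_C and let the s() conditions alone drive them, but that is not required for this fix.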