sequence job failure
Moderators: chulett, rschirm, roy
Hi,
In my sequence job I have 5 jobs. If the 4th job fails, then after resetting the job the sequence should be rerun from the 3rd job, not the 4th job.
Could someone suggest how to achieve this without using a routine?
The normal functionality of restartable sequences does not let you do this: it stores the last successful stage and resumes execution after that point. You can, however, read the status information (using a routine) and change the execution order based on it.
I have seen a solution for this issue:
1. When invoking the sequence from the command line, always pass the reset option (dsjob -run -mode RESET -wait <project> <jobname>).
2. As the first step of the sequence, get the last run's status (within the past X hours).
3. If the last run was aborted, then depending on which subjob aborted, pass control to the required (N-1) subjob.
Needless to say, the DataStage canvas for the sequence ends up looking cluttered. Also, if Operations Console is in use, the DSODB tables are quite helpful for finding the status of the previous run.
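The steps above amount to a small piece of driver logic: look at which subjob aborted last time, and hand control to the subjob before it. A minimal sketch of that decision, in Python, is below. All names here are hypothetical; in a real wrapper the failed-job information would come from `dsjob -jobinfo` output or the DSODB run tables, not a hard-coded value.

```python
# Sketch of the "restart from N-1" decision described above.
# last_failed is whichever subjob aborted in the previous run, or None
# if the previous run was clean / no history is available.

def pick_start_job(jobs, last_failed):
    """Return the subjob to restart from: one before the failed job (N-1),
    or the first job when there is no usable failure to match against."""
    if last_failed not in jobs:
        return jobs[0]              # no usable history: run from the top
    idx = jobs.index(last_failed)
    return jobs[max(idx - 1, 0)]    # clamp so Job1 failures restart at Job1

jobs = ["Job1", "Job2", "Job3", "Job4", "Job5"]
print(pick_start_job(jobs, "Job4"))  # Job3: restart one step earlier
print(pick_start_job(jobs, None))    # Job1: clean or unknown last run
```

The clamp to index 0 covers the edge case the posts do not mention: if the very first subjob aborted, there is no N-1 to fall back to.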
-
- Participant
- Posts: 54607
- Joined: Wed Oct 23, 2002 10:52 pm
- Location: Sydney, Australia
- Contact:
To be clear, by not check-pointing job 3, it means always run Job 3 during a restart, regardless of whether it was job 4, job 5 or any other downstream job that failed.
It does not mean "pick up" from Job 3 and continue.
For example, if Jobs 3 and 4 worked and Job 5 failed, then restarting the sequence would run Job 3, skip Job 4, and reset and rerun Job 5.
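One way to picture that restart behaviour is as a filter over the job list: on restart, a job runs if it is not checkpointed (always rerun) or if it did not succeed last time. This is only an illustrative sketch with made-up flags, not DataStage's actual internals:

```python
# Hypothetical sketch of restart behaviour with one non-checkpointed job.
# checkpointed[i] == False means "always rerun on restart" (Job3 here);
# succeeded[i] records the previous run's outcome for each job.

def jobs_to_run_on_restart(names, checkpointed, succeeded):
    run = []
    for name, cp, ok in zip(names, checkpointed, succeeded):
        if not cp or not ok:    # non-checkpointed jobs always run;
            run.append(name)    # failed (or never-run) jobs run again
    return run

names        = ["Job1", "Job2", "Job3", "Job4", "Job5"]
checkpointed = [True,   True,   False,  True,   True]
succeeded    = [True,   True,   True,   True,   False]  # Job5 aborted
print(jobs_to_run_on_restart(names, checkpointed, succeeded))
# ['Job3', 'Job5']: Job3 reruns (no checkpoint), Job4 is skipped
```

This matches the example in the post: with Jobs 3 and 4 successful and Job 5 failed, a restart runs Job 3 again, skips Job 4, and reruns Job 5.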
I'm wondering whether this is a generic question or a very specific one... as in, are they asking about this specific job in this specific sequence? Or is this meant to be more general, where X fails and for some reason you'd want to restart at X-1?
-craig
"You can never have too many knives" -- Logan Nine Fingers
The best way to accomplish what you want is what we call touch files. Have a folder where you put a sequential file after each job runs successfully. Each job in the sequence is assigned a file, and all the files for one sequence start with the same name. All restartability is turned off. If a job's file exists, the sequence does not run that job; the sequence itself does this check. If the file is not there, the job is run.
Now you can manually or automatically delete any file to run all jobs or skip specific jobs.
Works great. Used it for years. You can script removing or adding these files as needed.
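The touch-file scheme above can be sketched in a few lines. This is a minimal illustration with made-up paths and job names; in practice the marker would be created with a `touch` (or `echo`) command after each job step, and `run_job` would be a `dsjob` invocation:

```python
# Minimal sketch of touch-file gating for a job sequence.
# A marker file per job records success; a restart skips any job whose
# marker already exists. Deleting a marker forces that job to rerun.
import os
import tempfile

def run_sequence(jobs, marker_dir, run_job):
    for job in jobs:
        marker = os.path.join(marker_dir, f"SEQ1_{job}.done")
        if os.path.exists(marker):
            continue                  # touch file present: skip the job
        run_job(job)                  # would be a dsjob call in practice
        open(marker, "w").close()     # "touch" the marker on success

ran = []
with tempfile.TemporaryDirectory() as d:
    run_sequence(["Job1", "Job2"], d, ran.append)    # first run: both run
    os.remove(os.path.join(d, "SEQ1_Job2.done"))     # force Job2 to rerun
    run_sequence(["Job1", "Job2"], d, ran.append)    # Job1 is skipped
print(ran)   # ['Job1', 'Job2', 'Job2']
```

Removing all the markers reruns the whole sequence; removing selected ones reruns just those jobs, which is exactly the manual control described above.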
Mamu Kim
That is part of what I meant, Andy. Check-pointing, automatic handling, and the job property settings all need to be set so that DataStage does not decide where the next run starts. All of that is handled by the files, one file per job. You can create them in the sequence after a job runs successfully with an echo or a touch command; we used to call them touch files for that reason.
The checkpoints try to control which job starts after a failed run, so you need to turn all of that logic off.
This helps other things work better as well. If you have steps in a sequence that calculate variables, those steps get skipped when restart and checkpoints are set. With checkpointing and DataStage restartability turned off, they always run. That makes life easier. Restarting a job the old way, when it had failed more than once, was always a risk and never seemed to work correctly.
Mamu Kim