Hi,
I have around 10 server jobs sequenced in a sequencer. The main sequencer job will run through Autosys daily. I need the logs of all aborted jobs, and all warnings, to be emailed to the production supervisor. Could anybody please advise me on this?
Thanks,
Deep
Job Log
If you look at the options that the UNIX command "dsjob" gives you, I think it would be best to incorporate the checking for aborted/incomplete jobs and the e-mailing of job logs in a UNIX shell script.
Hi,
As mentioned in the last post, you should use the UNIX dsjob command.
You can write a small shell script which will invoke the dsjob command.
Pass a parameter like -report to this command.
Capture the output of the dsjob command to a file.
Use the grep command to search for lines with a status code <> 0.
To the calling shell script, you will need to pass the names of all the jobs you want to check.
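The steps above could be sketched roughly like this. Note the install path, project name, and mail recipient are placeholders you would need to change, and the exact "Job Status" text dsjob prints can vary by version, so treat this as a starting point rather than a finished script:

```shell
#!/bin/sh
# Sketch: check a list of DataStage jobs and mail the report of any
# job that did not finish clean. Paths/names below are assumptions.
DSHOME=/opt/IBM/InformationServer/Server/DSEngine   # adjust to your install
PROJECT=MyProject                                   # assumed project name
MAILTO=prod.supervisor@example.com                  # assumed recipient
REPORT=/tmp/dsjob_report.$$

# The job names are passed as arguments to this script
for JOB in "$@"
do
    # Capture the job report to a file (BASIC keeps the mail short)
    "$DSHOME/bin/dsjob" -report "$PROJECT" "$JOB" BASIC > "$REPORT" 2>&1

    # Grep the job status line out of -jobinfo and decide whether to mail
    STATUS=`"$DSHOME/bin/dsjob" -jobinfo "$PROJECT" "$JOB" | grep "Job Status"`
    case "$STATUS" in
        *"RUN OK"*) ;;   # finished clean - nothing to send
        *) mailx -s "DataStage job $JOB problem" "$MAILTO" < "$REPORT" ;;
    esac
done
rm -f "$REPORT"
```

You would call it from cron or from the end of the Autosys box, e.g. `check_jobs.sh Job1 Job2 ... Job10`.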
Ketfos
Are the log entries really necessary?
The reason I ask is that if you are using a sequence job, you can add a Notification stage to each of the jobs and set the trigger to send an email if the job finishes with an abort or warning. In the subject you can use
Job #Job_Activity_0.$JobName# finished with a status of #Job_Activity_0.$JobStatus#
Plus, in the body you can include the final stats for the job. However, it will not include all of the log entries for that run.
If the first action for the person troubleshooting a problem is to log into the Director anyway, then why take the time to write a fancy little script to capture that info?
If you must have the log entries, then I guess I would try to use an Execute Command stage to run a script with the dsjob command in it, and use the same trigger logic as above. I believe in the parameters for that stage you can use #Job_Activity_0.$JobName#, so you can develop a reusable script.
hope this helps
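A reusable wrapper along those lines might look like this. It takes the job name as its only argument (so the Execute Command stage can pass #Job_Activity_0.$JobName#); the dsjob path, project name, and recipient are assumptions for illustration:

```shell
#!/bin/sh
# mail_job_log.sh - sketch of a reusable wrapper, called from an
# Execute Command stage as:  mail_job_log.sh #Job_Activity_0.$JobName#
JOB="$1"
PROJECT=MyProject                                            # assumed project name
MAILTO=prod.supervisor@example.com                           # assumed recipient
DSJOB=/opt/IBM/InformationServer/Server/DSEngine/bin/dsjob   # adjust to your install
LOGFILE=/tmp/${JOB}.log

# -logsum lists the log entries for the most recent run; filter to
# warnings and fatal errors so only the interesting lines get mailed.
"$DSJOB" -logsum -type WARNING "$PROJECT" "$JOB" >  "$LOGFILE" 2>&1
"$DSJOB" -logsum -type FATAL   "$PROJECT" "$JOB" >> "$LOGFILE" 2>&1

mailx -s "Log entries for job $JOB" "$MAILTO" < "$LOGFILE"
rm -f "$LOGFILE"
```

Because the job name comes in as a parameter, the same script serves all ten jobs in the sequence.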