I don't believe so.
I'm wondering... why run another job from 'after-job'? That's not something I've ever considered doing, and I'm curious why you set things up that way. If you don't mind.
Changing the behaviour of Stopped Jobs & After-job routines
Hi Craig,
The After-job routine captures audit information regarding the server job it is called from. The following information is extracted in the routine:
Job
JOB_NAME
JOB_DESCRIPTION
JOB_STATUS
JOB_WAVE_NO
JOB_START_DT
JOB_END_DT
JOB_ELAPSED_SECS
JOB_BATCH_NO
JOB_VERSION_NO
JOB_PID
Parameters
PARAM_NAME
PARAM_VALUE
Stage
STAGE_NAME
STAGE_DESCRIPTION
STAGE_TYPE
STAGE_STATUS
STAGE_START_DT
STAGE_END_DT
STAGE_ELAPSED_SECS
STAGE_PID
STAGE_CPU_TIME
Link
STAGE_NAME
LINK_NAME
LINK_TYPE
LINK_DESCRIPTION
LINK_TARGET
LINK_ROW_COUNT
The rows across the tables are joined by a surrogate key.
Once the audit data is extracted, the routine writes it to a sequential file and then calls a server job to insert the data from the sequential file into database (Oracle) tables.
This information (along with the job log and job report, which are also captured) has proved very useful in diagnostic and performance measurement activities. But when a job is stopped, the job called from within the After-job routine - which loads the rows into the database tables - doesn't complete (it gets stopped too), and I lose the resulting audit data. I would, of course, prefer not to lose the data.
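The "sequential file, then load" step described above can be sketched as follows. This is a minimal illustration only: the pipe-delimited file layout, the table name, and the column subset are my assumptions (not from the original routine), and sqlite3 stands in for the Oracle target so the sketch is self-contained.

```python
# Hedged sketch of the load step: parse a sequential audit extract and insert
# it into an audit table keyed by a surrogate key. Layout and names are
# assumptions; sqlite3 replaces Oracle here purely for illustration.
import os
import sqlite3
import tempfile

# Assumed layout: surrogate key | job name | job status | elapsed seconds
sample = (
    "1|DailyLoad|Finished|127\n"
    "2|WeeklyRollup|Stopped|12\n"
)
path = os.path.join(tempfile.mkdtemp(), "job_audit.seq")
with open(path, "w") as f:
    f.write(sample)

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE JOB_AUDIT ("
    " AUDIT_KEY INTEGER PRIMARY KEY,"  # surrogate key joining the audit tables
    " JOB_NAME TEXT, JOB_STATUS TEXT, JOB_ELAPSED_SECS INTEGER)"
)

with open(path) as f:
    rows = [line.rstrip("\n").split("|") for line in f if line.strip()]
conn.executemany("INSERT INTO JOB_AUDIT VALUES (?, ?, ?, ?)", rows)
conn.commit()

for row in conn.execute("SELECT AUDIT_KEY, JOB_NAME, JOB_STATUS FROM JOB_AUDIT"):
    print(row)
# prints (1, 'DailyLoad', 'Finished') then (2, 'WeeklyRollup', 'Stopped')
```

The same surrogate key would appear in the parameter, stage, and link tables, so they can all be joined back to the job row.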
katz
It is not possible to prevent the stop request from being cascaded.
However, status information is recorded in the Repository and can be retrieved even after the job has finished, irrespective of its status.
Therefore your job that collects the statistics could be rewritten to run independently (probably passing the job name as a job parameter), and run from a job sequence or manually as needed.
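Running the collector independently with the job name as a parameter might look like the following. The project, job, and parameter names here are placeholders I invented for illustration; only the dsjob flags (-run, -param, -wait) are standard.

```python
# Hedged sketch: construct a dsjob command line that runs an independent
# audit-collector job, passing the audited job's name as a job parameter.
# "MyProject", "CollectJobStats", and "AuditedJobName" are assumed names.
def build_dsjob_cmd(project, collector_job, audited_job):
    # dsjob -run starts a job; -param passes name=value job parameters;
    # -wait blocks until the started job completes.
    return [
        "dsjob", "-run",
        "-param", f"AuditedJobName={audited_job}",
        "-wait",
        project, collector_job,
    ]

cmd = build_dsjob_cmd("MyProject", "CollectJobStats", "DailyLoad")
print(" ".join(cmd))
# On a machine with the DataStage engine installed, this could be launched
# with subprocess.run(cmd, check=True); it is only printed here.
```

This could be scheduled from a job sequence, or run manually after a stopped run to recover the missing audit rows.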
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
I think if you call the after-job job using the command-line 'dsjob' instead of the accepted DS/BASIC set of routines, you will be successful.
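The idea above, sketched below: start the load job as a separate operating-system process via dsjob, rather than through the DS/BASIC job-control API, so it is not part of the chain the stop request cascades through. In the real after-job routine the shell-out would be done from BASIC (e.g. via a command execution call); subprocess stands in here, and the project and job names are assumptions.

```python
# Hedged sketch: launch the audit-load job via the dsjob command line as a
# detached OS process, instead of via the DS/BASIC routines. Names are
# placeholders; the actual launch is shown but not executed here.
import subprocess


def build_loader_cmd(project, loader_job):
    return ["dsjob", "-run", project, loader_job]


def launch_loader_detached(cmd):
    # start_new_session puts the child in its own session/process group, so a
    # signal delivered to the caller's process group does not reach it.
    return subprocess.Popen(cmd, start_new_session=True)


cmd = build_loader_cmd("MyProject", "LoadAuditTables")
print(" ".join(cmd))
# With the engine on PATH: launch_loader_detached(cmd)
```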