I'm wondering... why run another job from 'after-job'? That's not something I've ever considered doing, and I'm curious why you set things up that way. If you don't mind.
-craig
"You can never have too many knives" -- Logan Nine Fingers
For each link:
STAGE_NAME
LINK_NAME
LINK_TYPE
LINK_DESCRIPTION
LINK_TARGET
LINK_ROW_COUNT
The rows across the tables are joined by a surrogate key.
Once the audit data is extracted, the routine writes it to a sequential file and then calls a server job to insert the data from the sequential file into database (Oracle) tables.
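As a minimal sketch of that "write audit rows to a sequential file" step, here is the idea in Python rather than DataStage BASIC. The file layout, column values, and stage/link names are assumptions for illustration, based on the fields listed above:

```python
import csv

# Hypothetical per-link audit record, mirroring the fields listed above.
FIELDS = ["STAGE_NAME", "LINK_NAME", "LINK_TYPE",
          "LINK_DESCRIPTION", "LINK_TARGET", "LINK_ROW_COUNT"]

def write_audit_file(path, rows):
    """Write one delimited line per link to a sequential file."""
    with open(path, "w", newline="") as fh:
        writer = csv.DictWriter(fh, fieldnames=FIELDS)
        writer.writeheader()
        for row in rows:
            writer.writerow(row)

# Example: two links from one (made-up) job run.
rows = [
    {"STAGE_NAME": "Xfm_Orders", "LINK_NAME": "lnk_in",
     "LINK_TYPE": "input", "LINK_DESCRIPTION": "source rows",
     "LINK_TARGET": "Seq_Orders", "LINK_ROW_COUNT": 1042},
    {"STAGE_NAME": "Xfm_Orders", "LINK_NAME": "lnk_out",
     "LINK_TYPE": "output", "LINK_DESCRIPTION": "cleansed rows",
     "LINK_TARGET": "Ora_Orders", "LINK_ROW_COUNT": 1040},
]
write_audit_file("audit.txt", rows)
```

A separate load job (or a bulk loader) can then pick the file up and insert it into the Oracle audit tables.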
This information (along with the job log and job report, which are also captured) has proved very useful in diagnostic and performance-measurement activities. But when a job is stopped, the job called from within the After-job routine - the one that loads the rows into the database tables - doesn't complete (it gets stopped too), and I lose the resulting audit data. I would, of course, prefer not to lose the data.
It is not possible to prevent the stop request from being cascaded.
However, status information is recorded in the Repository and can be retrieved even after the job has finished, irrespective of its status.
Therefore your job that collects the statistics could be rewritten to run independently (probably taking the job name as a job parameter), and run from a job sequence or manually as needed.
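One way to run the collector independently is to pull the run report out of the Repository afterwards via the dsjob command-line interface, passing the job name as a parameter. A hedged sketch that only assembles the command (the `-report` option with its BASIC/DETAIL/XML report types comes from the dsjob CLI; the project and job names here are made up):

```python
import subprocess

def report_command(project, job, report_type="DETAIL"):
    """Build the dsjob invocation that dumps a job's run report.

    dsjob reads status information from the Repository, so this works
    even after the job has finished or been stopped.
    """
    return ["dsjob", "-report", project, job, report_type]

cmd = report_command("MyProject", "AuditedJob")
# subprocess.run(cmd, check=True)  # uncomment on a DataStage server
print(" ".join(cmd))
```

Because it runs after the fact, a stop request against the monitored job no longer cascades into the statistics collection.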
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.