
Abort Job immediately after warning

Posted: Tue Aug 10, 2010 9:54 am
by kamesh_sk
Hi,

The requirement is to abort the job immediately once it starts giving warnings. The job will be run by a third-party tool like AutoSys, so I cannot set the warning limit that you would normally specify when running the job.
Is anyone aware of any routine in DataStage, or a custom-built one, that could scan the logs in real time and abort the job as soon as warnings start to occur?
Any help/suggestion is highly appreciated.

Thanks
Kamesh

Posted: Tue Aug 10, 2010 11:00 am
by chulett
You certainly *can* set a warning limit from the command line for tools like AutoSys; check out the -warn option for dsjob.
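
For example, with placeholder project and job names:

    dsjob -run -warn 1 -jobstatus YourProject YourJob

With -jobstatus, dsjob waits for the job to finish and returns an exit code based on the job's finishing status, so your scheduler can tell a clean run from one that hit the warning limit.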

Posted: Tue Aug 10, 2010 2:56 pm
by kumar_s
As noted, setting -warn 1 will let you abort the job via the wrapper script that you use to call the jobs. This option works well if you are calling individual jobs with the dsjob command from a wrapper script; see the sketch below.
But if a job sequence is called, the warning will only surface after the underlying job has completed.
You can also check out the option available in the Transformer stage, but that may not be advisable in many cases, as you might end up developing some jobs without a Transformer.
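
A minimal sketch of such a wrapper, assuming a typical engine install path (the path, project, and job names here are examples only):

    #!/bin/sh
    # Hypothetical wrapper script -- adjust paths and names for your install.
    DSJOB=/opt/IBM/InformationServer/Server/DSEngine/bin/dsjob

    $DSJOB -run -warn 1 -jobstatus YourProject YourJob
    rc=$?

    # With -jobstatus, dsjob's exit code reflects the job's finishing status
    # (typically 1 = finished OK; treat anything else as a failure here --
    # check the dsjob documentation for your release).
    if [ "$rc" -ne 1 ]; then
        echo "YourJob failed or hit the warning limit (status $rc)" >&2
        exit 1
    fi
    exit 0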

Posted: Wed Aug 11, 2010 3:09 am
by kamesh_sk
Hi,

Thanks to everybody who replied.

I understand the option of issuing -warn through the wrapper/Perl script. But those scripts were developed by a third party, so changing them means a CR has to be raised and money spent. The client is therefore keen to do this at the DataStage level. So, is there any option available within DataStage to abort the job as soon as warnings start to appear in the log? Is there some method to scan the log entries during the run and abort the job?
Kindly share your ideas or thoughts.

Thanks
Kamesh

Posted: Wed Aug 11, 2010 3:42 am
by priyadarshikunal
Then you will have to use an after-job subroutine to do that. However, it will run after the job finishes, not immediately as you mentioned in your post.

Posted: Wed Aug 11, 2010 6:27 am
by chulett
The dsjob change suggested *is* "at the DataStage level" and is the appropriate solution to this problem. IMHO you will spend more money developing a much less effective workaround if you take another path.

To do what you are asking would require a second monitor process to be built, something that would always run at the same time as the job it is monitoring and constantly query the job's log for problems, which is not exactly a speedy process. It would then need to try to stop the monitored job at the first sign of trouble.

Me, I would stick with the wrapper script change; that should be quick and pretty darn painless, I would think.

Posted: Wed Aug 11, 2010 7:23 am
by chulett
Pondering this more: while I still wouldn't advise this as the proper solution, it would certainly be an interesting nut to crack.

At a very high level, you could build a process that runs constantly and queries the repository for newly running jobs. Once it finds one, drop it into an array and spool off a separate task to monitor that running job as noted earlier, attempting to spike it if there are problems. Do this without 'waiting' for it to finish so that multiple monitors can run simultaneously; you would then need a small monitor loop inside the main program to see when they finish and free up their slots. Lather, rinse, repeat. The per-job piece might look like the sketch below.
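
A rough sketch of that per-job monitor, just to show the moving parts. The paths and names are hypothetical, and the whole approach assumes the job's log is cleared at the start of each run so that old warnings don't trip it:

    #!/bin/sh
    # Hypothetical monitor for a single running job -- names are examples only.
    DSJOB=/opt/IBM/InformationServer/Server/DSEngine/bin/dsjob
    PROJECT=YourProject
    JOB=YourJob

    while :; do
        # Crude check: count the WARNING entries currently in the job's log.
        warnings=`$DSJOB -logsum -type WARNING $PROJECT $JOB | wc -l`
        if [ "$warnings" -gt 0 ]; then
            # First sign of trouble: spike the job.
            $DSJOB -stop $PROJECT $JOB
            exit 1
        fi
        sleep 5    # polling the log like this is the 'not speedy' part
    done

You would still need the dispatcher loop described above to spawn one of these per running job and reap them when they exit.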

Or just modify the wrapper script. :wink: