Hi,
The requirement is to abort the job immediately once it starts giving warnings. The job will be run by third-party tools like Autosys, so I cannot specify the warning limits you have available while running the job.
Is anyone aware of any routine in DS, or a custom-built one, that would scan my logs in real time and abort my job immediately once warnings start to occur?
Any help/suggestion is highly appreciated.
Thanks
Kamesh
Abort Job immediately after warning
As specified, setting -warn = 1 will let you abort the job via the wrapper script that you use to call the jobs. This option works well if you are calling the individual jobs with the dsjob command from a wrapper script.
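For reference, a minimal sketch of that wrapper approach (project and job names are placeholders; `-warn 1` sets the warning limit to one so the job aborts on its first warning, and `-jobstatus` makes dsjob wait and exit with the job's final status):

```shell
#!/bin/sh
# Hedged sketch: wrap the dsjob call so the scheduler (e.g. Autosys)
# sees a failure whenever the job aborts on its first warning.
# Project/job names below are hypothetical.
run_ds_job() {
    project="$1"; job="$2"
    # -warn 1    : abort the job as soon as the first warning is logged
    # -jobstatus : wait for completion and exit with the job's status
    dsjob -run -jobstatus -warn 1 "$project" "$job"
    rc=$?
    # Status 1 is "finished OK"; anything else means warnings/abort/failure.
    if [ "$rc" -ne 1 ]; then
        echo "Job $job did not finish cleanly (dsjob status $rc)" >&2
        return 1
    fi
    echo "Job $job finished OK"
}
```

The third-party tool then only needs to treat a non-zero exit from the wrapper as a job failure.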
But if a Job Sequence is being called, the warning will be reported only after completion of the underlying job.
You can also check out the option available in the Transformer stage. But it may not be advisable in many cases, as you might end up developing some jobs without a Transformer.
Impossible doesn't mean 'it is not possible' actually means... 'NOBODY HAS DONE IT SO FAR'
Hi,
Thanks to everybody who replied.
I understand the option of issuing -warn through the wrapper/Perl script. But the scripts are developed by a third party, so changing them means a CR has to be raised and money spent. The client is therefore keen to do this at the DataStage level. So is there any option available at the DataStage level to abort the job immediately once warnings start to appear in the log? Is there some method to scan the log during the run and abort the job?
Kindly share your ideas or thoughts.
Thanks
Kamesh
The dsjob change suggested *is* "at the DataStage level" and is the appropriate solution to this problem. IMHO you will spend more money developing a much less effective workaround if you take another path.
To do what you are asking would require a second monitor process to be built: something that would always run at the same time as the job it is monitoring and constantly query that job's log for problems, which is not exactly a speedy process. It would then need to try to stop the monitored job at the first sign of trouble.
Me, I would stick with the wrapper script change, that should be quick and pretty darn painless I would think.
-craig
"You can never have too many knives" -- Logan Nine Fingers
In pondering this more, while I still wouldn't advise this as the proper solution, it would certainly be an interesting nut to crack.
At a very high level, you could build a process that basically runs constantly and queries the repository for newly running jobs. Once it finds one, drop it into an array and then spool off a separate task to monitor that running job as noted earlier and attempt to spike it if there are problems. Do this without 'waiting' for it to finish so multiple monitors could run simultaneously; you would then need a small monitor loop inside the main program to see when they finish and free up their slots. Lather, rinse, repeat.
Or just modify the wrapper script.
-craig
"You can never have too many knives" -- Logan Nine Fingers