
Pass Parameters through Autosys to DS

Posted: Mon Jan 29, 2007 8:54 am
by just4geeks
I have a DS job that processes different input files, and I use Autosys to schedule the job at different times of the month. However, if the job fails due to a corrupt input file, how do I force-start the job for that specific input file?

In other words, can I pass the name of the input file in Autosys so that it is in turn passed to the DSJob when I force-start it?

Thanks in advance......

Posted: Mon Jan 29, 2007 9:11 am
by DSguru2B
Usually the DataStage job developer takes care of parameter assignment. If a job fails, just change the input file name (which will have to be parameterized) and re-run that job. If you have to explore doing that with Autosys, you will have to talk to the Autosys folks to see how they can pass the parameter, which I doubt they will do. They are going to push this back to you.
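
For reference, re-running a parameterized job from the command line is a one-liner with dsjob. A minimal sketch, where MYPROJECT, MyLoadJob, and the InputFile parameter are hypothetical names you would replace with your own:

Code:

# MYPROJECT, MyLoadJob, and InputFile are placeholders --
# substitute your own project, job, and parameter names.
dsjob -run \
      -param InputFile=/data/incoming/myfile.dat \
      -jobstatus \
      MYPROJECT MyLoadJob

The -jobstatus flag makes dsjob wait for the run to finish and return the job's status as its exit code, which is what a calling script would test.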

Posted: Mon Jan 29, 2007 9:29 am
by chulett
How are you getting the filenames now? If it is some flavor of automagic, won't it simply pick up with the last file it failed on? :?

Posted: Mon Jan 29, 2007 9:41 am
by just4geeks
chulett wrote:How are you getting the filenames now? If it is some flavor of automagic, won't it simply pick up with the last file it failed on? :?
Currently, the files are dumped by a process at different times into a folder. The DS Job simply picks up the file that it finds in the folder, processes it, and then moves the file to a destination folder. Since the DSJob is run at different times, it reads a different file every time it's run.

The only workaround I can think of now is to move the failed files to a 'failed job' folder and then write another DSJob that processes the files contained in this folder. Then I can force-start this DSJob in Autosys.
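
(For what it's worth, the force-start itself would just be a sendevent call; ds_failed_job below is a hypothetical Autosys job name.)

Code:

# Force-start the Autosys job wrapping the failed-files DSJob.
# ds_failed_job is a hypothetical job name.
sendevent -E FORCE_STARTJOB -J ds_failed_job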

I was planning on modifying the current DSJob so that I could control from Autosys which file it reads.

Let me know if you can think of a better solution.

Posted: Mon Jan 29, 2007 9:57 am
by DSguru2B
You can keep the same job; just make it point to the 'failed job' directory. How you do that depends on how you are currently controlling your process. Are you using a Job Sequence, a custom control job, or a Unix script? Regardless, however you are specifying parameters currently, you will have to pass the parameters the same way for the "cover up" or "failed job" run.

Posted: Mon Jan 29, 2007 10:04 am
by just4geeks
DSguru2B wrote:You can keep the same job; just make it point to the 'failed job' directory. How you do that depends on how you are currently controlling your process. Are you using a Job Sequence, a custom control job, or a Unix script? Regardless, however you are specifying parameters currently, you will have to pass the parameters the same way for the "cover up" or "failed job" run.
Thanks for your reply....

I am using a Unix script. Also, I fail to understand what it means to "point to the 'failed job' directory". Can I do it in Autosys? I mean, can I specify which folders to look into in the Autosys JIL script?

Posted: Mon Jan 29, 2007 10:08 am
by DSguru2B
No. Autosys will just fire off your script, which will do everything. It's at a lower level than what Autosys can do; you will have to handle this within your script.
"Pointing to the failed directory" means that your script needs to check whether the job failed; if it did, pass the path of the folder that you mentioned will hold the failed file, so that it gets picked up on the re-run. It's a matter of setting the correct parameter before running your job.

Posted: Mon Jan 29, 2007 10:18 am
by just4geeks
DSguru2B wrote:No. Autosys will just fire off your script, which will do everything. It's at a lower level than what Autosys can do; you will have to handle this within your script.
"Pointing to the failed directory" means that your script needs to check whether the job failed; if it did, pass the path of the folder that you mentioned will hold the failed file, so that it gets picked up on the re-run. It's a matter of setting the correct parameter before running your job.
Thanks, I get it now. But wouldn't that create problems when multiple instances of the Unix script run at the same time, one handling the failed file and the other handling a regular file that has just arrived?

I am beginning to think that creating a separate DSJob specifically to handle failed files is simpler in design, though not elegant.

Posted: Mon Jan 29, 2007 10:28 am
by DSguru2B
Don't run the script that handles failed jobs at the same time as your regular run. It will create confusion. Keep it plain, keep it simple. Easier to manage and maintain.
Run your failed jobs at the end. Or, if you think it will be easier for you to manage a different job, go for it. Just make sure you have enough annotations and documentation to explain why you have a second, nearly identical job. Otherwise developers like me start wondering what's going on :)
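
If you want the script itself to refuse to overlap with another run, a simple lock is enough. A sketch, with a hypothetical lock path (mkdir is used because it is atomic):

Code:

# /tmp/ds_load.lock is a hypothetical lock path.
LOCKDIR=/tmp/ds_load.lock
if ! mkdir "$LOCKDIR" 2>/dev/null; then
    echo "Another run is in progress; exiting." >&2
    exit 1
fi
trap 'rmdir "$LOCKDIR"' 0

# ... run dsjob here ...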

Posted: Mon Jan 29, 2007 10:41 am
by just4geeks
DSguru2B wrote:Don't run the script that handles failed jobs at the same time as your regular run. It will create confusion. Keep it plain, keep it simple. Easier to manage and maintain.
Run your failed jobs at the end. Or, if you think it will be easier for you to manage a different job, go for it. Just make sure you have enough annotations and documentation to explain why you have a second, nearly identical job. Otherwise developers like me start wondering what's going on :)
Thanks DSguru2B. I will take note of what you said. I guess it's easier to go for a different job. Let me go ahead and mark this topic "WorkAround".