How to get the log file name for a particular job in Unix

Post questions here relating to DataStage Server Edition for such areas as Server job design, DS Basic, Routines, Job Sequences, etc.

Moderators: chulett, rschirm, roy

pras
Premium Member
Posts: 32
Joined: Mon Nov 28, 2005 8:33 am
Location: Atlanta

How to get the log file name for a particular job in Unix

Post by pras »

I want to know the log file name for a particular job. In Unix, the log files in a project directory have names like RT_LOG420.

How can I find which job a log file such as RT_LOG420 is associated with?

Thanks
Prasanna
ArndW
Participant
Posts: 16318
Joined: Tue Nov 16, 2004 9:08 am
Location: Germany

Post by ArndW »

Prasanna,

These RT_LOGnnn files are hashed files and cannot be read directly from UNIX, so there is not much point in working out which file belongs to which job at that level. You can find the association between the job name and the job number using the DS_JOBS file/table. The command you can use to get this from TCL or the Administrator command window is

Code: Select all

SELECT JOBNO FROM DS_JOBS WHERE NAME = 'JobName'; 
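To go the other way round (from a known RT_LOG number back to the job name), the same table can be queried in reverse. A minimal sketch from the UNIX side, assuming the engine shell is available as $DSHOME/bin/uvsh (on some releases it is called dssh) and that it is run from the project directory; /path/to/project and 420 are placeholders:

Code: Select all

# Run from the project directory so DS_JOBS resolves to this project's table
cd /path/to/project
echo "SELECT NAME FROM DS_JOBS WHERE JOBNO = '420';" | $DSHOME/bin/uvsh

This should print the name of the job whose log file is RT_LOG420.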
pras
Premium Member
Posts: 32
Joined: Mon Nov 28, 2005 8:33 am
Location: Atlanta

Post by pras »

Hi ArndW,

Thanks for your info. My problem is that the DataStage job stops executing after 2000 rows and starts processing the next stage. We checked the limits and set them to no limits, but even then it sometimes skips on after processing 2000 lines, and it does not report this as an error.

We want to create a job that will monitor this condition and trigger a mail if the job skips after 2000 rows. Since this is happening in the production environment, the impact is huge.

How can I create a job that will check this condition?

thanks
Prasanna
ArndW
Participant
Posts: 16318
Joined: Tue Nov 16, 2004 9:08 am
Location: Germany

Post by ArndW »

Using the RT_LOGnnn files is certainly not going to address this.

You are most likely more familiar with UNIX shell scripting than with writing DataStage routines, so the quickest and easiest approach would be to call your job with the UNIX dsjob command and use one of the log command line options, which return information on the running job that you can then process in the script.
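For example, here is a minimal monitoring sketch along those lines. Note that rather than the log options it uses dsjob -linkinfo, which reports details (including the row count) for a named link; the project, job, stage and link names and the mail address are all placeholders you would replace with your own:

Code: Select all

#!/bin/sh
# Hypothetical names -- substitute your own project/job/stage/link.
PROJECT=MyProject
JOB=MyJob
STAGE=MyStage
LINK=MyLink

# dsjob -linkinfo prints details for the link, including its row count.
# The exact output layout varies by release, so adjust the parsing to match.
ROWS=`dsjob -linkinfo $PROJECT $JOB $STAGE $LINK | grep -i 'row' | awk '{ print $NF }'`

if [ "$ROWS" = "2000" ]; then
    echo "$JOB stopped at 2000 rows on link $LINK" | mailx -s "DataStage row-limit alert" ops@example.com
fi

Run from cron or as an after-job routine, this flags the suspicious 2000-row count. If you would rather scan the log itself, dsjob -logsum <project> <job> can be parsed in the same way.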