Capturing Job log in a text file

Post questions here relative to DataStage Server Edition for such areas as Server job design, DS Basic, Routines, Job Sequences, etc.

Moderators: chulett, rschirm, roy

vinothkumar
Participant
Posts: 342
Joined: Tue Nov 04, 2008 10:38 am
Location: Chennai, India

Capturing Job log in a text file

Post by vinothkumar »

Hi,
How can I capture a job's log and its performance statistics in a text file?
betterthanever
Participant
Posts: 152
Joined: Tue Jan 13, 2009 8:59 am

Re: Capturing Job log in a text file

Post by betterthanever »

Onto the client machine?
vinothkumar
Participant
Posts: 342
Joined: Tue Nov 04, 2008 10:38 am
Location: Chennai, India

Re: Capturing Job log in a text file

Post by vinothkumar »

It may be on either the client or the server.
priyadarshikunal
Premium Member
Posts: 1735
Joined: Thu Mar 01, 2007 5:44 am
Location: Troy, MI

Re: Capturing Job log in a text file

Post by priyadarshikunal »

vinothkumar wrote:It may be on either the client or the server.
You can create a routine. To get the logs, DSGetLogSummary() or DSGetLogDetails() can be used; for the link counts, use DSGetLinkInfo().

Or go to Kim Duke's site and see whether you can find something to fulfill your requirements.
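
As a rough illustration only (not a tested routine): a server routine along these lines, assuming two routine arguments JobName and LogFilePath defined in the Manager and placeholder stage/link names ("OutStage"/"OutLink") you would swap for your own, could attach the job, pull its log summary and a link row count, and write them to a text file.

      * Minimal sketch only -- JobName and LogFilePath are assumed routine
      * arguments; "OutStage"/"OutLink" are placeholders for your own design.
      * The DSJ.* constants come from DSINCLUDE JOBCONTROL.H.
      hJob = DSAttachJob(JobName, DSJ.ERRFATAL)

      * Log summary: all entry types, no start/end time, 0 intended as "no cap"
      LogText = DSGetLogSummary(hJob, DSJ.LOGANY, "", "", 0)

      * Row count for one output link, via DSGetLinkInfo()
      RowCount = DSGetLinkInfo(hJob, "OutStage", "OutLink", DSJ.LINKROWCOUNT)

      OPENSEQ LogFilePath TO FilePtr THEN
         OpenedOK = @TRUE
      END ELSE
         CREATE FilePtr THEN OpenedOK = @TRUE ELSE OpenedOK = @FALSE
      END

      IF OpenedOK THEN
         WRITESEQ LogText TO FilePtr ELSE NULL
         WRITESEQ "OutLink row count: " : RowCount TO FilePtr ELSE NULL
         WEOFSEQ FilePtr
         CLOSESEQ FilePtr
         Ans = 0
      END ELSE
         Ans = -1
      END

      ErrCode = DSDetachJob(hJob)

Something like that could then be called from an after-job subroutine or from a job sequence once the job being examined has finished.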
Priyadarshi Kunal

Genius may have its limitations, but stupidity is not thus handicapped. :wink:
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

As noted, you can code something using the various DSGet* routines or use their scripting equivalents from the command line with 'dsjob'.
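
A rough command-line sketch of that approach, with placeholder project, job, stage and link names (and noting that some releases also want -domain/-user/-password/-server arguments):

    # Source the DataStage environment (standard Unix install layout assumed)
    DSHOME=`cat /.dshome`
    . $DSHOME/dsenv

    # Event summary for the job's log, written to a text file
    $DSHOME/bin/dsjob -logsum MyProject MyJob > /tmp/MyJob_log.txt

    # Row count for one link (stage and link names are placeholders)
    $DSHOME/bin/dsjob -linkinfo MyProject MyJob MyStage MyLink >> /tmp/MyJob_log.txt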
-craig

"You can never have too many knives" -- Logan Nine Fingers
kduke
Charter Member
Posts: 5227
Joined: Thu May 29, 2003 9:47 am
Location: Dallas, TX

Post by kduke »

We get row counts with a job that runs after each job. Currently we use a shell script to run all jobs, and it runs the get-row-counts job after each one. All row counts are stored in run-time audit tables. The old versions of the jobs and the DDL to create the tables are on my tips and tricks page, linked in my sig.

The current versions, which I have not posted, handle PX jobs. They store each partition of a run, which can be a lot of rows; we have one project with about 40,000 rows per nightly run. This information is critical for performance tuning. If your jobs are running slow, or are perceived as "slow", then you have a history to compare against: was this job always this slow, or has it changed? We have analyzed about everything you can think of, like how many jobs were running at a specific time when the CPU was paging really badly. Same on the database side: how many jobs were loading a database when its performance became "slow".

There are even reports to estimate disk usage. There is a routine which can estimate row length based on the metadata (column length and type) of a link. You can therefore calculate MB/sec, which is a better predictor of performance than rows/sec.
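To make that concrete with purely illustrative numbers: a link whose metadata works out to roughly 200 bytes per row, moving 1,200,000 rows in 60 seconds, is doing about (200 x 1,200,000) / 60 = 4,000,000 bytes/sec, roughly 4 MB/sec, which tells you far more than knowing it averaged 20,000 rows/sec.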

All these jobs are called EtlStats and are free to download. If you can figure out how to make them work, then there is some value to be added to your project. There are lots of posts about these jobs.

PLEASE do not send me personal emails or messages about these jobs. There are lots of people using them. Post your questions in this forum and somebody will usually answer them before I can.

I was thinking about trying to present at the next IOD all the ways we have used these jobs. Maybe BlueCross will allow me to share the new versions, and maybe the wrapper shell script which we use. I think a lot of people have contributed to these, and hopefully we can continue to share generic jobs and scripts. I don't see how anyone can see these as part of our "core" business or proprietary. I don't mind people using these. I try to keep the original developers' names in the jobs and scripts even when it is not me.

Most of these are free because my boss at Discovery Channel (Rick) said it was cool to post them. Thanks Rick. Most of them were either developed or fine-tuned there. Jim at Hotels.com was also cool and allowed me to post enhancements. I like this attitude of sharing; we as a group of developers benefit. I wish more people shared code or jobs. Ken, Ray and others have shared some very cool stuff.
Mamu Kim
kduke
Charter Member
Posts: 5227
Joined: Thu May 29, 2003 9:47 am
Location: Dallas, TX

Post by kduke »

By the way, the sequences included get row counts for all jobs in the sequence at the end of the sequence, so all the jobs are examples of how to use the other jobs. Some reports are automatically sent at the end of these sequences. Audit reports of the row counts just generated can be very useful in catching mistakes. Other audit reports are included as well, like the completeness reports.

Do a search; there are lots of posts about all of this.
Mamu Kim
kduke
Charter Member
Posts: 5227
Joined: Thu May 29, 2003 9:47 am
Location: Dallas, TX

Post by kduke »

Most of these are server jobs, so you may need to modify them to accurately capture PX information. Some people have done that and I posted their changes. I am more than willing to do so again.

The original post was about logs. The log files of failed jobs are automatically emailed to whoever you want at the end of the sequences, and you could easily change this report into a job that archives them. We do this now in our shell script, using dsjob.
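
A rough sketch of that idea (not the EtlStats script itself; project, job and file names are placeholders, and the exit-code check assumes the usual mapping of 1 = ran OK, 2 = ran with warnings on your release):

    . `cat /.dshome`/dsenv

    # -jobstatus makes dsjob wait and return an exit code derived from the job status
    dsjob -run -jobstatus MyProject MyJob
    rc=$?

    # Anything other than "ran OK" (1) or "ran with warnings" (2): archive the log
    if [ $rc -ne 1 ] && [ $rc -ne 2 ]; then
        echo "==== MyJob `date` ====" >> /archive/failed_job_logs.txt
        dsjob -logsum MyProject MyJob >> /archive/failed_job_logs.txt
    fi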
Mamu Kim
Alokby
Premium Member
Posts: 9
Joined: Wed Sep 15, 2004 7:27 am

Post by Alokby »

You can capture the job log using dsjob -report in your Unix script. Also check the DataStage manual for dsjob -logsum, dsjob -jobinfo, dsjob -logdetail, and others which may meet your needs.
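
For instance, a small sketch with placeholder project, job and file names (BASIC, DETAIL and XML are the documented report types; the event id passed to -logdetail is just an example):

    # Status/performance report for the last run of the job
    dsjob -report MyProject MyJob DETAIL > /tmp/MyJob_report.txt

    # Full detail of one log event, appended to the same file
    dsjob -logdetail MyProject MyJob 0 >> /tmp/MyJob_report.txt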