Capturing Job log in a text file
Moderators: chulett, rschirm, roy
-
- Participant
- Posts: 342
- Joined: Tue Nov 04, 2008 10:38 am
- Location: Chennai, India
Capturing Job log in a text file
Hi,
How can I capture a job's log and its performance statistics in a text file?
-
- Participant
- Posts: 152
- Joined: Tue Jan 13, 2009 8:59 am
Re: Capturing Job log in a text file
Do you want to capture it on to the client machine?
-
- Participant
- Posts: 342
- Joined: Tue Nov 04, 2008 10:38 am
- Location: Chennai, India
Re: Capturing Job log in a text file
It may be on either the client or the server.
-
- Premium Member
- Posts: 1735
- Joined: Thu Mar 01, 2007 5:44 am
- Location: Troy, MI
Re: Capturing Job log in a text file
vinothkumar wrote: It may be in either Client or Server.
You can create a routine: to get the logs, DSGetLogSummary() or DSGetLogDetails() can be used, and to get the link counts, use DSGetLinkInfo().
Or go to Kim Duke's site and see whether you can get something to fulfill your requirements.
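If you would rather stay at the command line than write a routine, the same information can be pulled with the `dsjob` CLI (mentioned later in this thread). This is only a sketch: the project, job, and output names are made up, and `dsjob` must be on your PATH (typically `$DSHOME/bin` on the server).

```shell
#!/bin/sh
# Sketch: dump a job's log summary and a detailed report (which includes
# link row counts) into a text file. Project/job names are placeholders.

capture_job_log() {
  project=$1; job=$2; outfile=$3
  {
    echo "==== Log summary for $job ===="
    dsjob -logsum "$project" "$job"
    echo "==== Detailed report for $job ===="
    dsjob -report "$project" "$job" DETAIL
  } > "$outfile" 2>&1
}

# Example:
# capture_job_log dstage1 MyLoadJob /tmp/MyLoadJob_log.txt
```

Redirecting stderr as well (`2>&1`) keeps any `dsjob` error messages in the same archive file, which helps when a run is investigated later.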
Priyadarshi Kunal
Genius may have its limitations, but stupidity is not thus handicapped.
We capture row counts with a job after each job is run. Currently we use a shell script to run all jobs; it runs the get-row-counts job after each job. All row counts are stored in run-time audit tables. The old versions of the jobs, and the DDL to create the tables, are on the tips and tricks page linked in my sig.
The current versions, which I have not posted, handle PX jobs. They store each partition of a run, which can be a lot of rows; we have one project with about 40,000 rows per nightly run. This information is critical for performance tuning. If your jobs are running slow, or are perceived as "slow", then you have a history to compare against: was this job always this slow, or has it changed? We have analyzed about everything you can think of, like how many jobs were running at a specific time when the CPU was paging really badly. Same on the database side: how many jobs were loading a database when its performance became "slow".
There are even reports to estimate disk usage. There is a routine which can estimate row length based on the metadata (column length and type) of a link. You can therefore calculate MB/sec, which is a better predictor of performance than rows/sec.
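The MB/sec conversion is simple once you have an estimated row length. The numbers below are invented purely for illustration; in practice the row length would come from the link's column metadata as described above.

```shell
#!/bin/sh
# Sketch: convert rows/sec into MB/sec given an estimated row length in bytes
# (the kind of estimate a routine can derive from a link's column metadata).

rows_to_mb_per_sec() {
  rows_per_sec=$1; row_bytes=$2
  awk -v r="$rows_per_sec" -v b="$row_bytes" 'BEGIN { printf "%.2f", r * b / 1048576 }'
}

# 50,000 rows/sec at an estimated 212 bytes/row:
rows_to_mb_per_sec 50000 212    # prints 10.11
```

Two jobs can show identical rows/sec but move very different volumes of data; normalizing to MB/sec makes runs with different row widths comparable.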
All these jobs are called EtlStats and are free to download. If you can figure out how to make them work, they can add real value to your project. There are lots of posts about these jobs.
PLEASE do not send me personal emails or messages about these jobs. Lots of people are using them. Post your questions in this forum and somebody will answer, usually before I can.
I was thinking about presenting at the next IOD all the ways we have used these jobs. Maybe BlueCross will allow me to share the new versions, and maybe the wrapper shell script which we use. A lot of people have contributed to these, and hopefully we can continue to share generic jobs and scripts. I don't see how anyone can view these as part of our "core" business or proprietary. I don't mind people using them. I try to keep the original developers' names in the jobs and scripts, even when it is not me.
Most of these are free because my boss at Discovery Channel (Rick) said it was cool to post them. Thanks, Rick. Most of them were either developed or fine-tuned there. Jim at Hotels.com was also cool and allowed me to post enhancements. I like this attitude of sharing; we as a group of developers benefit. I wish more people shared code or jobs. Ken, Ray and others have shared some very cool stuff.
Mamu Kim
By the way, the included sequences get row counts for all jobs in the sequence at the end of the sequence, so all the jobs are examples of how to use the other jobs. Some reports are automatically sent at the end of these sequences. Audit reports of the row counts just generated can be very useful in catching mistakes. Other audit reports are included, like the completeness reports.
Do a search; there are lots of posts about all of this.
Mamu Kim
Most of these are server jobs, so you may need to modify them to accurately capture PX information. Some people have done that, and I have posted their changes; I am more than willing to keep doing so.
The original post was about logs. The logs of failed jobs are automatically emailed to whoever you want at the end of the sequences. You could easily change this report into a job that archives them instead. We do this now in our shell script; we use dsjob.
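A minimal sketch of that archive-on-failure idea with `dsjob` follows. The status check and paths are assumptions: the exact wording printed by `dsjob -jobinfo` varies by DataStage release, so adapt the pattern match to your installation.

```shell
#!/bin/sh
# Sketch: archive a job's log to a timestamped text file, but only when the
# job did not finish cleanly. The "RUN OK" match is an assumption about the
# status line printed by `dsjob -jobinfo` on your release.

archive_log_if_failed() {
  project=$1; job=$2; archive_dir=$3
  status=$(dsjob -jobinfo "$project" "$job" | head -n 1)
  case "$status" in
    *"RUN OK"*) : ;;  # finished cleanly, nothing to archive
    *)
      stamp=$(date +%Y%m%d_%H%M%S)
      dsjob -logsum "$project" "$job" \
        > "$archive_dir/${job}_${stamp}.log" 2>&1
      ;;
  esac
}

# Example:
# archive_log_if_failed dstage1 MyLoadJob /var/log/dsarchive
```

The same function could just as easily pipe the captured log to mailx instead of a file, which is essentially what the emailed failure reports described above do.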
Mamu Kim