routine for getting message type, message detail from job log

A forum for discussing DataStage® basics. If you're not sure where your question goes, start here.

Moderators: chulett, rschirm, roy

DSRajesh
Premium Member
Posts: 297
Joined: Mon Feb 05, 2007 10:37 pm

routine for getting message type, message detail from job log

Post by DSRajesh »

Hi All,

I need to capture the message type (Info, Warning or Fatal) and the message detail for each entry in the DataStage job log, and store them in an Oracle table.

Please suggest a server routine for this on Windows.
RD
SURA
Premium Member
Posts: 1229
Joined: Sat Jul 14, 2007 5:16 am
Location: Sydney

Re: routine for getting message type, message detail from job

Post by SURA »

You can use DSJobLog, or dsjob -report PROJ_NME JOB_NME DETAIL, or write a routine. There has already been a lot of discussion here, with hints and code provided. Try a search.
Thanks
Ram
----------------------------------
Revealing your ignorance is fine, because you get a chance to learn.
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia
Contact:

Post by ray.wurlod »

If you want a server routine, the main calls will be
DSGetNewestLogId()
DSGetLogSummary()
DSGetLogEventIds()
DSGetLogEvent()

Read about these in the DataStage BASIC manual.
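As a rough illustration of how these calls fit together, here is a sketch of a routine body that walks a job's log from a given start id up to the newest event. The argument lists shown are my assumptions from the call names above, so verify the exact signatures in the BASIC manual; JobName and StartId are hypothetical routine arguments.

```
* Sketch only: collect log events for JobName starting at StartId.
* Verify each function's signature against the DataStage BASIC manual.
$INCLUDE DSINCLUDE JOBCONTROL.H

Ans = ''
hJob = DSAttachJob(JobName, DSJ.ERRNONE)
If hJob Then
   * Id of the most recent event of any type
   NewestId = DSGetNewestLogId(hJob, DSJ.LOGANY)
   For EventId = StartId To NewestId
      * Fetch the detail (timestamp, type, message text) for one event
      EventDetail = DSGetLogEvent(hJob, EventId)
      Ans<-1> = EventId : '|' : EventDetail
   Next EventId
   Junk = DSDetachJob(hJob)
End
```

Each element of Ans then carries one event, ready to be written out for loading into the Oracle table.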
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
DSRajesh
Premium Member
Posts: 297
Joined: Mon Feb 05, 2007 10:37 pm

Post by DSRajesh »

How do I build a routine using these?
RD
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia
Contact:

Post by ray.wurlod »

Choose "Server Routine" from the New menu.
Add DataStage BASIC code.
Save.
Compile (within the Routine dialog).
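For a first attempt, the body can be very small: a server routine receives its arguments by name and returns its result through Ans. A minimal sketch (the routine and argument names are made up for illustration, and the DSJ.* constants come from the standard include file):

```
* GetLogNewestId(JobName): returns the newest log event id, or -1
* if the job cannot be attached.
$INCLUDE DSINCLUDE JOBCONTROL.H

Ans = -1
hJob = DSAttachJob(JobName, DSJ.ERRNONE)
If hJob Then
   Ans = DSGetNewestLogId(hJob, DSJ.LOGANY)
   Junk = DSDetachJob(hJob)
End
```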
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

Add to that the advice to start by reading about and becoming familiar with what each function does, either in the BASIC manual pdf or in the Designer client's "online" help. That will help you figure out how to cobble them together to get the results you require.

There are bigger questions to answer, though. When will you harvest this log information? How will you get it into your target table? One answer would be to run the routine you build in the After Job area of each job. Me, I would advise against that and always prefer a separate process that does this kind of work after your processing has completed. Yes, that means you have more work to do to determine what jobs to interrogate but to me that far outweighs the scenario where issues with log gathering interrupt your loads.

As to the target, I would suggest writing all of the results of your log gathering to a sequential file that you append to each time and then a simple DataStage job can take all that information and write it to your target table in one swell foop.
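If you take the sequential file approach, the append itself is only a few lines of BASIC inside the routine. A minimal sketch, assuming LogFile holds a writable path and LogLine one pre-built delimited record (both hypothetical names):

```
* Sketch: append one delimited line to a running capture file.
OpenSeq LogFile To fLog Then
   Seek fLog, 0, 2 Else Null   ;* position at end of file to append
End Else
   Create fLog Else Call DSLogWarn('Cannot create ' : LogFile, 'LogCapture')
End
WriteSeq LogLine To fLog Else Call DSLogWarn('Cannot write ' : LogFile, 'LogCapture')
CloseSeq fLog
```

A downstream job can then read that file with a Sequential File stage and load the Oracle table in one pass.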

Last but definitely not least, I'm also going to throw in the requisite plug for Kim Duke's ETLStats collection which is free and is already coded to do all this. At worst case you can use it as inspiration. :wink:
-craig

"You can never have too many knives" -- Logan Nine Fingers
kduke
Charter Member
Posts: 5227
Joined: Thu May 29, 2003 9:47 am
Location: Dallas, TX
Contact:

Post by kduke »

The dsjob command is the easiest way, especially on UNIX. It is easy to script a solution. We did some nice things by searching the log for specific errors like out of disk space. Teradata would always give detailed errors. We would email these to the on-call person. It is kind of hard to script the last run's log; it takes a few tricks. The BASIC routines Ray posted do the same thing. You need the log id of the last run and then loop up to the newest log id. I might be able to post that code.
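The "last run" trick Kim describes amounts to checkpointing the newest event id between runs. A hedged BASIC sketch (the checkpoint path is hypothetical, and the log-call signatures should be checked against the BASIC manual):

```
* Sketch: process only events logged since the previous run of JobName.
$INCLUDE DSINCLUDE JOBCONTROL.H

ChkFile = 'D:\logcapture\' : JobName : '.last'   ;* hypothetical checkpoint path
LastId = 0
OpenSeq ChkFile To fChk Then
   ReadSeq LastId From fChk Else LastId = 0
   CloseSeq fChk
End Else Null

hJob = DSAttachJob(JobName, DSJ.ERRNONE)
If hJob Then
   NewestId = DSGetNewestLogId(hJob, DSJ.LOGANY)
   For EventId = LastId + 1 To NewestId
      EventDetail = DSGetLogEvent(hJob, EventId)   ;* verify name/arguments in the manual
      * ... trap the specific errors you care about here ...
   Next EventId
   Junk = DSDetachJob(hJob)

   * Save the high-water mark for the next run
   OpenSeq ChkFile To fChk Then Null Else
      Create fChk Else Null
   End
   WeofSeq fChk                                    ;* truncate before rewriting
   WriteSeq NewestId To fChk Else Null
   CloseSeq fChk
End
```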

You would have to convert it to a Windows batch file because it is a UNIX script.

We found that posting these logs can waste a lot of time and energy because there is too much detail. It is only useful when you narrow it down to specific log items. Certain database errors can be trapped and a DBA might have to always address specific ones. Saves emailing everybody.
Mamu Kim