Datastage log overflow limit setting

Posted: Thu Apr 27, 2006 5:11 am
by mansali21
Question:
---------------
1. While a job runs, the log's event IDs can overflow during the current run. How can we still capture the current run's log in the output log file?

Reason:
------------
We do not want to rely on options such as "log only for the current run" in DataStage Director. We need to capture the current run's log alongside the cumulative log. There are two scenarios: while the event IDs stay below roughly 2 billion, the script attached below handles the capture; when the event IDs overflow (exceed roughly 2 billion), we need to know how to capture the log and what logic to use.

More Description:
-------------------
We are trying to capture the generated "Event Id" from DataStage Director using a UNIX shell script: we record the newest event ID before the job runs and again after it finishes, then pull the detail for every event in between. We have worked out the shell script logic, shown below:


logstrt=`/apps/Ascential/DataStage/DSEngine/bin/dsjob -lognewest $Project $JobName`
set -- $logstrt
stlp=$4                        # newest event id before the run

$Command >> $TMP_LOG_NAME      # $Command runs the DataStage job

ReturnCode=$? ; export ReturnCode

logend=`/apps/Ascential/DataStage/DSEngine/bin/dsjob -lognewest $Project $JobName`
set -- $logend
endlp=$4                       # newest event id after the run

cnt=$stlp
touch $TMP_LOG_NAME
while [ $cnt -le $endlp ]
do
    /apps/Ascential/DataStage/DSEngine/bin/dsjob -logdetail $Project $JobName $cnt >> $TMP_LOG_NAME
    cnt=`expr $cnt + 1`
done

But we are facing an event ID "overflow" problem in DataStage Director.

Posted: Thu Apr 27, 2006 5:34 am
by ray.wurlod
The key to the DataStage log is an integer, and therefore cannot be incremented beyond 2,147,483,647 (the largest representable signed 32-bit two's-complement integer). DataStage is really not set up to operate this way; your performance is likely to be abysmal.

The same kind of overflow is likely to occur in your expr $cnt + 1 command: arithmetic in the Bourne and Korn shells is integer arithmetic.

I think you are going to need to find a different strategy for incrementing your cnt shell variable.

Posted: Thu Apr 27, 2006 6:30 am
by chulett
Wait... are you saying that you only have a problem with this methodology when there are more than 2 billion log entries for a job? :shock:

Posted: Thu Apr 27, 2006 7:01 am
by kcbland
It is a bad idea to use the DataStage logs for permanent storage; consider loading the entries into a database if you need full history. Re-importing a job clears its log, so you could never make changes to the job. And there is no method for "migrating" logs if you move to another project or server.
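The database-loading idea could be sketched as a small filter that tags each line of dsjob -logdetail output with the job name and a run stamp, producing delimited rows for a bulk loader. The dsjob path is the one from the thread's script; the function name, field layout, and target file are illustrative assumptions:

```shell
# Hypothetical sketch: turn raw log lines into pipe-delimited rows
# (job|stamp|text) suitable for bulk-loading into a history table.
log_to_rows() {
    job="$1"
    stamp="$2"
    # Prefix every input line so history survives job re-imports.
    awk -v job="$job" -v stamp="$stamp" '{ print job "|" stamp "|" $0 }'
}

# Usage (dsjob path as in the script above; names are assumptions):
# /apps/Ascential/DataStage/DSEngine/bin/dsjob -logdetail $Project $JobName $cnt \
#     | log_to_rows "$JobName" "`date +%Y%m%d%H%M%S`" >> history_rows.txt
```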

Posted: Thu Apr 27, 2006 9:27 am
by vigneshra
Hi

We will encounter this problem too. We use this approach to capture the latest log information through shell scripts, but when the event ID exceeds 2,147,483,647 we suspect it will automatically reset and we will end up with weird numbers. Is there any workaround, such as resetting the log counter back to one at the end of a job run once the event ID reaches some number, say 2,000,000,000? Is there a way to reset the event ID counter?

Posted: Thu Apr 27, 2006 10:03 am
by kcbland
No, it will not "reset". The job will abort when the log file exceeds 2.2GB.

Posted: Thu Apr 27, 2006 10:21 am
by DSguru2B
I think setting auto-purge to two days should work, because every time the log is purged the event ID is set to 0.

Posted: Thu Apr 27, 2006 4:07 pm
by ray.wurlod
DSguru2B wrote: I think setting auto-purge to two days should work, because every time the log is purged the event ID is set to 0.
Not true. The event ID is only set to 0 if the log is cleared.

I still think the problem is going to be with the expr command in the shell script. There is no easy workaround to that.

Posted: Thu Apr 27, 2006 4:45 pm
by vigneshra
Hi,

If the event ID cannot be reset to zero :(, is there any workaround using the command line itself (dsjob), or do we need to develop a dummy job to call the DSGetLogSummary() routine or to query the DS repository?

Posted: Thu Apr 27, 2006 5:06 pm
by ray.wurlod
Create a DataStage job or routine that dumps the contents of a job log somewhere else, such as a text file. This frees you from the integer key and lets you use larger numbers.

This job or routine could also purge the job log, but there are implications: the control records' values also need to be reset. You need to code this in as well, unless you clear the log completely (and even then you need to reinstate the //PURGE.SETTINGS control record, if any).

If you need numbers of more than 38 digits, use string math (the SAdd() function).