
Stopping Log entries

Posted: Wed Sep 10, 2008 12:57 pm
by JPalatianos
Hi,
A developer has just run a job in dev with "No limit" set for warnings. We managed to terminate the job (by recompiling) and clear the log, but it keeps writing to it. Is there any way to stop this...... three hours later??
Thanks for any info....
John

Posted: Wed Sep 10, 2008 12:59 pm
by chulett
Find the process and kill it.

Posted: Wed Sep 10, 2008 1:03 pm
by JPalatianos
When I do a list.readu all, the job does not show up. The status of the job is showing as compiled in Director, but there are hundreds of log entries being written every few seconds.

Posted: Wed Sep 10, 2008 2:09 pm
by chulett
Under UNIX I'd be looking for the job process / PID and killing that; for Windows I've no clue how that would be done, but either way it's all from outside of DataStage at the O/S level.
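On UNIX that hunt might look something like the sketch below. The process name "DSD.RUN" and job name "MyJob" are assumptions for illustration, not taken from this thread; check your own `ps -ef` output for the real names (and on Windows you'd be in Task Manager or `taskkill` territory instead).

```shell
#!/bin/sh
# Hypothetical sketch: extract PIDs (second column of ps -ef output) for
# lines matching a pattern, excluding the grep process itself.
pids_for() {
  grep "$1" | grep -v grep | awk '{print $2}'
}

# Deterministic demo against canned ps -ef output; the DSD.RUN/MyJob
# names are assumed, not confirmed by this thread:
sample='dsadm  4321     1  0 12:57 ?  00:00:01 DSD.RUN MyJob 0
dsadm  5678     1  0 12:58 ?  00:00:00 dsapi_slave'
echo "$sample" | pids_for 'DSD.RUN'   # prints 4321

# Live usage would be along the lines of:
#   ps -ef | pids_for 'DSD.RUN' | xargs kill
```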

Posted: Wed Sep 10, 2008 3:11 pm
by ray.wurlod
You can't stop it without terminating the process (usually a database process) that is generating the errors. And note too that these are buffered, so a few thousand will still arrive after that.

Posted: Wed Sep 10, 2008 11:19 pm
by chitravallivenkat
Sometimes recompiling will help, if the status shows stopped but log entries are still being written.

Posted: Thu Sep 11, 2008 7:09 am
by chulett
Don't see how. :?

Posted: Thu Sep 11, 2008 10:29 am
by JPalatianos
OK... I figured I would be patient. I just went back into Director to take a peek and received the following pop-up:
Any ideas what to do with this??
Error selecting from log file RT_LOG207
Command was: SSELECT RT_LOG207 WITH @ID LIKE '1N0N' COUNT.SUP
Error was: Internal data error. File 'D:\DataStage\CostFoundation/RT_LOG207/DATA.30': Computed blink of 0x800 does not match expected blink of 0x0! Detected within group starting at address 0x1800!

Thanks - - John

Posted: Thu Sep 11, 2008 11:02 am
by chulett
Sure, a 'blink' is a 'backwards link' and you can search the forums for that term to find some previous conversations on this topic.

Basically, the log has become corrupt, undoubtedly due to the high volume of writes causing it to run smack into the 32-bit addressing barrier of 2.2GB. At this point you'll need to find the job number in question and issue a CLEAR.FILE RT_LOGnnn where 'nnn' is the job number. Do this from the Administrator or a TCL prompt connected to the project in question. After that, you'll need to re-establish any Auto Purge settings for that log, as they will be removed as well.
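A minimal sketch of that recovery sequence. The TCL command itself is typed into Administrator's command window (or a TCL session attached to the project), so it appears here only as a comment; job number 207 is taken from the RT_LOG207 in the error message above, and the little helper function is purely illustrative, not a DataStage command.

```shell
#!/bin/sh
# Recovery sequence sketch. The actual TCL command, run from the
# Administrator command window attached to the affected project:
#
#   CLEAR.FILE RT_LOG207     (empties the corrupt log for job 207)
#
# Afterwards, re-enable Auto Purge for the job in Director, because
# CLEAR.FILE wipes those settings along with the log entries.

# Illustrative helper (an assumption, not a DataStage tool): build the
# RT_LOGnnn file name from a job number.
log_for_job() {
  printf 'RT_LOG%s\n' "$1"
}

log_for_job 207   # prints RT_LOG207
```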

Have the log writes stopped yet?

ps. Patience isn't always a virtue. :wink:

Posted: Thu Sep 11, 2008 12:20 pm
by JPalatianos
Thanks Craig!!
The log writes did eventually stop, I issued the CLEAR.FILE RT_LOG207 command and set up the auto purge. Looks like we are all set!!
Thanks Again :D