RT_LOG corrupt at 2GB

LD
Premium Member
Posts: 32
Joined: Thu Oct 21, 2010 9:03 am

RT_LOG corrupt at 2GB

Post by LD »

Hi All,

The RT_LOG files for a few jobs are exceeding the 2GB limit, causing those jobs to hang. Every time this happens we have to kill the hung job and clear the RT_LOG using CLEAR.FILE.

Auto-purge is set to purge the log every 5 runs of the job, and we need those runs' entries to do any troubleshooting.

Is there a way to prevent the log corruption? Can we make this file 64-bit so that it can store more data?
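
For reference, the clean-up we run today is roughly the following, on the engine tier (a sketch only - the project path and the job number 123 in RT_LOG123 are placeholders for the real values):

    cd /path/to/ProjectDir           (the DataStage project directory - placeholder path)
    . $DSHOME/dsenv                  (source the engine environment)
    $DSHOME/bin/uvsh                 (opens the UniVerse shell attached to the project)
    >CLEAR.FILE RT_LOG123            (empties the hung job's log hashed file)
    >QUIT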


-
Thanks & Regards
Shailesh
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia

Post by ray.wurlod »

Conversion to 64-bit would certainly prevent these corruptions.

I'd also be asking why the files are growing so large in the first place - is there any way to reduce the number of logged entries?
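
If you do convert, the UniVerse RESIZE command switches an individual hashed file to 64-bit addressing. A sketch, run from the project directory with the job stopped (RT_LOG123 stands in for the real job number):

    $DSHOME/bin/uvsh
    >RESIZE RT_LOG123 * * * 64BIT    (the asterisks keep the current file type, modulus and separation)
    >QUIT

Take a backup first; RESIZE rebuilds the file, so allow for extra disk space while it runs.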
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

That was my first question as well - why are the jobs generating what must be an enormous number of log messages? If these are warnings from the job, fix the job so they are no longer an issue, if at all possible.
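
As a stopgap while the root cause is being fixed, you can at least cap the warning count per run from the command line (project and job names below are placeholders):

    dsjob -run -warn 50 MyProject MyJob    (the run aborts once 50 warnings have been logged)

The cleaner fix is a message handler that demotes or suppresses the known warnings, so they never reach the log at all.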
-craig

"You can never have too many knives" -- Logan Nine Fingers