Unable to view the Log file.

Post questions here relating to DataStage Enterprise/PX Edition for such areas as Parallel job design, Parallel datasets, BuildOps, Wrappers, etc.

Moderators: chulett, rschirm, roy

ArndW
Participant
Posts: 16318
Joined: Tue Nov 16, 2004 9:08 am
Location: Germany

Post by ArndW »

This error means that the log file for this job has become corrupted. The most common cause is that the disk (at one time) filled up.
For LOG files the solution is easy: just clear the log file.
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

Yup, typically corrupted from a full disk or you've blown through the 2GB barrier. Issue a CLEAR.FILE RT_LOG43 from the Administrator client or a TCL prompt.
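If you're not sure the number really is 43: the nn in RT_LOGnn is the job's number, which you can look up in DS_JOBS. A quick sketch from a TCL prompt, with 'YourJobName' as a placeholder for the actual job name:

Code:

SELECT NAME, JOBNO FROM DS_JOBS WHERE NAME = 'YourJobName';

Whatever JOBNO comes back is the number to use in the CLEAR.FILE command.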
-craig

"You can never have too many knives" -- Logan Nine Fingers
ArndW
Participant
Posts: 16318
Joined: Tue Nov 16, 2004 9:08 am
Location: Germany

Post by ArndW »

Just doing a full log clear from the Director (Job -> Clear Log -> immediate purge of all entries) should do the trick as well. If it doesn't, you can do a CLEAR.FILE (but remember that the purge settings will be gone).
prasanna_anbu
Participant
Posts: 42
Joined: Thu Dec 28, 2006 1:39 am

Post by prasanna_anbu »

chulett wrote:Yup, typically corrupted from a full disk or you've blown through the 2GB barrier. Issue a CLEAR.FILE RT_LOG43 from the Administrator client or a TCL prompt. ...
chulett, thanks a lot. It is working fine now.
priyadarshikunal
Premium Member
Posts: 1735
Joined: Thu Mar 01, 2007 5:44 am
Location: Troy, MI

Post by priyadarshikunal »

Then please mark the topic as resolved.
Priyadarshi Kunal

Genius may have its limitations, but stupidity is not thus handicapped. :wink:
prasanna_anbu
Participant
Posts: 42
Joined: Thu Dec 28, 2006 1:39 am

Post by prasanna_anbu »

priyadarshikunal wrote:Then please mark the topic as resolved.
Sorry, after rerunning the job I faced the same error again. Any help, please?
priyadarshikunal
Premium Member
Posts: 1735
Joined: Thu Mar 01, 2007 5:44 am
Location: Troy, MI

Post by priyadarshikunal »

The log file was corrupted and you fixed it by clearing it. However, you forgot to eliminate the root cause, i.e. why it got corrupted in the first place. It seems you are running out of disk space again.
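If you want to confirm that before clearing the file again, you can inspect the log file's physical state from a TCL prompt. A sketch only, again using this job's number:

Code:

ANALYZE.FILE RT_LOG43

That should report the file's type, modulus and load. From the operating system you can also keep an eye on free space on the disk holding the project directory.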
Priyadarshi Kunal

Genius may have its limitations, but stupidity is not thus handicapped. :wink:
prasanna_anbu
Participant
Posts: 42
Joined: Thu Dec 28, 2006 1:39 am

Post by prasanna_anbu »

priyadarshikunal wrote:The log file was corrupted and you fixed it by clearing it. However, you forgot to eliminate the root cause, i.e. why it got corrupted in the first place. It seems you are running out of disk space again.
But I can view the other sequencers' logs in the same job.
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia

Post by ray.wurlod »

Please run the following command from the Administrator client:

Code:

UVFIXFILE RT_LOG43
Report your findings.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

Then I assume you are once again generating a bazillion warning messages and the log has 'overflowed', pushing it past the 2GB threshold. You might want to monitor it in real time after it gets 'fixed' again to see what is being logged.
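A cheap way to watch the growth in real time is to count the log records from a TCL prompt while the job runs, a sketch again using this job's number:

Code:

COUNT RT_LOG43

If the count climbs by thousands over a few minutes, whatever is writing those entries is your culprit.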
-craig

"You can never have too many knives" -- Logan Nine Fingers
prasanna_anbu
Participant
Posts: 42
Joined: Thu Dec 28, 2006 1:39 am

Post by prasanna_anbu »

chulett wrote:Then I assume you are once again generating a bazillion warning messages and the log has 'overflowed', pushing it past the 2GB threshold. You might want to monitor it in real time after it gets 'fixed' a ...
I used CLEAR.FILE RT_LOG43 again and reran the job. There are no warnings, but there are 119650 entries as of now. Could this cause the problem?
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

Too many log messages can cause the corruption issue, yes. Are all of those from one run? What kind of messages are being logged? :?
-craig

"You can never have too many knives" -- Logan Nine Fingers
prasanna_anbu
Participant
Posts: 42
Joined: Thu Dec 28, 2006 1:39 am

Post by prasanna_anbu »

chulett wrote:Too many log messages can cause the corruption issue, yes. Are all of those from one run? What kind of messages are being logged? :? ...
For each batch ID there is an insert statement, and there are more than 10000 batch IDs. All of the insert statements are written to the log file. The log file entries have now increased to 138285.
ArndW
Participant
Posts: 16318
Joined: Tue Nov 16, 2004 9:08 am
Location: Germany

Post by ArndW »

Then you have reached an impasse. Either reduce the number and size of messages or, better yet, don't log them in the job log but elsewhere. This heavy I/O to the log file is most certainly slowing your job down as well.
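For example, if the messages come from a BASIC routine calling DSLogInfo, they could be appended to a flat file instead. A minimal sketch only; the path, routine name and BatchId are placeholders, not anything taken from your job:

Code:

* Sketch: append one line per batch id to a flat file instead of the job log.
* BatchLogPath and BatchId are hypothetical; adapt them to your routine.
BatchLogPath = '/tmp/batch_inserts.log'
OPENSEQ BatchLogPath TO BatchLog THEN
   SEEK BatchLog, 0, 2 ELSE NULL    ;* move to end of file so we append
END ELSE
   CREATE BatchLog ELSE CALL DSLogWarn('Cannot create ':BatchLogPath, 'MyRoutine')
END
WRITESEQ 'Insert for batch id ':BatchId TO BatchLog ELSE
   CALL DSLogWarn('Write to ':BatchLogPath:' failed', 'MyRoutine')
END
CLOSESEQ BatchLog

Writing to the job log goes through the RT_LOGnn hashed file, which is exactly the file that keeps corrupting here, so moving this volume of messages out of it addresses both the corruption and the I/O overhead.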
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

prasanna_anbu wrote:
chulett wrote:Too many log messages can cause the corruption issue, yes. Are all of those from one run? What kind of messages are being logged? :? ...
For each batch ID there is an insert statement, and there are more than 10000 batch IDs. All of the insert statements are written to the log file. The log file entries have now increased to 138285.
Why? Under what circumstances are 'all the insert statements' being logged? Perhaps you could post a small sample so we can see just what it is you are talking about...
-craig

"You can never have too many knives" -- Logan Nine Fingers