Director Client hangs while trying to view the log files

Post questions here relative to DataStage Server Edition for such areas as Server job design, DS Basic, Routines, Job Sequences, etc.

Moderators: chulett, rschirm, roy

Post Reply
kris007
Charter Member
Posts: 1102
Joined: Tue Jan 24, 2006 5:38 pm
Location: Riverside, RI

Director Client hangs while trying to view the log files

Post by kris007 »

Hi All,

I am trying to view the log of a particular job from the Director client, and whenever I do, the Director client hangs. This happens only for this one particular job; the rest of the jobs work fine. Is this something to do with my job? Has anyone come across this kind of situation? Any input would be of great help.

Thanks
Kris. :?
ArndW
Participant
Posts: 16318
Joined: Tue Nov 16, 2004 9:08 am
Location: Germany
Contact:

Post by ArndW »

How long have you waited? Sometimes when the log file is huge it can take 10, 15 or more minutes to refresh the detail view. You can change to the project directory on the server and see if one of the RT_LOG<nnn> files is much bigger than all the others.
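As a sketch of that check (the project path here is a hypothetical example, and this assumes a Unix engine, where RT_LOGnnn hashed files show up as directories on disk):

```shell
#!/bin/sh
# Hypothetical project path - adjust for your own installation.
PROJECT_DIR="${PROJECT_DIR:-/opt/Ascential/DataStage/Projects/MyProject}"

# List each job's log table by disk usage, largest first. A single
# RT_LOG file far bigger than the rest points at the hanging job.
if [ -d "$PROJECT_DIR" ]; then
    (cd "$PROJECT_DIR" && du -sk RT_LOG* | sort -rn | head -5)
fi
```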
kris007
Charter Member
Posts: 1102
Joined: Tue Jan 24, 2006 5:38 pm
Location: Riverside, RI

Post by kris007 »

I developed this job recently and have run it no more than 10 times, and I can view the logs of jobs that are much bigger than this one. I gave it more than 20 minutes, but to no avail; I had to kill the session in order to open it again.

Thanks
Kris.
gateleys
Premium Member
Posts: 992
Joined: Mon Aug 08, 2005 5:08 pm
Location: USA

Post by gateleys »

Could be that you have way too many warnings in your log. Can you try running it again with the Warning Limit set to a lower number, say 50, and check the log?

gateleys
kris007
Charter Member
Posts: 1102
Joined: Tue Jan 24, 2006 5:38 pm
Location: Riverside, RI

Post by kris007 »

The warning limit is always set to 50. Still trying to figure it out. :?

Kris
ArndW
Participant
Posts: 16318
Joined: Tue Nov 16, 2004 9:08 am
Location: Germany
Contact:

Post by ArndW »

Do you have your deadlock daemon set up or running on this system? You could try using "dsjob -log <project> <job>" to see if you can get at the logs another way, or whether that process hangs as well.
singhald
Participant
Posts: 180
Joined: Tue Aug 23, 2005 2:50 am
Location: Bangalore
Contact:

Post by singhald »

Hi Kris
This is the same problem I have faced many times. If the log is very large, Director will take more time; it depends on the log size. So try clearing the log up to a particular date, or up to the last run, without opening the log view.
Just click on the job name in the Status view:
Job ---> Clear Log ---> Up to last run / you can specify a date.

Try this and let me know if it doesn't work.
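For a log too large even for the Clear Log dialog, a heavier-handed option is the Administrator client's Command window (a sketch only: RT_LOG123 is a hypothetical name; verify the actual log file number for your job, which generally matches the job's number in the repository, before clearing anything):

```
CLEAR.FILE RT_LOG123
```

This empties the job's log table directly at the engine level, so use it with care and never while the job is running.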

Singhal
Regards,
Deepak Singhal
Everything is okay in the end. If it's not okay, then it's not the end.
manojmathai
Participant
Posts: 23
Joined: Mon Jul 04, 2005 6:25 am

Post by manojmathai »

Hi

I think this can be due to the log file size only.

You can try renaming the job. It may take time, but still let it finish; I feel renaming the job is faster than clearing the log. After that, try running the job for one row and check the warnings; correct the error and try again.

Regards
Manoj
kumar_s
Charter Member
Posts: 5245
Joined: Thu Jun 16, 2005 11:00 pm

Post by kumar_s »

dsjob -log writes an entry to a job's log file; it does not retrieve entries. Rather,

Code: Select all

dsjob -logsum <project> <job>
dsjob -logdetail <project> <job> <event id>

can be used to check whether the entries in the respective job's RT_LOGnnn file are retrievable. Otherwise, try clearing the log and rechecking.
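As a sketch of that check from the shell (the project and job names are placeholders, and this assumes $DSHOME is set and dsenv has been sourced on the server), dump the log summary to a file so the Director detail view is never needed:

```shell
#!/bin/sh
# Placeholder project/job names - substitute your own. Assumes a
# DataStage server environment with $DSHOME set and dsenv sourced.
LOG=/tmp/MyJob_log.txt
if [ -x "$DSHOME/bin/dsjob" ]; then
    "$DSHOME/bin/dsjob" -logsum MyProject MyJob > "$LOG" 2>&1

    # A huge line count confirms an oversized log; grep narrows it to,
    # for example, repeated Oracle messages.
    wc -l < "$LOG" | tr -d ' '
    grep -c 'ORA-' "$LOG"
fi
```

If the dsjob call itself also hangs, the problem is in the log table rather than the Director client.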
Impossible doesn't mean 'it is not possible' actually means... 'NOBODY HAS DONE IT SO FAR'
roy
Participant
Posts: 2598
Joined: Wed Jul 30, 2003 2:05 am
Location: Israel

Post by roy »

Hi,
I have seen this before, related to big log file size, as already mentioned.
Why does it happen if you have a limit of 50 warnings, you ask?
Because a single warning might generate several entries in the job log (possibly one for each column, plus one or two for the warning itself).

The thing is that we probably want to see what happened, so we can go to another job's log and use the filter option to view, say, the last 100 log entries; this should give us a log view in reasonable time when we later go to the stuck log view.
Since we might want to see the initial warning, it's better to run the job while viewing the log and stop it as we start seeing the warnings fill up.

Another option is to clear the log and rerun while observing the log.

IHTH,
Roy R.
Time is money but when you don't have money time is all you can afford.

Search before posting:)

Join the DataStagers team effort at:
http://www.worldcommunitygrid.org
kumar_s
Charter Member
Charter Member
Posts: 5245
Joined: Thu Jun 16, 2005 11:00 pm

Post by kumar_s »

Filtering the log should also help.
Impossible doesn't mean 'it is not possible' actually means... 'NOBODY HAS DONE IT SO FAR'
kris007
Charter Member
Posts: 1102
Joined: Tue Jan 24, 2006 5:38 pm
Location: Riverside, RI

Post by kris007 »

The thing is that we probably want to see what happened, so we can go to another job's log and use the filter option to view, say, the last 100 log entries; this should give us a log view in reasonable time when we later go to the stuck log view.
That worked, Roy. Thanks for that. And yes, as everyone has been saying, the log file was huge, as the job had been writing a message for every row:

Transformer_1: ORA-01861: literal does not match format string

Surprisingly, these were not written to the log as warnings but as informational messages, and that is what made the log file so huge. Had they been logged as warnings, the job would have aborted after 50 warnings.
Thanks to all for your inputs.

Kris
Post Reply