I am trying to view the log of a particular job from the Director client, and whenever I do, the client hangs. This happens only for this one job; for the rest of the jobs it works fine. Is this something to do with my job? Has anyone come across this kind of situation? Any input would be a great help.
How long have you waited? When the log file is huge it can take 10, 15 or more minutes to refresh the detail view. You can change to the project directory and see whether one of the RT_LOG&lt;nnn&gt; files is much bigger than all the others.
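As a rough sketch of that check: on the engine host, each job's log lives in a hashed file named RT_LOG&lt;nnn&gt; under the project directory (the &lt;nnn&gt; is the job's internal number). Assuming a Unix server and shell access to the project directory, something like this would surface an oversized log:

```shell
# From the project directory on the DataStage server host:
# list the RT_LOG entries by size, largest first.
# RT_LOG<nnn> may be a plain file or a directory (dynamic hashed file),
# so du covers both cases.
du -sk RT_LOG* 2>/dev/null | sort -rn | head -5
```

An entry dramatically larger than its peers usually belongs to the job whose log view hangs.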
I developed this job recently and have run it no more than 10 times, and I can view the logs of jobs that are much bigger than this one. I gave it more than 20 minutes, but to no avail; I had to kill the session in order to open the client again.
It could be that you have far too many warnings in your log. Can you try running it again with the warning limit set to a lower number, say 50, and check the log?
Do you have the deadlock daemon set up or running on this system? You could try "dsjob -log &lt;project&gt; &lt;job&gt;" to see whether you can get at the logs another way, or whether that process hangs as well.
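For what it's worth, a sketch of the command-line route (the project and job names below are hypothetical placeholders; in the dsjob versions I have used, -logsum lists log entries and -logdetail prints one in full, but check the syntax on your release):

```shell
# Hypothetical project/job names; run on the DataStage server host.
# Summarise the most recent log entries for the job:
dsjob -logsum -max 100 MyProject MyJob

# Show the full text of a single entry (here, event id 42):
dsjob -logdetail MyProject MyJob 42
```

If this route also hangs, the problem is with the log itself rather than with the Director client.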
Hi Kris
I have faced this same problem many times. If your log is very large, the Director takes more time to open it; it depends on the log size. So try to clear the log up to a particular date, or up to the last run, without opening the log view:
Just click on the job name in the Status view, then
Job ---> Clear Log ---> Up to last run (or you can specify a date).
Try this and let me know if it does not work.
Regards,
Deepak Singhal
Everything is okay in the end. If it's not okay, then it's not the end.
I think this can be due to the log file size only.
You can try renaming the job. It may take time, but leave it to finish; I find renaming the job is faster than clearing the logs. After this you can run the job for one row, check the warnings, correct the error, and try again.
Hi,
I have seen this before related to big log file size as already mentioned.
Why does it happen if you have a limit of 50 warnings, you ask?
Because a single warning might generate several entries in the job log (possibly one for each column, plus one or two for the warning itself).
The thing is that we probably want to see what happened, so we can go to another job's log and use the filter option to view, say, the last 100 log entries; this should give us the log view in a reasonable time when we later go back to the stuck log view.
Since we might want to see the initial warning, it is better to run the job while viewing the log and stop it as soon as the warnings start to fill up.
Another option is to clear the log and rerun while observing the log.
IHTH,
Roy R.
Time is money but when you don't have money time is all you can afford.
The thing is that we probably want to see what happened, so we can go to another job's log and use the filter option to view, say, the last 100 log entries; this should give us the log view in a reasonable time when we later go back to the stuck log view.
That worked, Roy. Thanks for that. And yes, as everyone has been saying, the log file was huge, because the job had been writing a message for every row:
Transformer_1: ORA-01861: literal does not match format string
Surprisingly, these were not written to the log as warnings but as informational messages, and that is what made the log file huge. Had they been logged as warnings, the job would have aborted after 50 warnings.
Thanks to all for your inputs.