
Error while viewing log details from Datastage Director

Posted: Thu Mar 19, 2009 4:57 am
by sambarand
Multiple instances of an IIS DataStage parallel job are being run. The job uses a Shared Container. For certain runs of the job (with different invocation IDs), the job status in Director is "Finished (see log)", but the log details are not visible. On trying to view the log, the following error is encountered:
"Error selecting from log file RT_LOG35
Command was: SSELECT RT_LOG35 WITH @ID LIKE '1N0N' AND F7 = "0002815_20070103" COUNT.SUP
Error was: Internal data error. File '/opt/IBM/InformationServer/Server/Projects/DSPRJ/RT_LOG35/DATA.30': Computed blink of 0x944 does not match expected blink of 0x0! Detected within group starting at address 0x80000000! "
The Job Monitor is showing successful completion of all the stages, but the job is not loading the target tables properly. Once the log for one invocation gets corrupted, the logs of all the other invocations of the job also become inaccessible, and subsequent re-runs of the job are not possible either.
What can be the possible cause of this problem and what might be the resolution?

Posted: Thu Mar 19, 2009 8:35 am
by chulett
A "blink" is a "backwards link" which means your logs are corrupting, more than likely from blowing through the 2GB barrier for the hashed file in question due to the volume of entries being written to it. And a MI job still has a single hashed file log that everything writes to, the Director effectively filters out the logs for each Invocation ID using the query you posted, much like a View in a database would.

An exact search of the forums for "does not match expected blink" should turn up a lot of advice on this subject.

Posted: Thu Mar 19, 2009 11:45 am
by sambarand
Thanks for your reply... is there any way to increase the 2GB limit of the hashed file? :shock:

Posted: Thu Mar 19, 2009 12:38 pm
by chulett
Yes, they can be converted to 64BIT hashed files; the 'limitation' is a function of the 32-bit addressing they use by default. However, you really should look at why your jobs are logging that many messages. Is it simply a factor of the number of unique InvocationIDs you are using? Are they generating unnecessary warnings? Are you taking advantage of the auto-purge functionality?
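
If you want a feel for how much each invocation is contributing before deciding, a rough count (just a sketch, reusing the F7 = InvocationID condition from the query Director ran in your first post; swap in whichever invocation you're interested in) can be issued from the Administrator client's Command window for the project:

COUNT RT_LOG35 WITH F7 = "0002815_20070103"

The auto-purge settings themselves live in the Administrator client under the project's properties, if I recall correctly, so it's worth checking whether they're actually switched on here.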

Posted: Thu Mar 19, 2009 1:01 pm
by sambarand
Probably you have guessed it correctly... the size of the log file is exceeding the default limit, and that is why this error is being thrown.
Is there any command or option in DS to increase the limit of the hashed file manually?

Posted: Thu Mar 19, 2009 2:15 pm
by abc123
Aren't you working with parallel jobs? How is a hash file coming into the picture?

Posted: Thu Mar 19, 2009 2:36 pm
by chulett
abc123 wrote: Aren't you working with parallel jobs? How is a hash file coming into the picture?
Because we're talking about a PX job's log, which is stored in a hashed file in all but the most recent release of the product. And from what we've seen here, the change to store them in the XMETA repository instead has been... problematical.

Posted: Thu Mar 19, 2009 2:46 pm
by chulett
sambarand wrote: Probably you have guessed it correctly... the size of the log file is exceeding the default limit, and that is why this error is being thrown.
Is there any command or option in DS to increase the limit of the hashed file manually?
The command is RESIZE and the syntax for it is out there in the forums somewhere. Still, it's better in my opinion to address why so many records are being written to the log rather than simply give them a bigger box to live in.
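
From memory, so treat this as a sketch and double-check it in the forums before running it against a live project, the usual form from the Administrator client's Command window is:

RESIZE RT_LOG35 * * * 64BIT

The three asterisks keep the existing file type, modulus and separation, and 64BIT switches the addressing. Bear in mind that if a log is already corrupted you may not be able to resize it at all; the usual way out I've seen is to clear it first (CLEAR.FILE RT_LOG35, which throws away the existing log history) and then resize.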