Error while viewing log details from DataStage Director

Post questions here related to DataStage Enterprise/PX Edition for such areas as Parallel job design, Parallel datasets, BuildOps, Wrappers, etc.

Moderators: chulett, rschirm, roy

sambarand
Premium Member
Posts: 22
Joined: Mon Apr 10, 2006 11:03 am

Error while viewing log details from DataStage Director

Post by sambarand »

Multiple instances of an IIS DataStage parallel job are being run. The job uses a Shared Container. For certain runs of the job (with different invocation IDs), the job status in Director is "Finished (see log)", but the log details are not visible. On trying to view the log, the following error is encountered: "Error selecting from log file RT_LOG35
Command was: SSELECT RT_LOG35 WITH @ID LIKE '1N0N' AND F7 = "0002815_20070103" COUNT.SUP
Error was: Internal data error. File '/opt/IBM/InformationServer/Server/Projects/DSPRJ/RT_LOG35/DATA.30': Computed blink of 0x944 does not match expected blink of 0x0! Detected within group starting at address 0x80000000! "
The Job Monitor is showing successful completion of all the stages, but the job is not loading the target tables properly. Once the log for one invocation gets corrupted, the logs of all other invocations of the job also become inaccessible, and subsequent re-runs of the job are not possible.
What could be the possible cause of this problem, and what might be the resolution?
Sam
IBM Global Services
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

A "blink" is a "backwards link" which means your logs are corrupting, more than likely from blowing through the 2GB barrier for the hashed file in question due to the volume of entries being written to it. And a MI job still has a single hashed file log that everything writes to, the Director effectively filters out the logs for each Invocation ID using the query you posted, much like a View in a database would.

An exact search of the forums for "does not match expected blink" should turn up alot of advice on this subject.
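
Since the log is just a hashed file in the project directory, roughly the same query the Director runs can be issued by hand, for example from the Administrator client's Command window, to see how many entries each invocation has piled up. The file name and invocation ID below are simply the ones from the error you posted:

COUNT RT_LOG35 WITH F7 = "0002815_20070103"

A very large count for a single Invocation ID would point straight at the volume problem.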
-craig

"You can never have too many knives" -- Logan Nine Fingers
sambarand
Premium Member
Posts: 22
Joined: Mon Apr 10, 2006 11:03 am

Post by sambarand »

Thanks for your reply... Is there any way to increase the 2GB limit of the hashed file? :shock:
Sam
IBM Global Services
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

Yes, they can be converted to 64BIT hashed files; the 'limitation' is a function of the 32-bit addressing they use by default. However, you really should look at why your jobs are logging that many messages. Is it simply a factor of the number of unique Invocation IDs you are using? Are they generating unnecessary warnings? Are you taking advantage of the auto-purge functionality?
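
One quick way to gauge the volume before resizing anything is the dsjob client's log summary. The project name and invocation ID below come from the first post; YourJobName is just a placeholder for the actual job name:

dsjob -logsum -max 100 DSPRJ YourJobName.0002815_20070103

If that routinely comes back full of warnings, the warnings themselves (or a missing auto-purge setting) are the thing to fix.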
-craig

"You can never have too many knives" -- Logan Nine Fingers
sambarand
Premium Member
Posts: 22
Joined: Mon Apr 10, 2006 11:03 am

Post by sambarand »

You have probably guessed it correctly... the size of the log file is exceeding the default limit, which is why this error is being thrown. Is there any command or option in DataStage to increase the limit of the hashed file manually?
Sam
IBM Global Services
abc123
Premium Member
Posts: 605
Joined: Fri Aug 25, 2006 8:24 am

Post by abc123 »

Aren't you working with parallel jobs? How is a hash file coming into the picture?
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

abc123 wrote:Aren't you working with parallel jobs? How is a hash file coming into the picture?
Because we're talking about a PX job's log, which is stored in a hashed file in all but the most recent release of the product. And from what we've seen here, the change to store them in the XMETA repository instead has been... problematical.
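
You can see it sitting there on disk: each job gets an RT_LOGnn directory under the project, and since it is a type 30 dynamic hashed file it holds a DATA.30 and an OVER.30 portion. The path below is taken from the error in the first post; a portion at or near 2GB is the classic symptom:

ls -l /opt/IBM/InformationServer/Server/Projects/DSPRJ/RT_LOG35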
-craig

"You can never have too many knives" -- Logan Nine Fingers
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

sambarand wrote: You have probably guessed it correctly... the size of the log file is exceeding the default limit, which is why this error is being thrown. Is there any command or option in DataStage to increase the limit of the hashed file manually?
The command is RESIZE and the syntax for it is out there in the forums somewhere. Still better, in my opinion, to address why so many records were written to the log rather than simply give them a bigger box to live in.
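
For what it's worth, the general shape of it (file name from the error above, engine paths assumed to be the 8.x defaults) is to source the engine environment, start the uvsh/TCL shell from the project directory, and run RESIZE with the 64BIT keyword:

cd /opt/IBM/InformationServer/Server/DSEngine
. ./dsenv
cd ../Projects/DSPRJ
../../DSEngine/bin/uvsh
RESIZE RT_LOG35 * * * 64BIT

A log that is already throwing blink errors may need to be cleared or repaired before a RESIZE will run cleanly, which is another reason to sort out the logging volume first.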
-craig

"You can never have too many knives" -- Logan Nine Fingers