Problem reading and writing the same hash file at the same time?

Post questions here relative to DataStage Server Edition for such areas as Server job design, DS Basic, Routines, Job Sequences, etc.


mallikharjuna
Participant
Posts: 81
Joined: Thu Nov 30, 2006 7:46 am
Location: india

Problem reading and writing the same hash file at the same time?

Post by mallikharjuna »

When I try to read and write data from the same hash file at the same time, the following warning comes up:

"Abnormal termination of stage detected".

Please tell me how to solve this problem.
MALLI
JoshGeorge
Participant
Posts: 612
Joined: Thu May 03, 2007 4:59 am
Location: Melbourne

Post by JoshGeorge »

If you are writing to and reading from the same file, and you want to read the updated records concurrently, set 'Pre Load File to Memory' to 'Disabled, Lock for updates'. 'Allow Stage Write Cache' should be unchecked in this case. You might want to cross-check this.
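For what it's worth, in a DataStage BASIC routine the equivalent read-with-lock-then-write pattern looks roughly like this (a minimal sketch; the file name, key and field positions are made up for illustration):

    * Open the hashed file; log a fatal error if it cannot be opened.
    * 'MyHashedFile' and 'MyRoutine' are hypothetical names.
    OPEN 'MyHashedFile' TO HashFile ELSE
       Call DSLogFatal('Cannot open hashed file', 'MyRoutine')
    END

    Key = 'CUST001'
    * READU takes an update lock on the record; with no LOCKED
    * clause it simply waits if another process holds the lock.
    READU Rec FROM HashFile, Key THEN
       Rec<2> = 'UPDATED'   ;* record exists - update field 2
    END ELSE
       Rec = ''             ;* record is new - build it from scratch
       Rec<2> = 'NEW'
    END
    WRITE Rec TO HashFile, Key   ;* the WRITE releases the update lock

The 'Disabled, Lock for updates' setting makes the Hashed File stage behave in much the same way: nothing is cached, and each reference read locks the record until the corresponding write.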

Are there any more entries in the log besides that? 'Reset' your job after it aborts and you might get more information about the error.
Joshy George
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia
Contact:

Post by ray.wurlod »

First prove that it is a Hashed File stage that is generating the error. Hashed File stages are passive stages; they do not generate processes, and therefore cannot terminate, abnormally or otherwise.

Second, it's hashed file, not hash file.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
kduke
Charter Member
Posts: 5227
Joined: Thu May 29, 2003 9:47 am
Location: Dallas, TX
Contact:

Post by kduke »

Reading and writing to the same hashed file is not wise. Hashing distributes records based on the chosen algorithm, so a write might cause a group to split, and a record that moves can be read, and therefore processed, twice. An even worse scenario is when you change keys or add new records: you might process the new records again.
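In DataStage BASIC terms, the risky pattern looks something like this (a minimal sketch; the file name and the derived key are made up):

    OPEN 'MyHashedFile' TO HashFile ELSE STOP 'Cannot open hashed file'

    SELECT HashFile   ;* scan the record IDs currently in the file
    LOOP
       READNEXT Key ELSE EXIT
       READ Rec FROM HashFile, Key THEN
          * Writing derived records back into the SAME file while
          * the scan is running is the unsafe part: a group split
          * can re-present records already processed, and the new
          * '.COPY' records may themselves be picked up and copied.
          WRITE Rec TO HashFile, Key : '.COPY'
       END
    REPEAT

Writing to a second hashed file, or landing the output and loading it afterwards, avoids the problem entirely.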
Mamu Kim
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia
Contact:

Post by ray.wurlod »

Reset the job in Director. Post whatever is in the "from previous run..." log event.
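If you want to do the same thing programmatically, here is a rough sketch using the DataStage BASIC job-control API (the job name is hypothetical, and this is just one way to pull the log summary after a reset):

    * Attach the job, reset it, then dump a summary of recent
    * log events; the "from previous run ..." entry should be
    * among them.
    JobHandle = DSAttachJob('MyJob', DSJ.ERRFATAL)
    ErrCode = DSRunJob(JobHandle, DSJ.RUNRESET)   ;* same as Reset in Director
    ErrCode = DSWaitForJob(JobHandle)
    Summary = DSGetLogSummary(JobHandle, DSJ.LOGANY, '', '', 20)
    Call DSLogInfo(Summary, 'ResetAndLog')
    ErrCode = DSDetachJob(JobHandle)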
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.