When I try to read and write data from the same hash file at the same time, the following warning appears:
"Abnormal termination of stage detected".
Please tell me how to solve this problem.
Reading and writing to a hash file at the same time?
If you are writing to and reading from the same file, and you want the reads to see the updated records concurrently, set 'Pre-load file to memory' to 'Disabled, Lock for Updates'. 'Allow Stage Write Cache' should be unchecked in this case. You might want to cross-check this.
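As a rough illustration of why the write cache matters, here is a minimal sketch in Python (not DataStage code; the CachedWriter class and the dict standing in for the hashed file are invented for this example). Rows sitting in a write cache are invisible to a concurrent reader until the cache flushes:

```python
# Toy model of 'Allow Stage Write Cache': writes are buffered in memory
# and only reach the backing file when the cache flushes. A reader that
# goes straight to the file misses any row still sitting in the cache.

class CachedWriter:
    """Buffers writes; nothing hits the backing store until flush()."""
    def __init__(self, store):
        self.store = store   # stands in for the hashed file on disk
        self.cache = {}      # pending writes, invisible to other readers

    def write(self, key, row):
        self.cache[key] = row

    def flush(self):
        self.store.update(self.cache)
        self.cache.clear()

hashed_file = {}             # stands in for the hashed file
writer = CachedWriter(hashed_file)

writer.write("CUST001", "updated row")
print(hashed_file.get("CUST001"))  # None -- a concurrent reader misses it
writer.flush()
print(hashed_file.get("CUST001"))  # 'updated row' -- visible only now
```

With the cache disabled, every write goes straight to the file, which is what lets a concurrent reader pick up the updated records.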
Are there more entries in the error log? 'Reset' your job after it aborts and you might get more information about the error.
Joshy George
First, prove that it is a Hashed File stage that is generating the error. Hashed File stages are passive stages; they do not generate processes, and therefore cannot terminate, abnormally or otherwise.
Second, it's hashed file, not hash file.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
Reading and writing the same hashed file at the same time is not wise. Hashing distributes records into groups based on the algorithm chosen. A write might cause a group to split, moving records to a new group, so a concurrent reader can process the same record twice. The scenario is even worse when you change keys or add new records: you might process the new records again.
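To make the split effect concrete, here is a toy sketch in Python (a simplified linear-hashing table invented for this example; it is not how DataStage actually stores hashed files). A write can split a group the reader has already passed, moving records into a new group at the end of the file, where the reader meets them a second time:

```python
import zlib

# Toy linear-hashing table. When it gets too full, the group at `split`
# is rehashed, and some of its records move to a NEW group appended at
# the end -- behind a reader that walks the groups in order.

class ToyHashedFile:
    def __init__(self):
        self.groups = [[], []]   # start with two groups
        self.split = 0           # next group to split
        self.level = 1           # 2**level groups at the start of a round

    def _group_for(self, key):
        h = zlib.crc32(key.encode())    # deterministic hash
        i = h % (2 ** self.level)
        if i < self.split:              # group already split this round
            i = h % (2 ** (self.level + 1))
        return i

    def write(self, key, value):
        self.groups[self._group_for(key)].append((key, value))
        if sum(len(g) for g in self.groups) > 2 * len(self.groups):
            self._split_one_group()

    def _split_one_group(self):
        old, self.groups[self.split] = self.groups[self.split], []
        self.groups.append([])          # new group at the end of the file
        self.split += 1
        if self.split == 2 ** self.level:
            self.level += 1             # round finished; start over
            self.split = 0
        for key, value in old:          # records migrate to their new group
            self.groups[self._group_for(key)].append((key, value))

table = ToyHashedFile()
for n in range(6):
    table.write(f"key{n}", n)

seen, extra, i = [], 0, 0
while i < len(table.groups):              # reader walks groups in order,
    for key, _ in list(table.groups[i]):  # including groups added mid-scan
        seen.append(key)
    if extra < 20:                        # a writer keeps adding rows, which
        for _ in range(4):                # can split a group the reader has
            table.write(f"new{extra}", 0) # already passed
            extra += 1
    i += 1

dupes = sorted(k for k in set(seen) if seen.count(k) > 1)
print(dupes)  # keys read twice; rows written into an already-read group
              # are missed entirely -- either way the scan is unreliable
```

Whether any particular run shows duplicates depends on which groups happen to split while the reader is mid-scan, which is exactly why the behaviour is unpredictable.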
Mamu Kim