We are getting a warning message in version 7.5:
XXXHashFiles.ABC: add_to_heap() - Unable to allocate memory
It started appearing after we upgraded the system from 7.1 to 7.5; the same job ran without these warnings in version 7.1. Has anyone come across this issue in 7.5?
Any help is appreciated.
Thanks,
Use the Search facility: paste "add_to_heap" into the box and select exact match, and you can read a lot of discussion about this error. Short story: either you ran out of write cache, or you ran out of disk space.
Since you upgraded, perhaps your project default write/read cache settings are now different.
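If you want to rule out the disk-space half of that diagnosis quickly, a rough check like the following can help. This is only a sketch: the path and cache size below are hypothetical placeholders, not values from this thread, and the actual write cache limit is whatever your project settings say (check DataStage Administrator).

```python
import shutil

def enough_space(target_dir, expected_mb, cache_mb):
    """Rough check: is there room on disk for the hashed file,
    and would the expected data volume fit within the write cache?"""
    free_mb = shutil.disk_usage(target_dir).free / (1024 * 1024)
    return {
        "disk_ok": free_mb > expected_mb,
        "cache_ok": expected_mb <= cache_mb,
    }

# Hypothetical numbers: ~300 MB of hashed-file data, 128 MB write cache.
print(enough_space(".", 300, 128))
```

If `cache_ok` comes back false, the stage falls back from the cache, and this warning is one way that can surface.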
Kenneth Bland
Rank: Sempai
Belt: First degree black
Fight name: Captain Hook
Signature knockout: right upper cut followed by left hook
Signature submission: Crucifix combined with leg triangle
It may also be the case that you're trying to get more data into the hashed file than formerly. That becomes the more likely explanation if you find that the cache sizes have not been changed.
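One way to sanity-check that explanation is to estimate the data volume now going into the hashed file and compare it against the cache limit. All the numbers below are hypothetical; substitute your own row count, average row width, and configured cache size:

```python
def cache_fits(rows, avg_row_bytes, cache_mb):
    """Return True if the estimated hashed-file data volume
    fits within the configured write cache."""
    data_mb = rows * avg_row_bytes / (1024 * 1024)
    return data_mb <= cache_mb

# Hypothetical: 2 million rows at ~150 bytes each against a 128 MB cache.
print(cache_fits(2_000_000, 150, 128))   # ~286 MB of data, so this prints False
```

If the result flipped from True to False between releases because row volume grew, that would match Ray's suggestion even with unchanged cache settings.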
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
Hi Ray,
We did not change any parameters in that job. We are currently using an Array Size of 50000 in the Oracle (ORAOCI9) stage, with "Allow stage write cache" checked in the Hashed File stage.
Do you think 50000 is too large? Does "Allow stage write cache" also add overhead at the Hashed File stage?
What are the advantages of using "Allow stage write cache" in a Hashed File stage?
Thanks.
Here is my job flow:
Oracle Stage ------> Transformer -------> Hashed File Stage
  (1 query)         (3 constraints)       (3 hashed files)
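On the Array Size question: a larger array size means the Oracle stage buffers that many rows per fetch, so its memory cost scales with rows times row width. A back-of-the-envelope calculation (the 200-byte row width here is an assumption for illustration, not a figure from this job):

```python
def fetch_buffer_mb(array_size, avg_row_bytes):
    """Approximate memory buffered per fetch by the Oracle stage:
    array size multiplied by average row width, expressed in MB."""
    return array_size * avg_row_bytes / (1024 * 1024)

# Hypothetical 200-byte rows: 50000 rows per fetch buffers roughly 9.5 MB,
# which on its own is usually modest next to a hashed-file write cache.
print(round(fetch_buffer_mb(50_000, 200), 1))   # prints 9.5
```

So an Array Size of 50000 is large but not obviously the culprit; the write cache filling up is the more likely suspect given the error text.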