Hash file Error
Moderators: chulett, rschirm, roy
I am extracting from a table and loading into hashed files. The input table has more than 100 million records. I am using an IPC stage and a Link Partitioner to partition the data and split it into 4 hashed files. The hashed file I created is a dynamic one, but after loading about 4 million records into it the job throws this error:
"etlGXO30del_E3..Copy_3_of_Hashed_File_del.DSLink5: add_to_heap() - Unable to allocate memory"
Can anybody help me with this issue?
RK
What is the average length of the records you are trying to load? How many columns does the hashed file have? You might be either hitting the 2.2 GB limit of the hashed file or running out of space on the disk; check for both. Also see if you have checked the Write to cache option in the hashed file stage. If so, turn it off. These are some of the things that come to mind.
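As a rough way to act on that advice, the arithmetic behind the 2.2 GB check can be sketched as below. This is not DataStage code, just a back-of-envelope Python estimate; the row count, average record length, and per-record overhead are hypothetical figures, and the limit is taken as the 32-bit 2 GB boundary commonly quoted as "2.2 GB".

```python
# Hedged sketch: estimate whether a hashed file load could approach the
# 32-bit file-size limit. All numbers below are assumptions for illustration.

LIMIT_BYTES = 2_147_483_648  # 2^31, the classic 32-bit hashed file ceiling


def estimated_hashed_file_bytes(row_count, avg_record_bytes, overhead_bytes=20):
    """Estimate on-disk size: raw data plus a guessed per-record overhead."""
    return row_count * (avg_record_bytes + overhead_bytes)


# Example: 100 million rows with an assumed 16-byte key column.
size = estimated_hashed_file_bytes(100_000_000, 16)
print(size, size > LIMIT_BYTES)
```

If the printed size exceeds the limit, splitting across more hashed files (as the original poster is doing with the Link Partitioner) or trimming columns is the usual way out.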
Kris
Where's the "Any" key?-Homer Simpson
I have only the key column in the file, and the size did not even hit 1 GB; I checked the size on the server. Does any parameter in the hashed file need to be set up? kris007 wrote: What is the average length of the records you are trying to load? How many columns does the hashed file have? You might be either hitting the 2.2 GB limit of the hashed file or running out of space on the disk; check for both.
I have the parameters set up like this:
Minimum modulus: 60000,
Group size: 2,
Caching attributes: Write Deferred.
I also checked the option for minimizing the size.
Am I setting the parameters right?
RK
Thanks Kris. As you said, I turned "Allow stage write cache" off, but the performance went down compared to before. Is there anything else I can do to increase performance? Can I increase the buffer in the IPC stage? kris007 wrote: Set the caching attributes to NONE and then see what happens. Also, as mentioned in the earlier post, disable the "Allow stage write cache" option if you have it enabled.
RK
Use HFC (Hashed File Calculator) to find the minimum modulus and other attributes to set for your estimated data volume, then set those in your hashed file's Create file options. Also, check the "Clear file before writing" option and disable the "Delete file before create" option if you have it checked. Other than that, if you are pulling the data from a database table, increase the array size (play around with the number) to achieve higher speeds. These are some suggestions that come to mind.
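For readers without HFC to hand, the kind of sizing arithmetic it performs can be approximated as follows. This is a hedged Python sketch, not HFC's actual algorithm: it assumes each group in a dynamic hashed file occupies group size × 2048 bytes and targets a guessed 80% load factor; your real numbers will differ.

```python
import math

# Hedged sketch of an HFC-style minimum modulus estimate.
# Constants (2048-byte base group, 0.8 load factor) are assumptions.


def minimum_modulus(row_count, avg_record_bytes, group_size=2, load_factor=0.8):
    """Approximate how many groups are needed so the data fits at the
    target load factor. Each group holds group_size * 2048 bytes."""
    group_bytes = group_size * 2048
    data_bytes = row_count * avg_record_bytes
    return math.ceil(data_bytes / (group_bytes * load_factor))


# Example: 100 million rows with an assumed 16-byte record.
print(minimum_modulus(100_000_000, 16))
```

Pre-sizing the file this way avoids the repeated dynamic splits that slow down a large load; the original poster's value of 60000 would then be checked against the estimate for the real record length.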
Kris
Where's the "Any" key?-Homer Simpson
Thanks Kris. I am also playing with the array size now. I calculated the minimum modulus with HFC and set it to 60000, and I had to disable the delete file option. The job is currently running with no warnings so far; it has completed 30 million records. Thanks once again. kris007 wrote: Use HFC (Hashed File Calculator) to find the minimum modulus and other attributes to set for your estimated data volume, then set those in your hashed file's Create file options. Also, check the "Clear file before writing" option and disable the "Delete file before create" option if you have it checked. Other than that, if you are pulling the data from a database table, increase the array size (play around with the number) to achieve higher speeds. These are some suggestions that come to mind.
RK
The warning can safely be ignored. It informs you that a point has been reached where the hashed file will not fit in the write cache, and that therefore the hashed file writes will go to disk. Disabling use of the write cache will slow things, of course, but will prevent the warning.
The write cache can be set as high as 999MB. You might want to investigate this possibility if your total data volume is less than this figure.
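A quick way to act on that suggestion is to compare the total data volume against the cache ceiling. This is a hedged Python sketch with assumed row counts and record sizes, not a DataStage utility:

```python
# Hedged sketch: will the whole data set fit in a write cache capped at 999 MB?
# Row counts and the 16-byte record size below are illustrative assumptions.

CACHE_LIMIT_MB = 999


def fits_in_write_cache(row_count, avg_record_bytes):
    """True if the estimated data volume fits under the write cache ceiling."""
    total_mb = row_count * avg_record_bytes / (1024 * 1024)
    return total_mb <= CACHE_LIMIT_MB


print(fits_in_write_cache(4_000_000, 16))    # the point where the warning appeared
print(fits_in_write_cache(100_000_000, 16))  # the full load
```

For the 100-million-row load in this thread, even narrow records overflow the cache, which is consistent with the advice that the warning is harmless and writes simply spill to disk.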
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.