Hash file Error

Post questions here related to DataStage Server Edition for such areas as Server job design, DS Basic, Routines, Job Sequences, etc.

Moderators: chulett, rschirm, roy

g_rkrish
Participant
Posts: 264
Joined: Wed Feb 08, 2006 12:06 am

Hash file Error

Post by g_rkrish »

I am doing an extraction and load into a hashed file. The input table has more than 100 million records. I am using an IPC stage and a Link Partitioner to partition the data and split it across 4 hashed files. The hashed file I created is a dynamic one, but after loading 4 million records into a hashed file it throws up this error:

"etlGXO30del_E3..Copy_3_of_Hashed_File_del.DSLink5: add_to_heap() - Unable to allocate memory"

Can anybody help me with this issue?
RK
kris007
Charter Member
Posts: 1102
Joined: Tue Jan 24, 2006 5:38 pm
Location: Riverside, RI

Post by kris007 »

What is the average record length you are trying to load? How many columns does the hashed file have? You might either be hitting the 2.2 GB limit of a hashed file or running out of space on the disk; check for both. Also see if you have the "Allow stage write cache" option checked in the Hashed File stage; if so, turn it off. These are some of the things that come to my mind.
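
For what it's worth, one way to check how close you are to the 2.2 GB limit is to inspect the file on the server itself. The lines below are only a sketch at the TCL prompt (for example via the Administrator client's Command window), and MyHashedFile is a placeholder for your hashed file name; if the file was created down a directory path rather than in the project account you may need to point at it first with SETFILE.

   ANALYZE.FILE MyHashedFile
   ANALYZE.FILE MyHashedFile STATISTICS

The first form should report the file type, modulus and load; the STATISTICS keyword adds record and byte counts so you can see the actual data volume. A dynamic (Type 30) hashed file is a directory on disk containing DATA.30 and OVER.30, so you can also simply check the combined size of those two files at operating system level.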
Last edited by kris007 on Wed Sep 27, 2006 11:11 am, edited 2 times in total.
Kris

Where's the "Any" key?-Homer Simpson
DSguru2B
Charter Member
Posts: 6854
Joined: Wed Feb 09, 2005 3:44 pm
Location: Houston, TX

Post by DSguru2B »

Do an exact search on "add_to_heap() - Unable to allocate memory" and you will find several posts regarding your query.
Creativity is allowing yourself to make mistakes. Art is knowing which ones to keep.
g_rkrish
Participant
Posts: 264
Joined: Wed Feb 08, 2006 12:06 am

Post by g_rkrish »

kris007 wrote:What is the average record length you are trying to load? How many columns does the hashed file have? You might either be hitting the 2.2 GB limit of a hashed file or running out of space on the disk; check for both.
I have only the key column in the file, and the size did not even hit 1 GB; I checked the size on the server. Does any parameter in the hashed file need to be set up?

I have the parameters set up like this:

Minimum modulus: 60000
Group size: 2
Caching attributes: Write Deferred

I also checked the option for minimizing the file size.

Am I setting the parameters right?
RK
kris007
Charter Member
Posts: 1102
Joined: Tue Jan 24, 2006 5:38 pm
Location: Riverside, RI

Post by kris007 »

Set the caching attributes to NONE and see what happens. Also, as mentioned in the earlier post, disable the "Allow stage write cache" option if you have it enabled.
Kris

Where's the "Any" key?-Homer Simpson
g_rkrish
Participant
Posts: 264
Joined: Wed Feb 08, 2006 12:06 am

Post by g_rkrish »

kris007 wrote:Set the caching attributes to NONE and see what happens. Also, as mentioned in the earlier post, disable the "Allow stage write cache" option if you have it enabled.
Thanks, Kris. As you said, I turned "Allow stage write cache" off, but the performance went down compared to before. Is there anything else I can do to increase the performance? Can I increase the buffer in the IPC stage?
RK
kris007
Charter Member
Posts: 1102
Joined: Tue Jan 24, 2006 5:38 pm
Location: Riverside, RI

Post by kris007 »

Use HFC (the Hashed File Calculator) to work out the minimum modulus and the other attributes for your estimated data volume, then set those in your create-file options. Also, check the "Hashed file before writing" option, and disable the "delete file before create" option if you have it checked. Other than that, if you are pulling the data from a database table, increase the array size (play around with the number) to achieve higher speeds. These are some suggestions which come to my mind.
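
To make that concrete, the create-file options in the stage correspond roughly to a UniVerse CREATE.FILE command at TCL. This is only a sketch with placeholder values (MyHashedFile and the numbers are illustrations, not recommendations); take the actual minimum modulus from HFC:

   CREATE.FILE MyHashedFile DYNAMIC MINIMUM.MODULUS 60000 GROUP.SIZE 2
   ANALYZE.FILE MyHashedFile

Very roughly, HFC sizes the minimum modulus so that your estimated data fits in that many groups (4 KB each at group size 2) at around 80% load, which is why it needs a realistic record count and average record length to give a sensible answer.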
Kris

Where's the "Any" key?-Homer Simpson
g_rkrish
Participant
Posts: 264
Joined: Wed Feb 08, 2006 12:06 am

Post by g_rkrish »

kris007 wrote:Use HFC (the Hashed File Calculator) to work out the minimum modulus and the other attributes for your estimated data volume, then set those in your create-file options. Also, check the "Hashed file before writing" option, and disable the "delete file before create" option if you have it checked. Other than that, if you are pulling the data from a database table, increase the array size (play around with the number) to achieve higher speeds. These are some suggestions which come to my mind.
Thanks, Kris. I am also playing with the array size now. I calculated the minimum modulus with HFC and set it to 60000. I have to disable the delete file option. The job is currently running with no warnings so far and has completed 30 million records... Thanks once again...
RK
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia

Post by ray.wurlod »

The warning can safely be ignored. It informs you that a point has been reached where the hashed file will not fit in the write cache, and that therefore the hashed file writes will go to disk. Disabling use of the write cache will slow things, of course, but will prevent the warning.

The write cache can be set as high as 999MB. You might want to investigate this possibility if your total data volume is less than this figure.
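
As a rough sanity check on whether your volume could fit: if each key-only row ends up at, say, 30 bytes on disk (purely an assumption, the real figure depends on your key length and per-record overhead), then the 25 million rows going to each of the four hashed files come to roughly

   25,000,000 x 30 bytes ≈ 750 MB per file (about 3 GB across all four)

so whether a bigger cache helps depends on how much of that has to sit in cache at once; do the arithmetic with your real key length before relying on it.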
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
g_rkrish
Participant
Posts: 264
Joined: Wed Feb 08, 2006 12:06 am

Post by g_rkrish »

ray.wurlod wrote:The warning can safely be ignored. It informs you that a point has been reached where the hashed file will not fit in the write cache, and that therefore the hashed file writes will go to disk. Disa ...

Thanks, Ray... The job completed without any issues.
RK