add_to_heap() - Unable to allocate memory warning

Post questions here relating to DataStage Server Edition, covering areas such as Server job design, DS BASIC, Routines, Job Sequences, etc.

Moderators: chulett, rschirm, roy

tutul
Participant
Posts: 11
Joined: Fri May 25, 2007 12:09 am

add_to_heap() - Unable to allocate memory warning

Post by tutul »

Hi All,

I have a job design as shown below:

Lookup Table ---> Hashed File
                       |
                       |
Source Table ---> Transformer ---> Target


I had set 'Pre-load file to memory' on the hashed file stage to ENABLED and got the following warning:
add_to_heap() - Unable to allocate memory

When I disabled 'Pre-load file to memory' the warning went away, but the job's performance decreased.

I am running this job on 7.5.2. We are in the process of migrating from 6.0.1 to 7.5.2.

The same job ran fine in 6.0.1 without warnings when 'Pre-load file to memory' on the hashed file was ENABLED.
The tunables are set to 999MB and the hashed files are 32-bit. All the parameters are the same in both environments, and the kernel-level parameters in /etc/system are also similar in both environments.

The hashed file concerned is 724MB in size.

In the first place, I should not get the add_to_heap() - Unable to allocate memory warning for this hashed file, as 999MB is set for the cache. The job gives the warning in 7.5.2 but not in 6.0.1.

Please suggest what may be the problem.

Please let me know if you require any more details.
WoMaWil
Participant
Posts: 482
Joined: Thu Mar 13, 2003 7:17 am
Location: Amsterdam

Post by WoMaWil »

You are not alone with that problem. Are you losing data? I suppose it is only a warning and has no data-loss implications.
Wolfgang Hürter
Amsterdam
tutul
Participant
Posts: 11
Joined: Fri May 25, 2007 12:09 am

Post by tutul »

Hi,

I am losing performance. The job runs fine in DataStage 6.0.1 but not in 7.5.2.

Please suggest the possible reasons.
JoshGeorge
Participant
Posts: 612
Joined: Thu May 03, 2007 4:59 am
Location: Melbourne

Post by JoshGeorge »

You don't lose data, but you lose speed. This is because when you run out of memory for the hashed file cache, the hashed file is accessed directly from disk instead. You might have to monitor, while this job is running, what other processes are eating up the memory allocated on the 7.5.2 server.
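For example, on a Solaris host something along these lines can show what is using memory while the job runs (just a sketch; the process-name filter is only a guess at what your DataStage engine processes are called):

[code]
# Overall memory and swap activity, sampled every 5 seconds
vmstat 5

# Per-process view, sorted by resident set size
prstat -s rss

# Narrow it down to likely DataStage engine processes
# (dsapi_slave / phantom / uvsh are assumptions - adjust as needed)
ps -e -o pid,vsz,rss,args | egrep -i 'dsapi|phantom|uvsh'
[/code]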
Joshy George
tutul
Participant
Posts: 11
Joined: Fri May 25, 2007 12:09 am

Post by tutul »

Hi JoshGeorge,

[quote]You might have to monitor and find out while running this job what other processes are eating up the memory allocated in 7.5.2.[/quote]

How do I see which processes are eating into the memory assigned for the hashed file cache in 7.5.2?
ArndW
Participant
Posts: 16318
Joined: Tue Nov 16, 2004 9:08 am
Location: Germany

Post by ArndW »

It would seem that either your hashed file is larger than the 999MB you specified, or that this value isn't being used at runtime. Are you sure that you ran uvregen after changing your configuration file? Use "smat -t" to see what your current values are.
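For example, something like this (a sketch, assuming a standard install where /.dshome points at the engine directory):

[code]
# Move to the DataStage engine directory and pick up its environment
cd `cat /.dshome`
. ./dsenv

# Show the shared-memory / tunable values currently in effect
bin/smat -t

# Only if uvconfig has been edited: regenerate the shared segment
# so the new values take effect (DataStage must be stopped first)
# bin/uv -admin -stop
# bin/uvregen
# bin/uv -admin -start
[/code]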
tutul
Participant
Posts: 11
Joined: Fri May 25, 2007 12:09 am

Post by tutul »

Hi ArndW,

From UNIX I used the command 'du -h hashfilename' to get the size, and it returns 724MB.
Is there any other way you would like me to check the size of the hashed file?

The configuration file uvconfig has not been changed since installation, and it has 64BIT_FILES 0.
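If it is a dynamic (Type 30) hashed file, it is a directory on disk, so its size and the 64-bit setting could also be checked like this (a sketch; 'hashfilename' is a placeholder):

[code]
# Total size of the hashed file directory (data plus overflow)
du -sh hashfilename

# The data and overflow portions of a dynamic hashed file
ls -l hashfilename/DATA.30 hashfilename/OVER.30

# Confirm the engine-wide 64-bit setting in uvconfig
grep 64BIT_FILES `cat /.dshome`/uvconfig
[/code]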

Please suggest
ArndW
Participant
Posts: 16318
Joined: Tue Nov 16, 2004 9:08 am
Location: Germany

Post by ArndW »

tutul - I'm sorry about my earlier post; I was off dreaming and thought that the maximum memory for file buffering was set in the uvconfig file, while it is actually a setting made on the Tunables tab in the Administrator.
I'm not at a system with DataStage right now, but I seem to remember you also need to ensure that there is enough free memory available. Are you loading other files to memory in the same job?
tutul
Participant
Posts: 11
Joined: Fri May 25, 2007 12:09 am

Post by tutul »

Hi ArndW,

Are there any kernel-level parameters that need to be set so that the hashed file can actually get the 999MB set on the Tunables tab in the Administrator?

Please suggest what may be the problem.
ArndW
Participant
Posts: 16318
Joined: Tue Nov 16, 2004 9:08 am
Location: Germany

Post by ArndW »

I cannot recall the exact conditions; I'd have to look at the documentation to get those values.

Are you loading other files to memory in the same job? If yes, and you don't preload them, does the original file load without having to resort to the heap?
tutul
Participant
Posts: 11
Joined: Fri May 25, 2007 12:09 am

Post by tutul »

Hi ArndW,

I am loading only one hashed file in the job:

Lookup Table ---> Hashed File
                       |
                       |
Source Table ---> Transformer ---> Target

Which documentation covers the kernel-level parameter settings for DataStage? Please guide.
JoshGeorge
Participant
Posts: 612
Joined: Thu May 03, 2007 4:59 am
Location: Melbourne

Post by JoshGeorge »

Isn't this about allocation of memory? Why not ask your admin to install more memory in the server and try running the job again? Then you can be sure what the real cause is.
Joshy George
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia
Contact:

Post by ray.wurlod »

999MB is the upper limit for hashed file cache. This cache is shared by all hashed files for which the caching option is set, so a 724MB hashed file may fail to preload if other cached hashed files already occupy part of that 999MB. Read cache and write cache are separate (each maxing out at 999MB).
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
tutul
Participant
Posts: 11
Joined: Fri May 25, 2007 12:09 am

Post by tutul »

Hi JoshGeorge,

As I mentioned earlier, this job runs fine in DS 6.0.1 but not in 7.5.2.
DS 6.0.1 is on a server with the following specification:
[b]5.8 Generic_117350-26 sun4u sparc SUNW[/b]
with 8GB of memory

The DS 7.5.2 server has the following specification:
[b]5.10 Generic_118833-17 sun4u sparc SUNW[/b]
with 10GB of memory

The kernel-level parameters in /etc/system are also the same on both servers.

Is there any way to find out whether the memory (buffer, scratch, etc.) assigned at the UNIX level is actually being made available to the DataStage server? Could it be that DataStage is not getting the entire memory share assigned to it at the UNIX level?
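For comparison, something along these lines could show what the OS side is providing on each host (a sketch for Solaris; the output would need to be read alongside the smat -t values from DataStage):

[code]
# Shared-memory kernel settings currently in effect
sysdef | grep -i shm
grep -i shm /etc/system

# On Solaris 10 many IPC limits are project resource controls instead
prctl -n project.max-shm-memory -i project default

# Physical memory and current swap usage
prtconf | grep -i memory
swap -s

# Free memory trend while the job is running
vmstat 5
[/code]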

Please suggest.
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia

Post by ray.wurlod »

It's not that at all. 999MB is the maximum that DataStage can malloc() for hashed file cache.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.