add_to_heap() - Unable to allocate memory warning
Moderators: chulett, rschirm, roy
Hi All,
I have a job design as shown below:
Lookup Table--->Hashfile
|
|
Source Table--->Transformer---->Target
I set the 'Pre-load file to memory' option on the hashed file stage to ENABLED and got the following warning:
add_to_heap() - Unable to allocate memory
I had to disable 'Pre-load file to memory'; the warning then went away, but the job's performance dropped.
I am running this job in 7.5.2. We are in the process of migrating from 6.0.1 to 7.5.2.
The same job ran fine in 6.0.1, without warnings, when 'Pre-load file to memory' was ENABLED on the hashed file.
The tunables are set to 999MB and the hashed files are 32-bit. All the parameters are the same in both environments, and the kernel-level parameters in /etc/system are also similar on both servers.
The hashed file concerned is 724MB in size.
In the first place, I should not be getting the add_to_heap() - Unable to allocate memory warning for this hashed file, since 999MB is set for the cache. The job gives this warning in 7.5.2 but not in 6.0.1.
Please suggest what may be the problem.
Please let me know if you require any more details.
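As a rough sketch of the kind of checks that can be run on the server (the path and file name below are placeholders), the on-disk size of the hashed file and the per-process limits of the user running the job can be confirmed like this:
[code]
# On-disk footprint of the hashed file (a type 30 dynamic file is a
# directory containing DATA.30 and OVER.30). The path is a placeholder.
du -sk /path/to/project/LOOKUP_HASHED_FILE

# Per-process limits for the user that runs the DataStage jobs; a low
# data (heap) limit can cause allocation failures even when plenty
# of physical memory is free.
ulimit -a
[/code]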
-
- Participant
- Posts: 612
- Joined: Thu May 03, 2007 4:59 am
- Location: Melbourne
You don't lose data, but you lose speed: when the hashed file cache runs out of memory, the hashed file is read directly from disk. You may need to monitor, while this job is running, which other processes are eating up the memory on the 7.5.2 server.
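One rough way to do that while the job runs (standard Solaris commands, nothing DataStage-specific assumed) might be:
[code]
# Top memory consumers, sorted by resident set size, refreshed every 5 seconds
prstat -s rss 5

# Free memory and paging activity over the same interval
vmstat 5
[/code]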
Joshy George
tutul - I'm sorry about my earlier post; I was off dreaming and thought that the maximum memory for file buffering was set in the uvconfig file, while it is actually set in the Administrator client, on the Tunables tab.
I'm not at a system with DS right now, but I seem to remember you also need to ensure that there is enough memory available. Are you loading other files to memory in the same job?
-
- Participant
- Posts: 612
- Joined: Thu May 03, 2007 4:59 am
- Location: Melbourne
Isn't this about allocation of memory? Why not ask your admin to install more memory in the server and try running the job again? Then you can be sure of the real cause.
Joshy George
-
- Participant
- Posts: 54607
- Joined: Wed Oct 23, 2002 10:52 pm
- Location: Sydney, Australia
- Contact:
999MB is the upper limit for hashed file cache. This is shared by all hashed files for which the caching option is set. Read cache and write cache are separate (each maxing out at 999MB).
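As a rough check (the file names and path below are placeholders), the on-disk sizes of all the hashed files that are pre-loaded by jobs running at the same time can be totalled and compared with that 999MB cap:
[code]
# Sum the sizes (in KB) of the hashed files that have pre-load enabled
# and compare against the shared read-cache limit of 999 MB (1022976 KB).
cd /path/to/project
du -sk LOOKUP_HASH_1 LOOKUP_HASH_2 LOOKUP_HASH_3 |
  awk '{ total += $1 } END { printf "%d KB requested of 1022976 KB cache\n", total }'
[/code]
If that total is near or over the limit, the 724MB file alone may not fit once the other cached files have claimed their share.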
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
Hi JoshGeorge,
As I mentioned earlier, this job runs fine in DS 6.0.1 but not in 7.5.1.
DS 6.0.1 is on a server with the following specification:
[b]5.8 Generic_117350-26 sun4u sparc SUNW[/b]
with 8 GB memory
The DS 7.5.1 server has the following specification:
[b]5.10 Generic_118833-17 sun4u sparc SUNW[/b]
with 10GB memory
The kernel-level parameters in /etc/system are also the same on both servers.
Is there any way to find out whether the memory (buffer, scratch etc.) assigned at the UNIX level is actually being made available to the DataStage server? Could it be that DataStage is not getting the entire memory share assigned to it at the UNIX level?
Please suggest.
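One rough way to check (standard Solaris commands only; nothing DataStage-specific is assumed) is to compare what the OS reports with what is actually free and allocated while the job runs:
[code]
# Physical memory the OS sees
prtconf | grep "Memory size"

# Swap/virtual memory currently reserved and available
swap -s

# Free memory and paging activity while the job is running
vmstat 5

# Shared memory segments and their sizes (DataStage allocates shared
# memory segments; limits in /etc/system can cap what it actually gets)
ipcs -am
[/code]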
-
- Participant
- Posts: 54607
- Joined: Wed Oct 23, 2002 10:52 pm
- Location: Sydney, Australia
- Contact: