
Heap Allocation Failed-Increasing Heap Size Doesn't Help

Posted: Thu Jan 03, 2008 3:52 am
by rubik
There are already numerous posts on this subject, but they do not seem to address the situation we are facing.

We have a job that failed with the following error:
The current soft limit on the data segment (heap) size (2147483645) is less than the hard limit (2147483647), consider increasing the heap size limit
Message:: DB2_UDB_API_0,0: Current heap size: 1,598,147,728 bytes in 68,660,973 blocks
Message:: DB2_UDB_API_0,0: Fatal Error: Throwing exception: APT_BadAlloc: Heap allocation failed.
What we have done:
1. Increased the heap (data segment) hard and soft limits to unlimited. Added "ulimit -aH; ulimit -aS;" to the BeforeJob ExecSH, and the output below confirms that the limits have been changed (see the sketch after this list for making steps 1 and 2 permanent and verifying them):
ulimit -aH
time(seconds) unlimited
file(blocks) unlimited
data(kbytes) unlimited
stack(kbytes) unlimited
memory(kbytes) unlimited
coredump(blocks) unlimited
nofiles(descriptors) unlimited

ulimit -aS
time(seconds) unlimited
file(blocks) unlimited
data(kbytes) unlimited
stack(kbytes) unlimited
memory(kbytes) unlimited
coredump(blocks) 2097151
nofiles(descriptors) unlimited

2. Enabled the large address space model, allowing DataStage (osh) to access up to 2 GB of memory:
/usr/ccs/bin/ldedit -bmaxdata:0x80000000/dsa osh

3. Monitored the job execution through "svmon".
It seems the job can use up to 2 GB of working storage (8 segments of 256 MB each), which means the large address space model is working.
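
For reference, here is a minimal sketch of how steps 1 and 2 can be made permanent and verified on AIX (the dsadm user name and the osh path are only placeholders for your own environment):

# Sketch only, AIX assumed; "dsadm" stands in for whichever user runs the engine.
# Make the data-segment limits permanent for that user instead of only in BeforeJob,
# so every osh process inherits them (-1 = unlimited; this updates /etc/security/limits):
chuser data=-1 data_hard=-1 dsadm

# After a fresh login as that user, confirm the soft and hard data limits:
ulimit -S -d
ulimit -H -d

# Confirm the ldedit change stuck: dump -ov prints the XCOFF auxiliary header,
# whose maxDATA field should now read 0x80000000 (osh path assumed here):
/usr/ccs/bin/dump -ov $APT_ORCHHOME/bin/osh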

However, it seems the job requires more than 2 GB of memory, and it aborts whenever the osh process tries to use more than 2 GB (verified through svmon).

My understanding is that DataStage is a 32-bit application and therefore can only use up to 2 GB of memory. If that is the case, how can we work around the error? This is a simple job that reads two database tables (1.5 million and 10 million records), joins them on a key, and outputs to another staging table.

Any help is much appreciated!

Re: Heap Allocation Failed-Increasing Heap Size Doesn't Help

Posted: Thu Jul 08, 2010 12:48 am
by sky_sailor
We recently ran into this problem; it not only aborted the job but also brought the server down.
We found that when loading a table definition there is an optional setting,
"Ensure all char columns use unicode". If we import the table layout with this option unchecked, the job works fine.
My theory is that when this option is enabled, the conversion from ASCII to Unicode is done during the job run; that pushes memory use up to the 2 GB limit, beyond what DataStage can handle, and causes the job to be killed and even the server to go down.

Re: Heap Allocation Failed-Increasing Heap Size Doesn't Help

Posted: Thu Jul 08, 2010 2:24 am
by ray.wurlod
rubik wrote:My understanding is that DataStage is a 32-bit application and therefore can only use up to 2 GB of memory.
This is not the case on 64-bit AIX systems (such as version 6.1); I'm not sure whether version 5.3 is 32-bit or 64-bit. Another thing you might look at is the setting of LDR_CNTRL.
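
If you want to experiment with LDR_CNTRL, here is a minimal sketch, assuming it is exported in the environment that starts the jobs (for example in dsenv) so the osh processes inherit it; the MAXDATA value below is only illustrative, and the upper bound depends on your AIX level:

# Illustrative only - export before the engine/jobs start so osh inherits it.
# LDR_CNTRL overrides the maxdata value built into the 32-bit binary; @DSA enables
# dynamic segment allocation so the data area can grow beyond the default 8 x 256 MB.
LDR_CNTRL=MAXDATA=0xB0000000@DSA
export LDR_CNTRL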

Posted: Thu Jul 08, 2010 6:34 am
by ArndW
You've hit the limit set by your ldedit value. The quick solution is to change all char and varchar columns to varchar columns with no size limit; this can reduce your memory use per row. If the data width cannot be decreased, then I would consider either changing your lookup to a join or splitting your lookup stage into two distinct ones, each with just a subset of the data.

Re: Heap Allocation Failed-Increasing Heap Size Doesn't Help

Posted: Fri Sep 24, 2010 3:10 pm
by kurapatisrk
Hi,

I am getting this error. I have tried everything else except increasing the heap size. Can you tell me how to increase the heap size to unlimited?


Thanks in Advance.

Posted: Fri Jul 01, 2011 4:30 am
by prasanna_anbu
ArndW wrote:You've hit the limit set by your ldedit value. The quick solution is to change all char and varchar columns to varchar columns with no size limit; this can reduce your memory use per row. If the data ...
Have you resolved this issue? If so, please help me with it.

Posted: Fri Jul 01, 2011 6:44 am
by chulett
prasanna_anbu wrote:Have you resolved this issue? If so, please help me with it.
Rather than jumping onto the end of an old thread, why not start your own post on the subject? Give us the details of your problem.

Posted: Thu Oct 25, 2012 7:30 am
by koolsun85
Change all the Char datatypes to Varchar and re-run the job. It worked for me.

Posted: Thu Oct 25, 2012 10:22 am
by koolsun85
Also, remove the stage and redesign with the same stage, as the stage might have become corrupted. If you rebuild it, that might solve the issue.

Posted: Mon Dec 10, 2012 5:33 am
by abhijain
Also, try to give your VARCHAR fields a definite length (e.g. Varchar(200)) rather than using VARCHAR().

When we define a column as VARCHAR() with no length, it defaults to the maximum possible size for the column.

We also faced a similar issue and it was crashing our servers. We modified the job using the above resolution and it helped us a lot.