APT_BadAlloc: Heap allocation failed.

Post questions here relative to DataStage Enterprise/PX Edition for such areas as Parallel job design, Parallel datasets, BuildOps, Wrappers, etc.

Moderators: chulett, rschirm, roy

rajeevn80
Participant
Posts: 28
Joined: Mon Jan 31, 2005 10:58 pm

APT_BadAlloc: Heap allocation failed.

Post by rajeevn80 »

Hi guys,
When I run a PX job, it aborts with the following message:

APT_CombinedOperatorController(2),0: Caught exception from runLocally(): APT_BadAlloc: Heap allocation failed.

Can anyone help me with this? Is it related to some memory issue?
Rajeev
Nobody knows Everything,
But U should not be the One who knows Nothing.
ArndW
Participant
Posts: 16318
Joined: Tue Nov 16, 2004 9:08 am
Location: Germany

Post by ArndW »

Have you checked disk space during execution of the job? If the error occurs after a bit of time then this might be the cause, if it happens right at the outset then you might have access issues.
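ArndW's suggestion to check disk space during the run can be done from the shell; a minimal sketch, where /tmp only stands in for whatever resource disk and scratch disk paths your configuration file actually uses:

```shell
# Snapshot free space (in KB) on a filesystem used for disk/scratchdisk.
# /tmp is only an illustration; substitute the paths from your APT config file.
df -k /tmp
# To watch it while the job runs, repeat at intervals, e.g.:
#   while sleep 10; do df -k /tmp | tail -1; done
```

If the free space drops toward zero shortly before the abort, the "heap" failure is likely spill space running out rather than RAM.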
rajeevn80
Participant
Posts: 28
Joined: Mon Jan 31, 2005 10:58 pm

Post by rajeevn80 »

The job runs for 5 minutes and then aborts. The job uses 5 lookups and the lookup files are very large. Hoping this was a memory problem, I changed all the lookups to joins (inner joins), but the job still aborts with the same message.


Since no stage names are mentioned in the Director log, I assumed this could be a memory issue.

Any solutions?
Rajeev
Nobody knows Everything,
But U should not be the One who knows Nothing.
ArndW
Participant
Posts: 16318
Joined: Tue Nov 16, 2004 9:08 am
Location: Germany

Post by ArndW »

Rajeev,

The lookups will use up temporary disk space. If you use a temporary 1-node configuration file, does the problem go away?
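For reference, a 1-node configuration file looks roughly like this (the hostname and paths below are placeholders, not values from this thread):

```
{
    node "node1"
    {
        fastname "your_hostname"
        pools ""
        resource disk "/data/ds/disk" {pools ""}
        resource scratchdisk "/data/ds/scratch" {pools ""}
    }
}
```

Point APT_CONFIG_FILE at this file for a test run; if the job then completes, the problem scales with the degree of parallelism (e.g. per-node scratch or memory consumption).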
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia

Post by ray.wurlod »

"Heap" always refers to memory, but it's virtual memory so can be exhausted by running out of spill space (scratch disk resource) or of swap space, depending on what's actually occurring. So I think you are on the right track suspecting memory, but probably need to do some more detective work.

There are some environment variables that can cause monitoring statistics to be captured. These may help. You might also compile in debug mode, though I don't believe this gives you memory usage statistics. And, of course, you can monitor from the operating system level, using commands such as vmstat.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
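Ray's OS-level monitoring suggestion can be as simple as sampling vmstat while the job runs; a minimal sketch (the interval and count here are arbitrarily short for illustration, lengthen them for a real run):

```shell
# Sample system memory and swap activity: 2-second interval, 3 samples.
# Watch the "free", "swpd" (or "avm"/"fre" on some platforms) and si/so columns
# for signs of memory or swap exhaustion while the PX job executes.
vmstat 2 3
```

The product-specific monitoring environment variables Ray mentions are worth looking up in your version's documentation rather than guessing at here.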
Ultramundane
Participant
Posts: 407
Joined: Mon Jun 27, 2005 8:54 am
Location: Walker, Michigan

Post by Ultramundane »

I had this same issue on AIX. I had to increase the amount of memory that the osh program could allocate. I modified it so that it could allocate 2 GB instead of the default 512 MB.

cd $APT_ORCHHOME/bin
/usr/ccs/bin/ldedit -bmaxdata:0x80000000/dsa osh
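A side note on the hex value above: -bmaxdata takes a byte count, and 0x80000000 is indeed 2 GB (the 512 MB default Ultramundane mentions would be 0x20000000). A quick sanity check of the arithmetic:

```shell
# maxdata values are hexadecimal byte counts
echo $((0x80000000))                  # 2147483648 bytes
echo $((0x80000000 / 1024 / 1024))    # 2048 MB = 2 GB
echo $((0x20000000 / 1024 / 1024))    # 512 MB, the quoted default
```

As for pavankvk's later question about checking the current allocation: on AIX the loader section of a binary (including maxdata) can reportedly be inspected with `dump -ov osh`; check the AIX documentation for the exact flags, as I have not verified them here.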
ak77
Charter Member
Posts: 70
Joined: Thu Jun 23, 2005 5:47 pm
Location: Oklahoma

Post by ak77 »

I had a similar kind of error, but something strange was going on in the server/database. The admin just restarted it after asking all of us to get off the box, and it all worked fine after that.

Talking to the DBA might help.

Kishan
pavankvk
Participant
Posts: 202
Joined: Thu Dec 04, 2003 7:54 am

Post by pavankvk »

How can we know our current allocation for osh?
pavankvk
Participant
Posts: 202
Joined: Thu Dec 04, 2003 7:54 am

Post by pavankvk »

I get the following messages in the log:

APT_CombinedOperatorController(1),0: The current soft limit on the data segment (heap) size (2147483645) is less than the hard limit (2147483647), consider increasing the heap size limit

APT_CombinedOperatorController(1),0: Current heap size: 2,141,571,120 bytes in 580,463 blocks

I think I have 2 GB. Am I correct?
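Interpreting those log numbers (a quick arithmetic check, nothing DataStage-specific): the hard limit is exactly 2^31 − 1 bytes, the 32-bit signed ceiling, and the reported heap is already within a few megabytes of it, so yes, about 2 GB:

```python
# Values copied from the log messages above
soft = 2147483645          # soft limit on the data segment (heap)
hard = 2147483647          # hard limit
heap = 2_141_571_120       # current heap size

print(hard == 2**31 - 1)        # True: the 32-bit signed maximum
print(round(heap / 2**30, 2))   # 1.99 -- about 2 GiB already allocated
```

In other words, raising the soft limit by 2 bytes won't help; a 32-bit osh process is effectively at its ceiling here.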