Job aborts due to heap size allocation problem
Posted: Tue Aug 16, 2011 3:45 pm
Hello,
The parallel job has a Join stage that joins millions of records from two data sets, and the job aborts with the following error:
Join_8,2: The current soft limit on the data segment (heap) size (805306368) is less than the hard limit (2147483647), consider increasing the heap size limit
Join_8,2: Current heap size: 279,734,248 bytes in 7,574 blocks
Join_8,2: Failure during execution of operator logic.
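To read the numbers in the error message (which are in bytes), a quick conversion shows the soft data-segment limit the operator hit versus the hard limit it says is available:

```shell
# Convert the limits quoted in the job log from bytes to megabytes.
echo $(( 805306368 / 1024 / 1024 ))    # soft data-segment limit: 768 MB
echo $(( 2147483647 / 1024 / 1024 ))   # hard data-segment limit: 2047 MB (~2 GB)
echo $(( 279734248 / 1024 / 1024 ))    # heap in use at failure: ~266 MB
```

So the process was capped at a 768 MB heap even though the hard limit would allow roughly 2 GB.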
From other similar posts, I used the ulimit command to check the space allocation on the server:
Change and report the soft limit associated with a resource
Command: ulimit -S
My output: unlimited
Change and report the hard limit associated with a resource
Command: ulimit -H
My output: unlimited
All current limits are reported
Command: ulimit -a
My output:
time(seconds) unlimited
file(blocks) unlimited
data(kbytes) unlimited
stack(kbytes) 4194304
memory(kbytes) 32768
coredump(blocks) 0
nofiles(descriptors) 2000
So, it seems that the soft and hard limits are unlimited on the server, but the job with the Join stage still fails on heap allocation. Is the problem still due to heap/memory allocation? Any help would be greatly appreciated.
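One thing worth noting: `ulimit -S` and `ulimit -H` with no resource flag report only the shell's default resource (typically file size), not the data segment that the error message is about. To check the heap limit specifically, the `-d` flag is needed. A minimal sketch, assuming a POSIX shell on the server (the 2097152 KB value is just the ~2 GB hard limit from the job log expressed in kilobytes):

```shell
# Report the soft and hard limits for the data segment (heap),
# in kilobytes; without -d, ulimit reports a different resource.
ulimit -S -d
ulimit -H -d

# Raise the soft data-segment limit toward the hard limit for this
# session (2097152 KB = 2 GB); only root may raise the hard limit.
ulimit -S -d 2097152
```

Note also that the limits reported in an interactive shell are not necessarily the ones the job runs under: the DataStage processes inherit the limits of the user and daemon that spawn them, so the check is only meaningful in that environment.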
The DataStage server runs on AIX version 5.3.
Thanks.