
Heap memory allocation - Aggregator

Posted: Fri Jul 27, 2012 3:56 am
by ithirak_17
We are using DataStage 8.7. When we try to use the Aggregator stage on a huge volume of records, we get the error below:


Aggregator_124,0: The current soft limit on the data segment (heap) size (134217728) is less than the hard limit (9223372036854775807), consider increasing the heap size limit
Aggregator_124,0: Fatal Error: Throwing exception: APT_BadAlloc: Heap allocation failed.

Where do we set this heap size limit?

Could you please assist in fixing this?

Thanks in advance.

Posted: Fri Jul 27, 2012 5:27 am
by ArndW
I did a quick search and found a large number of threads with this same problem. The short of it is that you can raise the limit in UNIX with the ulimit command; since the message refers to the data segment, it is the data-segment limit ("ulimit -d {size}") that needs to be increased. You can do this in the dsenv file in order to make it apply to all DataStage users.
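
For reference, a minimal sketch of what that could look like in dsenv (the path and the value shown are assumptions, so check your own install, and the engine needs a restart to pick it up):

# Typically $DSHOME/dsenv, e.g. /opt/IBM/InformationServer/Server/DSEngine/dsenv
# Raise the data segment (heap) soft limit before the engine starts.
# Most shells take the value in kilobytes; "unlimited" removes the soft limit.
ulimit -d unlimited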

Posted: Fri Jul 27, 2012 7:51 am
by jwiles
What aggregation option are you using? Hash (the default) or Sort? Please refer to the stage documentation in the Parallel Job Developer's Guide. In summary:

The Hash option works on unsorted data (which should still be partitioned), but can require large amounts of memory depending on the input volume and the diversity of the grouping values.

The Sort option requires the data to be partitioned and sorted on the grouping keys, but works better for data with a large number of distinct groups or values.

I suggest you try the Sort option if you haven't yet.
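
As a rough command-line analogy of why the Sort option is lighter on memory (this is not DataStage syntax, just an illustration; the file and columns are made up):

# Hash-style: one in-memory bucket per distinct key, so memory grows with key diversity.
awk '{sum[$1] += $2} END {for (k in sum) print k, sum[k]}' sales.txt

# Sort-style: data arrives sorted on the key, so only the current group is held in memory.
sort -k1,1 sales.txt | awk '$1 != prev {if (NR > 1) print prev, sum; prev = $1; sum = 0} {sum += $2} END {print prev, sum}'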

Regards,