Heap memory allocation - Aggregator.

Post questions here related to DataStage Enterprise/PX Edition for such areas as Parallel job design, Parallel datasets, BuildOps, Wrappers, etc.

Moderators: chulett, rschirm, roy

ithirak_17
Participant
Posts: 17
Joined: Mon Sep 10, 2007 3:24 am

Heap memory allocation - Aggregator.

Post by ithirak_17 »

We are using DataStage 8.7. When we try to use the Aggregator stage on a large volume of records, we get the error below:


Aggregator_124,0: The current soft limit on the data segment (heap) size (134217728) is less than the hard limit (9223372036854775807), consider increasing the heap size limit
Aggregator_124,0: Fatal Error: Throwing exception: APT_BadAlloc: Heap allocation failed.

Where do we set this heap size limit?

Could you please assist in fixing this?

Thanks in advance.
ArndW
Participant
Posts: 16318
Joined: Tue Nov 16, 2004 9:08 am
Location: Germany
Contact:

Post by ArndW »

I did a quick search and found a large number of threads with this same problem, but the short of it is that you can raise the limit in UNIX with the command "ulimit -d {size}" (the -d flag sets the data segment, i.e. heap, limit; "ulimit -s" sets the stack). You can put this in the dsenv file in order to make it apply to all DataStage users.
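
For illustration, a minimal sketch of what that dsenv change might look like, assuming a bash-compatible shell and the usual $DSHOME/dsenv location; the 2 GB value here is just an example, so size it for your own jobs:

    # In $DSHOME/dsenv, which is sourced into the DataStage environment.
    # Raise the soft limit on the data segment (heap); the value is in KB
    # in most shells. Use "unlimited" only if your admin policy allows it.
    ulimit -d 2097152    # 2 GB heap limit
    # Verify the resulting limits after sourcing dsenv:
    ulimit -a

Note that the DataStage engine typically needs to be restarted before a dsenv change reaches running processes.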
jwiles
Premium Member
Posts: 1274
Joined: Sun Nov 14, 2004 8:50 pm
Contact:

Post by jwiles »

Which aggregation option are you using: Hash (the default) or Sort? Please refer to the stage documentation in the Parallel Job Developer's Guide. In summary:

The Hash option works on unsorted data (it should still be partitioned), but can require large amounts of memory depending on the data volume and the number of distinct grouping values.

The Sort option requires the data to be partitioned and sorted on the grouping keys, but works better for data with a large number of distinct groups or values.

I suggest you try the Sort option if you haven't yet.
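
This is not DataStage syntax, just a shell analogy of the memory tradeoff, assuming a hypothetical keys.txt with one grouping value per line:

    # Hash-style aggregation: awk keeps one in-memory counter per distinct
    # key, so memory grows with the number of distinct groups (the Hash
    # option behaves the same way).
    awk '{count[$1]++} END {for (k in count) print k, count[k]}' keys.txt

    # Sort-style aggregation: sort the keys first, then stream through
    # them; only the current group is held in memory at any time (the
    # Sort option's tradeoff: cheap aggregation, but sorted input).
    sort keys.txt | uniq -c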

Regards,
- james wiles


All generalizations are false, including this one - Mark Twain.