We are using DataStage 8.7. When we try to use the Aggregator stage for a huge volume of records, we get the error below:
Aggregator_124,0: The current soft limit on the data segment (heap) size (134217728) is less than the hard limit (9223372036854775807), consider increasing the heap size limit
Aggregator_124,0: Fatal Error: Throwing exception: APT_BadAlloc: Heap allocation failed.
Where do we set this heap size limit?
Could you please assist in fixing this?
Thanks in advance.
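For context while waiting for replies: the "soft limit on the data segment (heap) size" in that message is the operating-system resource limit, not a DataStage job property, so one common fix is raising the soft limit toward the hard limit in the shell environment that starts the engine. A minimal sketch, assuming a POSIX shell; the suggestion to put it in the dsenv file is a common convention, so confirm the right location with your DataStage administrator:

```shell
# Show the current soft and hard limits on the data segment (heap) size.
# The error message compares exactly these two values.
ulimit -S -d   # soft limit (the one the job ran into)
ulimit -H -d   # hard limit

# Raise the soft limit up to the hard limit for this shell and its children.
# In practice this would typically go in dsenv (or the profile of the user
# that starts the DataStage engine) so parallel jobs inherit it.
ulimit -S -d "$(ulimit -H -d)"
ulimit -S -d   # verify the new soft limit
```

Note that a non-root user can raise the soft limit only as far as the hard limit; raising the hard limit itself usually requires the system administrator (e.g. via /etc/security/limits.conf on Linux).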
Heap memory allocation - Aggregator
What aggregation option are you using? Hash (the default) or Sort? Please refer to the stage documentation in the Parallel Job Developer's Guide. In summary:
The Hash option works on unsorted data (which should still be partitioned), but can require large amounts of memory depending on file size and data value diversity.
The Sort option requires partitioned and sorted data, but can work better for data with a large number of distinct groups or values.
I suggest you try the Sort option if you haven't yet.
Regards,
- james wiles
All generalizations are false, including this one - Mark Twain.