Hi,
When I run a job that loads data into a Db2 stage, it aborts with the following messages:
Db2udbXXX,2: The current soft limit on the data segment (heap) size (2147483645)
is less than the hard limit (2147483647), consider increasing the heap size limit
Db2udbXXX,2: Fatal Error: Throwing exception: APT_BadAlloc: Heap allocation failed.
Under what circumstances will it show these messages?
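The message is about the process's data-segment (heap) resource limit, not DataStage itself. As a minimal sketch (assuming a Unix system; the `resource` module maps to the OS `getrlimit`/`setrlimit` calls), you can inspect and raise the soft limit up to the hard limit like this:

```python
# Inspect and raise the data-segment (heap) limit for the current process.
# RLIMIT_DATA is the limit the error message refers to: allocation fails once
# the heap reaches the soft limit, even if the hard limit is higher.
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_DATA)
print("soft:", soft, "hard:", hard)

# A process may always raise its soft limit up to the hard limit.
resource.setrlimit(resource.RLIMIT_DATA, (hard, hard))
print("new soft:", resource.getrlimit(resource.RLIMIT_DATA)[0])
```

For DataStage, the equivalent fix is usually raising the `data` ulimit in the shell that starts the engine (e.g. `ulimit -Sd`), since child processes inherit the limit.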
On Unix, does this error message relate to the data limit or the file limit?
I have an Aggregator stage with up to 40 million records to process, and I get the same error. The following warning messages show in the log:
- My current heap size = 1,856,298,288 bytes in 35,701,573 blocks.
- Followed by "Failure in operator logic" for the aggregator stage.
Thanx,
Zac.
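Those numbers line up with how a hash-mode aggregator behaves: it keeps one in-memory entry per distinct group, so memory grows with the number of groups rather than the number of input rows. A rough back-of-envelope check (illustrative figures only, not DataStage internals; the function name and 64-bytes-per-group estimate are assumptions):

```python
# Rough estimate of heap needed by a hash-mode aggregator that must hold
# every group's running totals in memory at once.

def hash_aggregator_memory_bytes(distinct_groups, bytes_per_group=64):
    """Illustrative estimate: memory scales with distinct groups, not rows."""
    return distinct_groups * bytes_per_group

# ~35 million groups at ~64 bytes each already exceeds a 2 GB heap limit,
# which matches the soft limit quoted in the abort message.
needed = hash_aggregator_memory_bytes(35_000_000)
heap_limit = 2_147_483_645
print(needed, needed > heap_limit)  # 2240000000 True
```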
Thanks Om, Thanks Ray,
I've looked at my jobs and identified where I can fix this aggregation issue: I will add a Sort stage, repartition before each Aggregator, and use the Sort method.
This is a lesson learnt that I will not forget: things were fine when developing with only 100 records, but when I ran with production volumes (up to 52 million records) the problems started!
Thanx,
Zac.
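The reason the Sort method fixes this can be sketched in a few lines (illustrative only, not the DataStage engine; `sorted_aggregate` is a made-up name): with input sorted on the grouping keys, the aggregator only ever holds the current group's running totals, so memory stays constant regardless of volume.

```python
# Streaming aggregation over pre-sorted input: only one group's state is
# ever in memory, unlike hash aggregation which holds all groups at once.
import itertools

def sorted_aggregate(rows, key_fn, value_fn):
    """Yield (key, sum) pairs from rows already sorted on key_fn."""
    for key, group in itertools.groupby(rows, key=key_fn):
        yield key, sum(value_fn(r) for r in group)

rows = [("a", 1), ("a", 2), ("b", 5)]  # already sorted on the key
print(list(sorted_aggregate(rows, lambda r: r[0], lambda r: r[1])))
# [('a', 3), ('b', 5)]
```

The trade-off is paying the cost of the sort up front, which scales far more gracefully than an unbounded in-memory hash table.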
Zac,
As Ray points out, it is not the number of input records that matters; you need to consider how many result groups you will get. If that is approaching 1,000 per MB of available memory, use the Sort method for the aggregation.
Don't forget that 'available memory' is also used by other stages that may be running alongside the Aggregator.
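As a quick worked example of the rule of thumb above (the 1,000-results-per-MB figure is the poster's guideline, not an official limit, and `max_hash_groups` is a hypothetical helper):

```python
# Estimate how many result groups a given amount of free memory can support
# in hash mode, using the rule-of-thumb figure from this thread.

def max_hash_groups(available_mb, groups_per_mb=1000):
    return available_mb * groups_per_mb

# With ~2048 MB free, hash mode is comfortable up to ~2 million groups;
# beyond that, switch the Aggregator to the Sort method.
print(max_hash_groups(2048))  # 2048000
```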