Hello,
I have a parallel job with one Aggregator stage that aggregates 5,000,000 records.
It fails with the error:
Aggregator_30,0: Failure during execution of operator logic.
Aggregator_30,0: Fatal Error: Throwing exception: APT_BadAlloc: Heap allocation failed.
Before the failure there is a warning in the log:
Aggregator_30,0: The current soft limit on the data segment (heap) size (2147483645) is less than the hard limit (2147483647), consider increasing the heap size limit
It seems to be a size problem.
I have increased the kernel parameter "data" as root, but it doesn't help.
Can someone help me?
regards,
Cristina
Aggregator failed
Re: Aggregator failed
Hi Cristina,
Whenever you perform aggregation based on key fields, the system internally builds an in-memory table keyed on those fields. For a large volume of data this table grows very large, which is probably why you are running into the issue. Ask your admin to increase the heap size limit.
Alternatively, try this: open the Aggregator stage you are using, go to the Stage --> Properties tab, and under the Options node find the property Method. Change the method from Hash to Sort. This should do it.
Thanks and regards
Sudeep
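A rough way to see why switching the Method from Hash to Sort helps (a Python sketch of the general idea, not DataStage internals): hash-mode aggregation keeps one accumulator per distinct key in memory for the whole run, so memory grows with the number of keys, while sort-mode aggregation processes sorted input one group at a time and only holds the current group's running total.

```python
from itertools import groupby
from operator import itemgetter

def hash_aggregate(rows):
    # Hash mode: one accumulator per distinct key stays resident
    # for the whole run, so memory grows with the number of keys.
    totals = {}
    for key, value in rows:
        totals[key] = totals.get(key, 0) + value
    return totals

def sort_aggregate(sorted_rows):
    # Sort mode: input must already be sorted by key; only the
    # current group is held in memory at any one moment.
    for key, group in groupby(sorted_rows, key=itemgetter(0)):
        yield key, sum(v for _, v in group)

rows = [("A", 1), ("B", 2), ("A", 3), ("B", 4)]
print(hash_aggregate(rows))                # {'A': 4, 'B': 6}
print(dict(sort_aggregate(sorted(rows))))  # {'A': 4, 'B': 6}
```

The trade-off is that Sort mode needs the input sorted on the grouping keys, but its memory use no longer depends on how many distinct key values there are.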
You're hitting the 2GB data segment (heap) size limit imposed on you by the UNIX setup.
I think this is set through ulimit. I can't remember now whether this is set up once or specified in dsenv, and I don't have a system to look at right now. I'll check in the morning unless someone else answers this first.
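Wherever the limit ends up being set (dsenv or the system configuration), you can verify what a process actually inherits using Python's standard `resource` module, which reads the same limit that `ulimit -d` reports in the shell (the shell shows kilobytes; `resource` uses bytes):

```python
import resource

# RLIMIT_DATA governs the data segment (heap) size that the
# warning in the job log refers to.
soft, hard = resource.getrlimit(resource.RLIMIT_DATA)
print("soft:", soft, "hard:", hard)  # -1 means unlimited

# A process may raise its own soft limit up to the hard limit;
# raising the hard limit itself requires root (ask your SA).
resource.setrlimit(resource.RLIMIT_DATA, (hard, hard))
```

This matches the warning Cristina saw: her soft limit (2147483645) was already at essentially the 2GB hard limit (2147483647), so the hard limit itself has to be raised by an administrator.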
Regards,
Nick.
Have a look at this link, which explains how to check and change the soft and hard limits.
http://www.ss64.com/bash/ulimit.html
As Craig said, you probably need to get your SA to do this for you.
Regards,
Nick.