Heap allocation error - ODBC stage

Posted: Thu Oct 20, 2011 3:18 am
by Rohit_ben
Hi

One of our parallel jobs fetches data from Oracle 10g and inserts into a SQL server table. The job design is as follows:

Oracle Enterprise > Transformer > ODBC Enterprise

The job needs to fetch and populate approximately 20 million records. It is failing with the following error:

odbcwrt_EX_POST_RESULTS,0: Operator terminated abnormally: Terminating with exception:APT_BadAlloc: Heap allocation failed.

I have searched the forum for similar errors and applied some of the suggestions to the ODBC stage:

I have set APT_DISABLE_COMBINATION to True.

1) Set the insert array size to a low value - I set it to 10 and the job still fails.
2) Set the insert array size to 1 and the row commit interval to a positive integer - I set the array size to 1 and tried commit intervals of 10, 1000, 2000 and 50,000. The job still fails in every case.
3) Set the execution mode to sequential - but with no success.
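For reference, this is roughly how the variable was set before the run (a sketch only - the project and job names are placeholders, and we assume the variable is exported in the shell or dsenv that launches the job):

```shell
# Sketch: export the combination-disabling variable before invoking the job.
# MyProject/MyJob are placeholder names, not our real ones.
export APT_DISABLE_COMBINATION=True

# Optionally pass it as an explicit override with the dsjob CLI
# (guarded, since dsjob only exists on a DataStage server):
if command -v dsjob >/dev/null 2>&1; then
    dsjob -run -param "APT_DISABLE_COMBINATION=True" MyProject MyJob
fi
```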

Could you please let me know how I can proceed to find the cause and resolve the issue?

Regards
Rohit

Posted: Thu Oct 20, 2011 5:12 am
by suse_dk
Are you able to run your job on just a single node?

Posted: Thu Oct 20, 2011 7:48 am
by Rohit_ben
We created a server job with the same logic, and it works fine.

Posted: Thu Oct 20, 2011 2:55 pm
by ray.wurlod
How wide are your rows? There is a configurable maximum number of bytes per row - from memory it's 8192. There's also a configurable maximum number of columns per row - from memory that one is 400. Exceeding either of these might cause a heap allocation error.

Posted: Thu Oct 20, 2011 11:10 pm
by Rohit_ben
Sorry Ray, I was not able to view the premium content :(

The number of columns is 120. Are these 'configurable maximum number of bytes per row' and 'configurable maximum number of columns per row' set through some environment variables?
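As a rough sanity check against the limits mentioned above (assuming the 8192-byte and 400-column figures are right):

```python
# Rough arithmetic against the limits quoted earlier in the thread:
# 8192 bytes per row and 400 columns per row, with 120 columns in our job.
MAX_ROW_BYTES = 8192
MAX_COLUMNS = 400
num_columns = 120

# The column count is well under the limit:
print(num_columns <= MAX_COLUMNS)      # True

# Average bytes per column before the row-width limit would be exceeded:
print(MAX_ROW_BYTES / num_columns)     # ~68.3
```

So if the rows average more than about 68 bytes per column, the byte limit could be the one being hit.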

Posted: Fri Oct 21, 2011 4:52 am
by ray.wurlod
Search the forum for MAXFETCHBUFF and MAXFETCHCOLS. These are entries that can be made in the uvodbc.config file.
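For example, a DSN entry might look like this (illustrative values only - the section name must match your own DSN, and the figures quoted earlier in the thread, 8192 bytes and 400 columns, are the reported defaults):

<YourDSN>
DBMSTYPE = ODBC
MAXFETCHBUFF = 32768
MAXFETCHCOLS = 400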

Posted: Fri Oct 21, 2011 7:10 am
by Rohit_ben
We changed the uvodbc.config file in the project directory to the following:

<DPServer32>
DBMSTYPE = ODBC
MAXFETCHBUFF = 16384

In the ODBC stage we set:

Array size = 1
Row commit interval = 5000

The job still failed. We did not change MAXFETCHCOLS, as it defaults to 400 and we only have 120 columns.
The job is failing at the same record count each time.