
Abnormal termination when processing above 10 Million rows

Posted: Tue Oct 31, 2006 9:31 am
by maheshsada
We have a job that selects data from Oracle, passes the rows through a Transformer stage (no transformation is done; the input is fed straight to the output), and writes them to a sequential file. When the query returns more than 10 million rows, the job terminates without giving any meaningful error message.

When we split the job by using the min and max values of the primary key in the select query and looping through the ranges, the job completes without any error.
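For reference, the splitting is along these lines (a rough sketch only; the table and column names are made up, and the key is assumed to be numeric):

    # Rough sketch: split the SELECT into fixed-size key ranges so each
    # run of the job reads only a slice of the table.
    def key_ranges(min_key, max_key, chunk_size):
        """Yield (low, high) key bounds covering [min_key, max_key]."""
        low = min_key
        while low <= max_key:
            high = min(low + chunk_size - 1, max_key)
            yield low, high
            low = high + 1

    # Each range would drive one run of the extract job.
    for low, high in key_ranges(1, 25000000, 2000000):
        print("SELECT * FROM SRC_TABLE WHERE ID BETWEEN %d AND %d" % (low, high))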

We have also checked whether there is a file size limit on the Unix box.
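Roughly, the check looked like this (a minimal sketch, assuming Python is available on the Solaris box; the output path is just an example):

    # Report the per-process file size limit and the size of the file
    # the job was writing. Path below is hypothetical.
    import os
    import resource

    # Per-process file size limit (what "ulimit -f" enforces), in bytes.
    soft, hard = resource.getrlimit(resource.RLIMIT_FSIZE)
    print("RLIMIT_FSIZE soft=%s hard=%s" % (soft, hard))

    out_file = "/data/target/extract.dat"   # hypothetical path
    if os.path.exists(out_file):
        print("current output size: %d bytes" % os.stat(out_file).st_size)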

Is there any kernel parameter that needs to be modified to process more than 10 million rows, instead of splitting the work into multiple jobs? We are using Sun Solaris.

Magesh S

Posted: Tue Oct 31, 2006 11:47 am
by kcbland
Try turning off inter-process and row buffering to see if the problem persists. Watch the size of the output file and see how big it is when the job aborts; that may point to a file system configured for only 2GB files (an older 32-bit configuration). Otherwise, release 5.x was quite a while back, and it sounds familiar that there might have been a 2GB limit on the Sequential File stage. You could search this forum; we certainly have that info on file!
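One way to watch the file while the job runs is a little poller along these lines (a sketch only, not tested on Solaris; the path and polling interval are assumptions):

    # Record how large the output file gets before the job aborts.
    import os
    import time

    out_file = "/data/target/extract.dat"   # hypothetical path
    while True:
        try:
            size = os.stat(out_file).st_size
        except OSError:
            size = 0
        print("%s  %d bytes" % (time.strftime("%H:%M:%S"), size))
        if size >= 2 * 1024 ** 3:
            print("reached 2 GB -- consistent with a 32-bit large-file limit")
            break
        time.sleep(30)   # stop with Ctrl-C once the job has aborted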