Abnormal termination when processing above 10 Million rows
Posted: Tue Oct 31, 2006 9:31 am
We have a job that selects data from Oracle, passes it through a Transformer stage (no transformation is applied; the input is fed straight to the output), and writes to a sequential file. When the query returns more than 10 million rows, the job terminates without giving any meaningful error message.
When we split the job using the Min and Max values of the primary key in the SELECT query and loop through the ranges, the job completes without any error.
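For reference, the range-splitting workaround described above can be sketched as follows. This is a minimal illustration, not the actual job design; the table name `src_table` and key column `pk` are hypothetical placeholders:

```python
def key_ranges(min_key, max_key, chunk):
    """Yield (low, high) inclusive key ranges covering [min_key, max_key]."""
    low = min_key
    while low <= max_key:
        high = min(low + chunk - 1, max_key)
        yield (low, high)
        low = high + 1

# Example: 25M keys split into batches of at most 10M rows each,
# one SELECT per batch (names are placeholders, not the real schema)
for low, high in key_ranges(1, 25_000_000, 10_000_000):
    sql = f"SELECT * FROM src_table WHERE pk BETWEEN {low} AND {high}"
    print(sql)
```

Each generated query stays under the 10M-row threshold, which matches the looping behaviour that works for us.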
We have even checked whether there is a limit on the file size on the Unix box.
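In case it helps others reproduce the check: the per-process file size limit (the same value `ulimit -f` reports in the shell) can be read programmatically, e.g. with Python's standard `resource` module:

```python
import resource

# RLIMIT_FSIZE is the maximum size of a file the process may create;
# RLIM_INFINITY means no per-process file size limit is in effect.
soft, hard = resource.getrlimit(resource.RLIMIT_FSIZE)
print("unlimited" if soft == resource.RLIM_INFINITY else soft)
```

If this reports "unlimited" but the job still dies around 10M rows, the cause is likely elsewhere (e.g. a filesystem large-file limit or a 2 GB boundary) rather than the ulimit setting.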
Is there any kernel parameter that needs to be modified to process more than 10 million rows, instead of splitting the work into multiple jobs? We are using Sun Solaris.
Magesh S