We have a job that selects data from Oracle, passes the rows through a Transformer stage (no transformation is actually done; the input is fed straight to the output), and writes them to a sequential file. When the query returns more than 10 million rows, the job terminates without any meaningful error message.
When we split the job using the Min and Max values of the primary key in the select query and loop through the ranges, the job completes without any error.
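The workaround described above (splitting by primary-key range and looping) can be sketched roughly as follows. This is only an illustration of the range-chunking idea, not the actual DataStage job: the table name, column name, and chunk size are hypothetical, and sqlite3 stands in for Oracle so the sketch stays self-contained.

```python
import sqlite3

# Stand-in for the Oracle source table; names are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE src (pk INTEGER PRIMARY KEY, val TEXT)")
conn.executemany("INSERT INTO src (pk, val) VALUES (?, ?)",
                 [(i, f"row{i}") for i in range(1, 101)])

CHUNK = 25  # rows per pass; a real job would use a far larger window

# Get the overall key range, then extract it one bounded slice at a time.
lo, hi = conn.execute("SELECT MIN(pk), MAX(pk) FROM src").fetchone()
total = 0
start = lo
while start <= hi:
    end = min(start + CHUNK - 1, hi)
    rows = conn.execute(
        "SELECT pk, val FROM src WHERE pk BETWEEN ? AND ?", (start, end)
    ).fetchall()
    total += len(rows)  # a real job would write each chunk to its output here
    start = end + 1

print(total)  # every row is extracted exactly once
```

Because each pass writes a bounded slice, no single run ever accumulates the full 10M-row output, which is why the looped version sidesteps whatever limit the single-pass job is hitting.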
We have also checked whether there is a limit on file size on the Unix box.
Is there any kernel parameter that needs to be modified to process more than 10 million rows in one pass, instead of splitting the work into multiple jobs? We are using Sun Solaris.
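One quick check worth running on the box is the per-process file-size limit (what `ulimit -f` reports in a shell), since a capped value here would abort the job once the sequential file reaches the ceiling. A minimal sketch using Python's standard `resource` module:

```python
import resource

# Read the per-process maximum file size (soft and hard limits).
# RLIM_INFINITY means no process-level cap; a finite value here, or a
# filesystem mounted without largefile support, could explain an abort
# once the output file reaches the limit.
soft, hard = resource.getrlimit(resource.RLIMIT_FSIZE)

def fmt(limit):
    return "unlimited" if limit == resource.RLIM_INFINITY else f"{limit} bytes"

print("soft limit:", fmt(soft))
print("hard limit:", fmt(hard))
```

If both limits report unlimited, the cap is more likely in the filesystem configuration or in the tool itself rather than in a kernel tunable.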
Magesh S
Abnormal termination when processing above 10 Million rows
Try turning off inter-process and row buffering to see if the problem persists. Watch the size of the output file and note how big it is when the job aborts; that may point to a file system configured for only 2GB (an older 32-bit configuration). Otherwise, release 5.x was quite a while back, and it sounds familiar that there may have been a 2GB limit on the Sequential File stage. You could search this forum; we certainly have that info on file!
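Following the advice above, it is easy to check whether the aborted output file stalled right at the 32-bit offset ceiling (2 GiB, i.e. 2**31 - 1 bytes). A small sketch, assuming a hypothetical output path:

```python
import os

# 32-bit signed file-offset ceiling: the classic 2 GB limit.
LIMIT_32BIT = 2**31 - 1

def near_2gb(path, tolerance=1 << 20):
    """Return the file's size and whether it sits within `tolerance`
    bytes of the 2 GiB ceiling (suggesting the write hit that limit)."""
    size = os.stat(path).st_size
    return size, LIMIT_32BIT - size <= tolerance
```

Usage would be something like `near_2gb("/data/out/extract.dat")` on the partially written file: a size just under 2,147,483,647 bytes is a strong hint that either the filesystem or the writing stage is limited to 32-bit offsets.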
Kenneth Bland
Rank: Sempai
Belt: First degree black
Fight name: Captain Hook
Signature knockout: right upper cut followed by left hook
Signature submission: Crucifix combined with leg triangle