Abnormal termination when processing above 10 Million rows

Post questions here related to DataStage Server Edition for such areas as Server job design, DS BASIC, Routines, Job Sequences, etc.

Moderators: chulett, rschirm, roy

maheshsada
Participant
Posts: 69
Joined: Tue Jan 18, 2005 12:15 am

Abnormal termination when processing above 10 Million rows

Post by maheshsada »

We have a job which selects data from Oracle, passes it through a Transformer stage (no transformation is applied; the input is fed straight to the output), and writes to a sequential file. When the query returns more than 10 million rows, the job terminates without giving any meaningful error message.

When we split the job by using the MIN and MAX values of the primary key in the SELECT query and looping through the ranges, the job completes without any error (a sketch of this workaround is shown below).
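
For illustration, here is a minimal Python sketch of that workaround: split the extract into fixed-size primary-key ranges and run one slice at a time. The table name, column name, and DB-API connection are hypothetical, not from the original job.

    # Hypothetical names: MY_TABLE, PK_COL, and the DB-API connection `conn`
    # stand in for the real Oracle table, primary key, and connection.
    def pk_ranges(conn, table, pk, chunk_size=1_000_000):
        """Yield (low, high) bounds that cover the table in fixed-size slices."""
        cur = conn.cursor()
        cur.execute(f"SELECT MIN({pk}), MAX({pk}) FROM {table}")
        lo, hi = cur.fetchone()
        if lo is None:  # empty table: nothing to slice
            return
        while lo <= hi:
            yield lo, min(lo + chunk_size - 1, hi)
            lo += chunk_size

    # Each (low, high) pair then drives one job run with a query like:
    #   SELECT * FROM MY_TABLE WHERE PK_COL BETWEEN :low AND :high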

We have even checked whether there is a file-size limit on the Unix box.

Is there any kernel parameter that needs to be modified so we can process more than 10 million rows, instead of splitting the work into multiple jobs? We are using Sun Solaris. A quick way to check the current file-size limit is sketched below.
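
A minimal sketch of how the per-process file-size limit could be checked from Python on the Unix box; this reads the same limit the shell reports as ulimit -f:

    import resource

    # RLIMIT_FSIZE is the largest file this process may create, in bytes.
    soft, hard = resource.getrlimit(resource.RLIMIT_FSIZE)
    for name, val in (("soft", soft), ("hard", hard)):
        if val == resource.RLIM_INFINITY:
            print(name, "file-size limit: unlimited")
        else:
            print(name, "file-size limit:", val, "bytes")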

Magesh S
kcbland
Participant
Posts: 5208
Joined: Wed Jan 15, 2003 8:56 am
Location: Lutz, FL
Contact:

Post by kcbland »

Try turning off inter-process and row buffering to see if the problem persists. Watch the size of the output file and see how big it is when the job aborts; that may point to a file system configured for only 2 GB files (an older 32-bit configuration). A quick way to check the aborted file's size is sketched below. Otherwise, release 5.x was quite a while back, and it sounds familiar that there might have been a 2 GB limit on the Sequential stage. You could search this forum; we certainly have that info on file!
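
For what it's worth, a quick sketch of that size check; the path is hypothetical and stands in for the job's target file:

    import os

    TWO_GB = 2**31  # 2,147,483,648 bytes: the classic 32-bit file-size ceiling

    path = "/data/output/extract.dat"  # hypothetical target file path
    size = os.path.getsize(path)
    print(path, "is", size, "bytes")
    if size >= TWO_GB - 1:
        print("Stalled at ~2 GB: suspect a 32-bit file-size limit",
              "(filesystem largefiles option or the Sequential stage).")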
Kenneth Bland

Rank: Sempai
Belt: First degree black
Fight name: Captain Hook
Signature knockout: right upper cut followed by left hook
Signature submission: Crucifix combined with leg triangle