Hi,
I have a job which loads data from a sequential file to an Oracle table.
The job design is: Sequential File input ---> Transformer (trims on all columns) ---> Oracle stage (Insert or update).
The job runs fine for small datasets but aborts for bigger ones.
I have a file which has 530,000 records in it and it usually aborts after processing over 500,000 records.
When I split the file into two, both halves loaded fine.
When I created a file with 490,000 records, it again failed after processing over 480,000 records.
I don't understand the reason for this problem.
I even changed the array size and transaction size to 1, but still the job aborts.
Has anyone experienced this problem before?
All inputs will be greatly appreciated!
Thanks
Abnormal Termination
Re: Abnormal Termination
My car broke down, can you diagnose it now.
Kidding, what are your error messages?
Ogmios
It seems the variable here is size. If splitting the file into halves works, then I'd guess your problem might be on the database side. Since you say you set the commit count to 1, a rollback or temp segment issue would be ruled out.
So then it might be how your job is designed interfering with the loading. Are you using any row-buffering in the design? Try turning that off. What about a reject file? Are you getting row rejects and don't know it? You really should be capturing rejects to a file.
Is abnormal termination the only error message you're getting? Can you feed us a little more info?
Kenneth Bland
Rank: Sempai
Belt: First degree black
Fight name: Captain Hook
Signature knockout: right upper cut followed by left hook
Signature submission: Crucifix combined with leg triangle
Hi KC
It just says abnormal termination detected. It's one of those tough ones.
I already have a rejects file, and I don't see anything in that file.
It fails at different row numbers each time, so it's not a data issue, and it usually fails towards the end of the file. To me it looks like some scratch files or intermediate files are filling up.
You mentioned something about row buffering; can you please tell me how to turn it off?
Also, I was wondering if there are any directories I should check to see if they are filled up or there is not enough space.
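For anyone checking this, a minimal space check can be sketched as below. The paths are assumptions (typical Unix temp locations); your actual DataStage scratch and project directories depend on the install and should be substituted in.

```shell
#!/bin/sh
# Report free space on directories commonly used for intermediate files.
# The paths listed are assumptions; substitute your project's scratch dirs.
for d in /tmp /var/tmp; do
  echo "== $d =="
  df -k "$d"
done
```

Running this before and during a large job run shows whether free space shrinks as the job processes rows.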
Thanks
Under Job Properties, Performance tab. Try disabling inter-process row buffering, and don't use the project defaults in case it's turned on there.
Kenneth Bland
Rank: Sempai
Belt: First degree black
Fight name: Captain Hook
Signature knockout: right upper cut followed by left hook
Signature submission: Crucifix combined with leg triangle
When you reset (not re-compile) the job after it aborts, is there an event in the job log called "from previous run..."? This, if it exists, may contain additional diagnostic information.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
I submitted this case to Ascential and they said we have to upgrade our Oracle from 9 to 9.2.
I sent them the process sizes and there seems to be a memory leak.
I was thinking that the memory leak is because of DataStage and not the database version. Maybe they should have told us before that their Oracle plug-in will not work with version 9 of Oracle.
Will let you know once I hear back more from Ascential.
Regards
ray.wurlod wrote:When you reset (not re-compile) the job after it aborts, is there an event in the job log called "from previous run..."? This, if it exists, may contain additional diagnostic information.
Have you checked the ulimit on your server? Unix could be killing your process based upon CPU time or file size. You can probably use the ulimit -a command in a before-job SH command to see how the ulimits are set.
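A sketch of what such a before-job command might look like is below. The explicit warning on the file-size limit is illustrative, not part of any product; the point is just to get the limits into the job log.

```shell
#!/bin/sh
# Print all resource limits so they appear in the job log when this
# script is run as a before-job SH command.
ulimit -a

# Flag a finite file-size limit explicitly, since a capped fsize can
# kill a job that writes large intermediate files.
fsize=$(ulimit -f)
if [ "$fsize" != "unlimited" ]; then
  echo "WARNING: file size limit is ${fsize} blocks"
fi
```

If the limit turns out to be finite, raising it (or having the sysadmin raise it) for the user running the DataStage engine would be the next step.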
Chuck Smith
www.anotheritco.com