How to abend a job that has Oracle insert/update rejects?
Hello everyone,
I have recently been asked by our developers for a way to abend a parallel DataStage job in the event a row is rejected from an Oracle insert or update using the Oracle Enterprise stage. A constraint violation for example.
In my research, it looks as if DataStage only handles two scenarios when it comes to rejects: rejects are OK, continue on; OR rejects are something to track, put them in a reject file and continue.
For us, there is a desire to say, "rejects are bad and we do not want to continue if they occur" . . . like you can do with a flat file (continue, reject, fail).
I would be interested to know if anyone else has done this and how it was done.
Thanks in advance for any advice you have.
Then simply set the "Abort after X rows" value on the reject link to 1 and the first record down that link will abort the job. Now, someone else will need to speak to how this is impacted by running in a multi-node configuration in a Parallel job, but I believe this should work for you.
-craig
"You can never have too many knives" -- Logan Nine Fingers
Hi Craig,
My apologies for such a tardy response . . . holidays, year end . . . the usual.
Anyway, it took me a little digging and experimentation to figure out what you were referring to with "Abort after X rows", as this is not an option on a reject link out of an Oracle Enterprise stage or on the input side of any record capture stages (Sequential File, Peek, Data Set, etc.). It was not until I found another reference that it dawned on me that you were talking about the link option in a Transformer (I assume). Armed with this knowledge, I have been able to create a job that sends the Oracle reject link to a Transformer, set to abort after 1 row, with output to a Peek. This has worked wonderfully.
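In link terms, the reject handling described above looks roughly like this (a sketch of the flow, not exact stage names):

```
Oracle Enterprise ==(reject link)==> Transformer ==> Peek
                                     [Abort after rows = 1]
```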
By turning on column propagation out of the Oracle stage and through to the Peek, we have been able to make this a reusable piece of code. But I did run into a problem when I bundled this in a shared container. I am getting an error stating the shared container is not a valid output for the reject link . . . even though everything in the container matches (link type, naming, etc.). For now, we are using the shared container as a way to facilitate common code; we just have to use its contents (Ctrl + drag option) rather than the actual shared container. If you have any thoughts on this, I would be interested. If not, no worries . . . I am going to check it in 8.1 and if it is still an issue I will submit it to IBM.
Thanks again for your help.
Scott Lukas
Hi, I have a question about aborting the job when there are rejects in the Oracle stage.
Suppose I have 1 lakh (100,000) records in the source. After 90,000 records are inserted, a record is rejected and the job aborts. The Oracle commit interval for inserts is 5000 records.
If any rejects are found in the job, no records should be inserted.
Is there any option for this type of requirement?
Please, can anyone help me with this?
Thanks in Advance
Prams
Tough. If you specify 5000 records per transaction, then your 90,000 records will have been committed. You need to design your job differently, perhaps providing a "run identifier" that would allow you to undo the committed changes.
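The "run identifier" idea can be sketched in a few lines. This is a minimal illustration using SQLite as a stand-in for Oracle (the table, column names, and run_id value are all hypothetical, and this is plain Python rather than DataStage): tag every inserted row with the run's identifier, and if the job aborts after some transactions have already committed, delete everything carrying that identifier.

```python
import sqlite3

# SQLite as a stand-in for the Oracle target; table/columns are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE target (id INTEGER PRIMARY KEY, run_id TEXT)")
conn.commit()

run_id = "RUN_20090101_001"  # hypothetical identifier for this load

try:
    for i in range(100_000):
        if i == 90_000:
            # Stand-in for a constraint violation / reject that aborts the job.
            raise ValueError("simulated reject")
        conn.execute(
            "INSERT INTO target (id, run_id) VALUES (?, ?)", (i, run_id)
        )
        if (i + 1) % 5000 == 0:
            conn.commit()  # mimics the stage committing every 5000 rows
except ValueError:
    conn.rollback()  # discards only the uncommitted tail
    # Undo the rows already committed by earlier transactions.
    conn.execute("DELETE FROM target WHERE run_id = ?", (run_id,))
    conn.commit()

remaining = conn.execute("SELECT COUNT(*) FROM target").fetchone()[0]
print(remaining)  # 0 -- the partial load has been undone
```

The cleanup DELETE is the piece a recovery job (or a before-job routine) would run; the cost is an extra run_id column on the target table.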
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
Hi Ray,
Currently my job design is like this:
source ---> Transformer ---> Oracle Enterprise (target) ---> Transformer ---> Peek stage
Here the reject option on the target Oracle stage is set to true.
If any rejects are found, the Transformer aborts the job ("abort after 1 row" option).
The job does abort, but at the same time records are committed. My requirement is that no records should be committed: if rejected records are found, the job should abort without inserting anything into the target.
Please help me with a new design for the job.
Thanks in Advance
prams
and then someone will want this functionality for jobs that also update rows (type II dims come to mind)... make sure you know where all the stones in the water are in order to keep up the impression of walking on water. Doable, but now you are probably getting into adding extra columns in your target table.
Hi solukas,
Thank you very much for the information.
I set the commit interval to 0 for that particular job by adding the environment variable $APT_ORAUPSERT_COMMIT_ROW_INTERVAL with a default value of '0'. Then I ran the job; the job aborted, but the records had still been inserted into the table.
Could you please tell me if there is anything wrong with my job?
Thanks & Regards
Prams
Hi Prams,
I had to do a little digging but I hope this solves your issue.
There are two APT variables that work together to control the commit level:
APT_ORAUPSERT_COMMIT_ROW_INTERVAL
APT_ORAUPSERT_COMMIT_TIME_INTERVAL
By default they are 5000 (rows) and 2 (seconds), respectively. If you set the row interval to 0 but leave the time interval at 2, it will still commit every 2 seconds.
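So to stop the interval-based commits, both variables would need to be overridden, e.g. as job-level environment variables (whether a value of 0 fully disables the time-based commit may depend on your DataStage version, so treat this as something to verify):

```
$APT_ORAUPSERT_COMMIT_ROW_INTERVAL = 0
$APT_ORAUPSERT_COMMIT_TIME_INTERVAL = 0
```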
A note about my explanation above . . . we solved this issue quite some time ago, and specifically in Server jobs. The method, however, should be valid for parallel jobs as well.
I hope this does the trick.
Scott