How to abend a job that has Oracle insert/update rejects?

solukas
Premium Member
Posts: 6
Joined: Tue May 07, 2002 7:53 pm
Location: USA

How to abend a job that has Oracle insert/update rejects?

Post by solukas »

Hello everyone,

I have recently been asked by our developers for a way to abend a parallel DataStage job in the event a row is rejected from an Oracle insert or update using the Oracle Enterprise stage (a constraint violation, for example).

In my research, it looks as if DataStage only handles two scenarios when it comes to rejects: rejects are OK, so continue on; or rejects are something to track, so put them to a reject file and continue.

For us, there is a desire to say, "rejects are bad and we do not want to continue if they occur" . . . like you can do with a flat file (continue, reject, fail).

I would be interested to know if anyone else has done this and how it was done.

Thanks in advance for any advice you have.
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

The answer will be different depending on whether you want to track all rejects and then abort after all have been logged, or simply abort on the first one you find.
-craig

"You can never have too many knives" -- Logan Nine Fingers
solukas
Premium Member
Posts: 6
Joined: Tue May 07, 2002 7:53 pm
Location: USA

Post by solukas »

Thank you for the quick response.

Based on the discussions I have had with our developers, an immediate failure would be desirable.
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

Then simply set the "Abort after X rows" value on the reject link to 1 and the first record down it will abort the job. Now, someone else will need to speak to how this is affected by running a Parallel job in a multi-node configuration, but I believe this should work for you.
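
To make the semantics concrete, here is a rough Python sketch of what "Abort after rows = 1" amounts to on a reject link. This is illustrative only, not DataStage internals; in particular, the per-partition counter in a multi-node run is my assumption.

import sys

ABORT_AFTER_ROWS = 1  # the "Abort after rows" link property

rejected = 0  # assumption: each partition would keep its own counter in a multi-node run
for row in [{"id": 42}]:  # stand-in for rows arriving on the reject link
    rejected += 1
    if rejected >= ABORT_AFTER_ROWS:
        sys.exit("Job aborted: reject threshold of %d reached" % ABORT_AFTER_ROWS)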
-craig

"You can never have too many knives" -- Logan Nine Fingers
solukas
Premium Member
Posts: 6
Joined: Tue May 07, 2002 7:53 pm
Location: USA

Post by solukas »

Hi Craig,
My apologies for such a tardy response . . . holidays, year end . . . the usual.

Anyway, it took me a little digging and experimentation to figure out what you were referring to with "Abort after X rows", as this is not an option on a reject link out of an Oracle Enterprise stage or on the input side of any record-capture stages (Sequential File, Peek, Data Set, etc.). It was not until I found another reference that it dawned on me that you were talking about the link option in a Transformer (I assume). Armed with this knowledge, I have been able to create a job that routes the Oracle reject link to a Transformer, set to abort after 1 row, and outputs to a Peek. This has worked wonderfully.
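
Sketched out, the working design is:

Oracle Enterprise ---(reject link)---> Transformer (Abort after rows = 1) ---> Peek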

By turning on column propagation out of the Oracle stage and through to the Peek, we have been able to make this a reusable piece of design. But I did run into a problem when I bundled this in a shared container. I am getting an error stating the shared container is not a valid output for the reject link . . . even though everything in the container matches (link type, naming, etc.). For now, we are using the shared container as a way to facilitate common code; we just have to use its contents (Ctrl + drag option) rather than the actual shared container. If you have any thoughts on this, I would be interested. If not, no worries . . . I am going to check it in 8.1 and if it is still an issue I will submit it to IBM.

Thanks again for your help.
Scott Lukas
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

No worries. :wink:

Wish I could help with the container issue, but I've got no clue at the moment; perhaps someone else will chime in on that. Or it may be one of the many things I've heard they've fixed in the 8.1 release.
-craig

"You can never have too many knives" -- Logan Nine Fingers
prams
Participant
Posts: 73
Joined: Mon Apr 02, 2007 11:26 pm

Post by prams »

Hi, I have a doubt about aborting the job if there are any rejects in the Oracle stage.

Suppose I have 1 lakh (100,000) records in the source. After 90 thousand records have been inserted, I get a rejected record and the job aborts. The Oracle commit interval for the insert is 5000 records.

If any rejects are found in the job, no records should be inserted at all.

Is there any option for this type of requirement?

Please, can anyone help me with this?

Thanks in Advance
Prams
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia

Post by ray.wurlod »

Tough. If you specify 5000 records per transaction, then your 90,000 records will have been committed. You need to design your job differently, perhaps providing a "run identifier" that would allow you to undo the committed changes.
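
As a hypothetical illustration of the run-identifier idea (the table name, run_id column, and connection details below are all invented for this sketch), the back-out could be as simple as a keyed delete:

import cx_Oracle  # assumes the cx_Oracle driver is installed

def undo_failed_run(dsn, user, password, run_id):
    """Back out everything a failed run committed, keyed by its run identifier."""
    conn = cx_Oracle.connect(user, password, dsn)
    try:
        cur = conn.cursor()
        # target_tbl and run_id are invented names; every row the job inserts
        # would carry the identifier of the run that wrote it.
        cur.execute("DELETE FROM target_tbl WHERE run_id = :rid", rid=run_id)
        conn.commit()
    finally:
        conn.close()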
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
prams
Participant
Posts: 73
Joined: Mon Apr 02, 2007 11:26 pm

Post by prams »

Hi Ray,

Currently my job design is like this:

source ---> Transformer ---> Oracle Enterprise (target) ---(reject)---> Transformer ---> Peek

Here the reject option is set to true on the target Oracle stage, and if any rejects are found the Transformer aborts the job via the "abort after 1 row" option.

The job is aborting, but at the same time records are being committed. My requirement is that records should not be committed: if reject records are found, the job should abort without inserting into the target.

Please help me with a new design for the job.

Thanks in Advance
prams
crouse
Charter Member
Posts: 204
Joined: Sun Oct 05, 2003 12:59 pm

Post by crouse »

And then someone will want this functionality for jobs that also update rows (Type II dimensions come to mind)... make sure you know where all the stones in the water are in order to keep up the impression of walking on water. Doable, but now you are probably getting into adding extra columns to your target table.
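
For that update case, one hypothetical shape (all names invented, extending the delete sketch above) is to have the job fill shadow columns with the pre-update values and a run identifier, so a failed run's updates can be reversed:

import cx_Oracle  # same invented schema as the delete sketch above

def undo_failed_updates(conn, run_id):
    """Reverse the updates a failed run applied, using shadow columns."""
    cur = conn.cursor()
    # prev_amount, prev_status and last_upd_run_id are invented shadow columns
    # the job would populate with the old values as it applies each update.
    cur.execute(
        """UPDATE target_tbl
              SET amount = prev_amount,
                  status = prev_status
            WHERE last_upd_run_id = :rid""",
        rid=run_id,
    )
    conn.commit()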
Craig Rouse
Griffin Resources, Inc
www.griffinresources.com
prams
Participant
Posts: 73
Joined: Mon Apr 02, 2007 11:26 pm

Post by prams »

Hi solukas,

Thank you very much for the information.

I set the commit interval to 0 for that particular job by adding the environment variable $APT_ORAUPSERT_COMMIT_ROW_INTERVAL with a default value of '0'. Then I ran the job; even though the job aborted, the records had still been inserted into the table.

Could you please tell me if there is anything wrong in my job?

Thanks & Regards
Prams
solukas
Premium Member
Posts: 6
Joined: Tue May 07, 2002 7:53 pm
Location: USA

Post by solukas »

Hi Prams,
I had to do a little digging but I hope this solves your issue.

There are two APT variables that work together to control the commit level:

APT_ORAUPSERT_COMMIT_ROW_INTERVAL
APT_ORAUPSERT_COMMIT_TIME_INTERVAL

By default they are 5000 (rows) and 2 (seconds) respectively. If you set the row interval to 0 but leave the time at 2, it will still commit every 2 seconds.
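
If the intent is to suppress the interval-based commits entirely, presumably both would need to be overridden at the job level, something like:

$APT_ORAUPSERT_COMMIT_ROW_INTERVAL = 0
$APT_ORAUPSERT_COMMIT_TIME_INTERVAL = 0

I have not verified that 0 actually disables the time-based commit, so treat that as something to test in your environment.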

A note about my explanation above . . . I mention that we solved this issue quite some time ago . . . it was in Server jobs specifically. The method, however, should be valid for parallel jobs as well.

I hope this does the trick.
Scott
prams
Participant
Posts: 73
Joined: Mon Apr 02, 2007 11:26 pm

Post by prams »

Hi

I have tried all the options as you mentioned, but it is not working.

I set the time interval to 60000 seconds; even so, the records are still being inserted.

Could you please help me with any other options in this regard?

Thanks & Regards
Prams.