Job aborted, I want to start execution from where it aborted

Post questions here relative to DataStage Server Edition for such areas as Server job design, DS Basic, Routines, Job Sequences, etc.

Moderators: chulett, rschirm, roy

Post Reply
nkln@you
Premium Member
Posts: 271
Joined: Wed Nov 17, 2004 5:15 am
Location: US

Job aborted, I want to start execution from where it aborted

Post by nkln@you »

I ran a job and it aborted. 10 records were inserted in the target. For the next run, I want to start processing from the 11th record.

Is this possible in DataStage?
Aim high
amsh76
Charter Member
Posts: 118
Joined: Wed Mar 10, 2004 10:58 pm

Post by amsh76 »

Yes you can, but you will need a process that keeps track of how many records were read/written, and you can add an extra parameter for the start point. Then you can put a constraint using @INROWNUM and the start point.
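[Editor's note: a minimal sketch of that restart constraint outside DataStage, in Python. The `start_point` parameter and record names are hypothetical stand-ins for a job parameter and source rows; it assumes the source produces rows in the same order on every run.]

```python
# Sketch of an @INROWNUM-style restart constraint: skip the first
# start_point rows, mimicking the transformer constraint
# @INROWNUM > StartPoint (StartPoint being a hypothetical job parameter).
def rows_to_process(source_rows, start_point):
    """Yield only the rows after the restart point."""
    for inrownum, row in enumerate(source_rows, start=1):
        if inrownum > start_point:   # the constraint
            yield row

rows = ["rec%d" % i for i in range(1, 16)]   # 15 illustrative source records
# 10 rows reached the target before the abort, so restart at row 11
remaining = list(rows_to_process(rows, 10))
print(remaining[0])   # first row of the restarted run
```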
Sainath.Srinivasan
Participant
Posts: 3337
Joined: Mon Jan 17, 2005 4:49 am
Location: United Kingdom

Post by Sainath.Srinivasan »

...assuming that your source produces rows in the same order every time.

Otherwise, you may have to either clear the already-inserted rows from the target or verify them in some other way.
talk2shaanc
Charter Member
Posts: 199
Joined: Tue Jan 18, 2005 2:50 am
Location: India

Post by talk2shaanc »

If your source is a sequential flat file, then the suggested method using @INROWNUM can be helpful; otherwise one cannot be sure that a row processed now was not already processed in the last run.
If the source is Oracle, ordering by ROWNUM would help; also, instead of using @INROWNUM, you can avoid re-reading already-processed rows by putting a ROWNUM filter in the SQL query.
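[Editor's note: a hedged sketch of pushing the skip into the source SQL. The table and key names (`src_table`, `src_key`) are hypothetical. One caveat worth noting: Oracle assigns ROWNUM before ORDER BY is applied, so the ordered query must sit in a subquery for the filter to be deterministic.]

```python
# Build a restart query that skips the first start_point rows in Oracle.
# ROWNUM is assigned before ORDER BY, so the ordering goes in a subquery.
def restart_query(start_point):
    return (
        "SELECT * FROM ("
        " SELECT t.*, ROWNUM rn FROM ("
        "  SELECT * FROM src_table ORDER BY src_key"   # deterministic order
        " ) t"
        ") WHERE rn > %d" % start_point
    )

print(restart_query(10))   # skip the 10 rows already loaded
```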
Shantanu Choudhary
vmcburney
Participant
Posts: 3593
Joined: Thu Jan 23, 2003 5:25 pm
Location: Australia, Melbourne
Contact:

Post by vmcburney »

You might also need to round your @INROWNUM count down to the nearest commit point. If you are committing with a transaction size of 1000 and you fail after 11323 rows, you may have committed just 11000 of those rows. If you have the transaction and array size set to 1, then reject handling and restart become easier, but your job slows down.

If your @INROWNUM is different from your @OUTROWNUM, then this becomes harder to calculate. The number of committed rows depends on how many were passed to the database stage, so any constraints in your transformer will make @INROWNUM unreliable.
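[Editor's note: the commit-point rounding above is simple integer arithmetic; a sketch using the figures from this post.]

```python
# Round the failure row count down to the last full transaction:
# only complete transactions were committed before the abort.
def committed_rows(rows_before_failure, transaction_size):
    return (rows_before_failure // transaction_size) * transaction_size

print(committed_rows(11323, 1000))  # -> 11000 rows safely in the target
print(committed_rows(11323, 1))     # -> 11323 (transaction size 1: every row committed)
```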
talk2shaanc
Charter Member
Posts: 199
Joined: Tue Jan 18, 2005 2:50 am
Location: India

Post by talk2shaanc »

vmcburney wrote:You might also need to round your @INROWNUM count down to the nearest commit point. If you are committing with a transaction size of 1000 and you fail after 11323 rows, you may have committed just 11000 of those rows. If you have the transaction and array size set to 1, then reject handling and restart become easier, but your job slows down.

If your @INROWNUM is different from your @OUTROWNUM, then this becomes harder to calculate. The number of committed rows depends on how many were passed to the database stage, so any constraints in your transformer will make @INROWNUM unreliable.
The number of records processed should NOT be taken from @OUTROWNUM. Either have an after-job routine that gets the link count of the link going into the OCI stage (which will always give a count equal to the rows inserted/updated), or do a count(*) on the Oracle table with a condition on a field that distinguishes this run's records from previous runs'. At the same time, the rows rejected by the OCI stage should also be counted as processed; for that, have a reject link writing into a sequential file.

Number of rows to skip in the next run = number of rows rejected + number of rows processed (inserted/updated/deleted)
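[Editor's note: that restart arithmetic, as a sketch with illustrative counts; the two inputs would come from the reject-file row count and the OCI link count mentioned above.]

```python
# Rows to skip on restart = rejected rows + processed rows
# (processed = inserted/updated/deleted on the OCI link).
def rows_to_skip(rows_rejected, rows_processed):
    return rows_rejected + rows_processed

# e.g. 3 rows went down the reject link, 10 reached the target
print(rows_to_skip(3, 10))  # -> 13
```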
Shantanu Choudhary
talk2shaanc
Charter Member
Posts: 199
Joined: Tue Jan 18, 2005 2:50 am
Location: India

Post by talk2shaanc »

I missed out a few things; again, the solution I suggested is not the only one. The solution I have given will work if your job fails only once.

But what if your job fails several times? In that case, you need to store the counts of rejected and processed records for every run, and before running the job again, you should sum the counts from all previous runs.
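[Editor's note: a sketch of that accumulation across several failed runs. The per-run counts would be logged by each run; the values here are illustrative.]

```python
# Accumulate rejected + processed counts over every previous failed run
# to get the total number of rows to skip on the next attempt.
run_history = [
    {"rejected": 3, "processed": 1000},  # first failed run
    {"rejected": 0, "processed": 250},   # second failed run
]

start_point = sum(r["rejected"] + r["processed"] for r in run_history)
print(start_point)  # -> 1253 rows to skip on the next attempt
```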

Coding restartability is a bit difficult in DataStage, and it should be handled carefully, looking into all the aspects.

All the best.
Shantanu Choudhary