
Job aborted, I want to start execution from where it aborted

Posted: Thu May 26, 2005 8:22 am
by nkln@you
I ran a job and it aborted. 10 records were inserted into the target. For the next run I want to start processing from the 11th record.

Is this possible in DataStage?

Posted: Thu May 26, 2005 9:53 am
by amsh76
Yes you can, but you will need a process that keeps track of how many records were read/written, and you can add an extra parameter for the start point. Then you can put a constraint in your Transformer using @INROWNUM and the start-point parameter, as in the sketch below.
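
A minimal sketch of such a constraint, assuming a hypothetical job parameter START_ROW that holds the number of rows already committed by the failed run:

    @INROWNUM > START_ROW

Rows 1 through START_ROW are dropped by the Transformer, so processing effectively resumes at row START_ROW + 1.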

Posted: Thu May 26, 2005 11:34 am
by Sainath.Srinivasan
...assuming that your source produces rows in the same order every time.

Otherwise, you may have to either clear the already-inserted rows from the target or verify them in some other way.

Posted: Thu May 26, 2005 10:13 pm
by talk2shaanc
If your source is a sequential flat file, then the suggested method using @INROWNUM can be helpful; otherwise one cannot be sure that a row processed now was not already processed in the last run.
If the source is Oracle, ordering the rows would help; also, instead of using @INROWNUM you can avoid re-reading the processed rows altogether by putting a filter in the SQL query using ROWNUM (see the sketch below).
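
A sketch of such a filter, assuming a hypothetical source table SRC_TABLE with unique key KEY_COL and 10 rows already committed. Note that Oracle assigns ROWNUM as rows pass the WHERE clause, so a plain WHERE ROWNUM > 10 never matches; the filter needs an ordered inline view:

    SELECT *
    FROM (SELECT s.*, ROWNUM AS rn
          FROM (SELECT * FROM SRC_TABLE ORDER BY KEY_COL) s)
    WHERE rn > 10

The inner ORDER BY on a unique key is what makes the row numbering, and therefore the restart point, repeatable across runs.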

Posted: Fri May 27, 2005 12:29 am
by vmcburney
You might also need to round your @INROWNUM count down to the nearest commit point. If you are committing with a transaction size of 1000 and you fail after 11323 rows, you may have committed just 11000 of those rows. If you have the transaction and array size set to 1, reject handling and restart become easier, but your job slows down.
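
As a worked sketch of that rounding in DataStage BASIC (variable names are hypothetical):

    * Round the processed-row count down to the last commit point.
    RestartPoint = Int(RowsProcessed / TxnSize) * TxnSize
    * e.g. Int(11323 / 1000) * 1000 = 11000

Only the 11000 committed rows are safe to skip; rows 11001 to 11323 were rolled back and must be processed again.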

If your @INROWNUM is different from your @OUTROWNUM then this becomes harder to calculate. The number of committed rows depends on how many rows were passed to the database stage, so any constraints in your Transformer will make @INROWNUM unreliable.

Posted: Fri May 27, 2005 12:38 am
by talk2shaanc
vmcburney wrote:
You might also need to round your @INROWNUM count down to the nearest commit point. If you are committing with a transaction size of 1000 and you fail after 11323 rows, you may have committed just 11000 of those rows. If you have the transaction and array size set to 1, reject handling and restart become easier, but your job slows down.

If your @INROWNUM is different from your @OUTROWNUM then this becomes harder to calculate. The number of committed rows depends on how many rows were passed to the database stage, so any constraints in your Transformer will make @INROWNUM unreliable.
The number of records processed should NOT be taken from @OUTROWNUM. Either have an after-job routine that reads the link count of the link going into the OCI stage (which will always equal the number of rows inserted/updated), or do a count(*) on the Oracle table with a condition on a field that distinguishes this run's records from previous runs' records. At the same time, the rows rejected by the OCI stage must also be counted as processed, so have a reject link writing into a sequential file. A sketch of such a routine follows the formula below.

No. of rows to skip in the next run = no. of rows rejected + no. of rows processed (inserted/updated/deleted)
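
A minimal after-job routine sketch in DataStage BASIC that collects both counts; the stage and link names (TargetOCI, DSLink3, RejectFile, DSLink4) are hypothetical and must match your own job design:

    * After-job subroutine: record the counts needed for the restart point.
    Subroutine GetRunCounts(InputArg, ErrorCode)
       ErrorCode = 0
       * Rows that reached the OCI stage (inserted/updated)
       Processed = DSGetLinkInfo(DSJ.ME, "TargetOCI", "DSLink3", DSJ.LINKROWCOUNT)
       * Rows the OCI stage rejected, written to the sequential file
       Rejected = DSGetLinkInfo(DSJ.ME, "RejectFile", "DSLink4", DSJ.LINKROWCOUNT)
       SkipCount = Processed + Rejected
       Call DSLogInfo("Rows to skip on restart: " : SkipCount, "GetRunCounts")
    Return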

Posted: Fri May 27, 2005 12:46 am
by talk2shaanc
I missed out a few things; also, the solution I suggested is not the only one.
The solution I have given will work if your job fails once.

But what if your job fails several times? In that case you need to store the counts of rejected and processed records for every run, and before running the job again you should sum the counts from all previous runs (see the sketch below).
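
One way to sketch this in DataStage BASIC, assuming each run's skip count has been appended, one per line, to a plain text file (the path is hypothetical):

    * Sum the counts recorded by all previous runs.
    Total = 0
    OpenSeq "/tmp/run_counts.txt" To CountFile Then
       Loop
          ReadSeq Line From CountFile Else Exit
          Total = Total + Line
       Repeat
       CloseSeq CountFile
    End Else
       Total = 0   ;* no previous runs recorded
    End
    Call DSLogInfo("Total rows to skip: " : Total, "SumRunCounts")

A routine cannot set its own job's parameters, so in practice this total would be computed in a controlling job or sequence and passed into the load job as the start-point parameter via DSSetParam.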

Coding restartability is a bit difficult in DataStage, and it should be handled carefully, looking into all the aspects.

All the best.