I ran a job and it got aborted. 10 records were inserted into the target. On the next run I want to start processing from the 11th record.
Is this possible in DataStage?
Job aborted, I want to start execution from where it aborted
If your source is a sequential flat file then the suggested method, using @INROWNUM, can be helpful; otherwise you cannot be sure that a row processed now was not already processed in the last run.
If the source is Oracle, ordering by rownum would help; and instead of using @INROWNUM you can avoid re-reading the already-loaded rows by putting a rownum filter in the SQL query.
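The rownum-filter idea above can be sketched as follows. This is an illustrative Python helper that builds the restart query string; the table and column names are hypothetical, and in a real job the SQL would go directly into the OCI stage's user-defined query.

```python
# Sketch of pushing the restart skip into the source SQL instead of
# re-reading every row and filtering with @INROWNUM in the transformer.
# Table/column names are hypothetical examples.

def restart_query(table, order_col, rows_already_loaded):
    """Build an Oracle-style query that skips rows loaded before the abort.

    The ordered query is wrapped in a subselect because Oracle assigns
    ROWNUM before ORDER BY is applied in a plain query.
    """
    return (
        f"SELECT * FROM ("
        f"SELECT t.*, ROWNUM rn FROM (SELECT * FROM {table} ORDER BY {order_col}) t"
        f") WHERE rn > {rows_already_loaded}"
    )

print(restart_query("SRC_TABLE", "ID", 10))
```

Note this only gives repeatable results if the ORDER BY column is unique and the source data has not changed between runs.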
Shantanu Choudhary
You might also need to round your @INROWNUM count down to the nearest commit point. If you are committing with a transaction size of 1000 and you fail after 11323 rows, you may have committed just 11000 of those rows. If you have the transaction and array size set to 1, this makes reject handling and restart easier but slows your job down.
If your INROWNUM is different from your OUTROWNUM then this becomes harder to calculate. The number of committed rows depends on how many were passed to the database stage, so any constraints in your transformer will make @INROWNUM unreliable.
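The commit-point rounding above is just integer arithmetic; here is a minimal sketch of it, using the same numbers as the example in the post.

```python
# Round the row count at abort time down to the last completed commit.
# Only fully committed batches are safely in the target table.

def committed_rows(rows_before_abort, transaction_size):
    # Integer division drops the partial, uncommitted batch.
    return (rows_before_abort // transaction_size) * transaction_size

print(committed_rows(11323, 1000))  # 11000, as in the example above
print(committed_rows(11323, 1))     # 11323: every row committed individually
```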
Certus Solutions
Blog: Tooling Around in the InfoSphere
Twitter: @vmcburney
LinkedIn: Vincent McBurney
vmcburney wrote: You might also need to round your @INROWNUM count down to the nearest commit point. If you are committing with a transaction size of 1000 and you fail after 11323 rows you may have committed just 11000 of those rows. If you have the transaction and array size set to 1 then this makes reject handling and restart easier but slows your job down. If your INROWNUM is different from your OUTROWNUM then this becomes harder to calculate. The number of committed rows depends on how many were passed to the database stage, so any constraints in your transformer will make @INROWNUM unreliable.
The number of records processed should NOT be taken from @OUTROWNUM. Either have an after-job routine that gets the link count of the link going into the OCI stage (which will always give a count equal to the rows inserted/updated), or do a count(*) on the Oracle table with a condition on a field that distinguishes this run's records from previous days' records. At the same time, the rows rejected by the OCI stage should also be considered processed, so add a reject link writing to a sequential file.
No. of rows to skip in the next run = no. of rows rejected + no. of rows processed (inserted/updated/deleted)
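The restart arithmetic above can be sketched like this. In a real job the processed count would come from the after-job routine (link count into the OCI stage) and the reject count from the reject sequential file; here both are stand-ins for illustration.

```python
import io

def count_rejects(reject_file):
    # Each rejected row is one line in the reject sequential file.
    return sum(1 for _ in reject_file)

def rows_to_skip(rows_processed, reject_file):
    # processed = rows inserted/updated/deleted by the OCI stage
    return rows_processed + count_rejects(reject_file)

# StringIO stands in for the reject file on disk.
rejects = io.StringIO("bad_row_1\nbad_row_2\n")
print(rows_to_skip(11000, rejects))  # 11002
```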
Shantanu Choudhary
I missed out a few things; again, the solution I suggested is not the only one.
The solution I have given will work if your job fails once.
But what if your job fails several times? In that case you need to get/store the counts of rejected and processed records for every run, and before re-running the job you should sum the counts from all previous runs.
Coding restartability is a bit difficult in DataStage and it should be handled carefully, looking into all the aspects.
All the best.
Shantanu Choudhary