Restart sequencer after abort
Hi,
Need your help
I have one sequencer with 5 job activities, and the 3rd job failed. The data volume is huge; the job aborted after 2 million records. I want to restart the sequencer from that failure position, meaning it should start from the 3rd job activity and resume loading from the 2 millionth record. We have two targets: one is DB2 and the other is Oracle.
We are using DataStage 8.1 Parallel.
Let's say the database is committing every 2000 records. Is there any way to capture the maximum committed key and load from there onwards?
As far as I know, we can do this in Informatica, but we need to create two tables and set the appropriate properties on the session and workflow. I don't know the table names.
If the session aborts, it will rerun the process in recovery mode. If the commit interval is 2000 and the job fails at 2050, recovery mode will delete those 50 records and restart from record 2001. How would we do this in DataStage?
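A minimal sketch of that recovery-mode behaviour, in generic Python rather than anything Informatica- or DataStage-specific; the helpers passed in (delete_rows_after, load_row) are assumptions for illustration only:

```python
# Sketch of commit-interval recovery: anything past the last full
# commit boundary is rolled back, then the load resumes from there.
COMMIT_INTERVAL = 2000

def recover_and_resume(rows, last_loaded_row, delete_rows_after, load_row):
    """rows: the full (static) source; last_loaded_row: count at abort time."""
    # Last boundary at which a commit is known to have completed,
    # e.g. abort at 2050 -> boundary 2000, so rows 2001..2050 are redone.
    last_commit_point = (last_loaded_row // COMMIT_INTERVAL) * COMMIT_INTERVAL
    delete_rows_after(last_commit_point)          # drop the 50 uncommitted rows
    for row_num, row in enumerate(rows, start=1):
        if row_num <= last_commit_point:          # skip already-committed rows
            continue
        load_row(row)
```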
You'll need a mechanism to record the latest commit point and then another mechanism to check it (typically via a constraint) so that you only start 'processing' rows once you are past the previous commit point. A successful job run would reset that to zero.
Of course, all this assumes a static source.
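As a rough illustration of that record/check/reset cycle (the checkpoint file path and load_row helper are made up for the sketch; in a real parallel job the skip would be a stage constraint rather than Python):

```python
# Sketch: persist the last commit point between runs and use it as a
# row-skipping constraint on restart. File path is illustrative only.
import os

CHECKPOINT_FILE = "/tmp/load_checkpoint.txt"

def read_checkpoint():
    if os.path.exists(CHECKPOINT_FILE):
        return int(open(CHECKPOINT_FILE).read().strip())
    return 0

def write_checkpoint(row_num):
    with open(CHECKPOINT_FILE, "w") as f:
        f.write(str(row_num))

def run_load(rows, load_row, commit_interval=2000):
    start_after = read_checkpoint()               # 0 on a clean first run
    for row_num, row in enumerate(rows, start=1):
        if row_num <= start_after:                # the 'constraint': skip committed rows
            continue
        load_row(row)
        if row_num % commit_interval == 0:        # record each commit point
            write_checkpoint(row_num)
    write_checkpoint(0)                           # success: reset for the next run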
-craig
"You can never have too many knives" -- Logan Nine Fingers
"You can never have too many knives" -- Logan Nine Fingers
Hi
First, if you want to restart the sequence from where it aborted, then as mentioned previously you need to know where it failed, i.e. the point of failure, so try to get this information first. In my system, every job activity has a number and we have a separate file where we store it. Every time a sequence aborts, the point of failure is noted in a text file (this will be a number, for example 1005). When we start the sequence the next time, we read this number first to get the start point, so the sequence starts from that point of failure.
Second, if you want to process from the record where it failed, say 2 million, then you should have a mechanism in your job to capture the last key value (we use the last key value), or if you are using dates, then you need to capture the maximum or latest date from your target. Now if you start your job, it will pick up from the point it left off.
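A rough sketch of that restart-point file, assuming numbered job activities; the file path, run_job callable, and error handling are illustrative, not DataStage features:

```python
# Sketch: number each job activity, persist the failed activity's
# number to a text file, and skip completed activities on restart.
import os

RESTART_FILE = "/tmp/seq_restart_point.txt"

def run_sequence(activities, run_job):
    """activities: ordered list of (number, job_name); run_job returns True on success."""
    start_from = 0
    if os.path.exists(RESTART_FILE):
        start_from = int(open(RESTART_FILE).read().strip())  # e.g. 1005
    for number, job_name in activities:
        if number < start_from:
            continue                         # already completed in a prior run
        if not run_job(job_name):
            with open(RESTART_FILE, "w") as f:
                f.write(str(number))         # note the point of failure
            raise RuntimeError(f"{job_name} failed; restart point {number} saved")
    if os.path.exists(RESTART_FILE):
        os.remove(RESTART_FILE)              # clean run: clear the restart point
```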
Thanks
Karthick
You don't need to 'capture the last key' but rather the last record number committed. Your source is static, correct? Note that running on multiple nodes will complicate this.
PS. 'Key' just means any column(s) that uniquely identify a record.
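To see why multiple nodes complicate a record-number checkpoint, here is a toy illustration (the round-robin partitioner is a stand-in, not a DataStage API): each node sees only its own local row counter, so a single global "last committed row" no longer maps cleanly onto what each node has processed.

```python
# Toy illustration: on N nodes, row k of the source lands on node
# (k-1) % N with a *local* row number, so one shared commit point
# would need to become one checkpoint per partition.
def partition(rows, num_nodes):
    """Round-robin partitioning, as a stand-in for a real partitioner."""
    parts = [[] for _ in range(num_nodes)]
    for global_num, row in enumerate(rows, start=1):
        parts[(global_num - 1) % num_nodes].append((global_num, row))
    return parts

rows = [f"rec{i}" for i in range(1, 11)]
for node, part in enumerate(partition(rows, 2)):
    print(f"node {node}: local rows 1..{len(part)}, "
          f"global rows {[g for g, _ in part]}")
```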
-craig
"You can never have too many knives" -- Logan Nine Fingers
"You can never have too many knives" -- Logan Nine Fingers