Hello,
I was wondering if anyone could point me in the right direction to get information on DataStage functionality to automatically restart jobs/ set checkpoints for recovery if a job fails midstream.
Your thoughts would be appreciated.
TIA.
DataStage Restart Processing / Checkpoints and Recovery
Moderators: chulett, rschirm, roy
-
- Participant
- Posts: 15
- Joined: Mon Jul 20, 2009 7:16 am
chulett wrote: There is no "automatic" functionality if you are talking about restarts at the job level, i.e. inside a job. Sequence jobs can automate restarts between jobs but I suspect that's not wha ...
Yes, I saw that job sequences can create checkpoints to restart jobs, but at a more aggregated level: how are you handling restart/recovery points if, say, you had to load a million rows' worth of data and it fails midway? Your experiences are appreciated.
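For anyone new to the pattern: DataStage sequence checkpoints are enabled in the Sequence job properties rather than written by hand, but the underlying idea (record each completed job so a restart skips it) can be sketched generically. Everything below is illustrative, not DataStage API; the file name and step names are made up for the example:

```python
import json
import os

CHECKPOINT = "sequence.checkpoint"  # illustrative marker file, not a DataStage artifact

def load_done():
    """Return the set of step names a previous (failed) run already completed."""
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as f:
            return set(json.load(f))
    return set()

def mark_done(done, step):
    """Record a completed step so a restart can skip it."""
    done.add(step)
    with open(CHECKPOINT, "w") as f:
        json.dump(sorted(done), f)

def run_sequence(steps):
    """Run (name, callable) steps, checkpointing after each one."""
    done = load_done()
    for name, job in steps:
        if name in done:
            continue            # checkpoint says this job already finished
        job()                   # may raise; the checkpoint file survives the failure
        mark_done(done, name)
    if os.path.exists(CHECKPOINT):
        os.remove(CHECKPOINT)   # clean run: clear the checkpoints
```

On a rerun after a failure, only the steps not yet recorded in the checkpoint file execute, which is essentially what the sequence-level restart option gives you between jobs.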
chulett wrote: If at all possible, I prefer an "all or nothing" style of load where commits are only done once at the end. That means a restart is simply a restart and something that can be handled at the job contro ...
An all-or-nothing style seems like it would take longer, especially if you are handling millions of rows. Is there a way to have it commit x rows at a time, then keep track of where you are in the insert stream via a temporary data set? (Let me know if this is ridiculous talk; I'm still very new to DS.)
Thanks!
I'm going to disagree with your "longer" comment unless you specifically mean in a recovery situation where it may take longer because you have to load "everything" again and you could be waiting for the rollback to complete. True, but I'll take that over any kind of a "manual intervention required" solution any day, especially when the failure is at 2 o'clock in the morning.
Of course, you have full control over the commit interval. If you want to track them, leave breadcrumbs that the job could leverage and know that if it finds a leftover marker that it has that many records to skip - that's fine and lots of people do that. Just make sure you're loading from a static source if you're going to play that game.
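The breadcrumb approach described above can be sketched outside DataStage. A minimal illustration using SQLite and a marker file follows; the marker file name, table name, and commit interval are all assumptions for the example, not anything DataStage-specific, and the key caveat from the post still applies: the source must be static between the failed run and the restart.

```python
import os
import sqlite3

MARKER = "load.marker"          # illustrative breadcrumb file
COMMIT_INTERVAL = 1000          # commit every N rows

def load_rows(rows, db_path):
    """Load rows with periodic commits, leaving a breadcrumb of committed rows."""
    # How many rows did a previous failed run already commit?
    skip = int(open(MARKER).read()) if os.path.exists(MARKER) else 0
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS t (v)")
    loaded = 0
    for i, row in enumerate(rows):
        if i < skip:
            continue            # already committed by the failed run; skip it
        conn.execute("INSERT INTO t VALUES (?)", (row,))
        loaded += 1
        if (i + 1) % COMMIT_INTERVAL == 0:
            conn.commit()
            with open(MARKER, "w") as f:
                f.write(str(i + 1))   # breadcrumb: rows safely committed so far
    conn.commit()
    conn.close()
    if os.path.exists(MARKER):
        os.remove(MARKER)       # clean finish: no breadcrumb left behind
    return loaded
```

If the job dies mid-stream, the marker holds the count of committed rows, and the next run skips exactly that many before resuming inserts. A leftover marker at startup is the signal that a recovery, not a fresh load, is in progress.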
-craig
"You can never have too many knives" -- Logan Nine Fingers
chulett wrote: I'm going to disagree with your "longer" comment unless you specifically mean in a recovery situation where it may take longer because you have to load "everything" again and you could be waiting for the rollback to complete. True, but I'll take that over any kind of a "manual intervention required" solution any day, especially when the failure is at 2 o'clock in the morning.
Of course, you have full control over the commit interval. If you want to track them, leave breadcrumbs that the job could leverage and know that if it finds a leftover marker that it has that many records to skip - that's fine and lots of people do that. Just make sure you're loading from a static source if you're going to play that game.
Thanks for your comments, Craig!