
recovery option..

Posted: Fri Mar 18, 2005 11:44 pm
by rajiivnb
Hi Group,

If I am running a job with 2 lakh (200,000) records and it suddenly fails after 1 lakh (100,000) records, is there any way to account for the records that have already been loaded? That is, if I run the same job a second time, it should start from the last loaded record instead of starting again from the beginning. Even if I check at the transaction level, how can I find the exact record from which I should resume loading? Assume the job is an initial load, so I don't want a target lookup again...

Can anyone suggest a solution for this?

Thanks in advance.

Re: recovery option..

Posted: Sat Mar 19, 2005 4:36 am
by ogmios
It's not built into DataStage as such; you have to implement the functionality yourself.

For example, if the records you move contain a (rising) surrogate id, you can query the target for the maximum surrogate id already loaded and integrate that into the original job.
Before executing the real job, you check the highest surrogate id in the target, and the load job only loads records with surrogate ids larger than that maximum.
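
Roughly, that pre-load check could look like the sketch below (Python, with the built-in sqlite3 module standing in for whatever the real target database is; the names target_table and surrogate_id are made up for illustration):

    import sqlite3

    # sqlite3 is only a stand-in here for the real target database.
    conn = sqlite3.connect(":memory:")
    cur = conn.cursor()
    cur.execute("CREATE TABLE target_table (surrogate_id INTEGER, payload TEXT)")
    cur.executemany("INSERT INTO target_table VALUES (?, ?)",
                    [(i, "row %d" % i) for i in range(1, 100001)])  # pretend 1 lakh rows already loaded

    # Find the highest surrogate id already in the target; COALESCE covers an empty table.
    cur.execute("SELECT COALESCE(MAX(surrogate_id), 0) FROM target_table")
    max_loaded = cur.fetchone()[0]

    # The restarted load then selects only rows beyond that point, e.g.
    #   SELECT ... FROM source WHERE surrogate_id > :max_loaded
    print("Resume loading from surrogate_id >", max_loaded)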

This is just an example; a lot depends on the data you work with. If, for example, you load from files into a database, you could split your input into several files and move each file away once it has been processed properly.
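
A rough sketch of that file-splitting idea (all file and directory names are illustrative, not anything DataStage provides):

    import os

    CHUNK_SIZE = 50000  # records per chunk file

    def split_input(path):
        """Cut one large input file into chunk_0000.dat, chunk_0001.dat, ..."""
        part, buf = 0, []
        with open(path) as src:
            for line in src:
                buf.append(line)
                if len(buf) == CHUNK_SIZE:
                    write_chunk(part, buf)
                    part, buf = part + 1, []
        if buf:
            write_chunk(part, buf)  # remainder

    def write_chunk(part, lines):
        with open("chunk_%04d.dat" % part, "w") as out:
            out.writelines(lines)

    def mark_done(chunk_path):
        """Move a chunk out of the way only after it has loaded cleanly."""
        os.makedirs("done", exist_ok=True)
        os.rename(chunk_path, os.path.join("done", os.path.basename(chunk_path)))

    # On a restart, only the chunk files still sitting in the working
    # directory (i.e. not yet moved to done/) need to be loaded again.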

Ogmios

Posted: Sun Mar 20, 2005 5:43 pm
by vmcburney
Whatever restart method you try to implement will probably be easier if you separate your load job from your transformation job. Create load-ready files and build your database rollback/restart into the load jobs.

- Roll back the table:
* Delete the rows you added.
OR Truncate the table.
OR Dump the table to a file before the load, truncate it after a failure, and bulk load it back in.
OR Do a database rollback of the table.
- Restart from where you left off:
* Retrieve the row count by counting the rows processed in the DataStage job details or counting the new rows in the destination table.
* Skip over the first x rows using a job parameter that holds the skip count and a transformer constraint.
OR Split your source file, putting the unprocessed rows into a new file. For example, calculate the number of unprocessed rows and use the Unix tail command to put them into a new file (see the sketch below).
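
A minimal sketch of that skip-and-restart step, assuming the already-loaded row count is known and the source is a plain text file; the names input.dat and remaining.dat are illustrative:

    # Equivalent Unix one-liner: tail -n +100001 input.dat > remaining.dat
    rows_already_loaded = 100000  # e.g. from the job details or a row count on the target

    with open("input.dat") as src, open("remaining.dat", "w") as out:
        for rownum, line in enumerate(src, start=1):
            if rownum > rows_already_loaded:  # pass through only the unprocessed rows
                out.write(line)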