
CheckPoints in Sequencer

Posted: Thu Nov 02, 2006 1:17 am
by prmuttireddy
Hi,

I have 4 jobs in a sequencer. While it was running, the third job aborted. By the time it aborted, it had already loaded 3000 records into the target. I fixed that job and restarted the sequencer. I want it to load from record number 3001 onwards, not from the start. I also enabled the checkpoint option in the sequencer, but it loads from the beginning again and rejects the records that were already loaded into the target. I want it to load from 3001 onwards.

How can I resolve this? Please give me a solution.

Regards,
Reddy.

Posted: Thu Nov 02, 2006 6:21 am
by kduke
It is not that smart. It just reruns the one job which failed, and it will only do that once; if it fails twice it gets confused. You may need to redesign your jobs to handle this situation, or manually delete the records that were already loaded.

Re: CheckPoints in Sequencer

Posted: Thu Nov 02, 2006 9:05 am
by gateleys
prmuttireddy wrote: By the time it aborted, it had already loaded 3000 records into the target.
So your 'Rows per transaction' was set to a factor of 3000. The DataStage checkpoint does NOT roll back rows that have already been committed. But, as Kim pointed out, it skips jobs that finished successfully (or with warnings) and reruns those that failed. To start loading from the 3001st row (assuming the abort resolution did not change any of your values), put a constraint in your Transformer, prior to the target stage: @INROWNUM > 3000. But of course, this is just a quick fix.
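
For illustration only, the same quick fix can be made a little more reusable by replacing the hard-coded 3000 with a job parameter. SkipCount here is a hypothetical parameter name (not something from the post) holding the number of rows already committed, passed in at restart time:

    * Transformer constraint on the link to the target stage (sketch only).
    * SkipCount is an assumed job parameter; pass 3000 for this particular restart,
    * or 0 for a clean run.
    @INROWNUM > SkipCount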

gateleys

Posted: Thu Nov 02, 2006 9:28 am
by DSguru2B
Or, to simplify your life, set the transaction size to 0, which means all or nothing. In other words, if the job fails, the inserted rows are rolled back. That way, every time you restart the job it starts loading afresh.

Posted: Tue Nov 07, 2006 12:28 am
by prmuttireddy
Thanks for your reply. I want a permanent fix.

Is there any solution to load the records from 3001 onwards?
As Chuck Reid said, we can start from the beginning, but I want to start from where the job failed.

Regards,
Reddy.

Posted: Tue Nov 07, 2006 4:34 am
by Jothy Ganesan
There is no direct way, I suppose, to start exactly from the record where it failed previously. You have to handle that yourself through programming.
Or, as pointed out, set your transaction size to 0.

Posted: Tue Nov 07, 2006 7:27 am
by DSguru2B
Yes, you can do it, but there are limitations. If your database is partitioned, it becomes even more complicated.
Do a select max(id) from the table and store the result in a hashed file keyed on a dummy key.
Reference that key in your Transformer and specify the constraint
In.Key > Max.Key
where In.Key is your input key and Max.Key is the maximum key value stored in the hashed file. This way, if 3000 records got loaded and your input has keys from 1 to 6000, then the next time the job runs after the failure, Max.Key will be 3000 and the Transformer will only allow key values of 3001 and above to pass through.
Once again, this will only work for a non-partitioned database.
But if your data size is rather small, just set the transaction size to 0 and be done with it.
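
To make that concrete, the constraint could be written as below. This is a sketch only, assuming the input link is called In and the hashed-file reference link Max (as in the naming above), with a guard for the very first load when the lookup returns nothing:

    * Constraint on the output link to the target (server job Transformer, sketch).
    * Max is the reference link to the hashed file holding the stored SELECT MAX(id);
    * if the lookup fails or the value is NULL (first ever load), let every row through.
    If Max.NOTFOUND Or IsNull(Max.Key) Then @TRUE Else In.Key > Max.Key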

Posted: Tue Nov 07, 2006 7:43 am
by chulett
Thanks Chuck. :wink:

You can also consider something like a 'check-point' system - one that records the most recent commit point in a flat file, for instance, as the job runs. At completion of the job the file is deleted.

Before starting the job, the file is checked and - if it exists - that number is passed to the load job to be used in a constraint, as Chuck mentioned. When the file cannot be found, pass in a zero for that parameter value. It is pretty straightforward to set something like that up via a Sequence job.

I don't believe this method will be 'complicated' by a partitioned database, as there is no 'max' calculation involved.
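
As a rough sketch of the 'check the file before starting' step, a small DataStage BASIC transform function could do it. The routine name GetCheckpoint, its argument and the one-line file layout are assumptions for illustration, not anything prescribed above:

    * GetCheckpoint(ChkFilePath) - transform function called from the Sequence job.
    * Reads a one-line checkpoint file maintained by the load job and returns the
    * last committed row count, or 0 when the file does not exist.
          Ans = 0
          OpenSeq ChkFilePath To ChkFile Then
             ReadSeq LastRow From ChkFile Then
                If Num(LastRow) Then Ans = LastRow
             End
             CloseSeq ChkFile
          End Else
             * No checkpoint file - the previous run finished cleanly, start from zero
             Ans = 0
          End

The Sequence job would pass the returned value to the load job as a parameter, the load job would use it in its Transformer constraint (for example @INROWNUM > that value), and on a clean finish the file would be deleted as described above.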

Posted: Tue Nov 07, 2006 10:53 am
by kris007
DSguru2B wrote: Yes, you can do it, but there are limitations. If your database is partitioned, it becomes even more complicated.
Can you throw some more light on why it becomes more complicated? Doesn't it return the max value across all the partitions?

Posted: Tue Nov 07, 2006 10:57 pm
by tagnihotri
Although I was surprised that the approach was said not to apply to a partitioned database, I tried it on mine (a partitioned DB2 database) and it worked fine!

Can you please detail what kind of complexity you are talking about here?
DSguru2B wrote: Yes, you can do it, but there are limitations. If your database is partitioned, it becomes even more complicated.

To Load from 3001

Posted: Tue Nov 07, 2006 11:52 pm
by suresh.narasimha
Hi,

If you are using a bulk load stage, you can specify the Restart Count property as 3001 and the Load Mode as INSERT.
This will start loading from the 3001st row.

Regards,
Suresh N

Posted: Fri Nov 10, 2006 1:10 am
by prmuttireddy
Hi Suresh,

I am using the DRS stage. How do I do this in that stage? Could you please explain it clearly?

Regards,
Reddy.

Posted: Fri Nov 10, 2006 7:22 am
by DSguru2B
If you tested it out and it works with partitioned databases, then that's really good, as it clears up my misunderstanding too. I was thinking in terms of data being passed through different channels to the database table. It was a blurry notion in my mind, which is why I warned the OP about it at the beginning before proposing my method.
Thanks for clearing that up for me.
Regards,

Posted: Fri Nov 10, 2006 11:14 am
by Edwink
Reddy,
have a change-detection process run before loading those records. Hopefully this will resolve the issue by loading the records from 3001 onwards.