hi,
i know how a sequencer restarts if it failed after loading some records, but
how do you rerun a job that failed after loading some records?
Ex: I have 1000 records in source and the job aborted after loading 600 records.
How do I load the remaining 400 records?
I faced this one in an interview.
Normally if a job is aborted I don't find any records in the target (I mean the record count is zero).
Re: how to run a job if job failed after loading some records
If the job aborts after loading 600 records out of the 1000 in the source file, first verify that those 600 records were actually loaded.
Then, in the output database stage (whatever it is), use UPSERT (Update then Insert) in place of a plain Insert, and do not forget to specify the KEY fields.
That will do the work: it checks whether the key already exists and, if so, updates the remaining fields (which will be identical, since the source file is the same); after the updates are done, it inserts only the remaining records.
Hope this helps.
Thanks
Abhijit
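The update-then-insert idea above can be illustrated outside DataStage with a small SQLite sketch (the table, column names, and row counts are made up for the example; SQLite 3.24+ is assumed for `ON CONFLICT`):

```python
import sqlite3

# Demonstrates why an upsert makes a reload idempotent: rows already
# loaded are updated (to identical values), new rows are inserted.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE target (id INTEGER PRIMARY KEY, val TEXT)")

def load(rows):
    con.executemany(
        "INSERT INTO target (id, val) VALUES (?, ?) "
        "ON CONFLICT(id) DO UPDATE SET val = excluded.val",
        rows,
    )

load([(i, f"v{i}") for i in range(1, 601)])    # first run aborts after 600 rows
load([(i, f"v{i}") for i in range(1, 1001)])   # rerun reloads all 1000 safely
print(con.execute("SELECT COUNT(*) FROM target").fetchone()[0])  # 1000
```

A plain Insert would fail (or duplicate rows) on the rerun; the upsert keyed on `id` makes the second load safe.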
Re: how to run a job if job failed after loading some records
ds_dwh wrote: normally if job is aborted i didn't find any records in target (i mean record count is zero)
That is because you might have had the transaction limit set to '0' - with transaction size 0 the job commits only at the end, so an abort rolls back everything that was loaded.
Vivek Gadwal
Experience is what you get when you didn't get what you wanted
First question: if my target is a database then it is fine... if my target is a sequential file, then what to do?
My second question: suppose the input is 10 million records and the job failed after 9 million... when I run the job again it has to update 9 million records, right? Updating 9 million records is very costly, right? Instead of this, is there any option to fetch only the remaining 1 million records from the source?
The transaction limit has to be set to '0' - where is that set, and what is it for? Can you explain it briefly?
Thanks in advance.
Kannan
If your rows are in key order, you could add a parameter to your output constraint and set its value so that you only output rows whose key is greater than the last key value already output successfully.
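That constraint idea can be sketched as a simple filter (the row layout and the last-loaded key value are hypothetical; in a real job the value would come in as a job parameter):

```python
# Sketch of key-based restart: on rerun, only pass rows whose key exceeds
# the last key known to have been output successfully.
def rows_to_reload(rows, last_loaded_key):
    """Mimics an output constraint of the form 'key > LAST_KEY'."""
    return [r for r in rows if r["key"] > last_loaded_key]

source = [{"key": k, "val": f"rec{k}"} for k in range(1, 11)]  # 10 source rows
remaining = rows_to_reload(source, last_loaded_key=6)          # rows 7..10 only
print(len(remaining))  # 4
```

Note this avoids the cost of re-touching already-loaded rows entirely, unlike the upsert approach, but it depends on the rows arriving in key order.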
For a target sequential file, couldn't you use the append option to add more rows to the existing file? Check your Parallel Job Developer's Guide for the Sequential File stage.
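Outside DataStage, append behaviour is just the difference between write and append file modes; a minimal sketch (file name and row counts are illustrative):

```python
import os
import tempfile

# Appending adds rows to an existing file instead of truncating it,
# which is what the Sequential File stage's append option does on a rerun.
path = os.path.join(tempfile.mkdtemp(), "target.txt")
with open(path, "w") as f:                 # first (partial) run wrote 600 rows
    f.writelines(f"row{i}\n" for i in range(600))
with open(path, "a") as f:                 # restart appends the remaining 400
    f.writelines(f"row{i}\n" for i in range(600, 1000))
with open(path) as f:
    print(sum(1 for _ in f))  # 1000
```

You would still need the upstream logic (e.g. the key constraint above) to make sure only the unloaded rows reach the append, or the file would end up with duplicates.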
kannan_ayyanar wrote: First question: if my target is a database then it is fine... if my target is a sequential file, then what to do? My second question: ... is there any option to fetch only the remaining 1 million records from the source?
Hi Kannan,
Every project has restartability logic according to its business needs. You can use some audit columns and, while mapping, populate them from DataStage transforms/macros such as the job start time (e.g. DSJobStartTimestamp). Then use those audit columns in the before- and after-SQL of your job according to your needs.
Pacific
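The audit-column idea can be sketched as a checkpoint table read by the job's before-SQL and written by its after-SQL (the table, column names, and job name here are all hypothetical):

```python
import sqlite3

# Sketch of audit-column restartability: persist the last key committed,
# and on restart read it back so only unprocessed rows are selected.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE audit (job TEXT PRIMARY KEY, last_key INTEGER)")

def last_checkpoint(job):
    """What the before-SQL would fetch to parameterise the source query."""
    row = con.execute("SELECT last_key FROM audit WHERE job = ?", (job,)).fetchone()
    return row[0] if row else 0

def save_checkpoint(job, key):
    """What the after-SQL would write once a batch commits."""
    con.execute(
        "INSERT INTO audit (job, last_key) VALUES (?, ?) "
        "ON CONFLICT(job) DO UPDATE SET last_key = excluded.last_key",
        (job, key),
    )

save_checkpoint("load_job", 600)            # job aborted after committing key 600
start_after = last_checkpoint("load_job")   # restart resumes from key 601 onward
print(start_after)  # 600
```

The source query on restart would then select only rows with key greater than the checkpoint, answering Kannan's concern about re-updating millions of already-loaded rows.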