Hi,
The routine KeyMgtGetNextValueConcurrent('SSA') is generating duplicate ids (keys).
I have a simple job: read from a file, generate this key in a Transformer, and insert through an Oracle stage.
It is insert only.
I had three jobs running in parallel. One job aborted because of the unique constraint on the key. I could see that the same unique id which job 1 was trying to insert had already been used by job 2, and there was a record with that key in the table.
What could be the reason for this.
KeyMgtGetNextValueConcurrent generating duplicates
Re: KeyMgtGetNextValueConcurrent generating duplicates
I think the initial calls to the function should not happen in parallel, even though your jobs run in parallel at a later stage. Perhaps adding a small delay between the start-ups of the jobs would solve this.
I think I found the problem.
All the jobs aborted at the same time. I re-ran one job first and it completed. I then started the second job, and it aborted with the same error, at the same key value where the first job had initially aborted.
So I checked the logs and found that there was one more independent job running at that time which does the unique-id management:
it reads the max(key) value from the table and writes it into the hash file where the max ids are stored by this routine, KeyMgtGetNextValueConcurrent('SSA').
So there is a clash between the max ids generated.
I am not sure why this job was created (of course it was created during the initial start of the project). Is anyone aware of why it could have been designed this way?
I can remove this job, but I am trying to find out whether there could be any impact.
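The clash described above can be sketched in a few lines. This is a hypothetical Python stand-in, not the actual DataStage routine: the dict plays the role of the hashed file holding the next id, `next_value` mimics the routine's read-increment-write cycle, and the "max-id job" overwrites the counter from the table. If a running job has already drawn ids that are not yet committed to the table, the overwrite rolls the counter back and the next job is handed a duplicate.

```python
import threading

# Hypothetical stand-ins (names are assumptions, not DataStage APIs):
hash_file = {"SSA": 1}   # the "hashed file" holding the next id per sequence
lock = threading.Lock()
table_keys = set()       # stands in for the key column of the Oracle table

def next_value(seq):
    """Mimic a concurrent key routine: read, increment, write back under a lock."""
    with lock:
        value = hash_file[seq]
        hash_file[seq] = value + 1
        return value

# Job 1 draws ids 1..3 but has only committed rows 1 and 2 to the table so far.
ids_job1 = [next_value("SSA") for _ in range(3)]
table_keys.update(ids_job1[:2])            # id 3 is in flight, not in the table

# The independent "max-id" job now resets the counter from the table:
hash_file["SSA"] = max(table_keys) + 1     # rolls the counter back to 3

# Job 2 is handed 3 -- the same id job 1 is about to insert.
duplicate = next_value("SSA")
assert duplicate in ids_job1
```

The race is between the routine's private counter and the table, so it only bites when the refresher job runs while loads are in flight, which matches the aborts all happening at the same time.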
It's a better solution for single-threaded, non-concurrent loads: get the starting value once, increment it over the course of the job and put the new starting value back at the end. Less I/O and you don't have to "lose" values in an abort/restart situation.
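A minimal sketch of that single-threaded pattern, with illustrative Python names rather than the real DataStage calls: one read of the starting value at job start, keys assigned in memory, and one write of the new starting value at job end.

```python
# Hypothetical sketch of the single-threaded pattern (names are illustrative):
hash_file = {"SSA": 100}  # stands in for the hashed file holding the next id

def run_batch_load(seq, rows):
    """Fetch the starting value once, assign keys in memory,
    and write the new starting value back only at the end."""
    start = hash_file[seq]                    # one read at job start
    keyed = [(start + i, row) for i, row in enumerate(rows)]
    hash_file[seq] = start + len(rows)        # one write at job end
    return keyed

loaded = run_batch_load("SSA", ["a", "b", "c"])
assert [k for k, _ in loaded] == [100, 101, 102]
```

With only one reader/writer there is no race, and an abort before the final write simply reuses the old starting value on restart, so no values are lost.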
-craig
"You can never have too many knives" -- Logan Nine Fingers
"You can never have too many knives" -- Logan Nine Fingers