
KeyMgtGetNextValueConcurrent(), Server Container, PX Job

Posted: Thu Nov 17, 2005 2:18 pm
by jmessiha
Hi,

I am generating primary keys for records with the KeyMgtGetNextValueConcurrent() routine from within a Server Container that I am using inside a Parallel job. It has worked in the past and I do not know what has changed to make it stop working now.

Recently I have been getting the following error message in the Director log (and the job aborts):

"PKeySrvrContainer,0: dspipe_wait(2535514): Writer timed out waiting for Reader to connect."

I also got one from the other process which, as I understand it, also runs on the conductor node since the object is a server object:

"PKeySrvrContainer,1: dspipe_wait(2416800): Writer timed out waiting for Reader to connect."

Another thing that seems to be happening is that when the job is run by hand it works just fine, but when a sequencer runs it, I see the symptoms described above.

What might be the problem, and how can I troubleshoot and solve it?

Thanks in advance.

Posted: Thu Nov 17, 2005 3:54 pm
by vmcburney
Any reason why you are not using the Surrogate Key stage, which generates keys much more efficiently, or a parallel transformer counter field as described in the FAQ forum? Both methods can be passed a start value and will increment concurrently across partitions.
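
For the counter method, the usual trick is interleaving: each partition starts at a different offset and steps by the number of partitions, so no two partitions can ever produce the same value. Here is a minimal sketch of that arithmetic in Python rather than DataStage (the names start_value, num_partitions and so on are illustrative; in a real transformer derivation the inputs would come from system variables such as @PARTITIONNUM, @NUMPARTITIONS and @INROWNUM):

    # Sketch of a partition-interleaved counter (illustrative, not DataStage code).
    # key = start_value + (row_number - 1) * num_partitions + partition_number
    def interleaved_key(start_value, partition_number, row_number, num_partitions):
        # row_number is 1-based within its partition; partition_number is 0-based
        return start_value + (row_number - 1) * num_partitions + partition_number

    start, partitions = 1000, 4
    keys = [interleaved_key(start, p, r, partitions)
            for p in range(partitions)      # each partition counts independently
            for r in range(1, 6)]           # five rows per partition
    assert len(keys) == len(set(keys))      # no collisions across partitions
    print(sorted(keys))                     # 1000 .. 1019, gap-free in this case

The point is that uniqueness falls out of the arithmetic alone; the partitions never need to coordinate with each other at run time.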

Posted: Thu Nov 17, 2005 5:20 pm
by jmessiha
vmcburney wrote: Any reason why you are not using the Surrogate Key stage, which generates keys much more efficiently, or a parallel transformer counter field as described in the FAQ forum? Both methods can be passed a start value and will increment concurrently across partitions.
I have considered the Surrogate Key stage, but what method would be used to pass the start value? And how would the new start value be updated for the next run? What if I had 2 instances of the same job running... could the SKs collide? I do not like the idea of setting ranges; that sounds like too much maintenance...

Posted: Thu Nov 17, 2005 8:38 pm
by vmcburney
I agree, ranges are bad. You pass in the start value in one of two ways: either as a job parameter (where it is retrieved via an operating system script) or via a lookup stage against the target table (as a max select); see the sketch below. The start value can then be set in the Surrogate Key stage's start value property as a job parameter, or in a transformer counter as a job parameter or input field (from the lookup).
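
To make the max-select idea concrete, here is a minimal sketch in Python against an in-memory SQLite table; the table and column names (dim_customer, customer_key) are purely illustrative. The same SELECT MAX(...) could equally run in an operating system script that feeds the result to the job as a parameter:

    # Sketch: seed the next run's start value from the target table's max key.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE dim_customer (customer_key INTEGER, name TEXT)")
    conn.executemany("INSERT INTO dim_customer VALUES (?, ?)",
                     [(1, "a"), (2, "b"), (3, "c")])

    # Next key starts one past the current maximum (0 if the table is empty).
    (max_key,) = conn.execute(
        "SELECT COALESCE(MAX(customer_key), 0) FROM dim_customer").fetchone()
    start_value = max_key + 1
    print(start_value)  # 4 -> passed into the job as the start-value parameter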

You run the job normally and let the parallel engine create the instances. Don't run it as a multiple-instance job, as this defeats the purpose of it being parallel.

If you use the Surrogate Key stage or the counter method from the parallel counter FAQ, you get unique values across the multiple processes.

Posted: Thu Nov 17, 2005 8:54 pm
by jenkinsrob
If you don't need the key value elsewhere in your ETL process and only care that it is a unique value, then you should consider using a trigger on the table into which you are inserting the data...
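
A minimal sketch of what such a trigger might look like, written in SQLite syntax (driven from Python) purely so it can be run; a production warehouse would more likely use something like an Oracle sequence in a BEFORE INSERT trigger, and the fact_sales/sales_key names here are illustrative:

    # Sketch: the table assigns its own key; the ETL job never supplies one.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE fact_sales (sales_key INTEGER, amount REAL);
    CREATE TRIGGER assign_sales_key AFTER INSERT ON fact_sales
    WHEN NEW.sales_key IS NULL
    BEGIN
      UPDATE fact_sales
         SET sales_key = (SELECT COALESCE(MAX(sales_key), 0) + 1 FROM fact_sales)
       WHERE rowid = NEW.rowid;
    END;
    """)
    conn.execute("INSERT INTO fact_sales (amount) VALUES (9.99)")
    conn.execute("INSERT INTO fact_sales (amount) VALUES (5.00)")
    print(conn.execute("SELECT sales_key, amount FROM fact_sales").fetchall())
    # [(1, 9.99), (2, 5.0)] -- keys assigned by the trigger at insert time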

Posted: Thu Nov 17, 2005 9:15 pm
by ray.wurlod
You can't reliably use any routine that uses variables in COMMON (see the end of Chapter 2 of the Parallel Job Developer's Guide for proof). This particular routine does use COMMON, so it is not appropriate for use in a parallel job.
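
The underlying problem is that COMMON is per-process state, and a parallel job runs multiple processes: each one gets its own copy of the counter and can hand out the same values. A minimal sketch of the effect in Python standing in for BASIC (the starting value 100 is illustrative):

    # Sketch: two processes each increment their own private counter copy.
    from multiprocessing import Pool

    def next_values(_worker_id, count=3):
        counter = 100                 # stands in for a counter kept in COMMON
        out = []
        for _ in range(count):
            counter += 1              # each process increments its own copy
            out.append(counter)
        return out

    if __name__ == "__main__":
        with Pool(2) as pool:
            a, b = pool.map(next_values, [0, 1])
        print(a, b)                   # [101, 102, 103] [101, 102, 103]
        print(set(a) & set(b))        # duplicate keys: {101, 102, 103}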

Posted: Thu Nov 17, 2005 9:53 pm
by vmcburney
jenkinsrob wrote: If you don't need the key value elsewhere in your ETL process and only care that it is a unique value, then you should consider using a trigger on the table into which you are inserting the data...
Trigger-based surrogate key generators are getting the cold shoulder nowadays: too much overhead on bulk inserts. Better to use a database number generator, such as an identity field or a database sequence, if you want the database to manage it.
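
For comparison with the trigger sketch above, here is the identity-style approach as another runnable SQLite/Python sketch; SQLite's INTEGER PRIMARY KEY stands in for an identity column or sequence, and the dim_product names are illustrative:

    # Sketch: the database assigns keys natively, no trigger required.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute(
        "CREATE TABLE dim_product (product_key INTEGER PRIMARY KEY, name TEXT)")
    conn.executemany("INSERT INTO dim_product (name) VALUES (?)",
                     [("widget",), ("gadget",)])
    print(conn.execute("SELECT product_key, name FROM dim_product").fetchall())
    # [(1, 'widget'), (2, 'gadget')] -- keys assigned at insert, no per-row
    # trigger overhead, which is why this scales better for bulk loads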