Hi,
I am generating primary keys with the KeyMgtGetNextValueConcurrent() routine from within a Server Container used inside a Parallel job. It has worked in the past, and I do not know what has changed to make it stop working.
Recently I have been getting the following error message in the Director log (and the job aborts):
"PKeySrvrContainer,0: dspipe_wait(2535514): Writer timed out waiting for Reader to connect."
I also got one from the other process, which, as I understand it, also runs on the conductor node since the object is a server object:
"PKeySrvrContainer,1: dspipe_wait(2416800): Writer timed out waiting for Reader to connect."
Another symptom: when the job is run by hand it works just fine, but when a sequencer runs it I see the errors described above.
What might be the problem, and what might I do to troubleshoot it/solve it?
Thanks in advance.
KeyMgtGetNextValueConcurrent(), Server Container, PX Job
Any reason why you are not using the Surrogate Key stage, which generates keys much more efficiently, or a parallel transformer counter field as described in the FAQ forum? Both methods can be passed a start value and will increment concurrently across partitions.
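To see why the transformer-counter technique stays collision-free across partitions, here is a Python sketch of the underlying arithmetic (illustrative only, not DataStage syntax): each partition interleaves its keys by stepping in increments of the partition count.

```python
# Sketch of the partition-interleaved counter arithmetic behind the
# parallel-transformer technique. Partition p of num_partitions emits
#   start + p + row_index * num_partitions
# so the streams never overlap and jointly cover every value >= start.

def partition_keys(start, partition, num_partitions, rows):
    """Keys emitted by one partition for `rows` input rows."""
    return [start + partition + i * num_partitions for i in range(rows)]

# Example: 3 partitions, start value 100, 4 rows per partition.
all_keys = sorted(
    k
    for p in range(3)
    for k in partition_keys(100, p, 3, 4)
)
print(all_keys)  # [100, 101, ..., 111] -- contiguous, no collisions
```

In DataStage the same formula is typically written with the transformer system variables for partition number and partition count.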
Certus Solutions
Blog: Tooling Around in the InfoSphere
Twitter: @vmcburney
LinkedIn:Vincent McBurney LinkedIn
vmcburney wrote:
Any reason why you are not using the surrogate key stage, which generates key much more efficiently, or a parallel transformer counter field as described in the FAQ forum. Both methods can be passed a start value and will concurrently increment across partitions.
I have considered the Surrogate Key stage, but what method would be used to pass the start value? And how would the new start value be updated for the next run? What if I had two instances of the same job running: could the SKs collide? I do not like the idea of setting ranges; that sounds like too much maintenance.
I agree, ranges are bad. You pass in the start value in one of two ways: either as a job parameter (retrieved via an operating-system script) or via a Lookup stage against the target table (a MAX select). The start value can then be set in the Surrogate Key stage's start value property as a job parameter, or in a transformer counter as a job parameter or input field (from the lookup).
You run the job normally and let the parallel engine make the instances. Don't run it as a multiple instance job as this defeats the purpose of it being parallel.
If you use the Surrogate Key stage or the counter method from the parallel counter FAQ then you get unique values across the multiple processes.
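The "max select" approach above can be sketched as follows. This is a hypothetical illustration using SQLite with made-up table and column names; in practice the value would be read from the real target table and passed into the job as a parameter.

```python
import sqlite3

# Hypothetical sketch of the "max select" start-value approach: before
# the job runs, read MAX(key) from the target table and pass max + 1 in
# as the start value. Table and column names here are illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE dim_customer (cust_key INTEGER, name TEXT)")
conn.executemany(
    "INSERT INTO dim_customer VALUES (?, ?)",
    [(1, "a"), (2, "b"), (7, "c")],
)

(max_key,) = conn.execute(
    "SELECT COALESCE(MAX(cust_key), 0) FROM dim_customer"
).fetchone()
start_value = max_key + 1  # would become the job's start-value parameter
print(start_value)  # 8
```

COALESCE handles the first run against an empty table, so the keys start at 1.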
Certus Solutions
Blog: Tooling Around in the InfoSphere
Twitter: @vmcburney
LinkedIn:Vincent McBurney LinkedIn
You can't reliably use any routine that uses variables in COMMON (see the end of Chapter 2 of the Parallel Job Developer's Guide for proof). This particular routine does use COMMON, so it is not appropriate for use in a parallel job.
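The failure mode is easy to simulate. In a parallel job each player process gets its own private copy of the routine's COMMON state, so every player hands out the same sequence of "next" values. A minimal Python sketch (not DataStage code) of that situation:

```python
# Illustration of why per-process (COMMON-style) counters collide in a
# parallel job: each player process holds an independent copy of the
# counter state, so the players generate overlapping key sequences.

class CommonCounter:
    """Stands in for a routine whose counter lives in COMMON; every
    player process effectively gets its own independent instance."""
    def __init__(self):
        self.value = 0

    def next(self):
        self.value += 1
        return self.value

# Two simulated players each key three rows.
player0, player1 = CommonCounter(), CommonCounter()
keys = [player0.next() for _ in range(3)] + [player1.next() for _ in range(3)]
print(sorted(keys))  # [1, 1, 2, 2, 3, 3] -- duplicate keys
```

The same routine run in a single server-job process never exposes this, which is why it can appear to "work in the past" and then fail once it runs in parallel.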
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
jenkinsrob wrote:
If you don't need the key value elsewhere in your ETL process and only care that it is a unique value, then you should consider using a trigger on the table into which you are inserting the data...
Trigger-based surrogate key generators are getting the cold shoulder nowadays: too much overhead on bulk inserts. It is better to use a database number generator, such as an identity column or a database sequence, if you want the database to manage it.
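For illustration, here is a hedged sketch of the identity-column idea using SQLite (where an INTEGER PRIMARY KEY is auto-assigned); a production database would use an IDENTITY column or a SEQUENCE instead, and the table here is hypothetical.

```python
import sqlite3

# Sketch of letting the database assign the surrogate key. SQLite's
# INTEGER PRIMARY KEY auto-assigns values when none is supplied; real
# databases would use IDENTITY columns or sequences for the same effect.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE fact_sales (sale_key INTEGER PRIMARY KEY, amount REAL)"
)
for amount in (10.0, 20.0, 30.0):
    # No key supplied: the database generates it, with no trigger overhead.
    conn.execute("INSERT INTO fact_sales (amount) VALUES (?)", (amount,))

keys = [
    row[0]
    for row in conn.execute("SELECT sale_key FROM fact_sales ORDER BY sale_key")
]
print(keys)  # [1, 2, 3] -- assigned by the database
```

The trade-off noted above still applies: if the ETL process needs the key value before the insert (for example, to populate fact-table foreign keys), database-assigned keys require a round trip to retrieve them.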
Certus Solutions
Blog: Tooling Around in the InfoSphere
Twitter: @vmcburney
LinkedIn:Vincent McBurney LinkedIn