DB Referential Lookup
|
Transformer1 --> Transformer2 --> HashFile
To accomplish this, I've built the job so that Transformer1 has no primary input link. Instead, I've created a stage variable named RowLimit with a default value of 2, and in the constraint on the output link I've specified the condition @OUTROWNUM < RowLimit. The job compiles successfully and processes a single row, as desired.
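For anyone unfamiliar with the pattern, here is a minimal sketch (plain Python, not DataStage code) of how the input-less Transformer behaves: with no input link, the stage keeps writing rows to the output link for as long as the constraint holds, and a constraint of @OUTROWNUM < RowLimit with RowLimit defaulted to 2 yields exactly one row. The function name and row contents are illustrative assumptions, not anything from the product.

```python
def inputless_transformer(row_limit=2):
    """Simulate a Transformer with no input link.

    Rows are written to the output link while the constraint
    @OUTROWNUM < RowLimit holds; @OUTROWNUM is 1 for the first
    row written, so row_limit=2 produces a single row.
    """
    rows = []
    out_row_num = 1  # stands in for @OUTROWNUM on the output link
    while out_row_num < row_limit:  # the output-link constraint
        rows.append({"row_id": out_row_num})  # illustrative column
        out_row_num += 1
    return rows

print(inputless_transformer(2))  # emits exactly one row
```

The same sketch shows why a constraint of @FALSE (mentioned below in the thread) writes nothing at all: the loop body never runs.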
I'm not sure whether this method (i.e. using a transformer without an input link) is documented anywhere and would like to know whether there is any drawback to this approach.
That method has been an accepted approach since the earliest versions of DataStage. I am not aware of it being officially documented in any of the manuals, though.
As noted, a perfectly acceptable and very cool approach. I've used it to generate calendar data, gobs of test data and sometimes (with a constraint of @FALSE) to clear a hashed file lookup that is also written to in the same job before the main input stream starts. I believe I first saw it in an old collection of "Tips and Tricks" years ago, but couldn't say for certain.
But never "officially" documented that I am aware of either.
-craig
"You can never have too many knives" -- Logan Nine Fingers
I just wanted to pass some job parameters into the columns of a Transformer so that I could pass those column values to a shared container. It looks like I'll have to use a Row Generator, then a Transformer, then the shared container; apparently it isn't possible with only a Transformer and a shared container in a parallel job.
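The Row Generator → Transformer → shared container workaround described above can be sketched as follows (plain Python standing in for the parallel-job stages; the parameter names and values are purely illustrative assumptions): the Row Generator emits one dummy row, and the Transformer maps each job parameter onto an output column before the row reaches the container.

```python
def row_generator(n=1):
    """Stand-in for a Row Generator stage: emit n empty rows."""
    return [{} for _ in range(n)]

def transformer(rows, job_params):
    """Stand-in for a Transformer: derive one column per job parameter."""
    return [{**row, **job_params} for row in rows]

# Illustrative job parameters (hypothetical names/values)
params = {"REGION": "EMEA", "BATCH_ID": "42"}

# One row carrying the parameter values, ready for the shared container
output = transformer(row_generator(1), params)
print(output)
```

This is just the data-flow idea; in the actual job the parameter references would be derivations like #REGION# in the Transformer's output columns.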
Thanks,
Chad
__________________________________________________________________
"There are three kinds of people in this world; Ones who know how to count and the others who don't know how to count !"