DSSETPARAM

Post questions here relative to DataStage Server Edition for such areas as Server job design, DS Basic, Routines, Job Sequences, etc.

Moderators: chulett, rschirm, roy

butlerhd
Participant
Posts: 7
Joined: Wed Feb 19, 2003 11:55 pm
Location: USA

DSSETPARAM

Post by butlerhd »

Has anyone been able to use DSSETPARAM from within a job? I want to set the value of a job parameter based on the result of a data transformation within a Transformer stage, and then use that parameter as the job's output file name. For example, if a job performs a lookup on an Oracle table, the value it retrieves needs to be used in the name of the output file. Can this be done with a parameter? I would like to do this from within the job, and avoid external job scripting and job control.

Thanks
raju_chvr
Premium Member
Posts: 165
Joined: Sat Sep 27, 2003 9:19 am
Location: USA

Re: DSSETPARAM

Post by raju_chvr »

I don't think you can use DSSETPARAM within the job. Parameters are among the first things resolved when the job starts.

Look at one of Ray's postings where he mentions the order of execution of variables in the job.
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Re: DSSETPARAM

Post by chulett »

butlerhd wrote:Has anyone been able to use DSSETPARAM from within a job?
You can't, period. You can only affect the value of a Job Parameter before the job starts; once it does, the value is cast in stone.

However, that doesn't mean you can't do what you want to do - but you will need to break this into two jobs. Use the first to get the value that you need and then pass it to a second job that does the actual work as a Job Parameter to be used as the output filename. A simple Sequence job can automate the passing of the parameter. One way to skin that particular cat, anyway.
-craig

"You can never have too many knives" -- Logan Nine Fingers
butlerhd
Participant
Posts: 7
Joined: Wed Feb 19, 2003 11:55 pm
Location: USA

Re: DSSETPARAM

Post by butlerhd »

Thanks for the swift reply, but I am still unclear how to set the parameter value in the first of the two proposed jobs. If I break the job into two, and the first job derives a value (from a lookup or data transformation), how do I put that value into a job parameter or job sequence parameter so it is available to the second job?

Thanks
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

IMHO, the simplest way is to leverage the USERSTATUS area. There have been several discussions here on the subject, which a search should turn up; they will flesh out what I've written below.

I've built a custom routine to call the DSSetUserStatus function from inside a Transform to do exactly the kind of thing we are discussing. This information in essence gets 'parked' and any subsequent job can reference it. Then, when you link the two jobs via the Sequence job, set the value of the second job's parameter to the User Status value from the previous job. It should be available via the "Insert Parameter Value / External Parameter Helper" as the StageName.$UserStatus choice from the stage that runs the first job.
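For reference, the custom routine Craig describes is typically just a thin wrapper around the DSSetUserStatus call — a sketch (the routine name is illustrative; create it as a Transform function under the Routines branch):

```
* Transform function: park a value in the job's user status area.
* SetUserStatus is an illustrative name. Call it from a Transformer
* derivation, e.g. SetUserStatus(lnk_lookup.FILE_SUFFIX).
FUNCTION SetUserStatus(Arg1)
      Call DSSetUserStatus(Arg1)
      Ans = Arg1   ;* pass the value through so the derivation still works
RETURN(Ans)
```

The routine returns its argument unchanged, so it can sit directly in an output column derivation without disturbing the data flow.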
-craig

"You can never have too many knives" -- Logan Nine Fingers
kcbland
Participant
Posts: 5208
Joined: Wed Jan 15, 2003 8:56 am
Location: Lutz, FL

Post by kcbland »

If you wish to stay within a single job, you might consider a fixed output file name and use an after-job routine to execute a script that does the database fetch and copies/moves the file to its variable name.
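An after-job routine along those lines might look roughly like this — a sketch, where the script path and its behavior (fetching the value and renaming the file) are assumptions:

```
* After-job subroutine: shell out to a script that looks up the
* data-derived name and moves the fixed output file to it.
* /path/to/rename_output.sh is a hypothetical script.
SUBROUTINE RenameOutput(InputArg, ErrorCode)
      Command = "/path/to/rename_output.sh ":InputArg
      Call DSExecute("UNIX", Command, ScreenOutput, SystemReturn)
      ErrorCode = SystemReturn   ;* non-zero causes the job to abort
RETURN
```

Attach it on the job properties' after-job routine tab; InputArg carries whatever the script needs, such as the fixed file name.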
Kenneth Bland

Rank: Sempai
Belt: First degree black
Fight name: Captain Hook
Signature knockout: right upper cut followed by left hook
Signature submission: Crucifix combined with leg triangle
kduke
Charter Member
Posts: 5227
Joined: Thu May 29, 2003 9:47 am
Location: Dallas, TX

Post by kduke »

Craig

I would use a hash file or a sequential file. It is very easy to write the result to either of these and read it back in the batch job controlling the next job. These are a lot easier to explain to someone not familiar with DataStage, and they are more conventional storage methods. Keep it simple. I think the more elaborate methods are confusing and may not work from release to release.
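The sequential-file variant in batch job control might look roughly like this — a sketch in which the file path, job name, and parameter name are all made up for illustration:

```
* Batch job control: read the value written by the first job from a
* sequential file, then pass it to the second job as a parameter.
* "/tmp/param_value.txt", "LoadJob", and "OutputFileName" are
* illustrative names.
$INCLUDE DSINCLUDE JOBCONTROL.H

OpenSeq "/tmp/param_value.txt" To ParamFile Then
   ReadSeq FileSuffix From ParamFile Else FileSuffix = ""
   CloseSeq ParamFile
End Else
   Call DSLogFatal("Cannot open parameter file", "BatchControl")
End

hJob = DSAttachJob("LoadJob", DSJ.ERRFATAL)
ErrCode = DSSetParam(hJob, "OutputFileName", "out_":FileSuffix:".dat")
ErrCode = DSRunJob(hJob, DSJ.RUNNORMAL)
ErrCode = DSWaitForJob(hJob)
```

Note that DSSetParam is called here, in job control, before DSRunJob — which is the only place it works, per the earlier replies.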
Mamu Kim