Can we give dynamic filenames to Hash, target & Source

Post questions here relative to DataStage Server Edition for such areas as Server job design, DS Basic, Routines, Job Sequences, etc.

Moderators: chulett, rschirm, roy

sck
Participant
Posts: 18
Joined: Thu Mar 18, 2004 2:58 pm

Can we give dynamic filenames to Hash, target & Source

Post by sck »

Hi All,
Can we give dynamic filenames in between a job to hash files, source, and target. Here we have an ID coming in from the Source. How can we give this ID as a filename alongwith a static file name, for example
#Key#_seg_name
where #Key# is the dynamic number and _seg_name is static.

Could anyone please help on this.

Thanks

Krishna
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

I don't see why not - as long as the metadata doesn't change. Have you tried it?

Will you be reusing these dynamically named objects? If not, you'll need to worry about cleaning up after yourself, deleting the hash file when you are finished with it, for example.
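As a sketch of that cleanup step: assuming the hash file is a pathed (directory-based) hash file whose name was built from the dynamic key, it can be removed from a shell script (for example, called from an after-job subroutine). The path and naming here are hypothetical.

```shell
#!/bin/sh
# cleanup_hash: remove a dynamically named pathed hash file once the
# job is finished with it. Usage: cleanup_hash <key> <base_dir>
# (A pathed hash file is a directory on disk, so removing the
# directory deletes the hash file.)
cleanup_hash() {
  key="$1"
  base="$2"
  hash_dir="${base}/${key}_seg_name"   # hypothetical naming scheme
  if [ -d "$hash_dir" ]; then
    rm -rf "$hash_dir"
  fi
}
```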
-craig

"You can never have too many knives" -- Logan Nine Fingers
sck
Participant
Posts: 18
Joined: Thu Mar 18, 2004 2:58 pm

Post by sck »

Thanks for your reply.
Yes, we will be reusing the same metadata. But I am wondering how we actually do it. Is there a specific way to assign a filename to a hash file and to the target within a job? I am also curious how a source file could have a dynamic name.

Thanks
Krishna
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

Use Job Parameters for all dynamic portions, which you alluded to in your first post. You'll need to build a process that determines the values a particular job would need and then passes them in as Parameter values.
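For example, once a job parameter named Key is defined, the file name property of a Sequential File or Hashed File stage can reference it directly, and DataStage substitutes the value at run time:

```
#Key#_seg_name
```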

Without knowing your processing requirements and your expertise level with DataStage, it's hard to provide a specific solution. There are a number of ways to do this: a UNIX script called via an Execute Command stage, or a Routine Activity stage that executes a script and prepares the parameter value. Possibly even a DataStage job could be leveraged to do this and pass information out via User Status. This could be data driven, with parameter values being pulled from a database table or a flat file at runtime.

Most of these can be used to set Parameter values in a subsequent DataStage job via a Sequence Job. Some methods may require some "manual intervention" to get parameters passed downstream, requiring you to hand code some Job Control. Lastly, you could even code an entire Job Control "batch" to do all of this.
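A minimal sketch of the script-driven variant: a wrapper derives the key and passes it in via the dsjob command-line client's -param option. The project name, job name, and control-file path below are hypothetical; the helpers are split out only so the composition is easy to inspect without a DataStage engine.

```shell
#!/bin/sh
# Read the key prepared by an upstream step (hypothetical path).
read_key() {
  cat "$1"
}

# Compose the dsjob invocation that runs the job with the dynamic
# Key parameter; echoed so it can be inspected before executing.
build_run_cmd() {
  key="$1"
  echo "dsjob -run -param Key=${key} MyProject LoadSegment"
}

# In the real wrapper you would then execute it:
#   key=$(read_key /data/control/current_key.txt)
#   $(build_run_cmd "$key")
```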

Like I said, hard to say. :wink:
-craig

"You can never have too many knives" -- Logan Nine Fingers
santhu
Participant
Posts: 20
Joined: Fri Mar 12, 2004 3:07 am

Post by santhu »

Hi,

For the scenario you describe, you can do the following:

1) Use a control table with a feed-id that uniquely identifies the source/target process.

2) Other columns in the control table could be a sequence number / key (from the source), load date, and so on, as per your requirement.

3) Concatenate the feed-id with the sequence number / key in your main batch job and pass this concatenated value as the job parameter to the required jobs.

4) By doing so, you can control the synchronization of sequence numbers between your DW area and the source/target for refresh loads or consecutive incremental/delta extractions and loads.
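The concatenation step above can be sketched as a small shell helper, assuming the control table is a pipe-delimited flat file (column 1 = feed-id, column 2 = current sequence number); the file layout and names are assumptions for illustration.

```shell
#!/bin/sh
# build_param_value: look up the sequence number for a feed in a
# pipe-delimited control file and build the combined value to pass
# as the job parameter, e.g. "F001_000123".
build_param_value() {
  feed_id="$1"
  control_file="$2"
  seq=$(awk -F'|' -v f="$feed_id" '$1 == f { print $2 }' "$control_file")
  echo "${feed_id}_${seq}"
}
```

The resulting value can then be supplied as the job parameter (via dsjob -param or a Sequence job) so that stage file names such as #Key#_seg_name resolve differently per feed.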

Hope this helps.

Regards,
Santhosh S