Dynamic handling metadata

Post questions here relative to DataStage Server Edition for such areas as Server job design, DS Basic, Routines, Job Sequences, etc.

Moderators: chulett, rschirm, roy

Post Reply
rivajtp
Participant
Posts: 27
Joined: Tue Jul 10, 2007 8:49 pm
Location: Bangalore

Dynamic handling metadata

Post by rivajtp »

Hi,

I have a job (say Job A) which takes data from an input file (Sequential File stage) and loads it into a target table (DB2 stage), with a Transformer in between that does some validation.

I have around 100 tables which have the same type of transformation and job design, but each of the 100 tables has different metadata.

Is there any way I can create only one job which dynamically maps the metadata (Sequential File stage and DB2 stage)?

In other words: one job which does the load for all the tables that share the same transformation logic in the Transformer, to reduce my development and maintenance effort.

Can anyone please comment on the idea and give inputs?

Regards
Rivaj T P
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia
Contact:

Post by ray.wurlod »

Welcome aboard. :D

There is no way to achieve this in a server job.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
rivajtp
Participant
Posts: 27
Joined: Tue Jul 10, 2007 8:49 pm
Location: Bangalore

Post by rivajtp »

Hi Ray,

Thanks for the reply.

Keeping in mind my final goal of loading the tables with their various metadata, I will relax my requirement from one job to 100 jobs for the 100 tables, but I still want to reduce the development time and hence the cost. Is it possible to write a program which takes the template (the Job A already developed) and creates the other jobs for the other metadata?

Regards
Rivaj T P
ag_ram
Premium Member
Posts: 524
Joined: Wed Feb 28, 2007 3:51 am

Post by ag_ram »

I have around 100 tables which have the same type of transformation and job design, but each of the 100 tables has different metadata.
I have one question:

How would the same type of transformation be applicable to different source metadata?
rivajtp
Participant
Posts: 27
Joined: Tue Jul 10, 2007 8:49 pm
Location: Bangalore

Post by rivajtp »

By "same type of transformation" I mean there are some validations done on key columns, key generation, etc., which are identical across tables. The columns have different metadata, but there is a one-to-one mapping from source to target.
Rivaj T P
ag_ram
Premium Member
Posts: 524
Joined: Wed Feb 28, 2007 3:51 am

Post by ag_ram »

Though I am not clear on your transformation and still have questions (what about the input metadata before and after the transformation?), is it possible to separate out the lightweight transformation so that the main transformation has the same input and output metadata? Then:

1. Separate Extract, Transform and Load into individual jobs.

2. Reuse the Transform and Load components.

3. (You may) keep the lightweight transformation in the source stage (source job).
ArndW
Participant
Posts: 16318
Joined: Tue Nov 16, 2004 9:08 am
Location: Germany
Contact:

Post by ArndW »

The most common method for this kind of work would be to write a template example job, do a DataStage export to either XML or .ds format, then write a job or a program that reads the export file, changes the elements that differ per table, and creates the 100 jobs that way.
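As a rough illustration of the approach (the XML below is a deliberately simplified stand-in; a real DataStage export uses a much richer, version-specific schema, so the element and attribute names here are assumptions, not the actual format):

```python
import xml.etree.ElementTree as ET

# Hypothetical, simplified job template. A real DataStage XML export has a
# much richer schema that varies by version; the element and attribute
# names here are illustrative only.
TEMPLATE = """<Job Identifier="JobA">
  <Stage Name="src" Type="SequentialFile" File="{src_file}"/>
  <Stage Name="tgt" Type="DB2" Table="{tgt_table}"/>
  <Columns>{columns}</Columns>
</Job>"""

def generate_job_xml(table, src_file, columns):
    """Fill the template for one table; columns is a list of (name, sqltype)."""
    col_xml = "".join(
        '<Column Name="{0}" SqlType="{1}"/>'.format(name, sqltype)
        for name, sqltype in columns
    )
    filled = TEMPLATE.format(src_file=src_file, tgt_table=table, columns=col_xml)
    # Parse and give the cloned job a unique identifier, so that importing
    # the generated files does not overwrite the template job.
    root = ET.fromstring(filled)
    root.set("Identifier", "Job_" + table)
    return ET.tostring(root, encoding="unicode")

if __name__ == "__main__":
    # One entry per target table: source file plus column metadata.
    tables = {
        "CUSTOMER": ("cust.txt", [("CUST_ID", "Integer"), ("NAME", "VarChar")]),
        "ORDERS": ("ord.txt", [("ORDER_ID", "Integer")]),
    }
    for table, (src_file, columns) in tables.items():
        with open("Job_%s.xml" % table, "w") as f:
            f.write(generate_job_xml(table, src_file, columns))
```

Running this would produce one export file per table, each of which could then be imported back into DataStage. Error handling and the real export schema are left out; the column lists would in practice come from the table definitions.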
rivajtp
Participant
Posts: 27
Joined: Tue Jul 10, 2007 8:49 pm
Location: Bangalore

Post by rivajtp »

Hi,

That is a good solution.
1) Which language would u prefer for writing the external program?
2) When importing back, if there is any bug, I feel there is some chance of corrupting the project. What do you say about that?

Please comment on the above questions.

Regards
Rivaj T P
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia
Contact:

Post by ray.wurlod »

I believe U would prefer to use DataStage BASIC, since U is currently working at a server-only shop in Singapore (or was, last time I looked).
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
dilip.datastage
Participant
Posts: 22
Joined: Wed Aug 15, 2007 10:59 pm
Location: Bangalore

Post by dilip.datastage »

Hi Rivaj,
As ArndW said, we can use a template and create the 100 jobs using a job or a program.

Hi ArndW, can you suggest how to proceed? Has anyone implemented this before? Rivaj and I are working on the same problem. And if there is a metadata change after generating the 100 jobs, how can we go and change them again?
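On the last question: if the generator is driven by an external metadata file rather than by hand-edited exports, a later schema change only means updating that file and re-running the generator over all tables. A minimal sketch of that idea (the CSV layout and all names are made up for illustration, not a DataStage format):

```python
import csv
import io

# Hypothetical metadata file: one row per column, grouped by table.
# In practice this could be read from the database catalog instead.
METADATA_CSV = """table,column,sqltype
CUSTOMER,CUST_ID,Integer
CUSTOMER,NAME,VarChar
ORDERS,ORDER_ID,Integer
"""

def load_table_defs(text):
    """Group (column, sqltype) definitions by table name."""
    defs = {}
    for row in csv.DictReader(io.StringIO(text)):
        defs.setdefault(row["table"], []).append((row["column"], row["sqltype"]))
    return defs

def regenerate_all(defs, render):
    """Re-render every job export from the current definitions.

    `render` is whatever turns (table, columns) into an export file's
    text, e.g. a template-filling function.
    """
    return {table: render(table, cols) for table, cols in defs.items()}

if __name__ == "__main__":
    defs = load_table_defs(METADATA_CSV)
    # Toy renderer so the sketch stays self-contained.
    exports = regenerate_all(
        defs, lambda t, cols: "Job_%s: %d columns" % (t, len(cols))
    )
    for name in sorted(exports):
        print(exports[name])
```

With this shape, regenerating after a metadata change is a re-run rather than 100 manual edits; the generated jobs would still need to be re-imported and re-compiled in DataStage.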
rivajtp
Participant
Posts: 27
Joined: Tue Jul 10, 2007 8:49 pm
Location: Bangalore

Post by rivajtp »

Hi Ray,

Doing the export and re-import is a good solution.

For my understanding: can I achieve this in Parallel Extender? If yes, what is lacking in Server that makes me fail to achieve it in a server job? (A little bit about the internals of DataStage.)

Regards
Rivaj T P
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia
Contact:

Post by ray.wurlod »

There should be no difference in the process based upon job type.

Only design-time components are exported - parallelism is not determined until run-time in parallel jobs.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
Post Reply