
How to handle multiple inputs as well as outputs at runtime

Posted: Tue Mar 30, 2004 5:46 am
by Agnivesh
Hi,
How can we create a job in which, at run time, we pass the input (flat file)
as well as the output (Oracle 9i tables)? We have more than 200 flat files and 200 Oracle 9i tables. Can anybody help?

Posted: Tue Mar 30, 2004 6:19 am
by WoMaWil
Use parameters for the input files and output tables.
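For example, a single parameterized job can be driven once per file/table pair from a shell wrapper using the dsjob command-line interface. This is a dry-run sketch: the project, job, and parameter names below are invented, and the commands are only echoed rather than executed.

```shell
#!/bin/sh
# Dry-run sketch: invoke one generic DataStage job per file/table pair.
# PROJECT, JOB, InputFile and TargetTable are made-up placeholder names.
PROJECT=myproj
JOB=GenericLoad

: > run_commands.log
for PAIR in customers.dat:CUSTOMERS orders.dat:ORDERS; do
  FILE=${PAIR%%:*}     # part before the colon: the flat file
  TABLE=${PAIR##*:}    # part after the colon: the target table
  # echo instead of executing, so the sketch runs anywhere as a dry run
  echo "dsjob -run -param InputFile=$FILE -param TargetTable=$TABLE $PROJECT $JOB" \
    >> run_commands.log
done
cat run_commands.log
```

In a real installation you would drop the echo and let dsjob run the job, with the pair list read from a driver file instead of being hard-coded.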

Posted: Tue Mar 30, 2004 8:02 am
by roy
Hi,
Welcome :).
I doubt you have 200 files/tables with the same table definition,
so parameters will let you set the file/table names dynamically, but not the table definition.
What are you trying to achieve?
Are you trying to create a generic job that loads the data without the need to build 200 jobs, each having its own table definition?

Posted: Tue Mar 30, 2004 8:05 am
by chulett
What about the metadata for the 200 flat files and tables? Are they similar in all but name? That's the only way you're going to create one job that takes Job Parameters and updates 200 tables from 200 files. Unless the Oracle tables have a common set of fields and those fields are the only fields that will be updated. :?

Posted: Tue Mar 30, 2004 11:00 pm
by Agnivesh
roy wrote:Hi,
Welcome :).
I doubt you have 200 files/tables with the same table definition,
so parameters will let you set the file/table names dynamically, but not the table definition.
What are you trying to achieve?
Are you trying to create a generic job that loads the data without the need to build 200 jobs, each having its own table definition?

I am trying to create a generic job that loads the data without the need to build 200 jobs, each having its own table definition. The files and their respective tables are 1-to-1.

Posted: Tue Mar 30, 2004 11:08 pm
by Agnivesh
chulett wrote:What about the metadata for the 200 flat files and tables? Are they similar in all but name? That's the only way you're going to create one job that takes Job Parameters and updates 200 tables from 200 files. Unless the Oracle tables have a common set of fields and those fields are the only fields that will be updated. :?

I am trying to create a generic job that loads the data without the need to build 200 jobs, each having its own table definition. The files and their respective tables are 1-to-1. But no two Oracle tables have a common set of fields, and I am going to truncate the tables before loading, so there is
no chance of updates.

Posted: Wed Mar 31, 2004 12:05 am
by chulett
Agnivesh wrote:I am trying to create a generic job that loads the data without the need to build 200 jobs, each having its own table definition. The files and their respective tables are 1-to-1. But no two Oracle tables have a common set of fields, and I am going to truncate the tables before loading, so there is no chance of updates.
Can't be done - at least not in any kind of a 'normal' DataStage job. Perhaps if you used the Oracle 'catalog' tables as a source, passed in a table name and then somehow dynamically spun out a control file for SQL Loader... :? Heck, then you might as well stick with shell scripts and SQL Loader. Set them up once and then run the matching load for each table. There really isn't any such animal as a 'generic' DataStage job, one job that could load 200 different tables.

Now, there are ways to automatically create 200 jobs from the table metadata. A colleague of mine just did something like this to create 478 jobs to extract CDC data to matching flat files. The end result was an export file in XML format that you then imported to create the jobs. Not something for The Faint of Heart or that could be explained in a post here. You may, however, be able to contract with him to create something like that for you. :wink:

Other than that, if you are simply doing truncate-and-load without any odd transformations of the data, why not just fall back on SQL Loader?
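For reference, a minimal sqlldr control file for such a truncate-and-load might look like this (the table and column names are invented for illustration):

```
LOAD DATA
INFILE 'customers.dat'
TRUNCATE
INTO TABLE customers
FIELDS TERMINATED BY ','
OPTIONALLY ENCLOSED BY '"'
(cust_id, cust_name, cust_city)
```

The TRUNCATE keyword tells SQL*Loader to empty the table before loading, which matches the truncate-and-load approach described above.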

Posted: Thu Apr 08, 2004 4:04 pm
by crouse
chulett wrote:Now, there are ways to automatically create 200 jobs from the table metadata. A colleague of mine just did something like this to create 478 jobs to extract CDC data to matching flat files.
Agnivesh,

I responded via private message, but...

Send me an email with details if you're still interested in a process to generate 1-N number of DS jobs automatically.

Thanks,
Craig

Posted: Thu Apr 08, 2004 5:22 pm
by ray.wurlod
Ask yourself this question. "Could I automate the generation of 200 sqlldr control scripts?" If the answer is yes, then the DataStage answer is also yes. If the answer is no, then the DataStage answer could still be yes, but not without adding some specialist knowledge about how table and column definitions are stored in job designs.
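To make the first question concrete, generating the 200 control scripts can be sketched as a small shell loop over a table:columns list. In practice the column lists would come from the Oracle catalog (e.g. ALL_TAB_COLUMNS via sqlplus); here they are hard-coded placeholders so the sketch is self-contained.

```shell
#!/bin/sh
# Sketch: generate one sqlldr control file per table from a
# table:columns driver list. Table and column names are made up;
# a real version would query the Oracle catalog for the columns.
cat > tables.lst <<'EOF'
CUSTOMERS:cust_id,cust_name,cust_city
ORDERS:order_id,cust_id,order_date
EOF

while IFS=: read -r TABLE COLS; do
  LOWER=$(echo "$TABLE" | tr 'A-Z' 'a-z')
  # one control file per table, truncate-and-load style
  cat > "$LOWER.ctl" <<EOF
LOAD DATA
INFILE '$LOWER.dat'
TRUNCATE
INTO TABLE $TABLE
FIELDS TERMINATED BY ','
($COLS)
EOF
done < tables.lst
```

Each generated .ctl file can then be fed to sqlldr, so the whole 200-table load reduces to maintaining the driver list.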

Posted: Sun Apr 11, 2004 9:46 pm
by Agnivesh
crouse wrote:
chulett wrote:Now, there are ways to automatically create 200 jobs from the table metadata. A colleague of mine just did something like this to create 478 jobs to extract CDC data to matching flat files.
Agnivesh,

I responded via private message, but...

Send me an email with details if you're still interested in a process to generate 1-N number of DS jobs automatically.

Thanks,
Craig


Sorry for being late. I will mail you soon with the details.

Thanks
Agnivesh