Hi,
Say I have 3 tables (more like 30):
Table Foo
    ColA_Key
    ColB_Var
Table Bar
    ColC_Key
    ColD_Var1
    ColE_Var2
Table Blah
    ColF_Key
    ColG_Var
Say I create a CSV file (or database table) like so:
TableName,KeyVars,DataVars
Foo,ColA_Key,ColB_Var
Bar,ColC_Key,ColD_Var1|ColE_Var2
Blah,ColF_Key,ColG_Var
I'd like to create a "template" job which would create a hash file from these tables. In pseudocode:
Open the CSV file
Read the TableName,KeyVars,DataVars
Generate SQL code like:
select distinct keyvars,datavars from tablename
Map these columns to identically named columns in the hash file
Create the hash file named HA_tablename in a standard directory location
Loop until done
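To make the idea concrete, here is a rough sketch of that loop in Python. The CSV layout, the HA_ naming convention, and the pipe-separated DataVars column are taken from the example above; actually creating the hash file would be done by whatever your tool provides (here the script only prints the generated SQL):

```python
import csv
import io

# The metadata from the example above, inlined for a self-contained sketch.
METADATA = """TableName,KeyVars,DataVars
Foo,ColA_Key,ColB_Var
Bar,ColC_Key,ColD_Var1|ColE_Var2
Blah,ColF_Key,ColG_Var
"""

def build_select(table, key_vars, data_vars):
    """Generate the SELECT for one metadata row.

    Multi-column lists are pipe-separated, as in the DataVars example.
    """
    cols = key_vars.split("|") + data_vars.split("|")
    return f"select distinct {', '.join(cols)} from {table}"

# Loop over the metadata: one "template job invocation" per row.
for row in csv.DictReader(io.StringIO(METADATA)):
    sql = build_select(row["TableName"], row["KeyVars"], row["DataVars"])
    hash_file = f"HA_{row['TableName']}"  # standard naming convention
    print(f"{hash_file}: {sql}")
```

In a real DataStage sequence, the equivalent loop would read the metadata file, then call the template job once per row, passing TableName, KeyVars, and DataVars as job parameters.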
Has anyone ever done something like this, where you write a template job, then loop over the job according to some other metadata, supplying parameters for each job?
Or do I have to create 30 jobs and/or 30 DRS-->xfm-->hash file processes for each table?
If this is a FAQ, please just give me a suitable search string for the forum database.
Thanks,
Scott
Metadata driven job execution
The simplest approach is to have two columns, keyCol and nonKeyCol (both VarChar).
Concatenate all key columns, separated by a marker character, into keyCol, and all non-key columns into nonKeyCol. When reading, split them back out accordingly.
A second option (this is a guess... maybe Ray can confirm!):
1.) Create a UniVerse ODBC link.
2.) A first job to 'create table', with a parameter in the CREATE SQL, but not passing any rows.
3.) A second job to populate it, as in option 1, but using a parameterized (user-defined) INSERT SQL to split out the columns.
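A minimal sketch of the pack/split idea, assuming a marker character that never appears in the data (the column values here are illustrative; in DataStage a Transformer would do the same concatenation and splitting):

```python
MARK = "\xfd"  # marker character; any byte guaranteed absent from the data

def pack(values):
    """Concatenate column values into one string with the marker."""
    return MARK.join(values)

def unpack(packed):
    """Split a packed string back into its original columns."""
    return packed.split(MARK)

# Writing: all key columns go into keyCol, all non-key columns into nonKeyCol.
key_col = pack(["key_value"])
non_key_col = pack(["var1_value", "var2_value"])

# Reading: split them back out in the same order they were packed.
print(unpack(key_col))
print(unpack(non_key_col))
```

The column order must be fixed per table (the metadata file above already records it), so the reader knows which split field maps to which real column.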
Actually, you CAN do this if you treat all the columns as VarChar, and use generic column names (sequential files don't care). Just make sure that you have as many columns named as the maximum possible number, and replace any missing columns with "" or NULL. Hashed files are permitted to have variable numbers of fields in the record, but it's effectively the same if you fill them with empty strings.
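A sketch of that padding idea, assuming a maximum column count chosen across all the tables (MAX_COLS and the sample rows here are illustrative):

```python
MAX_COLS = 5  # the maximum possible number of columns across all tables

def pad_row(values, width=MAX_COLS):
    """Pad a row out to the generic-column width with empty strings."""
    return list(values) + [""] * (width - len(values))

# Table Foo has 2 columns and Table Bar has 3, but both fit the same
# generic record layout (Col1..Col5, trailing columns empty).
print(pad_row(["k1", "v1"]))
print(pad_row(["k1", "v1", "v2"]))
```

With every row padded to the same width, one set of generic VarChar column definitions serves all thirty tables.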
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.