I want to reuse the same set of DS jobs; the challenge is that each record of the sequential source file contains the name of a table to be used as input.
here are the steps:
>file has a table name in each record
>read the record
>use the retrieved table name as the source
>run the DS sequence
>read the next record
>repeat the process
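The loop above can be sketched as a shell script around DataStage's `dsjob` command-line interface. The project and sequence names below are hypothetical, and the actual `dsjob` call is shown only as a comment (replaced here by an `echo` so the sketch runs standalone):

```shell
#!/bin/sh
# Sketch: read one table name per record and run the same sequence for each.
TABLE_LIST=$(mktemp)
printf 'CUSTOMERS\nORDERS\n' > "$TABLE_LIST"   # sample input file, one table per record

ITER=0
while IFS= read -r TABLE; do
  ITER=$((ITER + 1))
  # A real invocation would use the dsjob CLI, e.g. (project/sequence names assumed):
  #   dsjob -run -param TableName="$TABLE" -param Iteration="$ITER" MyProject MySequence
  echo "run $ITER: TableName=$TABLE"
done < "$TABLE_LIST"
rm -f "$TABLE_LIST"
```

The same per-record loop can equally live inside a sequence job's Start Loop / End Loop activities, which is what the reply below describes.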
The sequence will use the looping stage; the number of records will be known before the process runs.
I'll add that the DS process runs on request, not on a schedule. I was trying to find a way to avoid replicating the jobs to meet the source file requirements. This is running DS version 8.5.
Thanks,
Char
How to run same Sequence from sequential file
One possible solution:
A job sequence should be able to loop through the text file and pass the current table name as a parameter to an extract DataStage job (which will read the table and write to a sequential file). An additional parameter would be an iteration counter.
In your DataStage job, pass the table name as a parameter into the database stage/connector. Read the data from the table and write it to a sequential file whose name contains the iteration counter, for example /data/table_extract_1.txt, /data/table_extract_2.txt, and so on.
After the loop completes, a second DataStage job can read all of the extracted files by using a file pattern: /data/table_extract_*.txt
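A minimal sketch of that two-phase approach, with the extract job simulated: each iteration writes table_extract_&lt;n&gt;.txt, then a second pass reads everything back via the file pattern. The directory and table names are assumptions; a temp directory stands in for /data:

```shell
#!/bin/sh
# Phase 1: one file per iteration, name carries the iteration counter.
DIR=$(mktemp -d)   # stand-in for /data
ITER=0
for TABLE in CUSTOMERS ORDERS; do
  ITER=$((ITER + 1))
  # In DataStage this would be the extract job writing the table's rows;
  # here we write the table name as a stand-in row.
  echo "$TABLE" > "$DIR/table_extract_$ITER.txt"
done
# Phase 2: the second job reads all extracts via the file pattern.
cat "$DIR"/table_extract_*.txt
```

In the real design, phase 2 is a Sequential File stage with its read method set to File Pattern, pointed at /data/table_extract_*.txt.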
Someone else here may have a better solution than this.
Regards,
- james wiles
All generalizations are false, including this one - Mark Twain.