I have routines where I need to write a rejected record from potentially 4 different points in a job. I write to the same hashed file stage from each of the 4 (Transformer) points. Then, in a parent "driver" job (which calls this one via Job Control and therefore runs the dependent job before its own active stages), I read the hashed file of accumulated rejected records and write it to sequential output.
I think you could use the idea of a hashed file as an intermediate landing place for your data, as long as the column structure of the many output records is all the same. I would think you could have one Transformer stage with multiple outputs (each with a different constraint), and each of those outputs goes to the same named hashed file. Then, in a driver (parent) job, write the hashed file out to a sequential file.
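For the driver (parent) job's Job Control, a minimal DataStage BASIC sketch along these lines should run the child job to completion before the parent's own active stages start. The job name "ChildRejects" is just a placeholder for illustration, not something from the original post:

   * Attach, run, and wait for the child job that lands the rejects
   hChild = DSAttachJob("ChildRejects", DSJ.ERRFATAL)
   ErrCode = DSRunJob(hChild, DSJ.RUNNORMAL)
   ErrCode = DSWaitForJob(hChild)
   * Check how the child finished before letting the parent's stages read the hashed file
   Status = DSGetJobInfo(hChild, DSJ.JOBSTATUS)
   ErrCode = DSDetachJob(hChild)
   If Status <> DSJS.RUNOK And Status <> DSJS.RUNWARN Then
      Call DSLogFatal("Child job ChildRejects did not finish OK", "JobControl")
   End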
If you go this route, I suggest creating/clearing the hashed file you use in the "child" or dependent job in that job's Job Control. I think it would yield unpredictable behavior if you tried to use Create File or other options within the UniVerse hashed file stages themselves, since I would guess there is no predicting in which order those stages are first initialized or "opened for business."
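If the hashed file lives in the project account (so it has a VOC entry), one way to clear it from the child job's Job Control before any of the stages open it would be something like the following sketch; the file name "RejectsHF" is made up for illustration:

   * Clear the shared hashed file before the job's stages open it
   Call DSExecute("UV", "CLEAR.FILE RejectsHF", Output, SystemReturnCode)
   If SystemReturnCode <> 0 Then
      Call DSLogWarn("CLEAR.FILE RejectsHF failed: " : Output, "JobControl")
   End

A before-job subroutine should work just as well, as long as it runs once per job run rather than once per stage.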
Tracy Slack
Harland Financial Solutions
tslack@harlandfs.com
-----Original Message-----
From: Direk Phaisitwanitkul [mailto:DirekP@dtac.co.th]
Sent: Friday, August 10, 2001 10:26 AM
To: datastage-users@oliver.com
Subject: Generate many output records from 1 record
Hi,
How can I generate many output records to the same text file from one input record? For example, I have one record "A" and I want to generate two records, "A","1" and "A","2". I tried to do this but ran into a problem: if I have two links from one Transformer stage to a Sequential File stage, it writes only one record, depending on the link order (whichever link is processed last).
Regards,
Direk