
Dynamic output files

Posted: Sun Dec 06, 2009 12:20 am
by dsconultant
Hi,
I have a data file which has output like following

abc20091011
abc20091011
abc20091111
abc20091111
xyz20091011
xyz20090909

I need a separate file for each unique record: abc20091011, abc20091111, xyz20090909, and so on. The problem is that the output is dynamic and could have any number of records. Right now I have 4 unique values, but it could be 20 in my next run. So for each unique record I need a separate file. How can I handle this?

Posted: Sun Dec 06, 2009 9:04 am
by chulett
I'm not sure if there is any stage in PX that would allow output to dynamic files - perhaps the External Target stage? Server allegedly does, via the Folder stage, but to be honest I've never used it in that fashion. Or you could always go the 'build op' route and put together something in C++ to do that for you.

Another possibility is building a single file and then 'splitting' it post-job, using something like csplit perhaps.
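A minimal sketch of the post-job csplit idea. Note that csplit splits at lines matching a pattern, so it only helps when the patterns can be supplied up front; the abc/xyz prefixes and file names below are assumptions taken from the sample data in the question.

```shell
# Recreate the sample data, sorted so records with the same key are adjacent.
printf 'abc20091011\nabc20091011\nxyz20090909\n' > sorted_input

# Split the sorted file where the xyz records begin:
# part00 gets the abc records, part01 gets the xyz records.
# -s suppresses the byte-count output, -f sets the output prefix.
csplit -s -f part sorted_input '/^xyz/'

ls part00 part01
```

Because the keys here are dynamic, csplit is the awkward fit chulett hints at; the awk approach later in this thread handles unknown keys more naturally.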

Datastage parallel (C++) routine to create files dynamically

Posted: Sun Dec 06, 2009 7:36 pm
by JoshGeorge
Link to - DataStage parallel (C++) routine to create files dynamically for every record in a single job:

http://it.toolbox.com/blogs/dw-soa/data ... ally-21095

Posted: Tue Dec 15, 2009 9:09 pm
by dsconultant
Thanks, I will try the routine.

Posted: Wed Dec 16, 2009 6:11 pm
by vinnz
You could also try awk to post-process your file, as suggested above:

awk '{ print >> $1 ; close($1)}' yourfilename
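Expanded version of that one-liner, using the thread's sample data (the file name yourfilename is from the post above). Since the records contain no delimiters, $1 is the whole record, so each unique value becomes a file named after itself; calling close() after each write keeps the number of simultaneously open files bounded no matter how many unique keys appear.

```shell
# Recreate the sample input from the original question.
printf 'abc20091011\nabc20091011\nabc20091111\nxyz20090909\n' > yourfilename

# Append each record to a file named after its first field,
# closing the file each time to avoid running out of descriptors.
awk '{ print >> $1; close($1) }' yourfilename

ls abc20091011 abc20091111 xyz20090909
```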

Posted: Wed Dec 16, 2009 8:07 pm
by keshav0307
If you want to do it in DataStage, then in a Transformer stage build a derivation like

'echo ':INPUT_COLUMN:' >> ':INPUT_COLUMN:'.txt'

and in an after-job subroutine, just execute the target file as a shell script.
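A hypothetical illustration of this approach: the job's target file ends up holding one echo command per record, and the after-job subroutine (ExecSH) runs it. The file name split_commands.sh and the record values are assumptions for the sketch.

```shell
# Simulate the target file the Transformer derivation would produce:
# one 'echo record >> record.txt' command per input row.
cat > split_commands.sh <<'EOF'
echo abc20091011 >> abc20091011.txt
echo abc20091011 >> abc20091011.txt
echo xyz20090909 >> xyz20090909.txt
EOF

# Executing the generated file performs the split into per-key files.
sh split_commands.sh
```

One caveat with this design: because the generated script uses >>, rerunning the job without clearing the old .txt files will append duplicates.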