
Applying Loop in Datastage

Posted: Mon Apr 14, 2008 11:39 pm
by tech_savvy
Hello,

I have a requirement where I need to apply a loop.
Can it be done through DataStage Enterprise Edition?

Thanks in Advance,

Posted: Mon Apr 14, 2008 11:44 pm
by ray.wurlod
Your requirement is too vague.

Every DataStage job is inherently a loop over all records in its source.

What do you mean more specifically?

Posted: Mon Apr 14, 2008 11:57 pm
by tech_savvy
Hello,

My requirement is that each row in a file will be a single file at the output. If there are 30 rows in the output, I am expecting 30 files.
Please suggest the best way for this approach.

Thanks

Posted: Tue Apr 15, 2008 12:01 am
by BugFree
One row per file? I think it will be easier from the command line. Process the data in your job, write it to a file, and then split that file using a Windows command.
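The command-line split BugFree suggests can be sketched like this (a minimal sketch assuming a Unix-style shell; the input file `rows.txt` and the `out_N.txt` naming are hypothetical placeholders for whatever your job actually produces — on Windows the same idea works with a `FOR /F` loop or PowerShell):

```shell
# Example input: three rows, standing in for the file your DataStage job writes
printf 'row 1\nrow 2\nrow 3\n' > rows.txt

# Split into one file per row: out_1.txt, out_2.txt, out_3.txt
n=0
while IFS= read -r line; do
  n=$((n + 1))
  printf '%s\n' "$line" > "out_${n}.txt"
done < rows.txt
```

The file path, name, and extension here are placeholders; substitute whatever naming scheme your downstream process expects.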

Posted: Tue Apr 15, 2008 12:32 am
by Rubu
I agree with BugFree. Why would one want to add overhead by creating a job for such a task?

BTW, just curious, why do you actually need to do such a thing?

Posted: Tue Apr 15, 2008 12:39 am
by JoshGeorge
I have posted a parallel routine which does exactly what you need. You can find the solution in THIS post.

Main highlights of this C++ function:
--> Creates and writes a text file for each record.
--> You can dynamically pass the file path, file name, and extension, as well as the records to be written into the file.
--> Records can be of different metadata.

Posted: Tue Apr 15, 2008 1:00 am
by tech_savvy
Hello Jose,
Thanks for your reply. The solution page U have given is not opening; I think the page might have expired.
Please send me another link.

Thanks in advance

Posted: Tue Apr 15, 2008 1:28 am
by ray.wurlod
I'd use a server job and write to a Type 19 hashed file. Only 30 lines - sheesh! The server job would be done before the parallel job had even gotten started.

Posted: Tue Apr 15, 2008 2:05 am
by JoshGeorge
OP didn't say only 30 rows :wink:
Can't see Jose or U's post in this thread :?
tech_savvy wrote:If there are 30 rows in the output i am expecting 30 files.

Posted: Tue Apr 15, 2008 2:13 am
by ray.wurlod
Wanna race? Any N (identically structured) rows up to 10 million.

Posted: Tue Apr 15, 2008 2:22 am
by JoshGeorge
Up to 10 million!! Love to race!

Posted: Tue Apr 15, 2008 6:34 am
by chulett
tech_savvy wrote:Thanks for your reply. The solution page U have given is not opening; I think the page might have expired.
The link is fine. You're probably logged in to 'dsxchange.com' and he linked to 'www.dsxchange.com'. Copy the link, paste it into the address bar, remove the 'www', and see if it works then.