Job Parameter value

Post questions here relative to DataStage Server Edition for such areas as Server job design, DS Basic, Routines, Job Sequences, etc.

Moderators: chulett, rschirm, roy

sumitgulati
Participant
Posts: 197
Joined: Mon Feb 17, 2003 11:20 pm
Location: India

Job Parameter value

Post by sumitgulati »

Hi All,

I have a job that reads a sequential file. The file has a column called TRANSACTION_ID and is sorted by TRANSACTION_ID. I need to split the input file into as many files as there are distinct TRANSACTION_ID values in the source. What I am doing right now is this: I have 10 output links from the transformer. In the transformer I compare the TRANSACTION_ID of the current row with that of the previous row, and every time it changes I start sending records to the next link. This works fine because the file is sorted by TRANSACTION_ID, but it caps the number of splits at 10 because I have only 10 output links.

What is the best way to do it?

Is there any way to change a job parameter value within the same job while the job is running? If so, I could use just one output link to a Sequential File stage, use the job parameter in the file name, and change its value every time the TRANSACTION_ID changes.

Thanks and Regards,
-Sumit
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia
Contact:

Post by ray.wurlod »

Is there any way to change a job parameter value within the same job while the job is running?
No.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
Sainath.Srinivasan
Participant
Posts: 3337
Joined: Mon Jan 17, 2005 4:49 am
Location: United Kingdom

Post by Sainath.Srinivasan »

What you need is something like

for i in `cut -d',' -f1 testfile1.txt | uniq`
do
grep "^$i," testfile1.txt > testfile1.txt.$i
done

This will create multiple files with the prefix "testfile1.txt" and a suffix of your transaction_id. Obviously you need to adjust the cut delimiter/field and the grep pattern to match where the TRANSACTION_ID key sits in your file.
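If the key can be isolated with awk, the same split can be done in a single pass instead of one grep per key. A sketch, assuming a comma-delimited file with TRANSACTION_ID in the first field (the file name testfile1.txt is just an example):

```shell
# One pass over the sorted file: route each record to a file named after its key.
# Assumes comma-delimited input with TRANSACTION_ID in field 1.
awk -F',' '{ out = "testfile1.txt." $1; print > out }' testfile1.txt
```

Because the input is sorted, awk could also close(out) whenever the key changes, which avoids running out of open file descriptors when there are many distinct TRANSACTION_IDs.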
talk2shaanc
Charter Member
Posts: 199
Joined: Tue Jan 18, 2005 2:50 am
Location: India

Post by talk2shaanc »

If you want to do it in DataStage, you can write DS BASIC code in your job control. Sort the data on transaction_id, then read the sequential file sequentially and, for every NEW transaction_id, open a new sequential file (<file name>.transaction_id) to write the records to, using the same change-detection logic you used earlier.
If you are not familiar with DS BASIC, refer to the DataStage BASIC manual installed along with DataStage and look up OPENSEQ, READSEQ, WRITESEQ, and CLOSESEQ, as well as how to write a loop.
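A minimal sketch of that loop, assuming TRANSACTION_ID is the first comma-delimited field and the file names are examples (check the exact statement syntax against the DataStage BASIC manual):

* Read the sorted input; open a new output file whenever TRANSACTION_ID changes.
OPENSEQ "input.txt" TO InFile ELSE STOP "Cannot open input file"
PrevId = ""
LOOP
   READSEQ Line FROM InFile ELSE EXIT   ;* leave the loop at end of file
   Id = FIELD(Line, ",", 1)             ;* assume TRANSACTION_ID is field 1
   IF Id # PrevId THEN
      IF PrevId # "" THEN CLOSESEQ OutFile
      FName = "input.txt." : Id
      OPENSEQ FName TO OutFile THEN NULL ELSE
         CREATE OutFile ELSE STOP "Cannot create " : FName
      END
      PrevId = Id
   END
   WRITESEQ Line TO OutFile ELSE STOP "Write failed"
REPEAT
IF PrevId # "" THEN CLOSESEQ OutFile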
sumitgulati
Participant
Posts: 197
Joined: Mon Feb 17, 2003 11:20 pm
Location: India

Post by sumitgulati »

I wanted to avoid writing any code, but it looks like I will have to. Thanks for your ideas.

Regards,
-Sumit