1. The first job will dump the EMP_ID data from the table to a sequential file.
2. Take the count of records from the sequential file into a user variable.
3. Start a loop, initialized to run up to the count variable's value.
4. Use a sed command (in a command activity) to get the first record from the sequential file (I think sed -n '#ACTIVITYNAME.$Counter#p' should work), and so on.
5. Use another command activity to read the output from step 4 and create a file with the date suffixed to it.
The above approach should work fine for your requirement.
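The five steps above can be sketched in plain shell, outside DataStage. The file paths and the sample EMP_ID data are assumptions for illustration; the shell counter $i stands in for the #ACTIVITYNAME.$Counter# reference:

```shell
#!/bin/sh
# Step 1 (assumed already done by the first job): EMP_ID values dumped
# to a sequential file, one per line.
printf '123\n245\n786\n' > /tmp/emp_ids.txt

# Step 2: record count into a variable (grep -c '' counts lines without
# the leading padding some wc implementations emit).
count=$(grep -c '' /tmp/emp_ids.txt)

# Steps 3-5: loop up to the count, pull the Nth record with sed -n 'Np',
# and write it to a file with the date suffixed.
i=1
while [ "$i" -le "$count" ]; do
    emp_id=$(sed -n "${i}p" /tmp/emp_ids.txt)
    printf '%s\n' "$emp_id" > "/tmp/${emp_id}_$(date +%d-%m-%Y).txt"
    i=$((i + 1))
done
```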
How to generate text file for each input record
Moderators: chulett, rschirm, roy
-
- Premium Member
- Posts: 536
- Joined: Thu Oct 11, 2007 1:48 am
- Location: Bangalore
Hi,
The easiest way is to write a C++ external function and call it in a Transformer stage. The input to the function will be the file name and the data.
Thanks
Prasoon
ETL Consultant
LinkedIn :- http://www.linkedin.com/profile/view?id ... ab_pro_top
Blog:- http://dsshar.blogspot.com/
-
- Participant
- Posts: 54607
- Joined: Wed Oct 23, 2002 10:52 pm
- Location: Sydney, Australia
- Contact:
It's in the Sequential File stage in version 9.1 - the functionality had to be created for the Big Data File stage, so they dropped it into the Sequential File stage as well.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
-
- Participant
- Posts: 54607
- Joined: Wed Oct 23, 2002 10:52 pm
- Location: Sydney, Australia
- Contact:
Another technique is to use a Folder stage, either in a server job or in a server shared container in a parallel job. This functionality is innate to the Folder stage.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
Hi dsetlteam,
Thanks for your reply.
I achieved this using the steps below:
1) Fetch the emp table data into a sequential file.
Sequential file name: emp_20.txt

emp_id e_name
--------- ---------
123 A
245 B
786 C

2) Use this sequential file to get the record count.
Sequential File stage (set row number column [row]) --> Transformer stage: row = row + 1, plus a duplicate column DUP set to 1, which is used to group by --> Aggregator stage: group by DUP, type: calculation, column for calculation: row, maximum value output column: COUNT.
Store this count value into a sequential file: emp_count.txt
3) Execute Command activity, command:
cd
cat emp_count.txt | sed 's/"//g'
4) Then store this value into a User Variables activity: Count_WLIL_Table
5) Start Loop activity: from: 1, step: 1, to: #UserVariables_Activity.Count_WLIL_Table#
6) Using an Execute Command activity, fetch the record for the current iteration and take its first value, i.e. the emp_id, using the command below:
sed -n '#StartLoop_Activity.$Counter# p' emp_20.txt | awk '{print $1}'
o/p: 123
7) Store this value into a User Variables activity (Record) so it can be used as a constraint in the job.
8) Create a job and give it a parameter: SPram
In the job activity's parameters, set SPram to UserVariables_Activity_214.Record; on the first iteration SPram holds the value 123.
Job:
source [emp] --> Transformer with constraint emp_id = SPram --> destination sequential file
Name it #SPram#_#DSJobStartDate#.txt
With this constraint it loads the first emp_id, i.e. 123, and the file name is 123_26-05-2014.txt.
End loop; the next iterations generate 245_26-05-2014.txt and 786_26-05-2014.txt, until all records are processed.
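The loop portion of these steps (3 through 8) can be sketched in plain shell. The paths, the sample data, and the quoted count file are assumptions for illustration; shell variables stand in for the #...# activity references:

```shell
#!/bin/sh
# Sample data as in step 1, plus a count file as a Sequential File
# stage might write it (value wrapped in double quotes).
printf '123 A\n245 B\n786 C\n' > /tmp/emp_20.txt
echo '"3"' > /tmp/emp_count.txt

# Step 3: strip the double quotes around the count.
count=$(cat /tmp/emp_count.txt | sed 's/"//g')

# Steps 5-6: loop from 1 to the count; each iteration pulls that
# record's first column (the emp_id), as the Execute Command activity
# does with #StartLoop_Activity.$Counter#.
i=1
while [ "$i" -le "$count" ]; do
    emp_id=$(sed -n "${i}p" /tmp/emp_20.txt | awk '{print $1}')
    # Step 8 analogue: write output named #SPram#_#DSJobStartDate#.txt;
    # here we simply write the id itself into the dated file.
    printf '%s\n' "$emp_id" > "/tmp/${emp_id}_$(date +%d-%m-%Y).txt"
    i=$((i + 1))
done
```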
Thanks,
Karteek M