Page 1 of 1

Posted: Thu May 22, 2014 1:05 am
by dsetlteam
1. First job: dump the EMP_ID data from the table to a sequential file.
2. Take the count of records from the sequential file into a user variable.
3. Start a loop, initialized to run up to the count variable's value.
4. Use a sed command (in a command activity) to get the first record from the sequential file (I think sed -n '#ACTIVITYNAME.$Counter#p' should work) and so on.
5. Use another command activity to read the output from step 4 and create a file with the date suffixed to it.

The above approach should work fine for your requirement.
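Outside of DataStage, the five steps above can be sketched as plain shell. This is a minimal stand-alone illustration; the file name emp_id.txt and the date format are assumptions for the example, not names from the actual jobs:

```shell
#!/bin/sh
# Hypothetical stand-alone sketch of the five steps above;
# emp_id.txt stands in for the file dumped by the first job.

printf '123\n245\n786\n' > emp_id.txt          # step 1: dumped EMP_ID data

count=$(grep -c '' emp_id.txt)                 # step 2: record count

i=1
while [ "$i" -le "$count" ]; do                # step 3: loop on the count
    rec=$(sed -n "${i}p" emp_id.txt)           # step 4: pick the i-th record
    : > "${rec}_$(date +%d-%m-%Y).txt"         # step 5: date-suffixed file
    i=$((i + 1))
done
```

In the real sequence, the loop counter #ACTIVITYNAME.$Counter# plays the role of $i here.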

Posted: Thu May 22, 2014 5:09 am
by prasson_ibm
Hi,

The easiest way is to write a C++ external function and call it in a Transformer stage. The inputs to the function will be the file name and the data.

Posted: Thu May 22, 2014 7:40 am
by qt_ky
I vaguely remember a "what's new in v9.1" webinar referencing some new feature like this. Not sure if it is this. Are you on version 9.1?

Posted: Thu May 22, 2014 6:05 pm
by ray.wurlod
It's in the Sequential File stage in version 9.1 - the functionality had to be created for the Big Data File stage, so they dropped it into the Sequential File stage as well.

Posted: Thu May 22, 2014 6:06 pm
by ray.wurlod
Another technique is to use a Folder stage, either in a server job or in a server shared container in a parallel job. This functionality is innate to the Folder stage.

Posted: Mon May 26, 2014 7:47 am
by karteek
Hi dsetlteam,

Thanks for your reply.

I achieved this using the steps below:

1) Fetch the emp table data into a sequential file.
Sequential file name: emp_20.txt

Code:

emp_id    e_name
------    ------
123       A
245       B
786       C
2) Use this sequential file to get the record count.

Sequential File stage (enable the row number column [row]) -----> Transformer stage: row = [row + 1]; add a duplicate column DUP set to 1, which is used as the group-by key -----> Aggregator stage: group by DUP, type: calculation, column for calculation: row, maximum value output column: COUNT.
Store this count value in a sequential file: emp_count.txt

3) Execute_Command activity, command: cd
cat emp_count.txt | sed 's/"//g'
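For reference, the sed here only strips the double quotes from the stored count. A stand-alone sketch (the quoted "3" sample is an assumption about how the Sequential File stage wrote the value):

```shell
# Sample emp_count.txt with the count written in double quotes
# (assumed format); the sed removes the quotes.
printf '"3"\n' > emp_count.txt
count=$(cat emp_count.txt | sed 's/"//g')
echo "$count"    # prints 3
```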
4) Then store this value in a User Variables activity: Count_WLIL_Table
5) start loop: from :1, step: 1, to: #UserVariables_Activity.Count_WLIL_Table#
6) Using an Execute Command activity, fetch the first record's first value, i.e. the emp_id value, using the command below:

sed -n '#StartLoop_Activity.$Counter# p' emp_20.txt | awk '{print $1}'
o/p: 123
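Tried stand-alone, the same sed/awk pair behaves like this (assuming emp_20.txt holds only the data rows, with the loop counter hard-coded to 1 for the first pass):

```shell
# Data-only sample of emp_20.txt (assumed: no header rows).
printf '123 A\n245 B\n786 C\n' > emp_20.txt

# '1' stands in for #StartLoop_Activity.$Counter# on the first pass:
# print line 1, then keep only its first field (the emp_id).
sed -n '1 p' emp_20.txt | awk '{print $1}'    # prints 123
```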
7) Store this value in a User Variables activity (Record) so it can be used as a constraint in the job.

8) Create a job and give it a parameter: SPram.
In the sequence, set SPram to: UserVariables_Activity_214.Record. On the first pass Record holds [123], so SPram stores the value 123.

Job:
source [emp] -----> in a Transformer, give the constraint emp_id = SPram -----> destination sequential file,
named #SPram#_#DSJobStartDate#.txt
Using this constraint it will load the first emp_id, i.e. 123, and the file name is: 123_26-05-2014.txt

End Loop: on the next iterations it generates 245_26-05-2014.txt and 786_26-05-2014.txt, and this continues until all records are processed.

Thanks,
Karteek M