How to add data to the same sequential file from more than one job
Moderators: chulett, rschirm, roy
Hi,
Hi,
I have two jobs and I want to write to the same sequential file after running both jobs. I know I can append at the bottom of the file, but I want to add to the next column instead of a new row. For example, I am writing ITEM1 and ITEM2 from job 1 and ITEM3 and ITEM4 from job 2. I want the result to look like this:
ITEM1 ITEM2 ITEM3 ITEM4
Any idea ?
I tried defining four fields in Manager in the file properties and importing two from each job, thinking it would write accordingly, but that did not help. Any help is appreciated.
Thanks
V
- Participant
- Posts: 54607
- Joined: Wed Oct 23, 2002 10:52 pm
- Location: Sydney, Australia
Not possible if the jobs are running at the same time. This is an O/S restriction, not a DataStage restriction.
If the jobs are run consecutively, use "Append" as the rule.
Updating rows in a text file is very difficult, and can really only be done through the ODBC driver for text files (and it will be very slow). Better would be to use an intermediate table that you can update, then dump to a file if required.
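As an aside, the difference between appending rows and merging columns can be sketched outside DataStage. Here is a minimal Python illustration; the function names and file paths are invented for the example and are not anything DataStage provides:

```python
def append_rows(path_a, path_b, out_path):
    """Append mode: all rows of file B follow all rows of file A."""
    with open(out_path, "w") as out:
        for path in (path_a, path_b):
            with open(path) as f:
                out.writelines(f)

def merge_columns(path_a, path_b, out_path, delim=" "):
    """Column-wise merge: line N of A and line N of B become one row.
    zip() stops at the shorter file, so both files should have the
    same number of rows (as noted later in this thread)."""
    with open(path_a) as fa, open(path_b) as fb, open(out_path, "w") as out:
        for left, right in zip(fa, fb):
            out.write(left.rstrip("\n") + delim + right.rstrip("\n") + "\n")
```

The column-wise merge is the behaviour the original poster wants, and it is exactly what "Append" cannot give: append only ever adds rows.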
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
- Participant
- Posts: 23
- Joined: Mon Jul 04, 2005 6:25 am
I hope the number of rows generated in the first job and the second job is the same.
In the first job, add an extra field named index that holds the row number. Write the rows to a hash file with this field as the key.
In the second job, do a lookup on this row number and write to the output file, adding columns 1 and 2 alongside columns 3 and 4.
Now you have all four columns in one file.
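A rough sketch of this lookup idea in Python, with a plain dict standing in for the DataStage hashed file (the variable names are invented for illustration):

```python
# Output of the two jobs (two columns each), as lists of tuples.
job1_rows = [("ITEM1", "ITEM2")]
job2_rows = [("ITEM3", "ITEM4")]

# Job 1: write its rows keyed by a generated row number.
# The dict plays the role of the hash file with "index" as the key.
hash_file = {i: row for i, row in enumerate(job1_rows, start=1)}

# Job 2: look up the same row number and emit all four columns.
merged = []
for i, (item3, item4) in enumerate(job2_rows, start=1):
    item1, item2 = hash_file[i]  # lookup on the row-number key
    merged.append((item1, item2, item3, item4))
```

The row number acts as a synthetic join key, which is why both jobs must produce the same number of rows in the same order.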
Thanks Ray. The jobs are not running at the same time; they run one after another. When you mentioned an intermediate table, did you mean a hash file?
ray.wurlod wrote:Not possible if the jobs are running at the same time. This is an O/S restriction, not a DataStage restriction.
If the jobs are run consecutively, use "Append" as the rule.
Updating rows in a text file is very difficult, and can really only be done through the ODBC driver for text files (and it will be very slow). Better would be to use an intermediate table that you can update, then dump to a file if required.
Thanks Manoj. I think you have a good suggestion, but I am not very clear on the implementation. If I understand correctly: add a column to the first job's output hash file so that it looks like this,
ROWNO, ITEM1, ITEM2
Then do a lookup on the row number and write ITEM3 and ITEM4 to the same hash file? Or to a different file?
Thanks
manojmathai wrote:I hope the number of rows generated in the first job and the second job is the same.
In the first job, add an extra field named index that holds the row number. Write the rows to a hash file with this field as the key.
In the second job, do a lookup on this row number and write to the output file, adding columns 1 and 2 alongside columns 3 and 4.
Now you have all four columns in one file.
Hi
For the second job, the input is the file that contains the columns ITEM3 and ITEM4. This job looks up the hash file to add the columns ITEM1 and ITEM2 generated in the first job. The output of this transform is written to a new file containing four fields: ITEM1, ITEM2, ITEM3 and ITEM4.
Hope this is clear now.
Thanks
Manoj
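The whole two-job flow Manoj describes can be sketched end-to-end in plain Python (not DataStage; the function names and file layout are assumptions for the example, with the hash file approximated by a keyed CSV file):

```python
import csv

def job1(items_path, hash_path):
    """Job 1: write ITEM1/ITEM2 rows keyed by a generated row number
    (this file plays the role of the DataStage hash file)."""
    with open(items_path) as src, open(hash_path, "w", newline="") as dst:
        writer = csv.writer(dst)
        for rownum, line in enumerate(src, start=1):
            writer.writerow([rownum] + line.split())

def job2(items_path, hash_path, out_path):
    """Job 2: read ITEM3/ITEM4, look up ITEM1/ITEM2 by row number,
    and write a new four-column output file."""
    with open(hash_path, newline="") as h:
        lookup = {int(row[0]): row[1:] for row in csv.reader(h)}
    with open(items_path) as src, open(out_path, "w") as out:
        for rownum, line in enumerate(src, start=1):
            out.write(" ".join(lookup[rownum] + line.split()) + "\n")
```

Note that job 2 writes to a new file rather than back into the hash file, which matches Manoj's answer to the earlier question in the thread.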