
How to add data to the same sequential file from more than one job

Posted: Wed Aug 17, 2005 4:13 pm
by vsanghvi
Hi,

I have two jobs and I want to write to the same sequential file after running both jobs. I know I can append at the bottom of the file, but I want to add to the next column instead of adding rows. For example, I am writing ITEM1 and ITEM2 from job 1 and ITEM3 and ITEM4 from job 2. Now I want to see a result like the one below.

ITEM1 ITEM2 ITEM3 ITEM4

Any ideas?

I tried defining four fields from the Manager in the file properties and importing two for each job, thinking it would write accordingly, but that did not help. Any help is appreciated.

Thanks
V

Posted: Wed Aug 17, 2005 4:51 pm
by ray.wurlod
Not possible if the jobs are running at the same time. This is an O/S restriction, not a DataStage restriction.

If the jobs are run consecutively, use "Append" as the rule.

Updating rows in a text file is very difficult, and can really only be done through the ODBC driver for text files (and will be very slow). Better would be to use an intermediate table which you can update, and then dump to a file if required.
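As a rough sketch of the intermediate-table idea outside DataStage, with Python and SQLite standing in for the updatable table (the table and column names are made up for the example):

```python
import sqlite3

# Updatable intermediate table keyed by row number: the first job
# inserts ITEM1/ITEM2, the second job updates the same rows with
# ITEM3/ITEM4, and the result is dumped to a flat file at the end.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE items
                (rowno INTEGER PRIMARY KEY,
                 item1 TEXT, item2 TEXT, item3 TEXT, item4 TEXT)""")

# "Job 1": load the first two columns.
conn.executemany("INSERT INTO items (rowno, item1, item2) VALUES (?, ?, ?)",
                 [(1, "ITEM1", "ITEM2")])

# "Job 2": update the same rows with the next two columns.
conn.executemany("UPDATE items SET item3 = ?, item4 = ? WHERE rowno = ?",
                 [("ITEM3", "ITEM4", 1)])

# Dump to a file if required.
with open("items.txt", "w") as out:
    for row in conn.execute(
            "SELECT item1, item2, item3, item4 FROM items ORDER BY rowno"):
        out.write(" ".join(row) + "\n")
```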

Posted: Thu Aug 18, 2005 6:35 am
by manojmathai
I hope the number of rows generated by the first job and the second job is the same.

In the first job, add an extra field (named INDEX, say) which holds the row number. Write the output into a hash file with this field as the key.

In the second job, do a lookup on this row number and write to the output file. The lookup adds columns 1 and 2 alongside columns 3 and 4.

Now you have all four columns in one file.
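A minimal sketch of the idea in Python (a dict stands in for the hash file, and all names are illustrative):

```python
# "Job 1" has written the hash file: row number -> (ITEM1, ITEM2).
hash_file = {
    1: ("ITEM1", "ITEM2"),
}

# "Job 2" produces its own rows, each carrying the same row number
# plus ITEM3 and ITEM4.
job2_rows = [
    (1, "ITEM3", "ITEM4"),
]

# The lookup on the row-number key puts ITEM1/ITEM2 in front of
# ITEM3/ITEM4, giving all four columns on one line.
with open("merged.txt", "w") as out:
    for rowno, item3, item4 in job2_rows:
        item1, item2 = hash_file[rowno]
        out.write(f"{item1} {item2} {item3} {item4}\n")
```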

Posted: Thu Aug 18, 2005 7:24 am
by vsanghvi
Thanks Ray. The jobs are not running at the same time; they will run one after another. When you mentioned an intermediate table, did you mean a hash file?
ray.wurlod wrote:Not possible if the jobs are running at the same time. This is an O/S restriction, not a DataStage restriction.

If the jobs are run consecutively, use "Append" as the rule.

Updating rows in a text file is very difficult, and can really only be done through the ODBC driver for text files (and will be very slow). Better would be to use an intermediate table which you can update, and then dump to a file if required.

Posted: Thu Aug 18, 2005 7:27 am
by vsanghvi
Thanks Manoj. I think you have a good suggestion, but I am not very clear on the implementation. If I understand correctly, I add a column to the first job's output hash file, which will look like this:

ROWNO, ITEM1, ITEM2

Then do a lookup on the row number and write ITEM3 and ITEM4 to the same hash file, or to a different file?

Thanks
manojmathai wrote:I hope the number of rows generated by the first job and the second job is the same.

In the first job, add an extra field (named INDEX, say) which holds the row number. Write the output into a hash file with this field as the key.

In the second job, do a lookup on this row number and write to the output file. The lookup adds columns 1 and 2 alongside columns 3 and 4.

Now you have all four columns in one file.

Posted: Fri Aug 19, 2005 3:13 am
by manojmathai
Hi

For the second job, the input will be the file that contains columns ITEM3 and ITEM4. This job will look up the hash file to add the columns ITEM1 and ITEM2 generated in the first job. The output from this transform will be written to a new file that contains four fields: ITEM1, ITEM2, ITEM3 and ITEM4.
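Put together, the whole flow looks roughly like this in Python (enumerate plays the role that an @OUTROWNUM derivation plays in the Transformer, and the file names are made up):

```python
# Rows produced by each job, in matching order.
job1_output = [("ITEM1", "ITEM2")]
job2_output = [("ITEM3", "ITEM4")]

# Job 1 writes the hash file keyed on the row number
# (in DataStage, @OUTROWNUM supplies this number).
hash_file = {i: row for i, row in enumerate(job1_output, start=1)}

# Job 2 numbers its rows the same way, looks up ITEM1/ITEM2,
# and writes all four fields to the final file.
with open("final.txt", "w") as out:
    for i, (item3, item4) in enumerate(job2_output, start=1):
        item1, item2 = hash_file[i]
        out.write(f"{item1} {item2} {item3} {item4}\n")
```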

Hope that is clear now.

Thanks
Manoj

Posted: Fri Aug 19, 2005 10:41 am
by kumar_s
Hi,
Yes, maintain an additional column with a unique number (value), make a join based on that column in the second job, and then drop the column.
Do you think you are violating your business rules?

regards
kumar

Posted: Fri Aug 19, 2005 11:30 am
by vsanghvi
Thanks Manoj and Kumar. I did add a column with a unique row ID (@OUTROWNUM) and it does work. Thanks for your help.

Kumar, I am not sure how I would be violating a business rule by doing this. Can you explain what it could be?

Thanks
V

Posted: Fri Aug 19, 2005 10:51 pm
by kumar_s
If there is no restriction on doing so, go ahead... :wink:

regards
kumar