How to add data to the same seq file from more than one job

Post questions here relative to DataStage Server Edition for such areas as Server job design, DS Basic, Routines, Job Sequences, etc.

Moderators: chulett, rschirm, roy

vsanghvi
Participant
Posts: 22
Joined: Wed Mar 09, 2005 4:47 pm

How to add data to the same seq file from more than one job

Post by vsanghvi »

Hi,

I have two jobs and I want to write to the same seq file after running both jobs. I know I can append at the bottom of the file, but I want to add to the next columns instead of new rows. For example, I am writing ITEM1 and ITEM2 from job 1 and ITEM3 and ITEM4 from job 2. Now I want to see a result like the one below.

ITEM1 ITEM2 ITEM3 ITEM4

Any idea ?

I tried defining four fields in the file properties from the Manager and importing two of them into each job, thinking it would write accordingly, but that did not help. Any help is appreciated.

Thanks
V
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia

Post by ray.wurlod »

Not possible if the jobs are running at the same time. This is an O/S restriction, not a DataStage restriction.

If the jobs are run consecutively, use "Append" as the rule.

Updating rows in a text file is very difficult, and can really only be done through the ODBC driver for text files (and will be very slow). Better would be to use an intermediate table which you can update, and then dump to a file if required.
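
To illustrate outside DataStage why that is, here is a very rough Python sketch (the file name items.txt and the extra column value are made up): appending new records is cheap, but widening existing records means reading, changing and rewriting the whole file.

Code:

# Appending new rows to a flat file is cheap: the file just grows at the end.
with open("items.txt", "a") as f:
    f.write("ITEM3 ITEM4\n")

# "Updating" existing rows (for example, adding columns to them) is not:
# a text file has no fixed record positions, so the whole file has to be
# read, changed in memory, and rewritten from scratch.
with open("items.txt") as f:
    rows = [line.rstrip("\n") for line in f]

widened = [row + " EXTRA_COL" for row in rows]   # widen every record

with open("items.txt", "w") as f:
    f.write("\n".join(widened) + "\n")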
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
manojmathai
Participant
Posts: 23
Joined: Mon Jul 04, 2005 6:25 am

Post by manojmathai »

I hope the number of rows generated by the first job and the second job is the same.

In the first job, add an extra field (an index) that holds the row number. Write the output into a hashed file with this field as the key.

In the second job, do a lookup on this row number and write to the output file: the lookup adds columns 1 and 2 alongside columns 3 and 4.

Now you have all four columns in one file.
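
Outside DataStage, the idea of the first job looks roughly like this in Python (the values are only illustrative, and a plain dict stands in for the hashed file keyed on the row number):

Code:

# Job 1: key each output row by its row number, the way a hashed file
# keyed on an extra index column would behave.
job1_rows = [("ITEM1", "ITEM2")]        # whatever job 1 produces

hashed_file = {}                        # stands in for the hashed file
for row_number, (col1, col2) in enumerate(job1_rows, start=1):
    hashed_file[row_number] = (col1, col2)

print(hashed_file)                      # {1: ('ITEM1', 'ITEM2')}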
vsanghvi
Participant
Posts: 22
Joined: Wed Mar 09, 2005 4:47 pm

Post by vsanghvi »

Thanks Ray. The jobs are not running at the same time; they will run one after another. When you mentioned an intermediate table, did you mean a hashed file?
ray.wurlod wrote:Not possible if the jobs are running at the same time. This is an O/S restriction, not a DataStage restriction.

If the jobs are run consecutively, use "Append" as the rule.

Updating rows in a text file is very difficult, and can really only be done through the ODBC driver for text files (and will be very slow). Better would be to use an intermediate table which you can update, and then dump to a file if required.
vsanghvi
Participant
Posts: 22
Joined: Wed Mar 09, 2005 4:47 pm

Post by vsanghvi »

Thanks Manoj. I think you have a good suggestion, but I am not very clear on the implementation. If I understand it correctly, I add a column to the first job's output hashed file, which will look like this:

ROWNO, ITEM1, ITEM2

Then do a lookup on the row number and write ITEM3 and ITEM4 to the same hashed file? Or a different file?

Thanks
manojmathai wrote:I hope the number of rows generated by the first job and the second job is the same.

In the first job, add an extra field (an index) that holds the row number. Write the output into a hashed file with this field as the key.

In the second job, do a lookup on this row number and write to the output file: the lookup adds columns 1 and 2 alongside columns 3 and 4.

Now you have all four columns in one file.
manojmathai
Participant
Posts: 23
Joined: Mon Jul 04, 2005 6:25 am

Post by manojmathai »

Hi

For the second job, the input will be the file that contains columns ITEM3 and ITEM4. This job will look up the hashed file to add the columns ITEM1 and ITEM2 that were generated in the first job. The output from this transform will be written to a new file that contains four fields: ITEM1, ITEM2, ITEM3 and ITEM4.
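
In the same rough Python terms as before (the file name combined.txt and the values are made up), the second job amounts to a lookup on the row number followed by writing all four columns:

Code:

# Job 2: read its own rows, look up job 1's columns by row number,
# and write all four columns to a new combined file.
hashed_file = {1: ("ITEM1", "ITEM2")}   # job 1's output, keyed by row number
job2_rows = [("ITEM3", "ITEM4")]        # whatever job 2 produces

with open("combined.txt", "w") as out:
    for row_number, (col3, col4) in enumerate(job2_rows, start=1):
        col1, col2 = hashed_file[row_number]      # the lookup on the key
        out.write(f"{col1} {col2} {col3} {col4}\n")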

Hope that is clear now.

Thanks
Manoj
kumar_s
Charter Member
Posts: 5245
Joined: Thu Jun 16, 2005 11:00 pm

Post by kumar_s »

Hi,
Yes, maintain an additional column with a unique number (value), make a join based on that column in the second job, and then drop the column.
Do you think you would be violating your business rules by doing this?

regards
kumar
vsanghvi
Participant
Posts: 22
Joined: Wed Mar 09, 2005 4:47 pm

Post by vsanghvi »

Thanks Manoj and Kumar. I did add a column with a unique row ID (@OUTROWNUM) and it does work. Thanks for your help.

Kumar, I am not sure how I would be violating a business rule by doing this. Can you explain what it could be?

Thanks
V
kumar_s
Charter Member
Posts: 5245
Joined: Thu Jun 16, 2005 11:00 pm

Post by kumar_s »

If there is no restriction on doing so, go ahead. :wink:

regards
kumar