Multiple Links to Same Sequential File

Post questions here relative to DataStage Server Edition for such areas as Server job design, DS Basic, Routines, Job Sequences, etc.

Moderators: chulett, rschirm, roy

butlerhd
Participant
Posts: 7
Joined: Wed Feb 19, 2003 11:55 pm
Location: USA

Multiple Links to Same Sequential File

Post by butlerhd »

I am trying to write to a sequential file with multiple links without success. Using DataStage 6.0 on AIX.
I have a Transformer stage with two output links to the same sequential file. If I have 15 rows on the input link, I would expect 15 rows for each output link. The objective is to get 30 rows in the sequential file.
The actual result is that only 15 rows show up in the sequential file. The log in Director shows that 15 records went in, and 15 records went out of each output link. But it is as though one link is "clobbering" the other link.

Any ideas?
rasi
Participant
Posts: 464
Joined: Fri Oct 25, 2002 1:33 am
Location: Australia, Sydney

Post by rasi »

Hi,

The reason is that the first link's output is overwritten by the second. To solve this, split the job so that each link writes to its own file, then use the Merge stage to join those two files into one, giving you 15 + 15 = 30 records.
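In plain Python terms (an illustrative sketch only, not DataStage BASIC; file names are made up), the two-files-then-merge idea looks like this:

```python
# Sketch: each output link writes its own file, then a merge step
# concatenates the two files into the final target.
rows = [f"row {i}" for i in range(15)]

# Simulate the two output links, each writing to a separate file.
with open("link1.txt", "w") as f1, open("link2.txt", "w") as f2:
    for row in rows:
        f1.write(row + "\n")
        f2.write(row + "\n")

# Merge step: concatenate the two files into one target file.
with open("target.txt", "w") as out:
    for name in ("link1.txt", "link2.txt"):
        with open(name) as part:
            out.write(part.read())

with open("target.txt") as f:
    print(sum(1 for _ in f))  # 30 rows, as expected
```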

Thanks
Rasi
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia

Post by ray.wurlod »

The "error", if such it be, lies in the operating system, which allows only one process at a time to write to a file.
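A simplified illustration of the clobbering, in Python rather than DataStage (the real job opens both links concurrently, but the effect is the same: the second open of the file in write mode truncates what the first wrote):

```python
# Writer 1 (standing in for the first output link) writes 15 rows.
with open("clobber.txt", "w") as link1:
    for i in range(15):
        link1.write(f"link1 row {i}\n")

# Writer 2 re-opens the same file in "w" mode, which truncates it,
# so writer 1's rows are lost -- just like the second link's output
# clobbering the first link's output.
with open("clobber.txt", "w") as link2:
    for i in range(15):
        link2.write(f"link2 row {i}\n")

with open("clobber.txt") as f:
    lines = f.readlines()
print(len(lines))  # 15, not 30
```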


Ray Wurlod
Education and Consulting Services
ABN 57 092 448 518
johnno
Participant
Posts: 50
Joined: Wed Mar 05, 2003 5:33 am

Post by johnno »

I have a similar problem: I have a single input file, and for every row on that input file I need to write 6 rows to an output file. The output rows all have the same total length, but different record layouts.

I have played around with various possibilities, none of which work.

Any assistance would be greatly appreciated. I have sent this off to Ascential Tech Support, who are looking at it for me, and I will post back any responses that I receive - in the meantime I started looking for appropriate forums....
vmcburney
Participant
Posts: 3593
Joined: Thu Jan 23, 2003 5:25 pm
Location: Australia, Melbourne

Post by vmcburney »

You could design a job with one input to the Transformer and 6 outputs to 6 different text files, then run an after-job routine that concatenates the 6 files into one. This will mix up the record order a bit.

Vincent McBurney
Data Integration Services
www.intramatix.com
tomengers
Participant
Posts: 167
Joined: Tue Nov 19, 2002 12:20 pm
Location: Key West

Post by tomengers »

Actually the solution here belongs to ButlerHD: multiple links to the same sequential file will not work for the reasons already expressed -- but it will work if you write these links to a (keyed) hash file.
vmcburney
Participant
Posts: 3593
Joined: Thu Jan 23, 2003 5:25 pm
Location: Australia, Melbourne

Post by vmcburney »

This is an unusual situation, since Johnno is trying to write records with different record layouts to the same file. I don't know why he'd be trying to do this, but if the records are different it makes it hard to push them into a hash file.

Vincent McBurney
Data Integration Services
www.intramatix.com
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia

Post by ray.wurlod »

Difficult, but not impossible! [:)]

If the hashed file is defined as having the key and one field, one can stick anything at all into that field, even if it contains field marks (@FM) so that the hashed file ends up having N fields per row, where N may vary between rows. It's not Codd, folks!
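A rough Python analogy of that key-plus-one-field trick (a dict standing in for the hashed file; the record keys are made up). @FM, the UniVerse field mark, is character 254, and because the whole record is stored as one delimited string, each row can carry a different number of fields:

```python
FM = chr(254)  # the UniVerse/DataStage field mark, @FM

# A dict standing in for a hashed file: key -> single "field" that
# actually holds several fields separated by field marks.
hashed = {
    "REC1": FM.join(["a", "b", "c"]),            # 3 fields
    "REC2": FM.join(["v", "w", "x", "y", "z"]),  # 5 fields
}

# N varies per row -- no fixed structure required.
for key, record in hashed.items():
    fields = record.split(FM)
    print(key, len(fields))
```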


Ray Wurlod
Education and Consulting Services
ABN 57 092 448 518
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

Yah, doesn't seem all that different from programmatically dealing with a flat file with different record lengths.

Ummm... Ray? Codd? [:I] (or not Codd, as it were)

-craig
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia

Post by ray.wurlod »

Codd, patron saint (as it were) of normal forms.

Thou shalt have no repeating groups, no partial dependencies on the key, and no dependencies between attributes, so help you Codd. Also, every row must have identical structure.

Doesn't have to apply to hashed files, which is one of the reasons that hashed files are used to store objects in the DataStage repository. How many different kinds of objects are there in the DS_JOBOBJECTS hashed file, for example? (Answer: lots!!)
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

Oh, that Codd! As in EF Codd and CJ Date, two of the Founding Fathers.

Thought I was missing some sort of Australian fish joke.

[:o)]
johnno
Participant
Posts: 50
Joined: Wed Mar 05, 2003 5:33 am

Post by johnno »

Thank you all for your help, and I'm sure over time I will try out all of the suggestions given.

The Ascential Tech Support offered: "...re-engineer the job, this time making use of the Link Partitioner and Link Collector in order to get the desired results."

But after playing around, the simplest solution I found was to write each required output row to an output column and then use the Pivot stage to flip these around to rows.
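The columns-then-pivot idea can be sketched in plain Python (illustrative only; the Pivot stage does the flip inside DataStage, and the record names here are invented):

```python
# Sketch: for each input row, build the 6 derived records as 6
# columns of one wide row, then "pivot" the columns into rows.
input_rows = [f"in{i}" for i in range(15)]

output_rows = []
for row in input_rows:
    # One wide row: six derived records as six columns.
    wide = [f"{row}-rec{j}" for j in range(1, 7)]
    # Pivot: each column becomes its own output row.
    output_rows.extend(wide)

print(len(output_rows))  # 15 inputs x 6 = 90 output rows
```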

Hope this may help someone out in the future.

Cheers
John