Good morning everybody,
I need to have multiple links writing to the same target; the target could be a sequential file, a fileset, or a dataset. Is there any way I can do it without using a Funnel stage?
multiple links to the same file in one job
If your intention is a UNION ALL-type result of the links, the Funnel is the stage to use. If you're joining, consider the Join, Merge, or Lookup stages.
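To make the distinction concrete, here is a minimal sketch in plain Python (not DataStage itself; the link and reference data are made up) of the difference between funneling links and joining them:

```python
# Two hypothetical input links, each a list of rows.
link1 = [{"id": 1, "amt": 10}, {"id": 2, "amt": 20}]
link2 = [{"id": 3, "amt": 30}]

# Funnel / UNION ALL: rows from every link are simply appended into
# one stream -- no key matching, duplicates and all.
funneled = link1 + link2

# Join-style combination: rows are paired on a key ("id" here)
# against a reference input instead of being appended.
ref = {3: "widget"}
joined = [dict(r, name=ref[r["id"]]) for r in link2 if r["id"] in ref]

print(len(funneled))  # 3 rows: every input row preserved
```

If the result you want keeps every input row regardless of keys, that is the Funnel; if rows must match on a key, it is one of the join-family stages.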
Kenneth Bland
Rank: Sempai
Belt: First degree black
Fight name: Captain Hook
Signature knockout: right upper cut followed by left hook
Signature submission: Crucifix combined with leg triangle
Just a quick caution. We have run into performance issues with the Funnel stage when dealing with large volumes.
Assume we have a single job that has multiple inputs into the funnel, then after the funnel it does some other work, even if it is just writing to a final dataset or export file. If the volume is too high, the job slows WAY down. If we just break up the program to write to multiple datasets in one job, then have a new job funnel the datasets together, the overall runtime drops significantly.
Not entirely sure why this is happening, and unfortunately we do not know at what point (volume wise, that is) the slowdown begins. I was asked recently to put together some test cases to pass along to IBM so they can fix the issue, but have not yet had time to do so.
Please don't interpret this as a reason not to use the Funnel. Like kcbland recommended, if you need the equivalent of a Union all, the funnel is the right way to go. Just know that if your volumes are high and your performance is slow, you may need to break up your job.
Brad.
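Brad's workaround can be sketched in plain Python file terms (the names like `part_0.txt` are made up, and ordinary files stand in for datasets): the first pass lands each branch in its own intermediate file, and a second pass concatenates them into the final target.

```python
import os
import tempfile

tmp = tempfile.mkdtemp()
branches = [["a", "b"], ["c"], ["d", "e"]]

# "Job 1": each branch writes its own intermediate file instead of
# funneling inside the same job.
parts = []
for i, rows in enumerate(branches):
    path = os.path.join(tmp, f"part_{i}.txt")
    with open(path, "w") as f:
        f.write("\n".join(rows) + "\n")
    parts.append(path)

# "Job 2": funnel the intermediate files together into one target.
target = os.path.join(tmp, "final.txt")
with open(target, "w") as out:
    for path in parts:
        with open(path) as f:
            out.write(f.read())

with open(target) as f:
    print(f.read().split())  # ['a', 'b', 'c', 'd', 'e']
```

The trade-off is extra disk I/O for the intermediate files in exchange for keeping each job's pipeline simple, which is what Brad reports paying off at high volumes.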
bcarlson wrote: you may need to break up your job.
The operating system will not permit multiple writers to a Sequential File.
Therefore the parallel Sequential File stage mandatorily executes in sequential mode and imposes a collector on its input link.
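The collector idea can be illustrated with a plain-Python sketch (threads and a queue standing in for partitions and the engine's collector; this is not the actual engine mechanism): rows from the parallel partitions are gathered into a single stream so that exactly one writer touches the file.

```python
import io
import queue
import threading

# Hypothetical rows produced by three parallel partitions.
partitions = [["p0-r0", "p0-r1"], ["p1-r0"], ["p2-r0"]]
q = queue.Queue()

def produce(rows):
    # Each "partition" pushes its rows into the shared collector queue.
    for r in rows:
        q.put(r)

threads = [threading.Thread(target=produce, args=(p,)) for p in partitions]
for t in threads:
    t.start()
for t in threads:
    t.join()

# The "collector": a single writer drains the combined stream
# sequentially, so the file only ever has one writer.
out = io.StringIO()
while not q.empty():
    out.write(q.get() + "\n")

print(sorted(out.getvalue().split()))
```

Note that the arrival order across partitions is nondeterministic, which is also why DataStage offers ordered and sort-merge collectors when row order matters.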
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.