Hi, just a question: I'm getting a "linear or cyclic dependency" error message with the job. The job is trying to perform a vertical pivot, with the four hashed files and the main stream coming from the same source.
Any ideas on how to solve/avoid this problem would be greatly appreciated.
Regards
Travis
linear/cyclic depend error on vertical pivot
The error message basically means you have created a 'closed loop' job stream - you need to resolve that before the job will compile or even think about running.
Make sure there is a separate beginning and end to your job, that one doesn't point back to the other.
-craig
"You can never have too many knives" -- Logan Nine Fingers
Sure. Actually, that kind of thing (dependencies between different 'sections' of your job, that is) should be handled automatically.
Best to post an actual image of your job design, Travis, if you don't mind. Check the next-to-last post on the first page of this thread for an easy method to accomplish this.
Take two shakes after that, I would think.
-craig
"You can never have too many knives" -- Logan Nine Fingers
Hi, it appears I have solved my problem. I just saved both the main stream and the other three streams (all of which came from the same source) to hashed files, and then brought them into one Transformer for the mapping/vertical pivot.
So this will suffice for the moment, but is there a way to avoid mapping the main stream into a hashed file to map across, or is this even a problem? (I'm kinda new to DataStage.)
The job is:
a Sequential File stage running into one Transformer, from which rows are written to four hashed files based on a criterion; a vertical pivot is then performed to merge the four rows, after which each result row is written to its destination.
Regards
Travis
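[Editor's note] The job described above is a vertical pivot: rows sharing a key, split across four hashed files, are merged back into one wide row. As a minimal illustrative sketch (plain Python, not DataStage code; the field names key, seq, and value are hypothetical), the merge step looks like this:

```python
def vertical_pivot(rows):
    """Merge rows sharing the same key into one wide row.

    Each input row is a (key, seq, value) tuple; the output maps
    key -> [value1, value2, value3, value4] ordered by seq.
    """
    out = {}
    for key, seq, value in rows:
        # group values by key, remembering their position in the pivot
        out.setdefault(key, {})[seq] = value
    # flatten each group into an ordered list of values
    return {key: [vals[s] for s in sorted(vals)] for key, vals in out.items()}

# four "rows" per key, as if read back from the four hashed files
rows = [
    ("A", 1, 10), ("A", 2, 20), ("A", 3, 30), ("A", 4, 40),
    ("B", 1, 11), ("B", 2, 21), ("B", 3, 31), ("B", 4, 41),
]
print(vertical_pivot(rows))
# {'A': [10, 20, 30, 40], 'B': [11, 21, 31, 41]}
```

In DataStage the Transformer doing the mapping plays the role of this merge, with the hashed files acting as the keyed groups.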
Use a separate Hashed File stage to update the hashed file from the stream. In the first stage, where you are reading from the hashed file, make sure that you set "lock for update", and do not use caching in either Hashed File stage.
Both Hashed File stages refer to the same hashed file.
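[Editor's note] The "no caching" advice matters because a read cache is loaded once, so reads through it never see rows the job writes later in the same run. A minimal sketch of that failure mode (plain Python, not DataStage code; the class names are hypothetical):

```python
class HashedFile:
    """Stand-in for a hashed file: just a key/value store."""
    def __init__(self, data):
        self.data = dict(data)

class CachedReader:
    """Reader that snapshots the file into memory at open time."""
    def __init__(self, hf):
        self.snapshot = dict(hf.data)   # cache loaded once, up front
    def read(self, key):
        return self.snapshot.get(key)

class DirectReader:
    """Reader that goes to the file on every lookup."""
    def __init__(self, hf):
        self.hf = hf
    def read(self, key):
        return self.hf.data.get(key)

hf = HashedFile({"A": 1})
cached, direct = CachedReader(hf), DirectReader(hf)

hf.data["B"] = 2                 # the second stage writes a new row

print(cached.read("B"))          # None  -- the cache is stale
print(direct.read("B"))          # 2     -- an uncached read sees the write
```

With caching off (the DirectReader pattern), the reading stage always sees the rows the writing stage just committed, which is what the lock-for-update read/write pair relies on.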
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.