Being a newbie (and not knowing about the Pivot stage, which was not installed at the time), I used a Transformer to create the two journal transactions, wrote them to files, merged the files with a Link Collector into a third file, and then proceeded with further processing.
i.e.:
----> SEQ ----
...TX ---+ +---> LC ---> SEQ ---> TX ---> ....
----> SEQ ----
Reading the doco I found the Pivot stage (actually, I read about it here, but it sounds better when I say I discovered it myself). I installed it and tried the job again, but it's SLOW!
Without the Pivot, the job processes at 1500 rows/sec (but writes only one output row per input row). With the Pivot, I was prepared to accept half that pace, since I'm doubling the transactions, but it dropped to 150 rows/sec.
After much research, I put derivations on every column to stop them being used as grouping keys, and wrapped the Pivot stage in InterProcess stages to mitigate the row-buffering problem with grouping stages. That brought it up to 220 rows/sec. Still too shabby.
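For anyone following along who hasn't used the Pivot stage: it does a horizontal pivot, turning groups of columns on one input row into multiple output rows, repeating the non-pivoted columns each time. A rough Python sketch of the same transformation (the column names here are made up for illustration, not from my actual job):

```python
def pivot(rows, common_cols, pivot_groups):
    """Horizontal pivot: emit one output row per pivot group,
    repeating the common (non-pivoted) columns on each one."""
    for row in rows:
        for group in pivot_groups:
            out = {c: row[c] for c in common_cols}
            # map each pivoted source column to its target column
            out.update({target: row[src] for src, target in group.items()})
            yield out

# one input row carrying both sides of a journal transaction
rows = [{"txn_id": 1, "debit_acct": "A", "credit_acct": "B", "amount": 100}]

journal = list(pivot(
    rows,
    common_cols=["txn_id", "amount"],
    pivot_groups=[{"debit_acct": "account"}, {"credit_acct": "account"}],
))
# two journal rows out: one for the debit side, one for the credit side
```

Functionally this is exactly what my old Transformer-plus-Collector design produced; the question is only why the Pivot stage does it an order of magnitude slower.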
Q1. Is there anything else I can do to make Pivot faster?
Q2. Is Pivot just rubbish? Should I go back to my old method with the Collector?
Q3. Is there another method in Server 7.5.1a I could use? (Without breaking the job up into multiple jobs that communicate through named pipes; that's too much trouble.)
Thanks in advance for your advice.