Both jobs have the same general design: XML input, transformers to create mainframe-format rows including headers and trailers, a final funnel and FTP to mainframe.
The input formats differ, but follow the same pattern: single tags carrying file-level information (timestamp, record count, etc.), plus repeating transaction tag groups whose data represent individual items and become the detail records on output. Repeating tags are reliably unique at both levels (file and transaction).
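For illustration, here is a minimal sketch of what I mean by that pattern; the tag names here are invented, not the actual layouts:

```xml
<batch>
  <timestamp>2009-06-01T00:00:00</timestamp>
  <recordCount>2</recordCount>
  <transaction>
    <id>1</id>
    <amount>10.00</amount>
  </transaction>
  <transaction>
    <id>2</id>
    <amount>20.00</amount>
  </transaction>
</batch>
```

The file-level tags feed the header/trailer rows, and each repeating transaction group becomes one detail row.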
One job continues to run very fast, with processing time growing by only seconds. The other jumped from seconds to minutes. The older data volume was a few dozen transactions (up to 200 or so); the recent increase was ten-fold, with the latest record count just under 2,600.
I just can't find a difference between the still-fast job and the now-slow job. Ironically, the still-fast job has more branching (two output files, each with a header and trailer) than the now-slow job (one output, header only). They process essentially the same data (a before-and-after sort of thing), though they do have different XML layouts. I don't mind saying I'm very frustrated that one job keeps processing as before while the other has slowed so badly.
I've tried going to a single node and forcing every link to run sequentially, and I've spent a few hours with internal support; none of us have found the cause yet. At this point the most useful help would be suggestions on where to look for possible causes. I'm experienced enough to run with them.
Basic design:
Code: Select all
External source (filename) --> XMLInput stage (split outputs based on repeating tags) --> transformers to Cobol layouts (some editing for padding and length) --> Funnel (link ordering for header-details-trailer) --> FTP Enterprise
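To make the flow concrete, here is a plain-Python sketch of the same logic: parse the XML, transform each transaction into a fixed-width record, then funnel header, details, and trailer in order. The tag names, the 80-byte record width, and the field layout are all assumptions for illustration, not the actual job's COBOL layouts.

```python
import xml.etree.ElementTree as ET

# Invented sample input matching the pattern described above.
SAMPLE = """<batch>
  <timestamp>2009-06-01T00:00:00</timestamp>
  <recordCount>2</recordCount>
  <transaction><id>1</id><amount>10.00</amount></transaction>
  <transaction><id>2</id><amount>20.00</amount></transaction>
</batch>"""

RECORD_WIDTH = 80  # assumed fixed-length mainframe record size

def to_detail_row(txn):
    # Like the transformer stages: pad/right-justify fields, then
    # pad or truncate the whole row to the fixed record length.
    row = f"{txn.findtext('id'):>10}{txn.findtext('amount'):>15}"
    return row.ljust(RECORD_WIDTH)[:RECORD_WIDTH]

root = ET.fromstring(SAMPLE)
header = f"H{root.findtext('timestamp')}".ljust(RECORD_WIDTH)
details = [to_detail_row(t) for t in root.iter('transaction')]
trailer = f"T{root.findtext('recordCount'):>9}".ljust(RECORD_WIDTH)

# The funnel with link ordering: header first, details, trailer last.
output = [header, *details, trailer]
```

In the real job the link-ordered Funnel guarantees the header-details-trailer sequence before the FTP Enterprise stage writes to the mainframe.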