CFF stage, reading single file with multiple redefines

dsguyok
Premium Member
Posts: 24
Joined: Thu Jan 21, 2010 10:22 pm

CFF stage, reading single file with multiple redefines

Post by dsguyok »

Hi, sorry if this has been covered - I had a good look but didn't see anything.

I am reading a fixed-width EBCDIC file into DataStage using the CFF (Complex Flat File) stage.

The file has a field indicating the record type (header, detail or trailer). In the copybook, the first 500 bytes of each detail record are defined identically, while the last 200 bytes are redefined in 7 different ways, so there are 7 possible layouts for the last 200 bytes of each detail record.

The problem: across the 7 layouts, one layout may have a packed decimal (COMP-3) field at byte 300 while another layout has a PIC X field at that same position.

I had hoped to separate the records based on the field that indicates which of the 7 layouts to use - read the one file into CFF and send out 7 different output streams. But the conflict described above seems to make the job fail. I suspect this is the cause because when I turn on zero defaults for decimals in CFF the job runs successfully, but I lose the data in the packed decimal fields.
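My guess at why: a single CFF table definition has to commit to one interpretation of those bytes, so for the layouts where they are really character data the bytes are not valid packed decimal. A quick illustration in Python (EBCDIC code page cp037 assumed, sample value made up):

# EBCDIC text bytes interpreted as packed decimal are simply invalid:
# a COMP-3 field needs digit nibbles 0-9 and a sign nibble (C, D or F) at the end.
text_bytes = "ABCD".encode("cp037")              # b'\xc1\xc2\xc3\xc4' in EBCDIC
digit_nibbles = [b >> 4 for b in text_bytes]     # all 0xC - not valid decimal digits
sign_nibble = text_bytes[-1] & 0x0F              # 0x4 - not a valid COMP-3 sign
print([hex(n) for n in digit_nibbles], hex(sign_nibble))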

Is there something I'm missing in the CFF stage that could address this? What would be the best practice here - send the last 200 bytes downstream as raw data and interpret them in a Transformer or Column Import stage? Thanks.
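To make that last option concrete, here is a rough sketch of the "interpret the tail downstream" idea in plain Python, completely outside DataStage. The layout codes, offsets and field names are all made up - the real ones would come from the copybook - but the shape is: carry the last 200 bytes as one raw column, then unpack per layout.

import codecs

COMMON_LEN = 500      # bytes defined identically for every detail record
TAIL_LEN = 200        # bytes covered by the 7 REDEFINES layouts

def ebcdic_text(raw: bytes) -> str:
    """Decode an EBCDIC PIC X field (code page cp037 assumed)."""
    return codecs.decode(raw, "cp037").rstrip()

def unpack_comp3(raw: bytes, scale: int = 0) -> float:
    """Decode a COMP-3 (packed decimal) field: two digits per byte, sign in the last nibble."""
    digits = "".join(f"{b >> 4}{b & 0x0F}" for b in raw[:-1])
    digits += str(raw[-1] >> 4)
    sign = -1 if (raw[-1] & 0x0F) in (0x0B, 0x0D) else 1
    return sign * int(digits) / (10 ** scale)

def parse_detail(record: bytes) -> dict:
    common = record[:COMMON_LEN]
    tail = record[COMMON_LEN:COMMON_LEN + TAIL_LEN]   # kept as raw bytes until here
    layout = ebcdic_text(common[10:12])               # hypothetical layout indicator

    if layout == "01":      # hypothetical layout with a packed amount in the tail
        return {"layout": layout, "amount": unpack_comp3(tail[100:104], scale=2)}
    if layout == "02":      # hypothetical layout with character data at the same offset
        return {"layout": layout, "comment": ebcdic_text(tail[100:104])}
    # ...one branch per remaining layout...
    return {"layout": layout, "tail": tail}

In DataStage terms I imagine the raw-tail column would be split on the layout indicator and each stream parsed with its own column definitions, but I would like to hear what others actually do.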