Complex Flat File - Multiple Record Types

Post questions here relative to DataStage Enterprise/PX Edition for such areas as Parallel job design, Parallel datasets, BuildOps, Wrappers, etc.

Moderators: chulett, rschirm, roy

FranklinE
Premium Member
Posts: 739
Joined: Tue Nov 25, 2008 2:19 pm
Location: Malvern, PA

Post by FranklinE »

I've been playing a bit with CFF (we don't support it here), and the mainframe file attributes you provided are a critical piece of information.

First, whoever creates the file to begin with is "breaking" a basic standard in COBOL: with multiple record types in one file, always, always make every record type the same length when the file is being written. This is just common sense. The copybook (COBOL FD in Import, usually with the file extension ".cfd") then has a series of redefines on the basic record length, with a FILLER field as the last entry of each redefine to pad that particular record type out to the correct length.
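
To make that concrete, here is a minimal copybook sketch of that convention. The record and field names are invented, and the layout is illustrative only; it is padded to the 110-byte length used in the DCB example further down.

Code:

      * Hypothetical layout: every record type is padded to the same
      * 110-byte length.
       01  INPUT-RECORD.
      *    Header: 1 + 8 + 101 = 110 bytes.
           05  HEADER-REC.
               10  HDR-RECORD-TYPE    PIC X(01).
               10  HDR-FILE-DATE      PIC X(08).
               10  FILLER             PIC X(101).
      *    Detail: 1 + 10 + 6 (packed) + 93 = 110 bytes.
           05  DETAIL-REC REDEFINES HEADER-REC.
               10  DTL-RECORD-TYPE    PIC X(01).
               10  DTL-ACCOUNT-ID     PIC X(10).
               10  DTL-AMOUNT         PIC S9(09)V99 COMP-3.
               10  FILLER             PIC X(93).
      *    Trailer: 1 + 9 + 100 = 110 bytes.
           05  TRAILER-REC REDEFINES HEADER-REC.
               10  TRL-RECORD-TYPE    PIC X(01).
               10  TRL-RECORD-COUNT   PIC 9(09).
               10  FILLER             PIC X(100).

Every record type written through a layout like this comes out the same length, which is the whole point of the convention.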

FYI: the data control block is not part of the physical file; it is contained in the file catalog. It is set and referenced in the job control language (JCL) of the mainframe job that defines the file, not in the COBOL programs, the copybook or other record definitions.

"Implicit" means that DataStage reads the file byte by byte, filling in each column of your table definition until it gets to the last byte of the last column; it then assumes the record is complete and begins filling the table definition again at position 1 with the next byte.

Sample data control block in JCL:

Code:

DCB=(RECFM=FB,LRECL=110)

This (with other JCL code) defines a new file as fixed-block with a fixed record length of 110 bytes. If you were to display the catalog information for the file after it was created, you would see something like this:

Code:

Record format . . . : FB
Record length . . . : 110

If your header is not the same length as your details and trailer, it is not a standard COBOL file.

Given that, one solution is to import three table definitions, one each for the header (78 bytes), details (81) and trailer (81), and load them into your CFF stage separately, one per output link. I would guess that this is what is meant by the records becoming fixed-length. After you get that done and have compiled the job, go to Job Properties, Generated OSH tab, and find the record schemas. They should, if I understand Aruna correctly, show that your schemas all have the record type of implicit. If not, ask Aruna; I am already a bit beyond my understanding of CFF.
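
For illustration only, the three definitions might come from .cfd layouts along these lines. The field names and sizes are invented placeholders that simply add up to 78, 81 and 81 bytes:

Code:

      * Hypothetical 78-byte header: 1 + 8 + 69 = 78.
       01  HEADER-REC.
           05  HDR-RECORD-TYPE        PIC X(01).
           05  HDR-FILE-DATE          PIC X(08).
           05  HDR-DESCRIPTION        PIC X(69).

      * Hypothetical 81-byte detail: 1 + 10 + 6 (packed) + 64 = 81.
       01  DETAIL-REC.
           05  DTL-RECORD-TYPE        PIC X(01).
           05  DTL-ACCOUNT-ID         PIC X(10).
           05  DTL-AMOUNT             PIC S9(09)V99 COMP-3.
           05  DTL-NARRATIVE          PIC X(64).

      * Hypothetical 81-byte trailer: 1 + 9 + 71 = 81.
       01  TRAILER-REC.
           05  TRL-RECORD-TYPE        PIC X(01).
           05  TRL-RECORD-COUNT       PIC 9(09).
           05  TRL-FILLER-AREA        PIC X(71).

Each would be imported as its own table definition and loaded on its own output link, as described above.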
Franklin Evans
"Shared pain is lessened, shared joy increased. Thus do we refute entropy." -- Spider Robinson

Using mainframe data FAQ: viewtopic.php?t=143596 Using CFF FAQ: viewtopic.php?t=157872
srds2
Premium Member
Posts: 66
Joined: Tue Nov 29, 2011 6:56 pm

Post by srds2 »

Thanks a lot, Franklin, for taking the time to provide me with very helpful information.

Yes, I am able to see the Record Type as Implicit in the layout if I set the record type as Fixed/fixed block under Properties.

Can you please help me understand by sharing your thoughts on the questions below?

If my record lengths are Header (78), Detail (81) and Trailer (81), and the input file is a variable-block file on the mainframe, can I still set the Record Type to Fixed Block? If so, should I create a 3-byte filler in the Header copybook to make it the same length as the Detail and Trailer?

Or should I set the Record Type to Variable Block, since it is a variable-block file on the mainframe? If so, do I need to create a filler to make all record types equal in length? (I have tried Variable Block with a filler and other combinations, but it didn't work, so I am wondering about the right set of options for this kind of file.)
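
For reference, the 3-byte filler I am asking about would look something like this in the header layout (the field names here are made up):

Code:

      * Hypothetical 78-byte header padded to 81 bytes.
       01  HEADER-REC.
           05  HDR-RECORD-TYPE        PIC X(01).
           05  HDR-FILE-DATE          PIC X(08).
           05  HDR-DESCRIPTION        PIC X(69).
           05  FILLER                 PIC X(03).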

Thank you very much Franklin for your support and information.
FranklinE
Premium Member
Posts: 739
Joined: Tue Nov 25, 2008 2:19 pm
Location: Malvern, PA

Post by FranklinE »

I'm glad to help. I've received quite a bit of help here myself, so this is a "share the wealth" place (well, except for paying for premium access, but I agree with that part).

I have the same question you do: will CFF handle records of differing fixed lengths in the same file? You seem to have proven one side of it already, showing that forcing a common record length in the table definitions doesn't work.

Based on my current understanding of how DS interacts with mainframe operating systems, it should see the header, fill in the 78-byte record, and look for the next record starting at the next byte. Seeing that it's a detail, it will then fill in the 81-byte record, and so on. Hopefully you'll be able to test that and see. For me, the key is seeing implicit as the record type for every table definition. I suggest staying with fixed or fixed-block until that proves to be wrong.
Franklin Evans
"Shared pain is lessened, shared joy increased. Thus do we refute entropy." -- Spider Robinson

Using mainframe data FAQ: viewtopic.php?t=143596 Using CFF FAQ: viewtopic.php?t=157872
Aruna Gutti
Premium Member
Posts: 145
Joined: Fri Sep 21, 2007 9:35 am
Location: Boston

Post by Aruna Gutti »

Your definition should match your input file. It is not about whether you can add filler to make the record lengths the same, but about what exactly is being sent to you from the mainframe for each record type.

You can ask the mainframe group to send you a screenshot of each record type, or they can browse the file and let you know the record lengths.

Since your job with a single record definition is working fine, the definition you used in that debug job is correct. So please try using the same record definitions in the CFF stage with the multiple records option. If your record lengths are Header (78), Detail (81) and Trailer (81), then I suggest the variable record option.

Aruna.
srds2
Premium Member
Posts: 66
Joined: Tue Nov 29, 2011 6:56 pm

Post by srds2 »

Thanks, Franklin and Aruna, for your responses. I will try all the possibilities and come back with the results.

Thank You!