CFF - EBCDIC Binary data


Post by denzilsyb »

Hi all

We have a download from the mainframe in EBCDIC/binary format. This means we need to know the length of each record to determine where the next record starts - i.e. there is no CR/LF to do that for us.

We are reading each record in as one column (or at least, that's what I want to do), because I only need certain columns starting/ending at certain positions.

The problem is that the record has a maximum possible length of 24032 bytes - it is variable length.

When I add the metadata for the column, what length should I specify? 24032 on CHARACTER swallows all the records when viewing the data, so I get one record in the column instead of the 1000 records (test data) I am supposed to see.

So... should I be using CHARACTER? I think not. How does CFF handle a file with this layout?

If you want to point me in the direction of 'occurs depending on', remember that in the file definition I don't have a maximum occurrence, which means I could end up with (on a bin4 counter) 9999 possible occurs.

I also want to avoid adding CR/LF characters when downloading the file, as they may affect the data when it is displayed/translated.
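
For what it's worth, here is a rough sketch (in Python, just to show the idea) of the positional extraction I have in mind. The field offsets, the bin4 counter position and the cp037 code page are placeholders, not our real layout:

Code:

    import codecs

    CODEC = "cp037"  # assumed EBCDIC code page - the real one may differ

    def extract_fields(record: bytes):
        # hypothetical layout: a character field in bytes 0-9 and a
        # bin4 counter (2-byte big-endian binary) in bytes 10-11
        name = codecs.decode(record[0:10], CODEC).rstrip()
        occurs = int.from_bytes(record[10:12], byteorder="big")
        return name, occurs

    with open("download.bin", "rb") as f:
        buf = f.read(24032)  # the crux: without CR/LF we must know
                             # how long this record really is
        print(extract_fields(buf))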

Post by mhester »

The documentation for the CFF stage clearly states that fixed OCCURS are OK and OCCURS DEPENDING ON is not. In the past, when I have run into this situation, we have requested an extract that limits the number of occurs, along with a new copybook that is suitable for the CFF stage.

The way you are doing this treats the input as nothing more than a one-column sequential file, so you receive none of the benefits the CFF stage offers.

Others might have suggestions on how to accomplish what you want, but I would suggest looking for a different strategy.

Also, is it likely that a record would actually contain 9999 occurrences of this particular piece of data? And if so, are they all important to what you are doing?

Regards,

Post by vmcburney »

The CFF stage has a check box that defines whether the file is in ASCII or EBCDIC format. You will need a copybook that describes the layout of the file in order to import the file definition; you should then be able to choose columns from any part of the file. Have you tried a CFF import?

Post by mhester »

I can't remember exactly what happens, but I believe you cannot even import the metadata when there are "occurs depending on" clauses in the copybook.

Regards,

Post by denzilsyb »

The CFF stage does support OCCURS DEPENDING ON, but it helps if there is a maximum occurs in the file definition.

From the above, one could try to be clever and accept that the columns up to the OCCURS DEPENDING ON are fixed. That means that if the first columns are 53 bytes in length, that would leave

total_length - first_cols_length = remainder_length
24032 - 53 = 23979

And now, if each occurrence is only 12 bytes long, we take 23979 / 12 = 1998.25, which floors to 1998.

HA! Which leaves me with a max occurs of 1998! Imagine the transformer! Imagine the flattened array in the CFF!
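
As a quick sanity check, the same arithmetic in Python (all lengths in bytes, taken from the figures above):

Code:

    total_length = 24032   # maximum record length
    fixed_length = 53      # columns before the OCCURS DEPENDING ON
    occurs_length = 12     # length of one occurrence

    max_occurs = (total_length - fixed_length) // occurs_length
    print(max_occurs)      # 23979 // 12 -> 1998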

I could be coding this thing forever - that's if DataStage Designer can handle the load.

Right now we are working on some code to read blocks of the record. Methinks this is a better approach.
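
The gist of that block-reading code, sketched here in Python rather than DS BASIC (the counter offset is a placeholder - in reality it comes from the copybook):

Code:

    FIXED_LEN = 53   # fixed part of each record, in bytes
    ELEM_LEN = 12    # length of one OCCURS element
    CTR_OFF = 51     # assumed offset of the bin4 occurs counter
                     # (2-byte big-endian binary) in the fixed part

    def read_records(path):
        with open(path, "rb") as f:
            while True:
                fixed = f.read(FIXED_LEN)
                if not fixed:
                    break  # clean end of file
                if len(fixed) < FIXED_LEN:
                    raise ValueError("truncated record at end of file")
                occurs = int.from_bytes(fixed[CTR_OFF:CTR_OFF + 2], "big")
                tail = f.read(occurs * ELEM_LEN)
                yield fixed + tail  # one complete variable-length record

    for rec in read_records("download.bin"):
        pass  # hand each record on for field extraction/translation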