
Reading input conditionally

Posted: Thu Aug 11, 2011 1:18 pm
by poornimasai
Can DataStage read input from the source conditionally? That is, can it 'decide' whether or not to read a line based on the value of a field in the current line? I know some programming languages have the capability to do this.

Recently, I was told DataStage reads its input completely and that a 'decision' can be made only in subsequent stages. Is this true?

Posted: Thu Aug 11, 2011 2:08 pm
by chulett
Seems to me you'd have to read the record first to get the value in the field to make this decision on. :?

Regardless, to answer your generic question, yes - entire records are read and then decisions on how to parse different layouts can be made in subsequent stages. As an 'additional info' note, the ability to read 'mainframe files with different record layouts' is built into the CFF (Complex Flat File) stage.
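
To make the idea concrete, here is a rough sketch in plain Python (not DataStage) of 'read the whole record first, then decide which layout applies'. The type codes and field positions are invented for illustration:

# Sketch only: every record is read in full, and only then is a layout
# chosen based on a field inside the record (hypothetical 2-byte type code).
def parse(line):
    record_type = line[0:2]
    if record_type == "01":                       # hypothetical header layout
        return {"type": "header", "batch": line[2:10]}
    elif record_type == "02":                     # hypothetical detail layout
        return {"type": "detail", "account": line[2:12], "amount": line[12:21]}
    else:
        return {"type": "unknown", "raw": line}

with open("input.dat") as f:
    for line in f:
        rec = parse(line.rstrip("\n"))
        # route or handle rec per layout here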

Posted: Thu Aug 11, 2011 2:48 pm
by poornimasai
Thank you, Craig! :) I checked the documentation for the mainframe file stages and found they can read multiple 'OCCURS DEPENDING ON' clauses (though our developer insists 8.1 cannot :? ). Now, my mainframe file has different record layouts within the same file, and the header record tells me which layout the following records use. I would have to read in the header record and change the layout accordingly! :roll:

Can I do the decision making if I read the file in sequentially?
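
For what it's worth, the kind of sequential, header-driven decision I mean would look roughly like this in plain Python (not DataStage; the layout codes and field slices are made up):

# Sketch only: remember the layout named by the most recent header record
# and apply it to the detail records that follow it.
current_layout = None

def parse_detail(line, layout):
    if layout == "A":                              # hypothetical layout A
        return {"id": line[1:9], "amount": line[9:18]}
    if layout == "B":                              # hypothetical layout B
        return {"id": line[1:9], "code": line[9:12], "amount": line[12:21]}
    return {"raw": line}

with open("multi_layout.dat") as f:
    for line in f:
        if line[0:1] == "H":                       # header announces the layout
            current_layout = line[1:2]
        else:
            rec = parse_detail(line, current_layout)
            # downstream handling here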

Posted: Thu Aug 11, 2011 5:01 pm
by poornimasai
Oh, I read the documentation again, and it only says a complex flat file can contain multiple 'OCCURS DEPENDING ON' clauses, not that the CFF stage can read them. :roll:

My bad! :)

Posted: Fri Aug 12, 2011 12:17 pm
by FranklinE
Without knowing the details, the following criti... um, commentary should be taken with a grain of salt.

Number one: Using a single dataset to contain variable-length/blocked records with multiple data definitions (schemas, copybooks) is insane. It's lazy design that avoids making the coders do standard work. The sane approach is fixed-width records on a consistent schema (copybook). :(

The only reliable way to read mainframe data into DataStage is to import that consistent schema. If you have multiple record types identified in the first 1 or 2 bytes, you can use a filter or transformer constraint to use or skip them accurately.
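
As a quick illustration of the filter/constraint idea (plain Python, not DataStage; the type codes kept are hypothetical):

# Sketch only: keep just the record types you want, based on the first 2 bytes.
WANTED = {"10", "20"}                 # hypothetical type codes to keep

with open("mainframe_extract.txt") as src, open("filtered.txt", "w") as out:
    for line in src:
        if line[0:2] in WANTED:       # same test a constraint would apply
            out.write(line)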

Posted: Fri Aug 12, 2011 1:33 pm
by arunkumarmm
If it is a variable-block file in binary format, each record will have its record length at the beginning. You can read that in and route your records... We have jobs like that in DS/390, but it should work the same way in a parallel job.
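
A rough Python sketch of walking such a file by its length prefix, assuming the common 4-byte RDW layout (2-byte big-endian length that includes the RDW itself, plus 2 reserved bytes) - adjust if your file differs:

import struct

# Sketch only: yield each record's payload by following the length prefixes.
def records(path):
    with open(path, "rb") as f:
        while True:
            rdw = f.read(4)
            if len(rdw) < 4:
                break
            (length,) = struct.unpack(">H", rdw[:2])
            if length <= 4:
                break
            yield f.read(length - 4)          # payload after the 4-byte RDW

for rec in records("vb_file.bin"):
    rec_type = rec[0:2]                       # route on a type field, as above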