
CFF drawbacks in Parallel Jobs

Posted: Fri Oct 06, 2006 12:44 pm
by gsherry1
Hello Forum,

It seems to me that the CFF stage is somewhat less useful in parallel jobs than in server jobs. The majority of files I receive from MVS have multiple record types and redefined fields. In server edition I would handle each record type on a separate output link of the CFF. It appears that reading multiple record layouts in the parallel CFF is not really feasible, particularly when a field's datatype differs between record types (PIC 9 vs. COMP-3).
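
For illustration (the field names and lengths here are invented), the same bytes would need two incompatible schema interpretations depending on the record type, roughly:

// type '100' records: the amount is display digits (PIC 9 -> zoned)
record ( rec_id:decimal[3]; amount:decimal[7,2] {zoned}; )

// type '200' records: the same region holds COMP-3 data (packed)
record ( rec_id:decimal[3]; amount:decimal[7,2] {packed}; )

A single schema on the stage can only apply one of these interpretations to every row it reads.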

The Sequential File stage seems to have better EBCDIC and packed-decimal support, but that doesn't help much either when there is more than one record type. It seems that in order to parse such files in parallel jobs I am forced to first write a job that parses the source file with a generic schema like this:

record (
    rec_id:decimal[3];
    remainder:raw[500];
)

Then, after splitting by rec_id, land each stream to its own file and reread it with a schema specific to that record type. Is this what others are doing?
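
To make the second pass concrete, here is a minimal sketch of the kind of type-specific schema the reread job might use (the field names, lengths, and fixed-width assumption are invented for the example; {zoned}, {packed} and {ebcdic} are the usual import properties for display, COMP-3 and EBCDIC character fields):

// hypothetical layout for the type-100 records only, fixed-width binary import
record {binary, delim=none} (
    rec_id:decimal[3] {zoned};
    cust_name:string[30] {ebcdic};
    balance:decimal[9,2] {packed};
)

In a parallel job this can be supplied through the Sequential File stage's schema file option rather than defining the columns by hand.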

Given that the Sequential File stage has both EBCDIC and packed-decimal features, the main remaining advantage of the CFF in parallel jobs seems to be its flattening of arrays and OCCURS clauses. If those features are not needed, is the Sequential File stage the recommended choice? Is it faster than the CFF?
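
For what it's worth, an OCCURS clause can also be described directly in schema format as a vector of subrecords rather than being flattened (again a made-up example):

// COBOL: 05 LINE-ITEM OCCURS 5 TIMES. -> a 5-element subrecord vector
record (
    rec_id:decimal[3] {zoned};
    line_item[5]:subrec (
        qty:int32;
        price:decimal[7,2] {packed};
    );
)

whereas the CFF flattens each occurrence into its own column.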

Your input is appreciated.


Greg

Re: CFF drawbacks in Parallel Jobs

Posted: Fri Oct 06, 2006 2:02 pm
by ukyrvd
gsherry1 wrote: Then, after splitting by rec_id, land each stream to its own file and reread it with a schema specific to that record type. Is this what others are doing?
Yes, we have a similar requirement and we followed the same technique.

Posted: Fri Oct 06, 2006 2:46 pm
by ray.wurlod
It's probably worth either asking the vendor whether it's changed in the Hawk release or putting in an enhancement request. There's a forum here for the latter, or you can do it through your support provider.