CFF vs Seq File stage
Moderators: chulett, rschirm, roy
Please let me know when the use of CFF is absolutely required and can not be done using Seq File stage.
About 90% of our input data from 100+ sources is from the mainframe and we do not use CFF anywhere, only the Seq File stage. There are some circumstances where CFF would have been handy, but its functionality in PX is not complete compared to Server - mainly in the handling of REDEFINES or multiple record types in a single file.
Brad
- If the data has occurrences (repeating groups), we use vectors in the import schema.
- We remove (or ignore) column-level REDEFINES, or have the source remove them.
- If the REDEFINES are used to represent multiple record types, then we import the file by only defining the first X bytes - enough to provide the natural keys and record types. The rest of the record is left as one big raw field. Then we split the output with a Filter based on record type, and the output of each of those streams uses a Column Import to apply a record-type-specific layout to the raw field.
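Outside DataStage, the split-then-import approach in the last point can be sketched in plain Python. The record types, layouts, offsets, and field names below are all hypothetical, just to show the mechanics:

```python
# Sketch of "define the first X bytes, keep the rest raw, then apply a
# record-type-specific layout". Layouts and offsets are made up.

# Hypothetical per-record-type layouts: (field name, offset, length).
LAYOUTS = {
    b"A": [("name", 3, 6), ("amount", 9, 4)],
    b"B": [("company", 3, 10)],
}

def import_columns(record: bytes):
    """Read the record type from the fixed key area, then apply the
    matching column layout to the remaining raw bytes."""
    rtype = record[:1]                 # first byte = record type
    key = record[:3]                   # first 3 bytes = natural key area
    fields = {"key": key.decode("ascii")}
    for name, offset, length in LAYOUTS[rtype]:
        fields[name] = record[offset:offset + length].decode("ascii").strip()
    return rtype, fields
```

Dispatching on `rtype` plays the role of the Filter stage, and applying `LAYOUTS[rtype]` plays the role of the per-stream Column Import.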
Brad
It is not that I am addicted to coffee, it's just that I need it to survive.
Hi Brad
We are doing exactly what your last point describes (defining enough bytes for the natural keys and record types), but we are facing an issue due to the presence of packed decimal values in the raw field of some record types.
Could you please suggest whether there is any property in the Column Import stage that will unpack the packed decimal values?
You unpack a decimal in the Column Import the same as you would with an import. Your record should already be specified as binary EBCDIC. Set your import field to Decimal with the appropriate length and scale, and then set Packed to Yes (Packed is an option for the Decimal data type).
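For reference, here is roughly what that Packed = Yes option is doing under the hood. COMP-3 packed decimal stores two BCD digits per byte, with the low nibble of the last byte holding the sign. A minimal sketch in Python (not DataStage code, and assuming the common sign nibbles 0xC/0xF positive, 0xD negative):

```python
def unpack_comp3(data: bytes, scale: int = 0):
    """Unpack an IBM COMP-3 (packed decimal) field: two BCD digits per
    byte, sign in the low nibble of the final byte."""
    nibbles = []
    for byte in data:
        nibbles.append(byte >> 4)
        nibbles.append(byte & 0x0F)
    sign_nibble = nibbles.pop()        # 0xD = negative; 0xC/0xF = positive
    sign = -1 if sign_nibble == 0x0D else 1
    value = 0
    for digit in nibbles:
        value = value * 10 + digit
    return sign * value / 10 ** scale if scale else sign * value
```

So a 3-byte field `0x12 0x34 0x5C` with scale 2 unpacks to 123.45, which is what the Decimal/Packed setting produces for you in the stage.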
Brad.