mhester wrote:In 8.1 there is a new environment variable which controls this, and in 8.5 the behaviour is the default. In 8.1, in order to use it you will need to add a copy prior to the write of a dataset and the read of a dataset to allow for a modify adapter (so you cannot optimize out the copy).
Great post, but I have no idea what this means. It sounds good, whatever it is. Basically, either we use APT_OLD_BOUNDED_LENGTH or our scratch and dataset storage requirements become astronomical and prohibitive.
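For anyone else finding this thread: the variable is typically set project-wide in dsenv or added as a job/project environment variable in the Administrator. A sketch of the dsenv approach; the value shown (1) is an assumption on my part, check your version's documentation:

```shell
# In $DSHOME/dsenv (or add as a project/job environment variable in
# the Administrator client). Assumed value: 1 = revert to the old
# bounded-length varchar storage behaviour.
APT_OLD_BOUNDED_LENGTH=1
export APT_OLD_BOUNDED_LENGTH
```

Jobs pick up dsenv changes only after the engine is restarted, so a per-job environment variable is easier to test with.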
IBM's guidance has always been to use bounded character columns unless the column is generally longer than about 100 bytes.
We are unable to make any changes to Table Definitions: changes break the shared-table status with Metadata Workbench, and re-importing via the Import Connector Wizard wipes out any edits made to the Table Definition. So even if I wanted to follow this advice on a column-by-column basis, I can't, because it is impractical.
Job failures can and do happen with unbound character fields in certain load conditions.
Every one of our jobs fails if DataStage unconditionally allocates disk space based on the varchar lengths in the metadata.
The dataset stuff does not affect the manner in which data are moved between operators within a flow.
Perhaps I am mistaken in assuming the varchar limits in the metadata impact memory consumption. I am basing this on the footprint in the scratch area. It is plausible that I have inferred incorrectly.
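To show why I inferred this, here is a back-of-envelope sketch of the two storage models. This is my own illustrative fixed-width model, not the engine's actual on-disk layout, and the per-column length-prefix overhead is an assumption:

```python
# Illustrative model: with bounded varchars, each record reserves the
# full declared maximum for every column, so dataset/scratch size
# scales with the metadata, not the data actually present.

def bounded_record_bytes(declared_max_lengths):
    """Bytes reserved per record when every varchar is padded out
    to its declared maximum (fixed-width model)."""
    return sum(declared_max_lengths)

def actual_record_bytes(values, length_prefix=4):
    """Bytes used per record when varchars are stored at actual
    length plus a small assumed length prefix per column."""
    return sum(len(v) + length_prefix for v in values)

# Three columns declared VarChar(255) but holding short codes:
declared = [255, 255, 255]
row = ["NY", "ACTIVE", "2024-01-15"]

print(bounded_record_bytes(declared))  # 765 bytes reserved per record
print(actual_record_bytes(row))        # 30 bytes actually needed
```

Multiply that 25x difference across millions of rows and the scratch footprint I'm seeing looks entirely consistent with the bounded model.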