We have run into this as well. We have set the APT_STRING_PADCHAR to a space, but the most important thing is to make sure that the schema of your record matches the layout of your table.
If you have string[4] in your DataStage schema (the input stream to your db2 stage) and you are writing to a char(5) field in the table, then you are going to get a null byte (0x00) in the final position, and the APT_STRING_PADCHAR setting will make no difference whatsoever.
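A quick sketch (in Python, not DataStage code) of what happens when a 4-byte value lands in a 5-byte fixed-width field. The variable names are illustrative; the point is that a width mismatch gets filled with NUL bytes, whereas matching the schema lets the pad character do its job:

```python
# Illustration: padding a string[4] value into a 5-byte char(5) column.
value = b"ABCD"          # string[4] from the DataStage schema
field_width = 5          # char(5) column in the DB2 table

# When the engine has to widen the field on its own, the extra byte
# comes out as NUL, regardless of APT_STRING_PADCHAR:
padded_nul = value.ljust(field_width, b"\x00")
assert padded_nul == b"ABCD\x00"

# What you actually want in a char(5) column -- space padding:
padded_space = value.ljust(field_width, b" ")
assert padded_space == b"ABCD "
```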
If you have a buildop or transform in your stream where you can manipulate the layout, that is a great place to ensure your datatypes match up exactly (field order doesn't matter; names and datatypes must match). Keep in mind, though, that if you are increasing the size of a field, you need to make sure the padchar is set correctly for the output field. At a minimum, you can always add a Modify stage to fix any datatypes. For a string field, the modification would look like this:
Code:
myOutputStringField: nullable string[3,padchar=' '] = myInputStringField
On an unrelated note, why are you using the API stage instead of the Enterprise stage? The Enterprise stage is faster and more efficient than the API stage.
Brad.