About UDB API stage

Post questions here relating to DataStage Enterprise/PX Edition in such areas as Parallel job design, Parallel datasets, BuildOps, Wrappers, etc.

Moderators: chulett, rschirm, roy

Nripendra Chand
Premium Member
Posts: 196
Joined: Tue Nov 23, 2004 11:50 pm
Location: Sydney (Australia)

Post by Nripendra Chand »

Thanks. I tried it and it worked. :lol:

-Nripendra
Nageshsunkoji
Participant
Posts: 222
Joined: Tue Aug 30, 2005 2:07 am
Location: pune

Post by Nageshsunkoji »

Hi Nripendra,

I am getting the same problem: padding of spaces is not happening for a character field in the DB2 EE stage. If you have a solution for this problem, please pass it on; it would be helpful to me.

Thanks & Regards,
Nagesh.
NageshSunkoji

If you know anything SHARE it.............
If you Don't know anything LEARN it...............
Nripendra Chand
Premium Member
Posts: 196
Joined: Tue Nov 23, 2004 11:50 pm
Location: Sydney (Australia)

Post by Nripendra Chand »

Hi Nagesh,

Include the '$APT_STRING_PADCHAR' environment variable in your job and assign the padding character to it. If you want to make it the default padding value at your project level, assign the value to this variable through the Administrator.
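For example, a space can be given either literally or as its hex value (0x20). Assuming you add it as a job-level environment variable, the setting would look something like this:

Code: Select all

APT_STRING_PADCHAR=0x20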

-Nripendra
bmadhav
Charter Member
Posts: 50
Joined: Wed May 12, 2004 1:16 pm

Post by bmadhav »

Go to the DataStage Administrator and set the $APT_STRING_PADCHAR value to a single space (" ").
bcarlson
Premium Member
Posts: 772
Joined: Fri Oct 01, 2004 3:06 pm
Location: Minnesota

Post by bcarlson »

We have run into this as well. We have set the APT_STRING_PADCHAR to a space, but the most important thing is to make sure that the schema of your record matches the layout of your table.

If you have string[4] in your DataStage schema (the input stream to your db2 stage) and you are writing to a char(5) field in the table, then you are going to get nulls in the final byte, and the PADCHAR setting will make no difference whatsoever.
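For instance (the column name below is just illustrative), if the table column is CHAR(5), the input schema needs to declare the full width so that the pad character actually applies; a string[4] would leave the fifth byte unpadded:

Code: Select all

record
( SITE_CD: string[5,padchar=' ']; )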

If you have a buildop or transform in your stream where you can manipulate the layout, that would be a great place to ensure that your datatypes match up exactly (order doesn't make a difference, just names and datatypes). However, keep in mind that if you are increasing the size of the field, you do need to make sure the padchar is set correctly for your output field. At a minimum, you can always add a modify stage to fix any datatypes. For a string field, the modification would look like this:

Code: Select all

myOutputStringField: nullable string[3,padchar=' '] = myInputStringField
On an unrelated note, why are you using the API stage instead of the Enterprise stage? The Enterprise stage is faster and more efficient than the API stage.

Brad.
miwinter
Participant
Posts: 396
Joined: Thu Jun 22, 2006 7:00 am
Location: England, UK

Post by miwinter »

Sorry to come in late on this topic, but I've just run into a similar issue...

Despite Brad's comment above, I seem to have hit a case that contradicts it somehow.

We have a derivation job producing a dataset with a field of char(6). This job uses APT_STRING_PADCHAR=0x20, so this field has been padded with spaces where we only passed 4 digits into it, i.e. '9999'.

I've verified this in the schema:

record
( SITE_ID: string[6,padchar=' '];
  PROD_ID: string[40,padchar=' '];
  etc etc...

However, when the subsequent load job, which reads this dataset and loads it through a DB2 EE stage, runs, it loads all the data, yet a query against the table using "... where SITE_ID='9999'" returns no rows.
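For reference, a check along these lines (the schema and table names here are just placeholders) would show whether the stored values carry spaces (X'20') or nulls (X'00') in the trailing bytes:

Code: Select all

SELECT SITE_ID, LENGTH(SITE_ID), HEX(SITE_ID)
FROM MYSCHEMA.MYTABLE
WHERE SITE_ID LIKE '%9999%'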
Mark Winter
<i>Nothing appeases a troubled mind more than <b>good</b> music</i>
Maveric
Participant
Posts: 388
Joined: Tue Mar 13, 2007 1:28 am

Post by Maveric »

Did you try with 2 spaces in front? Like: where SITE_ID='  9999'