If that was an apostrophe (hex 27) it would have loaded fine. I've had the same issue in the past when the source is something like Word and you get them dang 'smart quotes' rather than a simple apostrophe.
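To see the difference concretely, here's a quick sketch in Python. The curly character is U+2019, the "right single quotation mark" Word substitutes by default (it's byte 0x92 in Windows-1252), versus the plain apostrophe at hex 27:

```python
# Plain apostrophe vs. Word "smart quote" -- same-looking, different characters.
plain = "it's"          # U+0027, plain apostrophe (hex 27)
smart = "it\u2019s"     # U+2019, Word's default curly apostrophe

print(hex(ord(plain[2])))   # 0x27
print(hex(ord(smart[2])))   # 0x2019

# One common cleanup: map curly single quotes back to the plain apostrophe.
cleaned = smart.replace("\u2019", "'").replace("\u2018", "'")
print(cleaned == plain)     # True
```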
What is your NLS_LANG setting for your database?
Character not getting loaded in Oracle through ODBC

Thanks, chulett.
Is there some way we can ensure that all such characters above 80H get loaded properly into Oracle? (Through ODBC would be preferable.)
Code: Select all
NLS_LANGUAGE - American
userenv('LANGUAGE') = AMERICAN_AMERICA.AL32UTF8
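Before worrying about the database side, it can help to know exactly which bytes in the source sit above 80H. A quick sketch in Python (the filename passed in is whatever your source file is; `find_high_bytes` is just an illustrative helper, not anything DataStage provides):

```python
# Scan a file for bytes above 0x7F and report where they occur, so you
# know exactly which characters need a valid conversion path into Oracle.
def find_high_bytes(path):
    hits = []
    with open(path, "rb") as f:                      # read raw bytes, no decoding
        for lineno, line in enumerate(f, start=1):
            for col, byte in enumerate(line, start=1):
                if byte > 0x7F:                      # anything above 80H
                    hits.append((lineno, col, hex(byte)))
    return hits
```

Running it over a file containing a Windows-1252 smart quote would flag that byte (0x92) with its line and column, which you can then check against the target character set.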
First thing would be to verify that your character actually exists in the target character set. If your source and your target use the same character set, you can just load from one straight to the other, otherwise a conversion needs to happen. When the source character set (controlled by the job setting) is different from the target character set (controlled by the database setting) the conversion is automatic.
Where you run into trouble is when the character sets are different but you tell your process they are the same. No conversion takes place and you get 'garbage' in your target.
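A small sketch of that failure mode, using Python's codecs as a stand-in for the job/database conversion (assuming a Windows-1252 source going into an AL32UTF8 target, which matches the userenv output above):

```python
# A Windows-1252 smart quote (byte 0x92) headed for a UTF-8 target.
raw = b"it\x92s"

# Correct path: declare the real source character set; conversion happens.
good = raw.decode("windows-1252").encode("utf-8")
print(good)                 # b"it\xe2\x80\x99s" -- a valid UTF-8 curly apostrophe

# Wrong path: claim the source is already UTF-8. Byte 0x92 is not valid
# there, so you get a rejection -- or, with lax handling, 'garbage'.
try:
    raw.decode("utf-8")
except UnicodeDecodeError as err:
    print("no conversion possible:", err)
```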
We use 7.x/Server/OCI here, not sure if there are any nuances to this when using 8.x/PX/ODBC. We also don't have NLS enabled in DataStage where it sounds like you do. Hopefully others will have words of wisdom to add to this.
-craig
"You can never have too many knives" -- Logan Nine Fingers
"You can never have too many knives" -- Logan Nine Fingers