Need to Process Untranslatable characters

Post questions here relative to DataStage Enterprise/PX Edition for such areas as Parallel job design, Parallel datasets, BuildOps, Wrappers, etc.

Moderators: chulett, rschirm, roy

DeeptiKulk
Participant
Posts: 2
Joined: Wed Apr 17, 2013 6:20 am

Need to Process Untranslatable characters

Post by DeeptiKulk »

Hi,

My DataStage job is failing with an untranslatable character error.

The source is an Oracle table and the target is Teradata.

Job design :

2 Oracle stages --> Join stage --> Join stage (to combine the lookup data) --> Remove Duplicates stage --> Transformer --> Teradata Connector stage with a reject link.
Options I have tried to recover this job:
1> Set Client_character_set = UTF8 and changed the target table definition so that all columns apart from the key columns are Unicode. --> The job runs without error, but it rejects the records it cannot process. (The problem here is that it creates a 0-byte reject file and none of the rejected records are written to it.)
2> Set the client character set to UTF16 and changed the target table definition so that all columns apart from the key columns are Unicode. --> The job fails.

3> Also tried the options mentioned in this link, but the job keeps failing:
viewtopic.php?p=411279
Has anybody come across this kind of issue? Please suggest options I can try to overcome it.
ArndW
Participant
Posts: 16318
Joined: Tue Nov 16, 2004 9:08 am
Location: Germany
Contact:

Post by ArndW »

You need to first find out what character set your Oracle source is using and then what character set your Teradata target is using. Playing around with the DataStage settings isn't the correct way to approach this problem.

Oracle: "SELECT * FROM NLS_DATABASE_PARAMETERS"
Teradata: "select * from dbc.chartranslations;"
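Once you know the two character sets, a quick way to locate the offending data is to extract the suspect column and scan it for characters the target character set cannot encode. A minimal Python sketch (assuming the target is Latin-1; the row keys and sample values are made up for illustration):

```python
# Hypothetical sample rows: (key, text_column) pairs extracted from the
# Oracle source. In practice you would read these from a dump file.
rows = [
    (1, "plain ascii"),
    (2, "caf\u00e9"),      # e-acute: representable in Latin-1
    (3, "\u65e5\u672c"),   # CJK characters: not representable in Latin-1
]

def untranslatable(text, target_encoding="latin-1"):
    """Return the characters in `text` that the target encoding cannot encode."""
    bad = []
    for ch in text:
        try:
            ch.encode(target_encoding)
        except UnicodeEncodeError:
            bad.append(ch)
    return bad

for key, value in rows:
    bad = untranslatable(value)
    if bad:
        print(f"row {key}: untranslatable characters {bad!r}")
```

This narrows the problem down to specific rows and characters, so you can decide whether the fix belongs in the source data, a Transformer derivation, or the target character set.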
vinothkumar
Participant
Posts: 342
Joined: Tue Nov 04, 2008 10:38 am
Location: Chennai, India

Post by vinothkumar »

Did you try setting the client character set to LATIN01_A in the Teradata stage?