Data type mapping from Oracle to DataStage
Posted: Tue Oct 04, 2011 5:53 am
Hi,
We are using Oracle database as the source/target DB in our extraction process.
I have a particular column - ColA of datatype VARCHAR2(40 CHAR). The NLS setting of the database is UTF8.
When I use the ODBC Table Definitions to import the metadata, this column translates to NVARCHAR(160). But, when I use the Orchestrate Schema Definitions the column is read as VARCHAR(40).
Which of the above datatypes is the correct translation? Currently we are working against the development database, which contains no multibyte data, so the job runs fine either way.
But when we migrate the code to higher environments we risk encountering multibyte data. So, which of the two options can handle multibyte data?
I also went through the IBM documentation on this, which states that VARCHAR2(n CHAR) multibyte = VARCHAR(n) in DataStage.
http://publib.boulder.ibm.com/infocente ... oracc.html
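For what it's worth, the NVARCHAR(160) the ODBC import produced looks like a byte-semantics translation: a column declared as 40 *characters* can need up to 40 × 4 = 160 *bytes*, since UTF-8 encodes a single character in anywhere from 1 to 4 bytes. A minimal sketch in Python (not DataStage code, just an illustration of the byte expansion) shows the difference:

```python
# VARCHAR2(40 CHAR) limits the column to 40 characters, but the number of
# bytes those characters occupy depends on the encoding.

ascii_str = "A" * 40               # 40 single-byte (ASCII) characters
multibyte_str = "\U0001F600" * 40  # 40 characters that need 4 bytes each in UTF-8

print(len(ascii_str))                       # 40 characters
print(len(ascii_str.encode("utf-8")))       # 40 bytes
print(len(multibyte_str))                   # 40 characters
print(len(multibyte_str.encode("utf-8")))   # 160 bytes -- matches NVARCHAR(160)
```

So a VARCHAR(40) definition that is interpreted in bytes would truncate multibyte data, while the 160-byte definition leaves headroom for the worst case.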
- Deepti