My target database is SQL Server, and a few columns in these tables are defined as NVARCHAR. The tables should be able to support characters from any character set, not just the Latin alphabet. NVARCHAR can do this, since it stores each character in a 2-byte representation rather than the 1-byte representation used by VARCHAR.
My problem is that when I load VARCHAR-defined data from a sequential file into the NVARCHAR column, the last byte of the data is truncated (only the last byte, whatever the actual length of the data).
If I change the column definition in the SQL Server database to VARCHAR, the data loads fine. As of now the data is for North America, so all is well for the moment, but the same job should also work for European and Asia-Pacific language character sets.
Please help me understand and resolve this.
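To make the byte-count difference concrete, here is a small Python sketch (illustration only, not DataStage or SQL Server code) showing how the same text occupies one byte per character in a single-byte encoding (as with VARCHAR) but two bytes per character in the UCS-2/UTF-16 style encoding behind NVARCHAR; a target sized in bytes for one of these will not fit the other:

```python
# Byte sizes of the same text under a single-byte encoding (like VARCHAR)
# and a 2-byte-per-character encoding (like NVARCHAR's UCS-2/UTF-16).
text = "cafe"

single_byte = text.encode("latin-1")    # 1 byte per character
two_byte = text.encode("utf-16-le")     # 2 bytes per character

print(len(text), len(single_byte), len(two_byte))  # 4 4 8
```

If the stage computes the buffer length from the single-byte size while the target expects the two-byte size (or vice versa), a length mismatch of the kind described above is the likely result.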
VARCHAR data load vs NVARCHAR data load using DataStage 8.0
-
- Participant
- Posts: 54607
- Joined: Wed Oct 23, 2002 10:52 pm
- Location: Sydney, Australia
- Contact:
Assuming you have imported the table definition and then loaded it into the job, it should work OK.
If you're worried about accented characters and the like, use one of the maps that handle these such as UTF-8 or ISO8859-1 (depending on the character set used to encode your actual data).
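A short Python sketch (again, just an illustration of the encodings named above, not DataStage NLS configuration) shows why the choice of map matters for accented characters: the same text has different byte lengths under ISO 8859-1 and UTF-8, and reading bytes with the wrong map mangles the data rather than failing outright:

```python
# The same accented text under the two maps mentioned above.
text = "M\u00fcller"  # "Müller"

iso = text.encode("iso-8859-1")   # 6 bytes: 1 byte per character
utf8 = text.encode("utf-8")       # 7 bytes: 'ü' takes 2 bytes in UTF-8

# Decoding with the wrong map silently corrupts the text:
print(iso.decode("utf-8", errors="replace"))  # 'M�ller'
```

This is why the NLS map chosen in the job must match the encoding actually used in the source file, not just any Unicode-capable map.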
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.