VARCHAR data load Vs NVARCHAR data load using DataStage 8.0

Post questions here relative to DataStage Enterprise/PX Edition for such areas as Parallel job design, Parallel datasets, BuildOps, Wrappers, etc.

Moderators: chulett, rschirm, roy

jagadam
Premium Member
Posts: 107
Joined: Wed Jul 01, 2009 4:55 pm
Location: Phili

VARCHAR data load Vs NVARCHAR data load using DataStage 8.0

Post by jagadam »

My target DB is SQL Server, and a few columns on these tables are defined as NVARCHAR. The tables should be able to support characters from any character set, not just the Latin alphabet. NVARCHAR can do this, since it stores each character in a 2-byte representation rather than the 1-byte representation used by VARCHAR.
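To illustrate the storage difference described above (an illustrative Python sketch, not DataStage code; the sample strings are hypothetical), compare the byte length of the same text in a single-byte encoding, comparable to VARCHAR under a Latin-1 collation, and in UTF-16, the encoding SQL Server uses for NVARCHAR:

```python
# Single-byte encoding (Latin-1): one byte per character,
# but only covers Western European characters.
accented = "Montréal"
print(len(accented))                       # 8 characters
print(len(accented.encode("latin-1")))     # 8 bytes

# UTF-16 little-endian (SQL Server's NVARCHAR encoding):
# two bytes per character for these strings.
print(len(accented.encode("utf-16-le")))   # 16 bytes

# Characters outside Latin-1 cannot be stored in a
# single-byte VARCHAR column at all.
try:
    "日本語".encode("latin-1")
except UnicodeEncodeError:
    print("not representable in Latin-1")
```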

My problem is that when I load the VARCHAR-defined data from a sequential file into the DB's NVARCHAR column, the last byte of the data is truncated (only the last byte, whatever the actual length of the data).

If I change the column definition in the SQL Server DB to VARCHAR, the data loads fine. The data is currently for North America only, so all is well for now, but the same job should work for European and Asia Pacific language character sets.

Please help in understanding and resolving this.
NJ
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia
Contact:

Post by ray.wurlod »

Please expand on what you mean by "Europe and Asia Pacific language character set".
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
jagadam
Premium Member
Posts: 107
Joined: Wed Jul 01, 2009 4:55 pm
Location: Phili

Post by jagadam »

If the source data from other geographical regions contains character sets other than English, the target table should accommodate that.
NJ
jagadam
Premium Member
Posts: 107
Joined: Wed Jul 01, 2009 4:55 pm
Location: Phili

Post by jagadam »

Any suggestions on this, please?
NJ
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia
Contact:

Post by ray.wurlod »

Assuming you have imported the table definition and then loaded it into the job, it should work fine.

If you're worried about accented characters and the like, use one of the maps that handles them, such as UTF-8 or ISO8859-1 (depending on the character set used to encode your actual data).
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.