
Lookup fails for Japanese data

Posted: Wed May 11, 2005 4:04 am
by vinodlakshmanan
There are actually two queries:
1. We have a set of server jobs that need UTF-8 enabled for Japanese/Korean characters. These jobs do a lot of lookups in Transformers. The lookups fail for Japanese characters even though the job's default character set is UTF-8 and the lookup field's NLS map is individually set to UTF-8. The lookup field is VARCHAR and currently contains only Japanese characters, though that may not always be the case. Zero records are read from the lookup file.
The same lookup works in a parallel job using a Lookup stage. Could anyone shed some light on this?
2. On a related note, we have a routine that does some comparisons and works fine on Latin characters (the input is character data), but Japanese characters are not processed correctly. What can be changed in the routine to make it UTF-8 compliant?
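For what it's worth, the usual root cause with routines like this is that string handling operates on bytes rather than characters unless the routine is NLS-aware, so anything built on byte lengths or byte positions only happens to work for single-byte Latin data. A minimal sketch of that distinction (in Python rather than DataStage BASIC, purely to illustrate the concept; the strings are made up):

```python
# Conceptual sketch: why a comparison routine can "work" for Latin
# characters yet break on Japanese input.

latin = "ABC"        # 1 byte per character in UTF-8
japanese = "データ"   # 3 bytes per character in UTF-8

# Byte-oriented view (what a non-NLS-aware routine effectively sees):
print(len(latin.encode("utf-8")))      # 3 -- happens to match the character count
print(len(japanese.encode("utf-8")))   # 9 -- three times the character count

# Character-oriented view (what a UTF-8 compliant routine must use):
print(len(latin))      # 3
print(len(japanese))   # 3

# Any length test, substring extraction, or equality check built on the
# byte view coincides with the character view only for single-byte data,
# which is why Latin input passes and Japanese input does not.
```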

Solved: Error in mapping hash files

Posted: Wed May 11, 2005 4:22 am
by vinodlakshmanan
There was an error in the hash file mapping - the input and output column order was different, hence the problem.
BTW, is a hash file not like a dataset, where even if you interchange the column names it still picks up the correct data? If a hash file is stored internally as a table, then column name order should not matter, but that does not seem to be the case.

Posted: Wed May 11, 2005 4:36 am
by roy
Definitely not the case with hash files.

Posted: Wed May 11, 2005 6:55 am
by chulett
Exactly. With hash files - Metadata Matters.
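To spell out what "Metadata Matters" means in this thread: as the posts above describe, a hashed file lookup takes its values in the order the columns are defined in the job's metadata, so a reordered column definition silently returns the wrong fields rather than failing, whereas a dataset-style lookup by column name would not care about the order. A rough illustration of that difference (Python, with made-up field names):

```python
# Minimal sketch of name-based vs position-based column binding.

record = ("CUST001", "Tokyo", "JPY")   # stored values, in storage order

# Name-based access (dataset-like behaviour): reordering the column
# definition still finds the right value, because lookup is by name.
by_name = dict(zip(["cust_id", "city", "currency"], record))
print(by_name["currency"])             # JPY

# Position-based access (hash-file-like behaviour): the metadata order
# *is* the mapping, so defining the columns in a different order reads
# the wrong values without raising any error.
wrong_order = ["cust_id", "currency", "city"]
misread = dict(zip(wrong_order, record))
print(misread["currency"])             # Tokyo -- silently wrong
```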