
Junk characters issue

Posted: Thu Jul 18, 2013 9:09 am
by hemaarvind1
Hi Everyone,

We are having an issue while loading data from a source mainframe file into a Netezza database.

We read the data in EBCDIC format and can see it in the view as required. However, when we try to load the data into Netezza, the load fails with the error "bad rows limit exceeded".

When we checked the data on the mainframe manually, we found a few special characters in the file. However, when we view the data through the CFF stage, they are not shown and the data displays normally.

Could you please let me know how the CFF stage handles these characters when passing data to its output tab, and how to identify exactly which special characters are coming in?
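As a rough illustration of what can happen here (this is not DataStage code, just a sketch using Python's built-in EBCDIC code page cp037, with invented byte values): control bytes in the source survive the EBCDIC decode and may not show up in a normal data view, yet they are still present in the output and can be rejected by a downstream load.

```python
# Illustration only: decode a few EBCDIC (cp037) bytes the way a CFF read
# might, then flag the decoded characters that are non-printable.
# The byte values below are made up for the example.
sample = bytes([0xC8, 0xC5, 0xD3, 0xD3, 0xD6,  # "HELLO" in cp037
                0x0D,                          # carriage return (control char)
                0x00])                         # NUL byte

decoded = sample.decode("cp037")
print(repr(decoded))  # -> 'HELLO\r\x00' (control characters survive the decode)

for pos, ch in enumerate(decoded):
    if not ch.isprintable():
        print(f"non-printable character at offset {pos}: U+{ord(ch):04X}")
```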

Posted: Fri Jul 19, 2013 4:14 am
by srinivas.nettalam

Posted: Fri Jul 19, 2013 4:09 pm
by ray.wurlod
There is no such thing as "junk data". Any data in your client's database is your client's data.

Since the data are being read successfully, the values are valid. Somewhere in your job design - possibly in the loading phase - there is a mapping issue that you need to resolve. Try writing to a text file instead, which you can inspect with a hex editor of some kind to see what the data actually look like. Specify a map of NONE for this file, at least initially.
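A minimal sketch of that inspection step, assuming the job has already landed the rows in a flat file (the filename "dump.txt" below is hypothetical): report every byte outside the printable ASCII range, with its offset and hex value, so the offending characters can be traced back to specific records and fields.

```python
# Flag bytes outside printable ASCII (0x20-0x7E), allowing tab/LF/CR,
# and print their offsets and hex values.
# "dump.txt" is a hypothetical landing file produced by the job.
ALLOWED = set(range(0x20, 0x7F)) | {0x09, 0x0A, 0x0D}

with open("dump.txt", "rb") as f:
    data = f.read()

for offset, byte in enumerate(data):
    if byte not in ALLOWED:
        print(f"offset {offset}: 0x{byte:02X}")
```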