
Posted: Thu Oct 30, 2014 5:56 pm
by ray.wurlod
Welcome aboard. How did you "convert to ASCII"? Did you try "read as EBCDIC" in the Sequential File stage?

Posted: Fri Oct 31, 2014 8:46 am
by Dip_DS
Hi Ray,

I tried 'read as EBCDIC' in the Sequential File stage without any luck...

There is an existing production job which converts the file from EBCDIC to ASCII. Due to some constraint, the job uses a Column Import stage instead of CFF to read the EBCDIC file (the input has a multiple-REDEFINES layout) and splits the file into xx files based on the layout. The splitting logic is coded in a Transformer. The output sequential file will be used for further processing and for loading the data into a Teradata table.

Input:
Sequential file
Character set = EBCDIC and data format = Binary

Output
Sequential file

Today, when I looked at the file in Unix, I noticed that records get broken wherever a decimal field is encountered.
It looks like this:

Code:

 xx,xx,xx,000     , ,
,    ,000000111111,0000000222222,
It should look like this:

Code:

 xx,xx,xx,000     , , ,    ,000000111111,0000000222222,
Dump of the record from Unix:

Code:

0000100                                ,   0   0   0   0   0   0   0   0
0000120    0   0   0   0   0   0   0   2   0   1   6   9   2   5   4   ,
0000140    0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   2
0000160    0   1   6   9   2   5   4   ,       ,   0   1   2   2   7   0
0000200    7   ,  \0  \0  \0  \0  \0 223 220   \   ,  \0  \0  \0  \0  \0
0000220   \0  \0  \f   ,  \0  \0  \0  \0  \0  \0  \0  \f   ,  \0  \0  \0
0000240   \0  \0  \0  \0  \f   ,  \0  \0  \0  \0  \0  \0  \0  \f   ,  \0
0000260   \0  \0  \0  \0  \0  \0  \f   , 002 001   @ 201 234   , 002 001
0000300    @ 201 234   , 002   5 222  \f   ,  \0  \f   ,  \0  \0  \f   ,
0000320    2   6   4   5   Z   0   3   4   1   2   9   0
0000340                                    ,               ,
0000360    ,   0   ,                                       ,  \0 004 022
0000400  234   ,           ,           ,   0   0   0   0   0   ,   0   0
0000420    0   0   ,   0   5   ,       ,       ,       ,       ,       ,
0000440        ,       ,       ,
0000460                                                        ,  \0  \0
0000500   \0  \f   ,                                   ,  \0  \0  \f   ,
0000520                    ,                           ,
0000540                                                    ,   1   ,   0
0000560    1   ,   0   ,   0   ,   3   1   2   ,   2   7   0   7   0   1
0000600    3   1   3   ,   3   ,   0   0   ,   0   0   0   0   0   0   0
0000620    0   9   3   ,   9   0   5   0   0   0   0   0   0
0000640                                        0   2   0   1   6   9   2
0000660    5   4           ,
0000700                                ,       0   0   0   0   0   2   9
0000720    9   8   0   0   0   0   0   0   ,   0   0   0   0   0   0   1
0000740    2   1   2   4   2   0   1   4   0   ,   8   1   9   0   4   7
0000760    2   6  \n
So I guess this has something to do with the \f character (form feed, byte 0x0C), which appears at the end of most of the decimal fields and seems to be acting as a line terminator. I read a couple of topics here about 'contains terminator' and the Convert function (Server), but didn't find anything suitable for parallel.
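Those \f bytes are 0x0C, and in packed-decimal (COMP-3) data 0xC is the positive-sign nibble that ends almost every field, which is why the character shows up so consistently. A minimal Python sketch of how such a field decodes; the field widths and the absence of an implied decimal point are assumptions, not taken from the job.

```python
def unpack_comp3(raw: bytes) -> int:
    """Decode a packed-decimal (COMP-3) field: two BCD digits per
    byte, except the last byte, whose low nibble is the sign
    (0xC or 0xF = positive, 0xD = negative)."""
    digits = []
    for b in raw[:-1]:
        digits.append(b >> 4)
        digits.append(b & 0x0F)
    digits.append(raw[-1] >> 4)        # high nibble of last byte is a digit
    sign = raw[-1] & 0x0F              # low nibble is the sign
    value = int("".join(map(str, digits)))
    return -value if sign == 0x0D else value

# The run \0 \0 \0 \f in the dump is 0x00 00 00 0C: a packed +0.
print(unpack_comp3(b"\x00\x00\x00\x0C"))   # 0
print(unpack_comp3(b"\x01\x23\x4D"))       # -1234
```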

Posted: Sat Nov 01, 2014 10:35 am
by roy
Hi,
In my experience, most of the time the EBCDIC file is transferred from the source server to the DataStage server.
When that is the case, you can usually include the EBCDIC-to-ASCII conversion in the file transfer utility (in both directions), which saves you from having to handle such things yourself.
IHTH,

Posted: Sat Nov 01, 2014 4:41 pm
by chulett
Keep in mind the fact that any "ASCII translation" of packed fields will destroy them. Years ago we used a utility that allowed you to define the character positions that needed translation during the transfer so you could skip over anything packed. Nowadays it seems that people either build out the record with unpacked fields or transfer without translation and use something like DataStage's CFF to do the appropriate translations during the read.
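A small illustration of that destruction, using Python's cp037 EBCDIC codec; the sample packed value is made up.

```python
# A packed field holding +932: BCD digits 0,0,9,3,2 plus sign nibble 0xC.
packed = b"\x00\x93\x2C"

# A byte-wise EBCDIC->ASCII translation treats 0x93 as the EBCDIC
# letter 'l' and rewrites it to ASCII 0x6C, mangling the BCD nibbles.
translated = packed.decode("cp037").encode("latin-1")

print(hex(packed[1]))       # 0x93 -> nibbles 9 and 3, both valid digits
print(hex(translated[1]))   # 0x6c -> nibbles 6 and 12; 12 is not a digit
```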

Posted: Mon Nov 03, 2014 1:17 pm
by FranklinE
Based on the information you've provided, you are seeing a standard error in handling the packed decimal field. See the Using Mainframe Source Data FAQ for more details on this format and how to handle it (and how to avoid errors on the ASCII side).
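A rough sketch of the skip-over approach chulett describes, translating only the display (character) byte ranges and copying packed ranges through untouched; the layout tuples and sample record here are invented for illustration.

```python
# Hypothetical record layout: (offset, length, is_packed) per field.
LAYOUT = [(0, 5, False), (5, 3, True), (8, 2, False)]

def translate_record(rec: bytes) -> bytes:
    """EBCDIC->ASCII for display fields only; packed (COMP-3) fields
    are copied byte-for-byte so their BCD nibbles survive."""
    out = bytearray()
    for offset, length, is_packed in LAYOUT:
        chunk = rec[offset:offset + length]
        if is_packed:
            out += chunk                              # leave binary data alone
        else:
            out += chunk.decode("cp037").encode("ascii")
    return bytes(out)

# 5 EBCDIC characters + a 3-byte packed +1234 + 2 more characters.
rec = "HELLO".encode("cp037") + b"\x01\x23\x4C" + "OK".encode("cp037")
print(translate_record(rec))   # packed bytes come through unchanged
```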