Standard EBCDIC 4F is the pipe symbol, so that is good. But after copying it to UNIX it should still be 4F, so I am not sure what your value of "O" means - if the value isn't 4F then the error isn't inside DataStage.
Within the Sequential File or Complex Flat File stage, use COBOL copybooks to read the data in the file. Also, could you please let us know what options you are using with the dd command?
I tried to test it with a sample file. I created a file containing only one character, '|', in EBCDIC format. In CFF I declared the field as Char(1) with character set = EBCDIC, and the output comes out as '!' instead of '|'. The same thing happens with the Sequential File stage.
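The '!' output is consistent with a codepage mismatch rather than a broken byte: the single byte 0x4F decodes as '|' under EBCDIC CP037 (US/Canada) but as '!' under CP500 (International). A minimal sketch of that difference, using Python's built-in codecs:

```python
# The same EBCDIC byte, 0x4F, under two common EBCDIC codepages:
byte = b"\x4f"
print(byte.decode("cp037"))  # CP037 (US/Canada)     -> |
print(byte.decode("cp500"))  # CP500 (International) -> !
```

So if '|' is coming out as '!', the reader is most likely applying a CP500-style table to a CP037-encoded file.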
ArndW wrote:Are you doing an explicit conversion from EBCDIC to ASCII in your CFF stage? Is that field declared as Char(1)? ...
As Craig has already hinted, the problem might be in a nonbinary FTP or transfer from the host. Just create a UNIX file with an ASCII capital "O" (4F) and then push that through your CFF stage to see if you still have the problem.
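The suggested test can be scripted so there is no doubt what is on disk. A sketch, with a hypothetical filename, that writes the single byte 0x4F and confirms it round-trips:

```python
# Write one byte, 0x4F, to a test file (filename is a placeholder).
with open("pipe_test.dat", "wb") as f:
    f.write(b"\x4f")

# Read it back and confirm exactly what is on disk.
with open("pipe_test.dat", "rb") as f:
    data = f.read()

print(data.hex())            # -> 4f
print(data.decode("ascii"))  # -> O   (the same byte, read as ASCII)
```

If the byte on disk is verified as 0x4F and the stage still emits '!', the problem is in the stage's EBCDIC conversion, not in the file transfer.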
I created a UNIX file with the single character "O" on the DataStage host server and tried to read it with the CFF and Sequential File stages, and it is still being converted into '!' instead of '|'. Is there anything to change in the Administrator?
Thanks
Kiran
ArndW wrote:As Craig has already hinted, the problem might be in a nonbinary FTP or transfer from the host. Just create a UNIX file with an ASCII capital "O" (4F) and then push that through your CFF stage to see ...
Klaus Schaefer wrote:You may also check which EBCDIC codepage is being used at your host system and then set the environment variable APT_EBCDIC_VERSION accordingly.
Klaus
Any idea what APT_EBCDIC_VERSION defaults to if it is not explicitly set? Where can I find it? I looked at the project-level settings in Administrator and could not find it defined anywhere.
Thanks!
Brad.
It is not that I am addicted to coffee, it's just that I need it to survive.