I'm trying to read data from a mainframe file, and I'm unable to do it with a parallel CFF stage. If I read the same file in a server job using the server CFF stage, the job works fine. Below are the settings I gave in the server job:
Code:
Data Format: EBCDIC
Record Style: Binary
Record Length: 1033
When I read the same file in a parallel job with the parallel CFF stage, every record is rejected with these errors:
Code:
CFF_File,0: Field "XYZ" has import error and no default value; data: {@ @ @ @}, at offset: 316
CFF_File,0: Import warning at record 0.
CFF_File,0: Import unsuccessful at record 0.
CFF_File,0: No further reports will be generated from this partition until a successful import.
CFF_File,0: Import complete; 0 records imported successfully, 469371 rejected.
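Outside of DataStage, a quick check like the one below could show what is actually stored at offset 316 of the first record. This is only a sketch: the file name, the cp037 EBCDIC code page, and the 4-byte field length are assumptions on my part.
Code:
# Read the first fixed-length record and dump the bytes around offset 316,
# where the import error is reported.
RECORD_LENGTH = 1033
FIELD_OFFSET = 316   # offset reported in the CFF_File log
FIELD_LENGTH = 4     # guess, based on the rejected data "{@ @ @ @}"

with open("mainframe_file.bin", "rb") as f:   # file name is assumed
    record = f.read(RECORD_LENGTH)

field = record[FIELD_OFFSET:FIELD_OFFSET + FIELD_LENGTH]
print("raw bytes:", field.hex())                             # packed decimal? low-values?
print("as EBCDIC:", field.decode("cp037", errors="replace"))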
These are the settings I gave in the parallel CFF stage:
Code:
Record Type: Fixed
Missing File Mode: Depends
Reject mode: Continue
and these format settings:
Code:
Byte order: Native-endian
Character set: EBCDIC
Data format: Binary
Record delimiter: {none}
I had the "read from multiple nodes" unchecked and the stage is working in sequential mode. I tried enabling this option but didn;t work.
The input binary file contains two columns with OCCURS clauses: one occurs 6 times and the other 30 times. The Parallel Job Developer's Guide says the record type should be set to "Fixed" when the input file contains OCCURS clauses, so I believe my selection for that option is correct.
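The way I understand it, with a fixed record type the OCCURS groups are simply flattened, so the record length is the sum of every field length multiplied by its OCCURS count. A minimal sketch of that arithmetic (the field names and widths below are made up for illustration, not my real copybook):
Code:
# Flattened record length = sum(field length * occurs count).
fields = [
    ("HEADER-PART", 100, 1),    # (name, byte length, occurs count) -- illustrative only
    ("GROUP-A",      30, 6),    # OCCURS 6 TIMES
    ("GROUP-B",      25, 30),   # OCCURS 30 TIMES
    ("TRAILER-PART",  3, 1),
]
record_length = sum(length * occurs for _, length, occurs in fields)
print(record_length)   # the real layout should come out to 1033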
The metadata given in the server and parallel jobs is the same, but the parallel job still fails. I want to use a parallel job because I may need to process multiple files at the same time, and only the parallel CFF stage has a file pattern option.
I didn't find any option in the parallel stage that asks for the record length.
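As I understand it, the parallel stage derives the record length from the column metadata rather than asking for it explicitly, so a check like the one below would at least confirm that the physical file is an exact multiple of 1033 bytes (the file name is assumed):
Code:
import os

RECORD_LENGTH = 1033
size = os.path.getsize("mainframe_file.bin")   # file name is assumed
# Expect True and roughly the 469371 records the log reports.
print(size % RECORD_LENGTH == 0, size // RECORD_LENGTH)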
Can you help me find what I'm doing wrong or missing in the parallel stage?