CFF Stage problem in Parallel job

deployDS
Premium Member
Posts: 45
Joined: Thu Mar 09, 2006 9:36 am

CFF Stage problem in Parallel job

Post by deployDS »

Hi all,
I'm trying to read data from a mainframe file and I'm unable to do that with a parallel CFF stage. If I read the same file in a server job using the server CFF stage, the job works fine. Below are the details I gave for the server job:

Code: Select all

Data Format: EBCDIC
Record Style: Binary
Record Length: 1033
But if I try with a parallel job, I'm getting the following output:

Code: Select all

CFF_File,0: Field "XYZ" has import error and no default value; data: {@ @ @ @}, at offset: 316
CFF_File,0: Import warning at record 0.
CFF_File,0: Import unsuccessful at record 0.
CFF_File,0: No further reports will be generated from this partition until a successful import.
CFF_File,0: Import complete; 0 records imported successfully, 469371 rejected.
In the File options tab, I made the following selections:

Code: Select all

Record type: Fixed
Missing file mode: Depends
Reject mode: Continue
And in record options:

Code: Select all

Byte order: Native-endian
Character set: EBCDIC
Data format: Binary
Record delimiter: {none}
Even trying to view the data does not work. It gives a "no rows were returned" message, and viewing the details shows the same information as above.
I have "read from multiple nodes" unchecked, so the stage is running in sequential mode. I tried enabling that option, but it didn't help either.
The input binary file contains two columns with OCCURS clauses, one repeating 6 times and the other 30 times. The Parallel Job Developer's Guide mentions that the record type should be set to "Fixed" when there are OCCURS clauses in the input file, so I hope my selection for that option is correct.
The metadata given in both the server and parallel jobs is the same, but the parallel job still does not work. I am trying to use the parallel job because I may need to process multiple files at the same time, and only the parallel CFF stage has a file pattern option.

I didn't find any option that asks for the record length in the parallel stage.
Can you help me find what I'm doing wrong or missing in the parallel stage?
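
As a quick sanity check with fixed-length binary files, it helps to confirm that the file size is an exact multiple of the declared record length (1033 here): any mismatch shifts every field offset and produces import errors like the one above. A minimal sketch in Python, assuming a local copy of the file at a hypothetical path:

Code: Select all

import os

RECORD_LENGTH = 1033                    # fixed record length declared in the copybook
path = "/tmp/mainframe_extract.dat"     # hypothetical local copy of the EBCDIC file

size = os.path.getsize(path)
records, remainder = divmod(size, RECORD_LENGTH)
print(f"file size: {size} bytes -> {records} records, {remainder} leftover bytes")
if remainder:
    print("WARNING: size is not a multiple of the record length; "
          "check for record delimiters, RDW/BDW headers, or a wrong length")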
ArndW
Participant
Posts: 16318
Joined: Tue Nov 16, 2004 9:08 am
Location: Germany
Contact:

Post by ArndW »

What is column XYZ declared as, and should it be at position 316 in the row? You could use your favorite editor that supports binary to look at positions 316 to 319 and check the binary values.
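
If no binary-capable editor is handy, a small script can do the same check. A minimal sketch, reusing the hypothetical file path above and the 1033-byte fixed records; it dumps the four bytes at offset 316 of the first record:

Code: Select all

RECORD_LENGTH = 1033
FIELD_OFFSET = 316                      # offset reported in the import error
path = "/tmp/mainframe_extract.dat"     # hypothetical local copy of the file

with open(path, "rb") as f:
    f.seek(0 * RECORD_LENGTH + FIELD_OFFSET)   # record 0; change the 0 to inspect other records
    raw = f.read(4)

print("hex:", raw.hex(" "))
# Note: 0x40 is the EBCDIC space character and displays as '@' when rendered as ASCII,
# which would be consistent with the "{@ @ @ @}" shown in the error message.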
deployDS
Premium Member
Posts: 45
Joined: Thu Mar 09, 2006 9:36 am

Post by deployDS »

Field "XYZ" is actually named "ZIP4_N". I think it would be easier if I include the original field names. its SQL type is decimal(4). Along with that field, there is one more field for which this warning is coming. DEATH_CC_N, which is a Decimal(2) type.
I tried to view the data in these two fields. As this is a binary file and I don't have any utility/application that reads them, I gave a shot using the working server job. Every thing is fine. data coming in as 1234, 2345 etc etc.

Then I tried to view the data in the parallel job by including only part of the metadata. I included 132 fields (488 bytes of record length) and tried to view the data. The parallel stage added a filler for the remaining bytes (489-1033), and I was able to view the data. There was no problem with the fields "ZIP4_N" or "DEATH_CC_N"; data comes through fine for both columns. But if I include even a single extra column, I again get the warning "Field "ZIP4_N" has import error and no default value".

If this were related to a data error, then even the server job shouldn't be able to read the data, but it works fine in the server job/stage. I can paste the column definitions for your reference, but the layout is really huge, with around 650 columns. Let me know.
deployDS
Premium Member
Posts: 45
Joined: Thu Mar 09, 2006 9:36 am

Post by deployDS »

Can someone help me with this, please? I'm still having the same problem.
Kryt0n
Participant
Posts: 584
Joined: Wed Jun 22, 2005 7:28 pm

Post by Kryt0n »

Is this the first decimal field in your data? Is it a packed decimal?

If so, have a look at the layout tab and note the record length value. Add one to the length of the decimal and check the record length in the layout tab again. If it is still the same length, try viewing your data again. If not, try taking a byte out of the length (down to 3). Decimals are a right royal pita!
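
For reference, a packed decimal (COMP-3) field with n digits occupies n/2 + 1 bytes (integer division), because two digits share each byte and the final half-byte holds the sign, which is why adding or removing a digit does not always change the stored length. A minimal sketch of the size calculation and of unpacking one value (the helper names are illustrative only):

Code: Select all

def comp3_bytes(digits: int) -> int:
    """Storage size of a COBOL COMP-3 (packed decimal) field with the given digit count."""
    return digits // 2 + 1

def unpack_comp3(raw: bytes) -> int:
    """Decode a packed decimal value; the low nibble of the last byte is the sign."""
    nibbles = []
    for b in raw:
        nibbles.extend((b >> 4, b & 0x0F))
    sign_nibble = nibbles.pop()               # 0xD = negative, 0xC/0xF = positive
    value = int("".join(str(n) for n in nibbles))
    return -value if sign_nibble == 0x0D else value

print(comp3_bytes(4))                             # PIC S9(4) COMP-3 -> 3 bytes
print(unpack_comp3(bytes([0x01, 0x23, 0x4C])))    # packed +1234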
deployDS
Premium Member
Posts: 45
Joined: Thu Mar 09, 2006 9:36 am

Post by deployDS »

Thanks for the information Kryt0n.

My problem is solved, but it was not due to the packed decimal fields. Below are the problem and the solution I implemented to make it work.

There are 4 types of fields listed in the copybook, and below are the different stages of conversion I found for these fields:

Code: Select all

Copybook                 CFF Stage             Output from CFF
PIC X(04)                CHARACTER(04)         char(04)
PIC XX                   CHARACTER(02)         char(02)   
PIC 9(08) COMP           BINARY                Integer (Unsigned) (08)
PIC 9(04)                DISPLAY_NUMERIC       decimal(04)
There is no problem with the first three types of fields. The problem I got in the parallel CFF stage was due to the last type, "PIC 9(04)", which was being converted into a decimal field. There are some null values in the data coming from the file, and whenever a null was encountered, CFF rejected the record, saying that there is an import error and the data has no default value. So I changed the CFF stage column definition from "DISPLAY_NUMERIC" to "CHARACTER" for all the fields where a null is expected (in other words, for all the fields that are not part of the PK), and the job worked without any problem. My problem is solved. However, I have the following questions; can anyone please answer them?

1) Do BASIC and Orchestrate handle nulls in different ways? Why did the file work fine in a server job but not in a parallel job?
2) I tried to change the nullability from "NOT NULL" to "NULLABLE" in the table definition after importing it from the COBOL copybook. It didn't allow the change, saying that the field cannot be nullable. Can't we change the TD after importing?
3) When DISPLAY_NUMERIC was unable to handle the null character, how did the CHARACTER data type handle it?
4) Can we define nullability in a COBOL copybook before importing?
5) Is this a bug in the parallel CFF stage, or is this the way it was designed to work?
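
For background on the behaviour described above: an EBCDIC DISPLAY NUMERIC (zoned decimal) field stores each unsigned digit as a byte in the range 0xF0-0xF9, so a field filled with low-values (0x00) or other non-digit bytes contains nothing that can be imported as a decimal, while the very same bytes are still legal content for a CHARACTER field. A minimal decoding sketch under those assumptions (helper names are illustrative only):

Code: Select all

def decode_zoned(raw: bytes) -> int:
    """Decode an unsigned EBCDIC zoned decimal (DISPLAY NUMERIC) field such as PIC 9(04)."""
    digits = []
    for b in raw:
        if 0xF0 <= b <= 0xF9:                 # EBCDIC digits '0'-'9'
            digits.append(b & 0x0F)
        else:
            raise ValueError(f"byte 0x{b:02X} is not a valid zoned digit")
    return int("".join(str(d) for d in digits))

print(decode_zoned(bytes([0xF1, 0xF2, 0xF3, 0xF4])))   # valid field -> 1234

low_values = bytes(4)                          # 0x00 0x00 0x00 0x00, a "null" field
try:
    decode_zoned(low_values)
except ValueError as e:
    # This is the situation the CFF stage reports as an import error with no default value.
    print("import would fail:", e)

# Read as CHARACTER, the bytes are simply kept as-is (shown here decoded from EBCDIC),
# which is why switching the column definition to CHARACTER lets the records through.
print(bytes([0xF1, 0xF2, 0xF3, 0xF4]).decode("cp500"))   # '1234'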
kravids
Participant
Posts: 9
Joined: Tue Jan 19, 2010 2:33 pm

CFF Import Error

Post by kravids »

deployDS wrote:My problem is solved, but it was not due to the packed decimal fields. ... Is this a bug in the parallel CFF stage, or is this the way it was designed to work?
Hi, I am also facing the same error. Can you please explain clearly how to handle it, since I am new to this?
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

Please start your own post with the details of your problem; saying you have "the same error" doesn't really help us help you.
-craig

"You can never have too many knives" -- Logan Nine Fingers