Sequential reading error

Post questions here relative to DataStage Enterprise/PX Edition for such areas as Parallel job design, Parallel datasets, BuildOps, Wrappers, etc.

Moderators: chulett, rschirm, roy

vitumati
Participant
Posts: 27
Joined: Tue Sep 07, 2010 11:38 pm

Sequential reading error

Post by vitumati »

Hi Friends,
I designed two jobs:

Job 1) DB2 Stage -----> Transformer Stage ------> Sequential File stage ... working fine

The target sequential file of Job 1 is used as the source in the second job:

Job 2) Sequential File stage ----> Transformer Stage ------> DB2 Stage

While reading the data I'm getting the error below:

Sequential_File_29,0: Field "ZIP_POSTAL_CD__c" with 'delim=end' did not consume entire input, at offset: 175

I'm already handling newline characters for all columns, but I'm still getting this error.
Can you please help me?
Abhinav
jwiles
Premium Member
Posts: 1274
Joined: Sun Nov 14, 2004 8:50 pm

Post by jwiles »

The error essentially indicates that the source column data is longer than defined by the metadata/schema. This can happen for one of several reasons:

1) There are more columns in the data than are defined in the metadata (see the illustration after this list)
2) You have a varchar() column with a defined maximum width and the data is longer than that
3) You're reading a fixed-width file but the source data is longer than the defined width
4) The delimiter defined for your columns (all except the final one) doesn't match the delimiter actually present in the data
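To illustrate case 1 with made-up data (not from your job): suppose the schema defines three comma-delimited columns, and the final column is ZIP_POSTAL_CD__c, a varchar with a maximum width of 5 and delim=end. A record like

    10,John,30301,EXTRA

fails, because the import reads the 5-character maximum ("30301") for the final field and then finds ",EXTRA" still sitting between it and the end of the record, which produces exactly this "did not consume entire input" condition.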

Your example message indicates that the delimiter for ZIP_POSTAL_CD__c is end (delim=end). Is this the final column in the record schema? The final column is the only one for which this option would be appropriate, as end in this context means end-of-record. All other columns can use the delimiter specified in the record-level options.
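For reference, an import schema along those lines would look something like this (a minimal sketch with placeholder columns and properties; CUST_ID, NAME, the comma delimiter, and the max widths are assumptions to adjust to your actual file):

    record
    {final_delim=end, delim=',', quote=double}
    (
        CUST_ID: int32;
        NAME: string[max=30];
        ZIP_POSTAL_CD__c: string[max=10];
    )

Here the record-level delim=',' applies to every column except the last, and final_delim=end gives ZIP_POSTAL_CD__c the delim=end behavior your error message shows. If ZIP_POSTAL_CD__c is not actually the final column in your table definition, the first thing to check is where delim=end is being applied to it.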

Regards,
- james wiles


All generalizations are false, including this one - Mark Twain.