Import Error - Consumed more than 10000k bytes

pradkumar
Charter Member
Posts: 393
Joined: Wed Oct 18, 2006 1:09 pm

Import Error - Consumed more than 10000k bytes

Post by pradkumar »

Hi Everyone,

I have one job which reads the XML transactions (responses) from MQ and dumps them into a sequential file. The next job reads from the sequential file with format options (record delimiter string = </TCRMService>\n, final delimiter = none) and data type varchar 30000. In the second job I am pulling each XML record by looking for the above-mentioned delimiter string and splitting the records into SUCCESS and FAILURE. The jobs were working fine without any issues in dev, but in QA the same job aborted with the following errors:
SEQ_MQ_Responses,0: Error reading on import.
SEQ_MQ_Responses,0: Consumed more than 100,000 bytes looking for record delimiter; aborting
SEQ_MQ_Responses,0: Import error at record 11.

I looked at the 11th record in the file and I see that the response was very big, so the job could not find the record delimiter string within the limit. There is a chance of getting big responses from MQ, so could anyone suggest how to handle this situation? How can the size be increased so that the job does not abort even when an XML record is very large?
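
For anyone trying to picture the failure, the sketch below (plain Python, not DataStage code; the chunk size and wording of the error are just assumptions) mimics a delimited read that scans for </TCRMService>\n but gives up once a single record has consumed more bytes than its limit, which is the behaviour the import operator reports at record 11.

# Illustrative sketch only, not DataStage code: a delimited read that
# scans for the record delimiter but refuses to buffer more than a
# fixed number of bytes for a single record.

DELIMITER = b"</TCRMService>\n"

def split_records(path, max_scan_bytes=100_000):
    """Yield one XML response per delimiter; fail like the import
    operator if a record grows past max_scan_bytes."""
    buffer = b""
    record_no = 0
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            buffer += chunk
            while True:
                pos = buffer.find(DELIMITER)
                if pos == -1:
                    # No delimiter yet: give up if the record is already
                    # bigger than the reader is willing to scan.
                    if len(buffer) > max_scan_bytes:
                        raise RuntimeError(
                            "Consumed more than %d bytes looking for record "
                            "delimiter; import error at record %d"
                            % (max_scan_bytes, record_no + 1)
                        )
                    break
                record_no += 1
                yield buffer[: pos + len(DELIMITER)]
                buffer = buffer[pos + len(DELIMITER):]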

Please throw some light on the issue.

Thanks in Advance
ArndW
Participant
Posts: 16318
Joined: Tue Nov 16, 2004 9:08 am
Location: Germany

Post by ArndW »

The environment variable you are looking for is $APT_MAX_DELIMITED_READ_SIZE and its use is documented in the Advanced Parallel Job Developer's Guide.
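
As a rough analogy to the sketch posted earlier in the thread: raising that scan limit to something comfortably larger than the biggest expected response is the same idea as setting $APT_MAX_DELIMITED_READ_SIZE for the job. The file name and the 1,000,000-byte figure below are only example assumptions, not recommendations.

# Same hypothetical sketch, just with a larger scan limit so the big
# record 11 is imported instead of triggering the abort.
records = list(split_records("mq_responses.txt", max_scan_bytes=1_000_000))
print(len(records), "records read")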