
Consumed more than 100000 bytes looking for record delimiter

Posted: Wed Jun 06, 2012 4:44 pm
by rsunny
Hi,

When I try to run the job, it aborts with the error "Consumed more than 100000 bytes looking for record delimiter; aborting".

The source is a Sequential File stage with Final Delimiter: end, Field Delimiter: comma, Null Field Value: ", and Quote: double.

I have created a user-defined environment variable, $APT_MAX_DELIMITED_READ_SIZE, and set it to 300000 at the job level, but the job still aborts. Can anyone please suggest a solution for this issue?
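
One quick check from a UNIX shell (a sketch only, with "filename" standing in for the real path to the source file) is to count how many newline characters actually occur in the first 300000 bytes:

    head -c 300000 filename | tr -cd '\n' | wc -c

A result of 0 means no UNIX newline was found in that range, which is exactly the condition the error message is describing, and raising $APT_MAX_DELIMITED_READ_SIZE will not help.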

Posted: Wed Jun 06, 2012 6:54 pm
by chulett
What is your record delimiter? You did not mention it, and that is what the error says it could not find.

Posted: Wed Jun 06, 2012 8:31 pm
by rsunny
Hi craig,

I set the record delimiter to UNIX newline and ran the job, but it still aborted.

Posted: Wed Jun 06, 2012 8:36 pm
by SURA
Use a reject link in the Sequential File stage. It will help you track down where the issue is!

Posted: Wed Jun 06, 2012 8:42 pm
by chulett
If you used 'UNIX newline' and it couldn't find it, then that's not your record delimiter. What does a "wc -l" on your filename return?

Posted: Wed Jun 06, 2012 8:43 pm
by ray.wurlod
Apparently there are no UNIX newline characters in the first 100000 (or 300000) bytes of your file. It may, for example, be a fixed-width format file with no record delimiters at all. DataStage can handle that, but you have to set the Record Delimiter property to None.
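
One way to test the fixed-width theory from the shell (a sketch, assuming you know the record length implied by your column metadata; both "filename" and RECORD_LEN below are placeholders) is to check whether the file size divides evenly by that length:

    # RECORD_LEN is a placeholder -- substitute the record length from your column metadata
    RECORD_LEN=100
    FILE_BYTES=$(wc -c < filename)
    echo $(( FILE_BYTES % RECORD_LEN ))    # 0 is consistent with fixed-width records and no delimiter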

Posted: Wed Jun 06, 2012 9:02 pm
by chulett
... if so then the answer to my question would be "1". :wink:

Posted: Thu Jun 07, 2012 6:46 am
by rsunny
Hi ,

When I run wc -l on the file, I get 3028799.

Even when I use a reject link on the Sequential File stage, the job still aborts.

Is there any way to reject that record instead of aborting the job?

Posted: Thu Jun 07, 2012 7:34 am
by chulett
You need to determine the details of your file so you can read it properly. A hex editor can help, or perhaps "od -h" or some other flavour of dump, so you can see the actual hex/octal/decimal values.
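
For example (a sketch, with "filename" standing in for the real path), dumping the first few hundred bytes in character and raw hex form will show whether records end in LF, CR LF, or nothing at all:

    head -c 512 filename | od -c           # \n or \r \n will appear at record boundaries, if present
    head -c 512 filename | od -A d -t x1   # raw hex bytes: 0a = LF, 0d = CR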

Posted: Thu Jun 07, 2012 8:37 am
by zulfi123786
Could you please share your actual record size based on the column metadata?