
Fatal error when reading the input file

Posted: Tue Mar 13, 2007 10:45 pm
by vij
Hi all,

I get this fatal error when I run my job:
SQ_IN_01,0: Consumed more than 100000 bytes looking for record delimiter; aborting
As the error suggests, each record in this input file occupies about 256KB of data before the record delimiter, a UNIX newline, appears.

Is there any way I can read this data without errors?
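
(A quick way to check, outside DataStage, how far into the file the first delimiter falls and whether records end in LF or CR/LF is a short script like the rough Python sketch below; the file name is only a placeholder.)

# Minimal sketch: report the offset of the first newline and whether the
# record terminator is LF or CR/LF. "input.dat" is a placeholder path.
with open("input.dat", "rb") as f:
    data = f.read(1024 * 1024)          # inspect the first 1 MB only

pos = data.find(b"\n")
if pos == -1:
    print("No newline found in the first 1 MB")
else:
    terminator = "CRLF" if data[pos - 1:pos] == b"\r" else "LF"
    print(f"First delimiter at byte {pos} ({terminator})")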

Thanks in advance!!

Posted: Wed Mar 14, 2007 3:01 am
by kumar_s
Data is read only according to the metadata you supply. If you need these rows, you need to change the metadata accordingly and then re-parse the data in the job.
What is the datatype? Is it a LOB or a LongVarChar?

Posted: Wed Mar 14, 2007 4:23 am
by vij
Thanks for the reply, Kumar.
The datatype is Varchar; should I change it?
The length is 258KB.

Posted: Wed Mar 14, 2007 4:46 am
by kumar_s
Varchar should be fine. If you are able to read the whole record (without any delimiter) as a single row, you can parse it later using a Transformer stage.
Why do you want to read those records? If your idea is to store them in a separate file, extending a reject link will do that job.
What is the total length of the record? How did you manage to specify 258KB in a Varchar column?
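
(Outside DataStage, the read-the-whole-record-then-parse idea looks roughly like the minimal Python sketch below. It assumes the fields inside each record are comma-separated, which is only an assumption; in the job itself that splitting would be done in the Transformer stage.)

# Sketch: treat each whole line as one large record, then split it
# into fields afterwards. "input.dat" and the comma separator are assumptions.
with open("input.dat", "rb") as f:
    for raw in f:                       # one ~256KB record per iteration
        record = raw.rstrip(b"\n").decode("utf-8", errors="replace")
        fields = record.split(",")      # stand-in for the Transformer parsing
        print(len(fields), "fields in this record")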

Posted: Wed Mar 14, 2007 7:08 am
by ray.wurlod
What delimiter character is used in the file, and what delimiter character is specified in the field level properties?