Consumed more than 100000 bytes looking for record delimiter

Post questions here relative to DataStage Enterprise/PX Edition for such areas as Parallel job design, Parallel datasets, BuildOps, Wrappers, etc.

rsunny
Participant
Posts: 223
Joined: Sat Jul 03, 2010 10:22 pm

Consumed more than 100000 bytes looking for record delimiter

Post by rsunny »

Hi,

When I try to run the job, I get the error "Consumed more than 100000 bytes looking for record delimiter; aborting".

The source is a Sequential File stage with Final Delimiter: end, Field Delimiter: comma, Null Field Value: " and Quote: double.

I created a user-defined environment variable $APT_MAX_DELIMITED_READ_SIZE and set it to 300000 at the job level, but the job still aborted. Can anyone please provide a solution for this issue?
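For reference, a quick command-line check will show whether the first 100000 bytes of the file contain any line terminator at all (the path below is just a placeholder for the actual source file):

    head -c 100000 /path/to/source_file | tr -cd '\n' | wc -c    # count LF (UNIX newline) bytes in the first 100000 bytes
    head -c 100000 /path/to/source_file | tr -cd '\r' | wc -c    # count CR bytes (would indicate DOS/Windows line endings)

If both counts come back 0, there is no record delimiter in that window, and raising $APT_MAX_DELIMITED_READ_SIZE will not help.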
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

What is your record delimiter? You did not mention it, and that is what the error says it could not find.
-craig

"You can never have too many knives" -- Logan Nine Fingers
rsunny
Participant
Posts: 223
Joined: Sat Jul 03, 2010 10:22 pm

Post by rsunny »

Hi craig,

I set the record delimiter to UNIX newline and ran the job, but it still aborted.
SURA
Premium Member
Posts: 1229
Joined: Sat Jul 14, 2007 5:16 am
Location: Sydney

Post by SURA »

Use a reject link in the Sequential File stage. It will help you track down where the issue is!
Thanks
Ram
----------------------------------
Revealing your ignorance is fine, because you get a chance to learn.
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

If you used 'UNIX newline' and it couldn't find it, then that's not your record delimiter. What does a "wc -l" on your filename return?
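For example (the path is illustrative; the output shown is what you would see if the file contained no UNIX newlines at all):

    $ wc -l /path/to/source_file
    0 /path/to/source_file

wc -l counts newline characters, so a count of 0 means there is nothing in the file that can serve as a UNIX newline record delimiter.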
-craig

"You can never have too many knives" -- Logan Nine Fingers
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia

Post by ray.wurlod »

Apparently there are no UNIX newline characters in the first 100000 bytes (or 300000 bytes) of your file. It may, for example, be a fixed-width file with no record delimiters at all. DataStage can handle that - but you have to set the Record Delimiter property to None.
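One way to sanity-check that theory from the command line: if the file really is fixed-width with no delimiters, its size in bytes should be an exact multiple of the record width. A rough sketch, assuming a hypothetical record width and path:

    RECORD_LEN=120                            # hypothetical fixed record width in bytes, from your column metadata
    BYTES=$(wc -c < /path/to/source_file)     # total file size in bytes
    echo "$BYTES bytes, $((BYTES / RECORD_LEN)) records, remainder $((BYTES % RECORD_LEN))"

A remainder of 0, together with wc -l returning 0, is consistent with a fixed-width file.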
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

... if so then the answer to my question would be "1". :wink:
-craig

"You can never have too many knives" -- Logan Nine Fingers
rsunny
Participant
Posts: 223
Joined: Sat Jul 03, 2010 10:22 pm

Post by rsunny »

Hi,

When I do wc -l on the filename, I get 3028799.

Even if I use a reject link on the Sequential File stage, the job still aborts.

Is there any way to reject that record instead of aborting the whole job?
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

You need to determine the details of your file so you can read it properly. A hex editor can help, or perhaps "od -h" or some other flavor of dump, so you can see the actual hex/octal/decimal values.
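For example, dumping the first few hundred bytes is usually enough to show whether records end in \n, \r\n, something else, or nothing at all (the path is a placeholder):

    head -c 300 /path/to/source_file | od -c          # character view: look for \n or \r \n between records
    head -c 300 /path/to/source_file | od -h | head   # hex view of the same bytes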
-craig

"You can never have too many knives" -- Logan Nine Fingers
zulfi123786
Premium Member
Posts: 730
Joined: Tue Nov 04, 2008 10:14 am
Location: Bangalore

Post by zulfi123786 »

Could you please share what your actual record size is, based on the column metadata?
- Zulfi