file failure

Post questions here relative to DataStage Enterprise/PX Edition for such areas as Parallel job design, Parallel datasets, BuildOps, Wrappers, etc.

Moderators: chulett, rschirm, roy

deesh
Participant
Posts: 193
Joined: Mon Oct 08, 2007 2:57 am

file failure

Post by deesh »

Hi Friends,

I am facing the failure below.

SEQ_SAP_FILE,0: Consumed more than 100000 bytes looking for record delimiter; aborting
ShaneMuir
Premium Member
Posts: 508
Joined: Tue Jun 15, 2004 5:00 am
Location: London

Post by ShaneMuir »

Look at the delimiter on the file. The message is saying that it can't find it.
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

Your metadata is incorrect and it's saying it can't find the record delimiter you said would be there. As noted, correct that.
-craig

"You can never have too many knives" -- Logan Nine Fingers
deesh
Participant
Posts: 193
Joined: Mon Oct 08, 2007 2:57 am

Post by deesh »

chulett wrote:Your metadata is incorrect and it's saying it can't find the record delimiter you said would be there. As noted, correct that. ...

I created the file in a previous job without specifying any record delimiter, and I am running the jobs on a Windows 2003 server.

The job was running fine previously. Is it possible that somebody changed an Admin parameter and that is why this job is now giving this error?

Please help.
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

Nope. So it's a fixed-width file? Or did you mean you used the default delimiter in the creation job? Regardless, you need to match the definitions in the stage to the actual file, nothing more or less will work.
-craig

"You can never have too many knives" -- Logan Nine Fingers
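Since the job runs on Windows, one common cause of this mismatch is the record delimiter itself: the file may contain DOS-style CR/LF line endings while the stage expects a plain Unix newline (or vice versa). As a quick check outside DataStage, a small script can count the line-ending byte sequences actually present in the file. This is only an illustrative sketch; the function name and usage are hypothetical and not part of any DataStage tooling:

```python
# Hypothetical helper: count the line-ending byte sequences actually
# present in a file, to compare against the record delimiter the
# Sequential File stage is configured to expect.
def count_line_endings(path, sample_bytes=1_000_000):
    with open(path, "rb") as f:
        data = f.read(sample_bytes)   # read a sample in binary mode
    crlf = data.count(b"\r\n")
    return {
        "CRLF": crlf,                          # DOS/Windows line ending
        "LF": data.count(b"\n") - crlf,        # bare Unix newline
        "CR": data.count(b"\r") - crlf,        # bare carriage return
    }
```

If the counts show CRLF but the stage is configured for a plain newline (or the other way around), change the record delimiter setting in the stage to match what the file actually contains.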
dsuser_cai
Premium Member
Posts: 151
Joined: Fri Feb 13, 2009 4:19 pm

Post by dsuser_cai »

When you create a file you deliberately specify a delimiter (maybe a pipe symbol, or some other character) and then use the same delimiter in the job that reads the file. Sometimes the source data itself contains the delimiter character you chose, and this can cause problems.
For example, suppose the source data has a column called COL1:

COL1
8787,9870
233,488
823723,089873

In the above case, if you use a comma as the delimiter, DataStage will push the data after the comma into the next column, treating it as a new field. So always specify a separate (standard) delimiter that does not occur in the data. Try this and let us know if you need further assistance.
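The embedded-delimiter problem described above can be seen in a couple of lines of code (a plain illustration of the splitting behaviour, not DataStage itself):

```python
# A value that happens to contain the delimiter gets split in two.
row = "8787,9870"              # intended as ONE value of COL1
fields = row.split(",")        # a comma-delimited reader sees two fields
# "9870" spills into the next column

# A delimiter that never occurs in the data (e.g. a pipe) keeps it intact.
row_piped = "8787,9870|NEXT_COL"   # NEXT_COL is a made-up second field
fields_piped = row_piped.split("|")
# the full value "8787,9870" stays in one field
```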
Thanks
Karthick