Column size is too large

Post questions here relating to DataStage Enterprise/PX Edition for such areas as Parallel job design, Parallel datasets, BuildOps, Wrappers, etc.

Moderators: chulett, rschirm, roy

vitumati
Participant
Posts: 27
Joined: Tue Sep 07, 2010 11:38 pm

Column size is too large

Post by vitumati »

Hi Friends,

I have a column "Comment" in my source, and I'm getting approximately 10,000 characters in that column. While reading the data from a sequential file I get the error below.


<SF_Profiledb,0> Error reading on import.
<SF_Profiledb,0> Consumed more than 100000 bytes looking for record delimiter; aborting
<SF_Profiledb,0> Import error at record 0.

Can you please help?
Do I need to set an environment variable to increase the column size limit?

Can DataStage support 10,000 characters in one column?
Abhinav
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

The error actually says it read more than 100,000 bytes without finding a record delimiter, not that anything was 'too large'. Check your metadata.
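
If you want to check this outside DataStage, here is a minimal Python sketch (the path /data/profiledb.txt is a placeholder, not from the original post) that scans the first 100,000 bytes of the file and reports where common record delimiters first appear; if none turns up, the import operator aborts exactly as shown above.

# Scan the first 100,000 bytes and report where each common
# record delimiter first appears. If none is found, DataStage's
# import hits the same limit reported in the error above.
with open("/data/profiledb.txt", "rb") as f:  # placeholder path
    chunk = f.read(100000)

for name, delim in (("LF (\\n)", b"\n"), ("CRLF (\\r\\n)", b"\r\n"), ("CR (\\r)", b"\r")):
    pos = chunk.find(delim)
    if pos >= 0:
        print(f"{name}: first seen at byte offset {pos}")
    else:
        print(f"{name}: not found in the first 100,000 bytes")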
-craig

"You can never have too many knives" -- Logan Nine Fingers
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia

Post by ray.wurlod »

Which, in turn, means that you have probably specified the wrong delimiter. Did you import the table definition for the file and load that into your job design (including Load on the Format tab)?
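
A quick way to sanity-check the field-level settings as well: the sketch below (pipe as field delimiter and 12 columns are both assumptions; substitute the values from your Format tab and table definition) flags any record whose field count does not match the definition.

# Flag records whose field count doesn't match the table
# definition -- a mismatch usually means the Format-tab
# delimiters don't match the actual file layout.
EXPECTED_FIELDS = 12   # assumption: column count from the table definition
FIELD_DELIM = "|"      # assumption: field delimiter from the Format tab

with open("/data/profiledb.txt", encoding="utf-8") as f:  # placeholder path
    for lineno, record in enumerate(f, start=1):
        fields = record.rstrip("\r\n").split(FIELD_DELIM)
        if len(fields) != EXPECTED_FIELDS:
            print(f"record {lineno}: {len(fields)} fields, expected {EXPECTED_FIELDS}")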
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.