Problem with loading data into a fixed width file

Post questions here relative to DataStage Server Edition for such areas as Server job design, DS Basic, Routines, Job Sequences, etc.

Moderators: chulett, rschirm, roy

dsx999
Participant
Posts: 29
Joined: Mon Aug 11, 2008 3:40 am


Post by dsx999 »

I am loading data from a Teradata table into a fixed width sequential file.

Teradata Fast Export Stage -----> Transformer ------>Seq File

There are around 150 columns in the source and most of them are CHAR(255). All the columns are carrying the data.

Now the problem is, the job is aborting with the following error message:
Transformer: Error reading row: bytes expected: -31600 original read: 65535
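A plausible reading of the negative "bytes expected" value (this is an assumption about DataStage internals, not documented behavior): if the expected row length is held in a signed 16-bit field, any row wider than 32,767 bytes wraps to a negative number, and 33,936 bytes wraps to exactly -31600. A minimal sketch of that wraparound:

```python
# Hypothetical illustration only (not DataStage source code): show how a
# row length stored in a signed 16-bit field wraps negative past 32,767.
import struct

def as_signed_16(n: int) -> int:
    """Reinterpret the low 16 bits of n as a signed 16-bit integer."""
    return struct.unpack("<h", struct.pack("<H", n & 0xFFFF))[0]

print(as_signed_16(33936))  # -31600, matching the value in the error
print(as_signed_16(22652))  # 22652, still within the signed-short range
```

If this assumption holds, it would explain why the quoted row size of 22,652 bytes alone does not trip the error but a wider physical record (for example, one padded out by fixed-width CHAR columns) could.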

Can anybody help me to resolve this issue? Thanks in advance.
Please note that it is a Server Job.
varaprasad
Premium Member
Posts: 34
Joined: Fri May 16, 2008 6:24 am

Post by varaprasad »

In continuation of my post above,
I have made some progress in isolating the error, and I have established the following:
1. The record size of each row in the database is 22,652 bytes.
2. It is not a problem with FExport, because the log shows that the data was exported successfully.
3. It is not specific to the fixed-width file, because a delimited file as the target throws the same error.
4. It is not specific to the sequential file, because I loaded it separately (not through DataStage) using an FExport script.

So I have concluded that the problem is with DataStage: it is not able to read a record of this size with the current settings. I probably need to change a parameter at the project or server level.
Can anybody tell me what exactly I have to change to overcome this problem?
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia

Post by ray.wurlod »

What is the maximum size of a line in a file in your operating system? (This may be the limit you have encountered.)
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
varaprasad
Premium Member
Premium Member
Posts: 34
Joined: Fri May 16, 2008 6:24 am

Post by varaprasad »

In that case (if I understood your question correctly), I believe a direct Fast Export should not work either. But I am able to export the same data into the same sequential file, with the same parameters, using a stand-alone FExport script.
I am also able to open that sequential file and read the data.
And I was able to run the same job using the Teradata API stage in place of the Teradata FExport stage.

Considering all the above, I think it is probably not a problem with the maximum line size at the system level.

One more thing: I was able to run a similar job (with the same design, parameters, etc.) successfully on another system running DataStage 7.5.

By the way, can you please tell me how to find the "max line size" setting in a Windows environment?
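While waiting for an answer on the Windows setting, one generic way to test whether line length is the culprit is to measure the longest line in the file that the stand-alone FExport script produced. This is a neutral sketch, not a DataStage feature; the function name and path are illustrative:

```python
# Generic check (not a DataStage setting): report the longest line in an
# exported file, to see whether any row exceeds a suspected limit.
def max_line_length(path: str) -> int:
    longest = 0
    with open(path, "rb") as f:          # binary mode: count raw bytes
        for line in f:
            longest = max(longest, len(line.rstrip(b"\r\n")))
    return longest

# Example: max_line_length("export.dat") on the FExport output file
```

Comparing that number against 32,767 (the signed 16-bit ceiling) or against the OS limit Ray mentioned would narrow down which limit, if any, is being hit.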