I am loading data from a Teradata table into a fixed-width sequential file. The job design is:
Teradata Fast Export Stage ---> Transformer ---> Sequential File
There are around 150 columns in the source, most of them CHAR(255), and all of the columns contain data.
Now the problem is, the job is aborting with the following error message:
Transformer: Error reading row: bytes expected: -31600 original read: 65535
Can anybody help me resolve this issue? Thanks in advance.
Please note that this is a Server job.
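A hedged guess at why the expected byte count is negative: if a length larger than 32,767 is stored in a 16-bit field and then read back as a signed value, it wraps around to a negative number, and 65,535 is exactly the unsigned 16-bit maximum, which matches the "original read" value. The sketch below only illustrates that wraparound arithmetic; it is an assumption about the internals, not something confirmed by the job log:

```python
import struct

# Hypothetical illustration of 16-bit wraparound: a length of 33,936 bytes
# packed as an unsigned 16-bit integer, then reinterpreted as signed.
raw = struct.pack("<H", 33936)        # pack as unsigned 16-bit
signed, = struct.unpack("<h", raw)    # read back as signed 16-bit
print(signed)                         # -31600, matching the error message

# 65,535 is the largest value an unsigned 16-bit field can hold,
# matching the "original read: 65535" part of the error.
print(struct.unpack("<H", b"\xff\xff")[0])
```

If this reading is right, the error would point at some internal length field or buffer that cannot represent the row size being passed to it.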
Problem with loading data into a fixed width file
In continuation of my post above, I have made some progress in isolating the error, and I have understood the following:
1. The record size (of each row) in the database is 22,652 bytes.
2. It is not a problem with FastExport, because the log shows that the data was exported successfully.
3. It is not specific to the fixed-width file, because a delimited file as the target throws the same error.
4. It is not specific to the sequential file, because I loaded it separately (outside DataStage) using a standalone FastExport script.
So I have concluded that DataStage is unable to read a record of this size with the current settings. I probably have to change a parameter setting at the project or server level.
Can anybody tell me what exactly I have to change to overcome this problem?
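As a sanity check on point 1, the fixed-width record length can be recomputed by summing the declared column widths and comparing the total against the 16-bit limits that commonly cap buffers. The column layout below is a placeholder (the real job has around 150 columns, mostly CHAR(255)); the widths are assumptions for illustration, not the actual table definition:

```python
# Placeholder layout: the real source has ~150 columns, mostly CHAR(255).
# These widths are illustrative assumptions, not the actual DDL.
widths = [255] * 80 + [64] * 70

# Fixed-width row length in bytes is simply the sum of the column widths.
record_len = sum(widths)

print(f"record length: {record_len} bytes")
print(f"fits in signed 16-bit (<= 32767):   {record_len <= 32767}")
print(f"fits in unsigned 16-bit (<= 65535): {record_len <= 65535}")
```

Running the same sum against the real column definitions would show whether the actual row (22,652 bytes here) sits under or over whichever limit the stage enforces.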
In that case (if I understood your question correctly), I believe a direct FastExport should not work either. But I am able to export the same data into the same sequential file, with the same parameters, using a standalone FastExport script, and I am able to open that file and read the data as well.
I was also able to run the same job using the Teradata API stage in place of the Teradata FastExport stage.
Considering all of the above, I don't think the problem is the maximum line size at the system level.
One more thing: I was able to run a similar job (same design, parameters, etc.) successfully on another system running DataStage 7.5.
By the way, can you please tell me how to find the "max line size" setting in a Windows environment?