Handling data truncation errors.
Moderators: chulett, rschirm, roy
Re: Handling data truncation errors.
Hi,
What are your source and target stages in that job?
Re: Handling data truncation errors.
Hi,
Sources are predominantly DB Stages - Oracle and DB2. Targets could be files and/or DB Stages (Oracle and DB2).
Re: Handling data truncation errors.
Hi,

ag_ram wrote:
> Sources are predominantly DB Stages - Oracle and DB2. Targets could be files and/or DB Stages (Oracle and DB2).
Actually, I am thinking we can capture the records with length issues using a reject link on the target stages.
Can you try that and let me know?
-
- Premium Member
- Posts: 67
- Joined: Thu Aug 09, 2007 7:51 pm
Does this mean that the reject link does not handle format errors well?
I have a similar situation. I am using sequential file stage and have an input file with two columns. The sequential file stage has an output link to a transformer and a reject link. The format is as follows:
Field 1: VARCHAR(10)
Field 2: VARCHAR(30)
The input data contains 40 characters for Field 2. The execution results in the data being truncated for Field 2, retaining only the first 30 characters. I am somewhat baffled because this would mean that I would have to use a format similar to the following:
Field 1: VARCHAR(8000)
Field 2: VARCHAR(8000)
and check the length of the fields in the transformer stage in order to reject the record.
If I am not mistaken/presumptuous, that kind of defeats the purpose of the reject link.
The documentation is not very helpful. In the "Parallel Job Developer Guide", for reject link, I see
"For reading files, the link uses a single column called rejected containing raw data for columns rejected after reading because they do not match the schema."
Does this imply that data overflow is not considered part of the schema? At least this is not how it is in the database world!
Thanks!
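For what it's worth, the "declare the fields oversized, then check lengths in the transformer" workaround described above amounts to logic like the following sketch. Python is used purely for illustration (this is not DataStage code); the comma delimiter, the field count, and the widths 10 and 30 come from the example above, while the function names are my own invention.

```python
# Sketch of the "read wide, then validate" workaround: parse each record
# without truncation, then reject any record whose fields exceed the
# declared widths (VARCHAR(10) and VARCHAR(30) in the example above).

DECLARED_WIDTHS = [10, 30]

def split_record(line, delimiter=","):
    """Parse a delimited record into its raw fields, untruncated."""
    return line.rstrip("\n").split(delimiter)

def fits_schema(fields, widths=DECLARED_WIDTHS):
    """True if the record has the right field count and every field fits."""
    return len(fields) == len(widths) and all(
        len(f) <= w for f, w in zip(fields, widths)
    )

def process(lines):
    """Split records into (accepted, rejected), mimicking a reject link."""
    accepted, rejected = [], []
    for line in lines:
        fields = split_record(line)
        (accepted if fits_schema(fields) else rejected).append(fields)
    return accepted, rejected
```

A record such as `abcdefghijk,x` (11 characters in the first field) would land on the rejected side rather than being silently truncated, which is the behaviour the reject link was expected to provide.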
Hi,
Is the issue resolved?
I just came across the following write-up in the manual; see if it helps your case.
"APT_IMPORT_REJECT_STRING_FIELD_OVERRUNS
When set, DataStage will reject any string or ustring fields read that go over their fixed size. By default these records are truncated."
Let me know once you try this.
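If it is easier to test outside the Administrator client, the variable can also be exported in the environment before the job runs. A minimal sketch, assuming a shell invocation via `dsjob` (the project and job names below are placeholders, not from this thread):

```shell
# Make the parallel engine reject, rather than truncate, string fields
# that exceed their declared size during import.
export APT_IMPORT_REJECT_STRING_FIELD_OVERRUNS=1

# Then run the job as usual (MyProject/MyTruncationJob are hypothetical).
dsjob -run MyProject MyTruncationJob
```

Defining it as a job or project environment variable in the Administrator achieves the same effect without touching shell startup scripts.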