My job design is as follows:
seq file (fixed width) ----> Copy Stg ----> Shared Container ----> Copy Stg ---> ....
The problem is: when I ran the above job with around 750,000 records, all the records were read through to the shared container, and some records got rejected inside the shared container to a reject file (through a Column Export stage reject link). The reason is simple: one of the columns in these rejected records contains fewer characters than specified. I then trimmed my source file down to just 10 bad records and 10 good records (captured from my first run). When I re-ran the job, to my surprise the bad records were rejected, with warnings, at the Sequential File stage itself. Only the 10 good records passed through the shared container.
Any specific reason for this behaviour? I can provide additional details if required.
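As a rough illustration of the fixed-width rejection being described (a hypothetical Python sketch, not DataStage code; the schema and field names are made up for the example), a record whose line is shorter than the total declared width fails the import check and would go down the reject link:

```python
# Assumed fixed-width schema for illustration: (field name, width) pairs.
SCHEMA = [("id", 5), ("name", 10), ("code", 4)]
RECORD_LEN = sum(w for _, w in SCHEMA)  # 19 characters per record

def import_record(line):
    """Parse one fixed-width record; return None (reject) if it is too short."""
    if len(line) < RECORD_LEN:
        return None  # analogous to the record going to a reject link
    fields, pos = {}, 0
    for name, width in SCHEMA:
        fields[name] = line[pos:pos + width]
        pos += width
    return fields

good = "00001Smith     A123"   # exactly 19 characters
bad = "00002Jones A1"          # truncated record, only 13 characters
assert import_record(good) is not None
assert import_record(bad) is None
```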
Problem with Sequential File Reject Records
Sequential File stage becomes an import operator; Column Export stage becomes an export operator. In your initial scenario the rows all matched the column definitions (record schema) for the Sequential File; in the second scenario they did not.
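To make the distinction concrete, here is a hypothetical sketch (assumed names and formats, not DataStage syntax) of the two validation points: a looser import-side check at the Sequential File stage, and a stricter per-column check at the Column Export stage. A row can pass the first and still be rejected by the second:

```python
# Export-side schema for illustration: each column must fill its width exactly.
EXPORT_WIDTHS = {"id": 5, "name": 10, "code": 4}

def import_record(line):
    """Import operator analogue: any comma-delimited row with 3 fields passes."""
    parts = line.rstrip("\n").split(",")
    return dict(zip(("id", "name", "code"), parts)) if len(parts) == 3 else None

def export_record(rec):
    """Export operator analogue: reject unless every column matches its width."""
    if any(len(rec[k]) != w for k, w in EXPORT_WIDTHS.items()):
        return None  # analogous to the Column Export reject link
    return "".join(rec[k] for k in EXPORT_WIDTHS)

row = "00001,Smith     ,A1"        # 'code' has 2 characters instead of 4
rec = import_record(row)           # passes the import-side check
assert rec is not None
assert export_record(rec) is None  # rejected only at the export stage
```

Which stage rejects a given record therefore depends on which schema the record fails to match, which is consistent with seeing rejects at different points in the two runs.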
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
Sorry. I was waiting for the renewal of my premium membership to read your message.
In fact, I haven't made any changes to the job. The job is exactly the same on both occasions; I changed only the source file.
The trimming was manual: I copied 10 good records from the original file and 10 bad records from the reject file (manual copy and paste) into a text file and ran the job.
In both scenarios the metadata was the same. I also haven't recompiled the job, because I made no changes.
The warning messages were the same in both scenarios. The only difference is that in the big-file scenario the records were rejected into a reject file through the shared container, while in the small-file scenario they didn't even reach the shared container. Any clues?