Problem with Sequential File Reject Records

Post questions here relating to DataStage Enterprise/PX Edition, covering areas such as Parallel job design, Parallel datasets, BuildOps, Wrappers, etc.


varaprasad
Premium Member
Posts: 34
Joined: Fri May 16, 2008 6:24 am

Problem with Sequential File Reject Records

Post by varaprasad »

My job design is as follows:

seq file (fixed width) ----> Copy Stg ----> Shared Container ----> Copy Stg ---> ....

The problem: when I ran the above job with around 750,000 records, all the records were read into the shared container, and some of them were rejected inside it to a reject file (through a Column Export stage reject link). The reason is simple: one of the columns in these rejected records contains fewer characters than specified. I then trimmed my source file down to just 10 bad records and 10 good records (captured from my first run) and re-ran the job. To my surprise, the bad records were rejected with warnings at the Sequential File stage itself; only the 10 good records passed through the shared container.
Any specific reason for this behaviour? I can provide additional details if required.
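
For illustration only, a minimal Python sketch of the failure mode described above; the column names and widths are invented, not taken from the actual job. A fixed-width reader expects every record to be an exact length, so a record whose column has fewer characters than specified falls short of the whole-record length:

    # Hypothetical layout (not the poster's actual columns):
    # CODE is 6 characters and NAME is 10, so every record should be 16 long.
    RECORD_LEN = 16

    good = "ABC123John Doe  "   # 16 chars: NAME padded to its full width
    bad  = "ABC123John Doe"     # 14 chars: NAME has fewer characters than specified

    for rec in (good, bad):
        status = "ok" if len(rec) == RECORD_LEN else "short record"
        print(repr(rec), "->", status)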
Sainath.Srinivasan
Participant
Posts: 3337
Joined: Mon Jan 17, 2005 4:49 am
Location: United Kingdom

Post by Sainath.Srinivasan »

Maybe you disturbed the record content!?

How did you trim?

What are the warnings / errors?
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia

Post by ray.wurlod »

Sequential File stage becomes an import operator; Column Export stage becomes an export operator. In your initial scenario the rows all matched the column definitions (record schema) for the Sequential File; in the second scenario they did not.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
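
As a rough picture of the two checkpoints Ray describes, here is a plain-Python toy model (not DataStage/OSH; the schema, widths, and the all-digits rule are invented for illustration). The import operator validates the overall record shape at the source, while the export operator validates each field when reassembling the record, which gives two distinct places a bad record can be rejected:

    SCHEMA_WIDTHS = {"code": 6, "amount": 10}   # hypothetical fixed-width layout
    RECORD_LEN = sum(SCHEMA_WIDTHS.values())    # 16 characters per record

    def import_record(line):
        # Import operator: the record must match the schema's overall shape,
        # or it is rejected at the Sequential File stage with a warning.
        if len(line) != RECORD_LEN:
            return None
        return {"code": line[:6], "amount": line[6:16]}

    def export_record(fields):
        # Export operator: each field must satisfy its declared format.
        # The (invented) rule here: 'amount' must be all digits once unpadded.
        if not fields["amount"].strip().isdigit():
            return None                         # goes to the Column Export reject link
        return fields["code"] + fields["amount"]

    for line in ("ABC123      1234", "ABC123      12XY", "ABC123 1234"):
        rec = import_record(line)
        if rec is None:
            print(repr(line), "-> rejected at import (Sequential File stage)")
        elif export_record(rec) is None:
            print(repr(line), "-> rejected at export (Column Export stage)")
        else:
            print(repr(line), "-> passed both stages")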
varaprasad
Premium Member
Posts: 34
Joined: Fri May 16, 2008 6:24 am

Post by varaprasad »

Sorry. I was waiting for the renewal of my premium membership to read your message.

In fact, I haven't made any changes to the job. The job is exactly the same on both occasions; I changed only the source file.

The trimming was manual. I copied 10 good records from the original file and 10 bad records from the reject file (manual copy & paste) into a text file and ran the job.

In both scenarios the metadata was the same. I also haven't recompiled the job, since I made no changes.

The warning messages were the same in both scenarios. The only difference is that in the big-file scenario the records were rejected to a reject file inside the shared container, while in the small-file scenario they didn't even reach the shared container. Any clues?
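
One plausible mechanism, following Sainath's hint that the copy and paste may have disturbed the record content (a hedged sketch with invented values, not a confirmed diagnosis): editors and clipboard operations often strip trailing blanks, so a fixed-width record can lose its padding in transit and then fail to match the record schema at the import itself, rather than surviving to the Column Export stage:

    # Hypothetical 16-character record whose last column is mostly padding.
    original = "ABC123Jo        "       # as written by the upstream process
    pasted = original.rstrip()          # trailing blanks lost in a manual copy/paste

    print(len(original), len(pasted))   # 16 vs 8: only the original matches the schema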