Rejecting data

Post questions here relating to DataStage Server Edition for such areas as Server job design, DS Basic, Routines, Job Sequences, etc.

deva
Participant
Posts: 104
Joined: Fri Dec 29, 2006 1:54 pm

Rejecting data

Post by deva »

Hi
I am loading data from a flat file to a hashed file through a Transformer, without any constraint.

My question is: I want to find the duplicate rows that are not being written into the hashed file. How do I move that duplicate data to a reject link?

Thanks in advance
ArndW
Participant
Posts: 16318
Joined: Tue Nov 16, 2004 9:08 am
Location: Germany

Post by ArndW »

All writes to hashed files are "insert or update", so when writing duplicate records you will never have rejects. You will need to locate duplicates in your job; perhaps by sorting the data on the key and using stage variables in your Transformer stage.
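
A minimal sketch of that stage-variable pattern, assuming the input link is named InSorted, the key column is named KeyCol, and the rows arrive already sorted on KeyCol (all of those names are illustrative, not from the original job). Stage variables are evaluated top to bottom, so the comparison runs before the previous key is overwritten; give svPrevKey an initial value that cannot occur as a real key so the first row is not flagged.

    svIsDup   : If InSorted.KeyCol = svPrevKey Then @TRUE Else @FALSE
    svPrevKey : InSorted.KeyCol

    Constraint on the link to the hashed file : svIsDup = @FALSE
    Constraint on the reject link             : svIsDup = @TRUE
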
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

You would have to read the hashed file first to check for existence and 'reject' any rows whose lookup succeeds. You can do this all in one Transformer if you either don't cache the lookup or use the dreaded 'lock for update' option.
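
A rough sketch of that single-Transformer design, assuming the reference link from the hashed file is named HashLkp (the name is illustrative): do a reference lookup against the same hashed file with 'Pre-load file to memory' disabled, then split the output on the lookup result. HashLkp.NOTFOUND is true when no matching key exists yet, so new keys go to the hashed file and duplicates go to the reject link.

    Constraint on the link writing to the hashed file : HashLkp.NOTFOUND
    Constraint on the reject link                     : Not(HashLkp.NOTFOUND)
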
-craig

"You can never have too many knives" -- Logan Nine Fingers
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia

Post by ray.wurlod »

To find duplicates in the source stream you must check within the Transformer stage (using stage variables). There is no other choice without changing your job design.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.