
look up and write to same hashfile in same job

Posted: Sun Jan 31, 2010 12:49 am
by zulfi123786
I am following the process listed below. Please advise if it is not the best way.

1) Read the hashed file with "Pre-load file to memory = Enabled, Lock for updates".

2) Write to the hashed file with no cache.

In the above, what kind of lock is applied to the file: a lock on the entire file, or a record-level lock? Is it okay to pre-load the file, or does "Disabled, Lock for updates" have to be used? As I am not aware of how locking on hashed files works, can anyone point me to a document that explains this mechanism?

Posted: Sun Jan 31, 2010 1:58 am
by kduke
Pre-load should be done for lookups. You should never use a hashed file as both primary source and target in the same job. Hashed files can be used as temporary storage, so create a new file from the old one. If you have to write back, then have a second job which clears the hashed file and copies the records back from this new file.
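A minimal sketch of that staging pattern, with plain Python dicts standing in for hashed files; transform() and all the names here are illustrative placeholders, not DataStage objects:

# Hypothetical stand-ins: dicts play the role of hashed files,
# and transform() is a placeholder row derivation.
def transform(row, looked_up):
    return f"{row} (was: {looked_up})"

source_hash = {"K1": "old value", "K2": "other value"}  # existing hashed file
incoming_rows = [("K1", "new value"), ("K3", "brand new")]

# Job 1: create a new file from the old one, look up against the
# source, and write only to the new file -- never back to the source.
staging_hash = dict(source_hash)
for key, row in incoming_rows:
    looked_up = source_hash.get(key)   # lookup may miss (returns None)
    staging_hash[key] = transform(row, looked_up)

# Job 2: clear the original and copy the records back from the new file.
source_hash.clear()
source_hash.update(staging_hash)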

Posted: Sun Jan 31, 2010 4:14 am
by ray.wurlod
Locking is at record level, and is triggered by a lookup failing (that is, an expectation is established that the job will insert the record from the same Transformer stage). Do not use a private read cache in this instance, because it is pre-loaded when the job starts and so will not see records written during the run. Do not use a write cache, because writes to memory are not accessible to the Hashed File stage performing the reading.
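A rough Python simulation of that lock-on-miss behaviour; the per-key locks are only an analogy for record locks on a hashed file, not how the engine actually implements them:

import threading
from collections import defaultdict

records = {}                                # stands in for the hashed file
record_locks = defaultdict(threading.Lock)  # one lock per record key

def lookup(key):
    # A failed lookup takes a record-level lock: the expectation is
    # that this job will now insert the record, so other rows arriving
    # with the same key block here instead of racing to insert.
    value = records.get(key)
    if value is None:
        record_locks[key].acquire()
    return value

def insert(key, value):
    # Insert the record and release the lock taken by the failed lookup.
    records[key] = value
    record_locks[key].release()

# One row flowing through the Transformer:
if lookup("K9") is None:
    insert("K9", "derived value")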

This advice changes if you move to public (shared) caching, but that's a whole other topic with a whole other menu.