Hashed file issue

Post questions here relative to DataStage Server Edition for such areas as Server job design, DS Basic, Routines, Job Sequences, etc.

Moderators: chulett, rschirm, roy

dr.murthy
Participant
Posts: 224
Joined: Sun Dec 07, 2008 8:47 am
Location: delhi

Hashed file issue

Post by dr.murthy »

Hi,

I used the same hashed file twice in my job, once for reference (lookup) and once for writing. The issue is that whatever records are written into the hashed file are not available to the reference in the same run; only in the next run are those records used as reference.
However, I need to use the records loaded into the hashed file as a reference within the same run.

Any suggestions???

Thanks in advance
D.N .MURTHY
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

Don't cache the hashed file for reading. That or read about what the 'Locked for update' option entails.
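As a rough illustration of why caching matters here (plain Python, not DataStage; it simply models the read cache as a snapshot taken at job start):

# Minimal sketch: a pre-loaded (cached) lookup works on a copy of the
# hashed file taken when the job starts, so it never sees rows written
# later in the same run, while an uncached lookup reads the live file.

hashed_file = {"K1": "row already on disk"}   # contents at job start

read_cache = dict(hashed_file)                # snapshot used when caching is on

def lookup_cached(key):
    return read_cache.get(key)                # blind to same-run writes

def lookup_uncached(key):
    return hashed_file.get(key)               # sees rows as they are written

hashed_file["K2"] = "row written this run"    # a write done mid-run

print(lookup_cached("K2"))     # None -> the symptom described above
print(lookup_uncached("K2"))   # "row written this run"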
-craig

"You can never have too many knives" -- Logan Nine Fingers
dr.murthy
Participant
Posts: 224
Joined: Sun Dec 07, 2008 8:47 am
Location: delhi

Post by dr.murthy »

For the update action, the 'Clear file before writing' and 'Backup existing' options are both unchecked.
D.N .MURTHY
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia

Post by ray.wurlod »

That's got nothing to do with read and write cache.

You also need to ensure that the read and the write are in the same Transformer stage. "Lock for update" when reading is also recommended - this sets a row-level lock in anticipation of the write when the record is not found.
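A rough sketch of that flow (plain Python rather than DS BASIC, with made-up function names, and a per-key lock standing in for the row-level lock):

import threading
from collections import defaultdict

hashed_file = {}                              # stands in for the hashed file
row_locks = defaultdict(threading.Lock)       # one "lock for update" per key

def lookup_for_update(key):
    # Reference lookup: on a miss, take the row lock in anticipation
    # of the write that the same Transformer will do for this key.
    row = hashed_file.get(key)
    if row is None:
        row_locks[key].acquire()
    return row

def write_row(key, row):
    # The write lands in the hashed file and releases the pending lock.
    hashed_file[key] = row
    if row_locks[key].locked():
        row_locks[key].release()

for key, value in [("A1", "first"), ("A1", "second")]:
    existing = lookup_for_update(key)
    if existing is None:
        write_row(key, value)     # first occurrence: miss, lock, then write
    # the second "A1" row now finds "first" within the same run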
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
dr.murthy
Participant
Posts: 224
Joined: Sun Dec 07, 2008 8:47 am
Location: delhi

Post by dr.murthy »

Hi ray,

Where can I set this 'lock for update' option? I am not able to see your reply fully.
D.N .MURTHY
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

Before we go down that potentially rocky path, why not just make sure the lookup's Pre-load file to memory option is set to Disabled and see if that "fixes" it.
-craig

"You can never have too many knives" -- Logan Nine Fingers
dr.murthy
Participant
Posts: 224
Joined: Sun Dec 07, 2008 8:47 am
Location: delhi

Post by dr.murthy »

Yes, I tried setting the 'Pre-load file to memory' option to Disabled, but it is still not working.
Also, that record-level read option is disabled, so I am not able to select it.
D.N .MURTHY
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

Then you need to fully explain your job design / layout in gory detail, particularly how you are reading from and writing to the hashed file in question, including all settings in both stages. Reading from / writing to the same hashed file in a Server job is fundamental stuff and works just fine when set up properly, so we need to see where you've gone wrong here.
-craig

"You can never have too many knives" -- Logan Nine Fingers
dr.murthy
Participant
Posts: 224
Joined: Sun Dec 07, 2008 8:47 am
Location: delhi

Post by dr.murthy »

Yes. In my job I have one source file, one Transformer, one target file and two hashed file stages (the same hashed file used twice, once for reading and once for writing).

Three fields, say A, B and C, come from my source. My hashed file has fields A, D (populated from B with some transformations) and E (a key management ID). My target file has three fields: A, E and F (F populated from D with some transformations), where E and F come from the same hashed file.
In this design A is the key field.

On the first run my target gets values only for field A; E and F come through as blank values, because those two fields should be populated from the hashed file,
but none of the records get a lookup hit in the same run.
D.N .MURTHY
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia

Post by ray.wurlod »

Then the hashed file was not populated correctly. Check your work. Also you should use consistent column naming.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
vinnz
Participant
Posts: 92
Joined: Tue Feb 17, 2004 9:23 pm

Post by vinnz »

You said one Transformer - are the target file and the hashed file being populated from the same Transformer? In that case, the first time a new key value is encountered it would not already be in the hashed file, and it would be written to the hashed file and the target at the same time. The fields you are trying to look up would only return data the second time that key is encountered in your data within that run.
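For example, a small sketch (plain Python, hypothetical sample rows, column names borrowed from the post above) of that first-miss behaviour:

source_rows = [("A1", "b1"), ("A2", "b2"), ("A1", "b1 again")]  # columns A, B

hashed_file = {}      # written by the same pass that does the lookup
target = []

for a, b in source_rows:
    ref = hashed_file.get(a)                 # reference lookup, no read cache
    target.append((a, ref))                  # a miss leaves the looked-up value blank
    hashed_file[a] = "derived from " + b     # write happens after the lookup

print(target)
# [('A1', None), ('A2', None), ('A1', 'derived from b1')]
# A key that appears only once in the run never gets a non-blank result,
# which matches the blank E and F columns reported earlier in the thread.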

Can you provide some sample data along with what you are trying to accomplish?
prasad.bodduluri
Participant
Posts: 30
Joined: Tue Jan 30, 2007 5:21 am
Location: bangalore

Post by prasad.bodduluri »

Split the job into two parts.
prasad