Hash file, reading and writing in same job

Post questions here relating to DataStage Server Edition for such areas as Server job design, DS Basic, Routines, Job Sequences, etc.

Kirtikumar
Participant
Posts: 437
Joined: Fri Oct 15, 2004 6:13 am
Location: Pune, India

Hash file, reading and writing in same job

Post by Kirtikumar »

Hi,

Sorry for posting a question that has been discussed so many times; I have already searched a lot on this topic.

I am trying to use a hashed file for reference (lookup) purposes and to update that same hashed file when a match is not found. Also, the requirement is: if two rows on the stream link have the same key and the key does not exist in the hashed file, then after the first row is inserted, the second one should not be inserted.
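
In DataStage BASIC terms, what I want per stream row is something like the sketch below (the hashed file name CUST_HASH, the key value and the column value are just made up for illustration):

      * Sketch only: lookup-then-insert against one hashed file.
      OPEN "CUST_HASH" TO FileVar ELSE STOP "Cannot open CUST_HASH"
      Key = "12345"   ;* key column of the incoming stream row
      READ Rec FROM FileVar, Key THEN
         NULL   ;* key already there: a second row with this key is not inserted
      END ELSE
         * Key not found: insert it now, so the next stream row with
         * the same key takes the THEN branch above instead.
         Rec = ""
         Rec<1> = "first row with this key wins"
         WRITE Rec TO FileVar, Key
      END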

In some posts it is mentioned that, in the hashed file stage on the reference link, preload should be set to 'Disabled, lock for updates', and in others that it should be 'Enabled, lock for updates'.
For the target hashed file, write cache should be disabled.

Can anyone help me out as to which are the correct settings on the reference hashed file stage and the target hashed file stage for this sort of requirement?
Also, if anyone can give the reason, or a reference to the reason, it will be a great help for me.

Thanks in advance.
Regards,
S. Kirtikumar.
kcbland
Participant
Posts: 5208
Joined: Wed Jan 15, 2003 8:56 am
Location: Lutz, FL

Post by kcbland »

No read or write caching, turn off interprocess and row buffering.
Kenneth Bland

Rank: Sempai
Belt: First degree black
Fight name: Captain Hook
Signature knockout: right upper cut followed by left hook
Signature submission: Crucifix combined with leg triangle

Post by Kirtikumar »

kcbland wrote:No read or write caching, turn off interprocess and row buffering.
Thanks Kenneth!!!

So I should go with the following settings:
Off - interprocess and row buffering in the job properties.

But what should the preload setting be in the reference hashed file stage:

Enabled
Enabled, lock for updates
Disabled
Disabled, lock for updates
Regards,
S. Kirtikumar.

Post by kcbland »

Disabled.

Post by Kirtikumar »

kcbland wrote:Disabled.
Thanks Kenneth!!!
Regards,
S. Kirtikumar.
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia

Post by ray.wurlod »

Disabled, lock for updates.

This sets a record-level update lock if the key is not found, on the assumption that you're going to write that key into the hashed file.
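
At the BASIC level this is the READU behaviour: the update lock is taken on the record ID whether or not the record exists, and the subsequent WRITE releases it. A minimal sketch (the file name and key value are illustrative only):

      OPEN "CUST_HASH" TO FileVar ELSE STOP "Cannot open CUST_HASH"
      Key = "12345"
      READU Rec FROM FileVar, Key THEN
         NULL   ;* found: Rec is the lookup result; the lock is held
      END ELSE
         * Not found: the update lock on Key is still held, on the
         * assumption that this key is about to be written.
         Rec = ""
      END
      WRITE Rec TO FileVar, Key   ;* the WRITE releases the record lock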
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

For what it's worth, the last time I had to do this I found that plain old Disabled is what worked for me. I don't recall exactly why, but I had some strange behaviour when I tried 'Disabled, lock for updates'. Switching back to Disabled made it do exactly what it needed to. :?

Always best, in my opinion, to build little jobs to specifically test things like this. Switch it around and note how each option change affects the job. Then you'll know which way is right for your particular situation.
-craig

"You can never have too many knives" -- Logan Nine Fingers

Post by kcbland »

Not to argue with Ray, but 'Disabled, lock for updates' means that a reference lookup of a row will put a lock on that row. Failing to then write that row leaves the lock hanging. The job will degrade in performance as the internal lock table progressively fills with unreleased locks, until the job basically freezes.

Only use locking if you absolutely need the row locked, meaning that some other job could be accessing and modifying the same row at the same time, which in my opinion is a BAD DESIGN for a lot of reasons.
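
In BASIC terms, every READU needs a matching WRITE or an explicit RELEASE. Something like this guard (the RowWillBeSkipped flag and the names are only illustrative):

      OPEN "CUST_HASH" TO FileVar ELSE STOP "Cannot open CUST_HASH"
      Key = "12345"
      RowWillBeSkipped = 1   ;* illustrative: some rows never get written
      READU Rec FROM FileVar, Key ELSE
         IF RowWillBeSkipped THEN
            * This key will never be written, so hand the lock back
            * instead of leaving it in the internal lock table.
            RELEASE FileVar, Key
         END
      END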

Post by ray.wurlod »

The lock is only set if the lookup fails. It "expects" that that key is about to be written. Just good database practice; prevents lost updates.
changming
Participant
Posts: 68
Joined: Wed Oct 13, 2004 3:35 am

Reading and writing the same hashed file: be careful about performance

Post by changming »

I have a job that reads and writes the same hashed file. I disabled caching and used 'lock for updates', and the performance is bad (not acceptable).
Be careful when using that 'lock for updates' option.

Post by ray.wurlod »

changming wrote:performance is bad (not acceptable).

Presumably lost updates are acceptable! :shock: