Row locking problem in Hash File

Post questions here relative to DataStage Server Edition for such areas as Server job design, DS Basic, Routines, Job Sequences, etc.

Moderators: chulett, rschirm, roy

rleishman
Premium Member
Posts: 252
Joined: Mon Sep 19, 2005 10:28 pm
Location: Melbourne, Australia
Contact:

Row locking problem in Hash File

Post by rleishman »

I have a routine that creates/opens/reads/writes a hash file. It is called from a Transformer (i.e. once for each row). The hash file is created on the first row. On any call to the routine, it writes to the file only if the key does not already exist.

The routine works perfectly for the first 600 rows (writing about 50-100 new keys to the hash file) and then the job hangs. I know the routine is at fault because I removed it and the job ran to completion.

The commands I am using to create, open, read, and write the hash file are as follows:

Code:

StrCommand = "$DSHOME/bin/mkdbfile " : HashFile : " 30 1 4 20 50 80 1628"
Call DSExecute("UNIX", StrCommand , OutPut, RetCode)

Openpath HashFile TO UniqFileHandle

Readu ValExists From UniqFileHandle, Val

Writeu Val On UniqFileHandle, Val
When I change the readu and writeu to read and write, the job runs to completion. This leads me to believe that I was locking myself out of the file. I am actually happy not to lock the records because the hash file is unique to a single-instance job (no sharing reqd), but I am concerned that my understanding of locking is wrong.

I thought that if my session holds a lock, then I can readu and even writeu to my heart's content in the same session. Am I wrong?

I suspect it is something to do with dynamic re-sizing hash files or possibly caching, because there are definitely duplicates in the first 600 rows where I would be re-locking a row and it gets past these fine.

Some further info:
* I am not using inter-process communication in the job - there is definitely only one session.
* There is no-one else trying to use the file; I am the only one on the machine, and this is the only job I am running.
* Between runs I had to kill the jobs from Unix and then stop/restart DS.

Any help or ideas would be greatly appreciated.
Ross Leishman
kduke
Charter Member
Posts: 5227
Joined: Thu May 29, 2003 9:47 am
Location: Dallas, TX
Contact:

Post by kduke »

Change writeu to write. The writeu leaves the record locked.
Mamu Kim
rleishman
Premium Member
Posts: 252
Joined: Mon Sep 19, 2005 10:28 pm
Location: Melbourne, Australia
Contact:

Post by rleishman »

Um... I thought that's what I said :?
rleishman wrote: When I change the readu and writeu to read and write, the job runs to completion.
Thanks anyway.
Ross Leishman
kduke
Charter Member
Posts: 5227
Joined: Thu May 29, 2003 9:47 am
Location: Dallas, TX
Contact:

Post by kduke »

No. You need the readu. You do not need the writeu.
Mamu Kim
rleishman
Premium Member
Posts: 252
Joined: Mon Sep 19, 2005 10:28 pm
Location: Melbourne, Australia
Contact:

Post by rleishman »

Ken,

Advice noted: I should not write after a read because someone else may have locked/updated it in the meantime.

In this instance I don't need the writeu because I never intend to update the row again. But what if I did want to update the row again? The SDK routine KeyMgtGetNextValue is an example where it works fine, except that routine uses CREATE.FILE and Open instead of mkdbfile and Openpath.

Are you able to explain why DS waited on the lock when my session was the one holding it?
Ross Leishman
kduke
Charter Member
Posts: 5227
Joined: Thu May 29, 2003 9:47 am
Location: Dallas, TX
Contact:

Post by kduke »

Who is Ken?

You had to have updated the same hash file key twice, otherwise it would not have to wait. Either you have duplicates, or you are using up all the available locks. How many rows are we talking about?
Mamu Kim
kduke
Charter Member
Posts: 5227
Joined: Thu May 29, 2003 9:47 am
Location: Dallas, TX
Contact:

Post by kduke »

I just looked at KeyMgtGetNextValue. It only writes to one key value, and it does not need to write that record until the end of the job, so it writes the record more times than it needs to. It should be rewritten, but this version is much safer to distribute: the faster version would always need to reseed the value from the target, because there is no way to know the last record in the data stream.

When the job finishes, the lock is released automatically unless you kill the job with kill -9 or via Task Manager on Windows.
Mamu Kim
rleishman
Premium Member
Posts: 252
Joined: Mon Sep 19, 2005 10:28 pm
Location: Melbourne, Australia
Contact:

Post by rleishman »

Kim,

My sincerest apologies over the "Ken" thing :oops: - I'll get my glasses checked.

The "run out of locks" thing sounds promising! The routine does a readu and then just moves on to the next row if the key exists - it does not release the lock. The write is performed only when the readu fails.

By not releasing the locks I imagine I am exhausting a finite resource.

I have now added Release statements as appropriate, and it works! Thanks.
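For the record, a sketch of the corrected logic (illustrative names again), with the lock explicitly released when the key is already present:

```
Readu Val From UniqFileHandle, KeyVal Then
   * Key already exists - release the update lock taken by Readu
   Release UniqFileHandle, KeyVal
End Else
   * New key - a plain Write stores the record and releases the lock
   Write Val On UniqFileHandle, KeyVal
End
```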

I cannot find anything in the doco about the number of row locks that are possible. There are probably lots of interesting system config parameters... can anyone point me to the correct manual?
Ross Leishman
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia
Contact:

Post by ray.wurlod »

This is actually quite a complex topic. The sizes of the lock tables are, as you guessed, configurable. There are, however, a number of these tables: file locks, group latches, READU locks, READL locks, waiters. The best reference is Administering UniVerse, which you can download from IBM's web site.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
rleishman
Premium Member
Posts: 252
Joined: Mon Sep 19, 2005 10:28 pm
Location: Melbourne, Australia
Contact:

Post by rleishman »

Nice, 492 pages :x

I'll check with the client to see if they want to employ a UV DBA. Until I get a response from them, I think I'll just make sure I release my locks. :)
Ross Leishman