
Row locking problem in Hash File

Posted: Sun Sep 25, 2005 8:43 pm
by rleishman
I have a routine that creates/opens/reads/writes a hash file. It is called from a Transformer (i.e. once for each row). The hash file is created on the first row. On any given call, the routine writes to the file only if the key does not already exist.

The routine works perfectly for the first 600 rows (writing about 50-100 new keys to the hash file) and then the job hangs. I know the routine is at fault because I removed it and the job ran to completion.

The commands I am using to create, open, read, and write the hash file are as follows:

Code:

* Create the hash file (type 30 = dynamic) by shelling out to mkdbfile
StrCommand = "$DSHOME/bin/mkdbfile " : HashFile : " 30 1 4 20 50 80 1628"
Call DSExecute("UNIX", StrCommand, OutPut, RetCode)

* Open the file for direct access
Openpath HashFile TO UniqFileHandle

* Read the record for key Val, taking an update lock
Readu ValExists From UniqFileHandle, Val

* Write the record, retaining the update lock
Writeu Val On UniqFileHandle, Val
When I change the readu and writeu to read and write, the job runs to completion. This leads me to believe that I was locking myself out of the file. I am actually happy not to lock the records because the hash file is unique to a single-instance job (no sharing required), but I am concerned that my understanding of locking is wrong.

I thought that if my session holds a lock, then I can readu and even writeu to my heart's content in the same session. Am I wrong?

I suspect it is something to do with dynamically resizing hash files, or possibly caching, because there are definitely duplicates in the first 600 rows where I would be re-locking a row, and it gets past those fine.

Some further info:
* I am not using inter-process communication in the job - there is definitely only one session.
* There is no-one else trying to use the file; I am the only one on the machine, and this is the only job I am running.
* Between runs I had to kill the jobs from Unix and then stop/restart DS.

Any help or ideas would be greatly appreciated.

Posted: Sun Sep 25, 2005 8:52 pm
by kduke
Change writeu to write. The writeu leaves the record locked.

Posted: Sun Sep 25, 2005 8:54 pm
by rleishman
Um... I thought that's what I said :?
rleishman wrote: When I change the readu and writeu to read and write, the job runs to completion.
Thanks anyway.

Posted: Sun Sep 25, 2005 8:59 pm
by kduke
No. You need the readu. You do not need the writeu.
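
In outline, something like this (just a sketch, reusing the variable names from your snippet):

Code:

* Take an update lock while testing whether the key exists
Readu ValExists From UniqFileHandle, Val Then
   * Key already exists - skip this row
   * (note: the update lock taken by Readu is still held here)
End Else
   * Key not found: Readu still holds the lock on this key,
   * and a plain Write both stores the record and releases it
   Write Val On UniqFileHandle, Val
End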

Posted: Sun Sep 25, 2005 9:18 pm
by rleishman
Ken,

Advice noted: I should not write after a read because someone else may have locked/updated it in the meantime.

In this instance I don't need the writeu because I never intend to update the row again. But what if I did want to update the row again? The SDK routine KeyMgtGetNextValue is an example where it works fine, except that routine uses CREATE.FILE and Open instead of mkdbfile and Openpath.

Are you able to explain why DS waited on the lock when my session was the one holding it?

Posted: Sun Sep 25, 2005 9:28 pm
by kduke
Who is Ken?

You had to update the same hash file key twice, otherwise it would not have to wait. Either you have duplicates or you are using up all the available locks. How many rows are we talking about?

Posted: Sun Sep 25, 2005 9:34 pm
by kduke
I just looked at KeyMgtGetNextValue. It only writes to one key value, and it does not need to write that record until the end of the job - it writes it more times than it needs to. It should be rewritten, but this version is much safer to distribute. The faster version would always need to reseed the value from the target, because there is no way to know which is the last record in the data stream.

When the job finishes, the lock is released automatically - unless you kill the job with kill -9, or via Task Manager on Windows.

Posted: Sun Sep 25, 2005 10:11 pm
by rleishman
Kim,

My sincerest apologies over the "Ken" thing :oops: - I'll get my glasses checked.

The "run out of locks" thing sounds promising! The routine does a readu and then just moves on to the next row if the key exists - it never releases the lock. The write is performed only when the readu fails.

By not releasing the locks I imagine I am exhausting a finite resource.

Have now added release commands as appropriate and it works! Thanks.
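
For the record, the fixed logic now looks roughly like this (same names as my earlier snippet):

Code:

* Sketch of the corrected test-and-write logic
Readu ValExists From UniqFileHandle, Val Then
   * Key already exists: release the update lock taken by Readu,
   * otherwise every duplicate key leaves a stale lock behind
   Release UniqFileHandle, Val
End Else
   * New key: a plain Write stores the record and releases the lock
   Write Val On UniqFileHandle, Val
End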

I cannot find anything in the doco about the number of row locks that are possible. There are probably lots of interesting system config parameters... can anyone point me to the correct manual?

Posted: Sun Sep 25, 2005 10:45 pm
by ray.wurlod
This is actually quite a complex topic. The sizes of the lock tables are, as you guess, configurable. There are, however, a number of these tables: file locks, group latches, readu locks, readl locks, waiters. The best reference is Administering UniVerse, which you can download from IBM's web site.
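
If you do end up tuning them, the settings live in the uvconfig file under $DSHOME; you re-run uvregen and restart the server after changing them. From memory the relevant entries look something like this (treat the values as approximate defaults and check your own file):

Code:

# Excerpt from $DSHOME/uvconfig (defaults as I recall them)
RLTABSZ 75       # size of each row of the record (readu) lock table
GLTABSZ 75       # size of each row of the group lock table
MAXRLOCK 74      # record locks per file before escalation to a file lock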

Posted: Sun Sep 25, 2005 11:17 pm
by rleishman
Nice, 492 pages :x

I'll check with the client to see if they want to employ a UV DBA. Until I get a response from them, I think I'll just make sure I release my locks. :)