
WriteHash() - Write failed error when loading hashfile

Posted: Thu Aug 30, 2007 4:56 pm
by abhi989
Hi everyone,

I have a job in my production environment which loads around 40 million records from a sequential file into a hashfile. It works fine in production. When I imported the same job into my QA environment, it gives me the following error after it loads around 95% of the records.

Error : JobDs155SeqToAverageCostHash..LatestPrdtAvgCost.DSSJU155_AvgCost: WriteHash() - Write failed for record id '7008503420
10998002'

I checked the disk space and it is at 56% full, so that is not the problem.

Here are the key settings for the hashfile (the same in both QA and production):

Allow stage write cache, Create file, and Clear file before writing are all enabled.
File creation type: Type 30 (Dynamic)
Minimum modulus: 531253
Group size: 1
Split load: 80
Merge load: 50
Large record: 1628
Hash algorithm: GENERAL
Caching attributes: NONE

Delete file before create is also enabled.
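
For reference, those create-time options map roughly onto the engine-level CREATE.FILE command sketched below. This is only an illustration: the project path is a placeholder, uvsh may be dssh on some releases, and the keyword spellings should be verified against the UniVerse documentation for your engine version.

  # Rough engine-level equivalent of the stage's create options (illustrative only)
  . $DSHOME/dsenv                # set up the DataStage engine environment
  cd /path/to/project            # the DataStage project (account) directory
  echo 'CREATE.FILE DSSJU155_AvgCost DYNAMIC MINIMUM.MODULUS 531253 GROUP.SIZE 1 SPLIT.LOAD 80 MERGE.LOAD 50 LARGE.RECORD 1628 HASH.ALGORITHM GENERAL' | $DSHOME/bin/uvsh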

Any help would be appreciated,

thanks,
Abhi

Posted: Thu Aug 30, 2007 6:31 pm
by chulett
Does it work if you disable the 'stage write cache' option?

Posted: Thu Aug 30, 2007 6:41 pm
by abhi989
Disabling 'stage write cache' produces the same result.

Posted: Thu Aug 30, 2007 8:10 pm
by ArndW
How large are the DATA.30 and OVER.30 files when the error occurs? Anywhere close to 2GB?
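
A quick way to check (the path below is a placeholder for wherever the hashed file actually lives):

  # Data and overflow portions of the dynamic (type 30) hashed file.
  # With the default 32-bit internal addressing, either file reaching
  # 2GB (2,147,483,648 bytes) will make further writes fail.
  ls -l /path/to/project/DSSJU155_AvgCost/DATA.30 /path/to/project/DSSJU155_AvgCost/OVER.30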

Posted: Thu Aug 30, 2007 8:21 pm
by ray.wurlod
I'd guess either the 2GB limit, or the occurrence of a mark character other than @TM in the key value.
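
One quick way to test the second possibility is to scan the source file for bytes in the UniVerse mark-character range (the source file path is a placeholder; note this counts every mark, including @TM, so a non-zero result just means the data needs a closer look):

  # Count bytes in the 0xFB-0xFF range (octal 373-377): text mark, sub-value
  # mark, value mark, field mark and item mark. Zero means no mark characters
  # anywhere in the source data.
  tr -cd '\373-\377' < /path/to/source_file.txt | wc -c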

Posted: Tue Sep 04, 2007 1:38 pm
by asorrell
I've worked with Abhi to figure out what the problem was. The original file in production was created with mkdbfile using the -64BIT option, by a user-id with an unlimited ulimit.

We've done the same in QA and the job now works correctly.

Next step: Document this in the job!
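
For anyone who hits this later, the two things to check are the file-size limit of the id that creates the hashed file and whether the file itself uses 64-bit addressing. A rough sketch follows; the paths are placeholders, and the RESIZE command shown is an engine-level alternative to recreating the file with mkdbfile -64BIT, converting an existing hashed file in place:

  # 1. The id that creates/loads the hashed file must not be capped by ulimit,
  #    otherwise writes can still fail regardless of file addressing.
  ulimit -f                      # should report 'unlimited'

  # 2. In-place alternative to mkdbfile -64BIT: convert the existing hashed
  #    file to 64-bit internal addressing from the engine shell.
  . $DSHOME/dsenv
  cd /path/to/project            # the DataStage project (account) directory
  echo 'RESIZE DSSJU155_AvgCost * * * 64BIT' | $DSHOME/bin/uvsh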