I have a job in my production environment which loads around 40 million records from a sequential file into a hashfile. It works fine in production. When I imported the same job into my QA environment, it gave me the following error after loading around 95% of the records.
Error : JobDs155SeqToAverageCostHash..LatestPrdtAvgCost.DSSJU155_AvgCost: WriteHash() - Write failed for record id '7008503420
10998002'
I checked the disk space: the filesystem is only 56% full, so that is not the problem.
Here are some key inputs for the hashfile (the same in both QA and production).
Allow stage write cache, create file, and clear file before writing are all enabled.
File creation type : type 30 (dynamic)
minimum modulus : 531253
group size : 1
split load : 80
merge load : 50
large record : 1628
hash algorithm : general
caching attributes : none
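For anyone debugging the same failure, free disk space is not the only limit that can stop a write: the writing user's file-size ulimit can, and so can the ~2 GB per-file ceiling of a 32-bit hashed file. A quick check sketch (the hashfile path below is a placeholder, not my real project path):

```shell
# Placeholder path -- point this at your own hashfile directory.
HASHFILE="/path/to/DSSJU155_AvgCost"

# A per-process file-size cap can make writes fail long before the
# disk fills; 'unlimited' is what you want for big hashfiles.
echo "file-size ulimit: $(ulimit -f)"

# A type 30 (dynamic) file is a directory containing DATA.30 and
# OVER.30; either one approaching 2 GB suggests the 32-bit limit.
if [ -d "$HASHFILE" ]; then
    ls -l "$HASHFILE/DATA.30" "$HASHFILE/OVER.30"
else
    echo "hashfile not found at $HASHFILE (adjust the path)"
fi
```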
I worked with Abhi to figure out what the problem was. The original file in production was created with mkdbfile using the -64BIT option, by a user ID with an unlimited ulimit; the QA copy was an ordinary 32-bit file, so the write failed once the file grew past the 32-bit size limit.
We recreated the file the same way in QA and the job now runs correctly.
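Roughly what the fix looks like on the server, as a sketch: $DSHOME and the mkdbfile argument layout are assumptions here, so check your engine's mkdbfile usage message before running anything. The -64BIT option and the modulus of 531253 come from the job settings above.

```shell
# 1. Create the file as a user with an unlimited file-size ulimit
#    ('|| true' because a non-root shell may not be able to raise it):
ulimit -f unlimited 2>/dev/null || true

# 2. Recreate the hashfile with the -64BIT option so it can grow
#    past the 32-bit size limit (type 30, minimum modulus 531253;
#    argument order is illustrative -- verify against your version):
if [ -x "$DSHOME/bin/mkdbfile" ]; then
    "$DSHOME/bin/mkdbfile" /path/to/DSSJU155_AvgCost 30 531253 -64BIT
else
    echo "mkdbfile not found under \$DSHOME/bin -- run this on the DataStage server"
fi
```

After recreating the file this way, reload it from the sequential file; the job itself does not need to change.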