WriteHash() - Write failed error when loading hashfile

abhi989
Participant
Posts: 28
Joined: Mon Sep 19, 2005 2:31 pm

WriteHash() - Write failed error when loading hashfile

Post by abhi989 »

Hi everyone,

I have a job in my production environment which loads around 40 million records from a sequential file into a hashfile. It works fine in production, but when I imported the same job into my QA environment it gives me the following error after loading around 95% of the records.

Error : JobDs155SeqToAverageCostHash..LatestPrdtAvgCost.DSSJU155_AvgCost: WriteHash() - Write failed for record id '7008503420
10998002'

I checked the disk space and it is at 56% full, so that is not the problem.
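
For reference, a check along these lines confirms overall free space on the file system holding the hashed file (the mount point below is only an example, not the real project path):

    df -k /qa/datastage    # file system holding the hashed file (example path)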

Here are the key hashed file settings (the same in both QA and production):

Allow stage write cache, Create file, and Clear file before writing are all enabled.
File creation type: Type 30 (Dynamic)
Minimum modulus: 531253
Group size: 1
Split load: 80
Merge load: 50
Large record: 1628
Hash algorithm: GENERAL
Caching attributes: NONE

Delete file before create is also enabled.
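
Those settings map more or less one-to-one onto UniVerse's dynamic-file parameters. As an illustration only (this is not the command the stage logs, and the file name and project path are assumptions taken from the error message), the equivalent create command at the uvsh/TCL prompt would look something like:

    cd /qa/datastage/ProjectName    # example project directory
    $DSHOME/bin/uvsh
    CREATE.FILE DSSJU155_AvgCost DYNAMIC GENERAL MINIMUM.MODULUS 531253 GROUP.SIZE 1 SPLIT.LOAD 80 MERGE.LOAD 50 LARGE.RECORD 1628

The last line is typed at the TCL prompt inside uvsh, not in the Unix shell.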

Any help would be appreciated,

thanks,
Abhi
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

Does it work if you disable the 'stage write cache' option?
-craig

"You can never have too many knives" -- Logan Nine Fingers
abhi989
Participant
Posts: 28
Joined: Mon Sep 19, 2005 2:31 pm

Post by abhi989 »

Disabling 'stage write cache' produces the same result.
ArndW
Participant
Posts: 16318
Joined: Tue Nov 16, 2004 9:08 am
Location: Germany
Contact:

Post by ArndW »

How large are the DATA.30 and OVER.30 files when the error occurs? Anywhere close to 2GB?
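
For anyone checking the same thing: both files live inside the hashed file's directory, so something like this shows the sizes (path and file name are examples):

    ls -l /qa/hash/DSSJU155_AvgCost/DATA.30 /qa/hash/DSSJU155_AvgCost/OVER.30
    # a 32-bit hashed file fails once either of these nears 2 GB (2,147,483,647 bytes)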
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia
Contact:

Post by ray.wurlod »

I'd guess either the 2GB limit, or the occurrence of a mark character other than @TM in the key value.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
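
A quick way to test the second theory above is to scan the source data for mark characters other than @TM, i.e. bytes 0xFC-0xFF (@SM, @VM, @FM, @IM). A minimal sketch, assuming GNU grep and an example source file name (this scans whole records, not just the key column):

    LC_ALL=C grep -cP '[\xFC-\xFF]' /qa/source/avg_cost_extract.dat
    # a non-zero count means at least one record contains a system delimiter byte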
asorrell
Posts: 1707
Joined: Fri Apr 04, 2003 2:00 pm
Location: Colleyville, Texas

Post by asorrell »

I've worked with Abhi to figure out what the problem was. The original file in production was created with mkdbfile using the -64BIT option, by a user-id with an unlimited ulimit.

We've done the same in QA and the job now works correctly.

Next step: Document this in the job!
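
For the record, the shape of that fix is roughly the following. The path and the mkdbfile sizing arguments are assumptions (check mkdbfile's usage message on your release); the -64BIT option and the unlimited file-size ulimit for the creating user are the parts that actually mattered here:

    ulimit -f unlimited    # remove the per-process file size cap first
    $DSHOME/bin/mkdbfile /qa/hash/DSSJU155_AvgCost 30 531253 1 -64BIT
    # -64BIT creates the file with 64-bit addressing, so DATA.30/OVER.30 can grow past 2 GB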
Andy Sorrell
Certified DataStage Consultant
IBM Analytics Champion 2009 - 2020