Write failed on hash file

jhensarling

Write failed on hash file

Post by jhensarling »

I have a server job that failed with a ds_uvput() write error:

ds_uvput() - Write failed for record id 'COUNTRY
JM'

When I re-ran the job it completed without error but some rows were missing from the hash file.

Other posts with this error seemed to come down to disk space, but I have 7 GB available and this is a very small hash (< 15,000 rows).

The hash is being written to the project directory. The job is not using write caching.

I have recently upgraded to version 7.5 from version 6.0.1. This job has been running for a year and a half under version 6 without failure.

One thing that may be unusual about this job is that it is writing to the same hash file with multiple links. The job reads from a single Oracle stage into a single transformer and then the transformer has multiple links into a single hash file stage. The transformer has logic to set part of the hash file key fields based on the input.
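
In rough terms (the link and column names below are invented just to show the shape of it; "COUNTRY" matches the failing key above), each output link sets the first key column to a literal and passes a code column through:

    * Derivations on output link ToCountryHash (hypothetical names)
    KEY_TYPE = "COUNTRY"
    KEY_CODE = In.COUNTRY_CODE

    * Derivations on output link ToRegionHash (hypothetical names)
    KEY_TYPE = "REGION"
    KEY_CODE = In.REGION_CODE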

As I mentioned, that re-run finished without error and the row count shown in the log was correct, yet not all of the rows ended up in the hash file.

Running the job a third time did write all the rows to the hash file.

I have opened a case with Ascential tech support but I wanted to see if anyone else had run into this type of problem.
sumitgulati
Participant
Posts: 197
Joined: Mon Feb 17, 2003 11:20 pm
Location: India

Post by sumitgulati »

Does the job clear the hash file before writing the new records into it each time it runs? If so, where are you performing the clear?

Regards,
-Sumit
kcbland
Participant
Posts: 5208
Joined: Wed Jan 15, 2003 8:56 am
Location: Lutz, FL
Contact:

Post by kcbland »

Also, all links MUST have the hash filename defined and must have exactly the same metadata. If one link clears the hash file, it must be the link that always gets the first row. Depending on how your constraints discriminate, link ordering may not be enough, because the first output link may not receive the first row. A write failure can also be caused by a reserved character in the primary key, such as a NULL. In your case, who knows?
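
As a rough illustration only (the input link and column names are hypothetical), a constraint along these lines on each output link will trap the obvious offenders before they reach the hash file:

    * Reject rows whose key is null, empty, or contains a mark character.
    * Char(254) is the field mark; Char(251) through Char(253) can be checked the same way.
    Not(IsNull(In.COUNTRY_CODE)) And Len(Trim(In.COUNTRY_CODE)) > 0 And Index(In.COUNTRY_CODE, Char(254), 1) = 0
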
Kenneth Bland

Rank: Sempai
Belt: First degree black
Fight name: Captain Hook
Signature knockout: right upper cut followed by left hook
Signature submission: Crucifix combined with leg triangle
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia
Contact:

Post by ray.wurlod »

One other possibility is that the hashed file suffered some form of internal corruption. You need to check for this before clearing or re-creating the hashed file.

Write failure for record 'keyvalue' often indicates one bad page (group) within the hashed file structure; all the key values for which this is reported tend to belong on that page.
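
A COUNT of the file from the Administrator command window will usually stumble over a damaged group, as will a quick read loop in DataStage BASIC along the lines of the sketch below (the file name is hypothetical, and this is only a check, not a repair):

    * Quick integrity check: read every record and report read failures.
    * Substitute the real path or VOC name of the hashed file.
    OPENPATH "CountryHash" TO F.HASH ELSE STOP "Cannot open hashed file"
    RecCount = 0
    BadCount = 0
    SELECT F.HASH
    LOOP
       READNEXT Id ELSE EXIT
       READ Rec FROM F.HASH, Id THEN
          RecCount = RecCount + 1
       END ELSE
          BadCount = BadCount + 1
          PRINT "Read failed for key ":Id
       END
    REPEAT
    PRINT RecCount:" records read, ":BadCount:" read failures"

If it reports failures (or hangs), suspect the file structure rather than the job, and have the hashed file repaired or rebuilt before re-running.
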
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.