": ds_uvput() - Write failed for record id '38890'"

Post questions here relative to DataStage Server Edition for such areas as Server job design, DS Basic, Routines, Job Sequences, etc.

Moderators: chulett, rschirm, roy

suresh.narasimha
Premium Member
Posts: 81
Joined: Mon Nov 21, 2005 4:17 am
Location: Sydney, Australia

": ds_uvput() - Write failed for record id '38890'"

Post by suresh.narasimha »

Dear All,

We were getting the following error while executing a DataStage server job:

": ds_uvput() - Write failed for record id '38890'"

When we executed the job with a user ID, say "A", we got this error. We then executed the job with the admin user ID, and the job ran successfully.
The next day we ran the same job with user ID "A" again, and this time it executed successfully. We cannot understand why it failed the first time, and why it then succeeded with the same user ID the second time. Please help us resolve the issue.

regards,
Deepak
SURESH NARASIMHA
ArndW
Participant
Posts: 16318
Joined: Tue Nov 16, 2004 9:08 am
Location: Germany

Post by ArndW »

Most likely, the first time it failed the user had read but not write access to the file; the second time, when it worked, either the user's profile or the access rights on the file had changed.
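
If you want to confirm the permission theory before the next scheduled run, a before-job routine can test write access directly. Here is a minimal DataStage BASIC sketch; the file name TARGET.HASH, the routine name CheckWriteAccess, and the test key are placeholder names, not anything from the failing job.

Code:
      * Sketch of a write-access test for a hashed file.
      * TARGET.HASH, CheckWriteAccess, and the test key are placeholders.
      RoutineName = 'CheckWriteAccess'

      OPEN '', 'TARGET.HASH' TO F.Target ELSE
         Call DSLogFatal('Cannot open TARGET.HASH - check that it exists and is readable', RoutineName)
      END

      * A throwaway write; the ON ERROR clause traps I/O failures such
      * as a file the user can read but not write.
      WRITE '' ON F.Target, '%%ACCESS.TEST%%' ON ERROR
         Call DSLogFatal('Write failed - no write permission on TARGET.HASH', RoutineName)
      END

      * Remove the test record again.
      DELETE F.Target, '%%ACCESS.TEST%%'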
loveojha2
Participant
Posts: 362
Joined: Thu May 26, 2005 12:59 am

Post by loveojha2 »

The next day we ran the same job with user ID "A" again, and this time it executed successfully. We cannot understand why it failed the first time, and why it then succeeded with the same user ID the second time.


One reason could be what ArndW has suggested; the other could be that the job never got the chance to write to the file.

Try to run it under the same conditions as the first run and see whether you can replicate the failure.
Success consists of getting up just one more time than you fall.
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia

Post by ray.wurlod »

What happened on the system between failure and success? Perhaps the admin running the job brought the hashed file into existence, which the other user did not have permission to do.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
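
For what it's worth, the open-or-create pattern Ray describes can be expressed in DataStage BASIC: if the OPEN fails, create the hashed file and retry. A minimal sketch, assuming a dynamic (type 30) file; TARGET.HASH and OpenOrCreate are placeholder names.

Code:
      * Sketch: open a hashed file, creating it if it does not exist.
      * TARGET.HASH and OpenOrCreate are placeholder names.
      RoutineName = 'OpenOrCreate'

      OPEN '', 'TARGET.HASH' TO F.Target ELSE
         * The open failed; try to create the file (type 30 = dynamic).
         * This is the step that needs create permission in the account
         * directory, which is what user "A" may have lacked.
         EXECUTE 'CREATE.FILE TARGET.HASH 30' CAPTURING CmdOutput
         OPEN '', 'TARGET.HASH' TO F.Target ELSE
            Call DSLogFatal('Cannot create TARGET.HASH: ' : CmdOutput, RoutineName)
         END
      END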
deepak_b73
Participant
Posts: 12
Joined: Thu Feb 16, 2006 1:06 am
Location: Bangalore

Post by deepak_b73 »

Ray,

The question is why the file went out of existence in the first place. This job had been working fine, and we had been running it with user "A" for a long time.
All of a sudden we started getting this error with user "A", so we ran the job with the admin user ID. If user "A" did not have permission to bring the file back into existence, how has it been deleting and creating the file every time we ran the job? We never used the admin user ID to run the job.

How can we make sure this problem never happens again?

regards,
Deepak
Deepak Bhat
Bangalore
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia

Post by ray.wurlod »

Not much you can do unless you can diagnose why it happened when it did. For example, are there multi-instance jobs involved, one of which might have locked another out? Or is the hashed file written from a shared container that is used in more than one job?
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
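
If locking is the suspect, the conflict can be made visible rather than silently waited on. A minimal DataStage BASIC sketch using READU's LOCKED clause; TARGET.HASH and LockProbe are placeholder names, and record id '38890' is borrowed from the error message purely as an example.

Code:
      * Sketch: probe for a record lock held by another process.
      * TARGET.HASH and LockProbe are placeholders; '38890' is the
      * record id from the error message, used only as an example.
      RoutineName = 'LockProbe'

      OPEN '', 'TARGET.HASH' TO F.Target ELSE
         Call DSLogFatal('Cannot open TARGET.HASH', RoutineName)
      END

      * READU requests an update lock; without a LOCKED clause the
      * program would simply block if another instance held the lock.
      READU Rec FROM F.Target, '38890' LOCKED
         * STATUS() identifies the owner of the conflicting lock.
         Call DSLogWarn('Record 38890 is locked by user ' : STATUS(), RoutineName)
      END THEN
         RELEASE F.Target, '38890'
      END ELSE
         * Record not found; READU still took the lock, so release it.
         RELEASE F.Target, '38890'
      END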