
": ds_uvput() - Write failed for record id '38890'"

Posted: Tue Sep 05, 2006 4:00 am
by suresh.narasimha
Dear All,

We are getting the following error while executing a DataStage server job.

": ds_uvput() - Write failed for record id '38890'"

When we executed the job with a userid, say "A", we got this error. We then executed the job with the admin userid, and it completed successfully.
The next day we again executed the same job with userid "A", and this time it executed successfully. We do not understand why it failed the first time, or why it then succeeded with the same userid the second time. Please help us resolve the issue.

regards,
Deepak

Posted: Tue Sep 05, 2006 4:03 am
by ArndW
Most likely the first time it failed the user had read but not write access to the file; the second time, when it worked, either the user's profile or the access rights on the file had changed.
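
A quick way to confirm this is to check what access the job's user actually has on the hashed file. The sketch below is illustrative only, assuming a Unix install and a hypothetical path; note that a dynamic (type 30) hashed file is really a directory containing DATA.30 and OVER.30 segments, so those are worth checking too.

Code:
import os

# Hypothetical path; adjust to your project's hashed file location.
HASHED_FILE = "/projects/myproj/MyHashedFile"

def report_access(path):
    # Report the effective read/write/execute permission for the current user.
    for flag, name in ((os.R_OK, "read"), (os.W_OK, "write"), (os.X_OK, "execute")):
        status = "OK" if os.access(path, flag) else "DENIED"
        print(f"{name:8s}{status:8s}{path}")

report_access(HASHED_FILE)
# A dynamic (type 30) hashed file is a directory; its data lives in these segments.
for seg in ("DATA.30", "OVER.30"):
    seg_path = os.path.join(HASHED_FILE, seg)
    if os.path.exists(seg_path):
        report_access(seg_path)

Run it once as user "A" and once as the admin user and compare the output.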

Posted: Tue Sep 05, 2006 10:43 pm
by loveojha2
suresh.narasimha wrote:
The next day we again executed the same job with userid "A", and this time it executed successfully. We do not understand why it failed the first time, or why it then succeeded with the same userid the second time.

One reason could be what ArndW has suggested; the other could be that the job never got the chance to write to the file at all.

Try to run it under the same conditions as the first run and see whether you can reproduce the failure.

Posted: Tue Sep 05, 2006 11:14 pm
by ray.wurlod
What happened on the system between failure and success? Perhaps the admin running the job brought the hashed file into existence, which the other user did not have permission to do.
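
If that is what happened, you can verify it directly: creating (or deleting and recreating) a hashed file requires write and search permission on the parent directory itself, not just on the file. A minimal sketch, assuming a Unix install and a hypothetical account path:

Code:
import os
import getpass

# Hypothetical; use the directory where your job creates the hashed file.
ACCOUNT_DIR = "/projects/myproj"

# Write plus search (execute) permission on the directory is what allows
# a user to create or remove files inside it.
can_create = os.access(ACCOUNT_DIR, os.W_OK | os.X_OK)
user = getpass.getuser()
print(f"{user} can{'' if can_create else 'not'} create files in {ACCOUNT_DIR}")

If user "A" gets "cannot" here while the admin gets "can", that would explain the pattern you saw.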

Posted: Thu Sep 07, 2006 5:15 am
by deepak_b73
Ray,

The question is why the file ceased to exist in the first place. This job was working fine and we had been running it with user "A" for a long time.
All of a sudden we started getting this error with user "A", so we ran it with the admin userid. If user "A" did not have permission to recreate the file, then how could it have been deleting and creating the file every time we ran the job? We never used the admin userid to run the job.

How can we make sure this problem never happens again?

regards,
Deepak

Posted: Thu Sep 07, 2006 6:52 am
by ray.wurlod
Not much you can do unless you can diagnose why it happened when it did. For example, are there multi-instance jobs involved, one of which might have locked another out? Or is the hashed file written from a shared container that is used in more than one job?
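
To picture the lockout scenario: DataStage uses its own record and group locks on hashed files rather than flock(), so the snippet below is only an analogy, run against a hypothetical /tmp path, showing how a second writer is refused while the first still holds an exclusive lock.

Code:
import fcntl
import os

# Analogy only: DataStage does not use flock() on hashed files, but the
# failure mode is similar - a second writer is refused while the first
# instance still holds an exclusive lock.
LOCK_PATH = "/tmp/hashedfile.lock"  # hypothetical lock target

fd1 = os.open(LOCK_PATH, os.O_CREAT | os.O_RDWR)
fcntl.flock(fd1, fcntl.LOCK_EX)  # first "job instance" takes the lock

fd2 = os.open(LOCK_PATH, os.O_CREAT | os.O_RDWR)
try:
    # Second "instance" tries a non-blocking exclusive lock and is refused.
    fcntl.flock(fd2, fcntl.LOCK_EX | fcntl.LOCK_NB)
except BlockingIOError:
    print("second writer locked out - compare the ds_uvput() write failure")
finally:
    fcntl.flock(fd1, fcntl.LOCK_UN)
    os.close(fd1)
    os.close(fd2)

If two instances of the job, or two jobs sharing the container, ever overlapped in time, that overlap is the first thing to rule out.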