Dear All,
We are getting the following error while executing a DataStage server job:
": ds_uvput() - Write failed for record id '38890'"
When we executed the job with a user id, say "A", we got this error. We then executed the job with the admin user id, and it completed successfully.
The next day we again executed the same job with user id "A", and this time it ran successfully. We do not understand why it failed the first time, and then why it succeeded when run with the same user id the second time. Please help us resolve the issue.
regards,
Deepak
SURESH NARASIMHA
Most likely, the first time it failed the user had read but not write access to the file, and the second time, when it worked, either the user's profile or the access rights to the file had changed.
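One way to test this theory from the operating system is to check, as the job's user, whether the hashed file is both readable and writable. The sketch below simulates the situation with a throwaway directory; the DATA.30/OVER.30 layout applies to dynamic (type 30) hashed files, and the path used here is purely a stand-in, not a real project path.

```shell
#!/bin/sh
# Sketch (assumed layout): a dynamic (type 30) hashed file is a directory
# containing DATA.30 and OVER.30. The path below is a throwaway stand-in,
# NOT a real project path.
HF=/tmp/demo_hashed_file
mkdir -p "$HF"
touch "$HF/DATA.30" "$HF/OVER.30"
chmod 444 "$HF/DATA.30"    # simulate a data file the user cannot write

for f in "$HF/DATA.30" "$HF/OVER.30"; do
    [ -r "$f" ] && r="readable" || r="unreadable"
    [ -w "$f" ] && w="writable" || w="not writable"
    echo "$f: $r, $w"
done
```

Running the same check as user "A" against the real hashed file would show whether either DATA.30 or OVER.30 lacks write access, which would explain a ds_uvput() write failure.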
Next day we again executed the same job with userid "A". This time it executed successfully. We are not able to understand why it failed the first time, and then why it executed successfully when run with the same userid the second time.
One reason could be what ArndW has suggested; another could be that the job never got the chance to write to the file at all.
Try to run it under the same conditions as the first run and see whether you can reproduce the failure.
Success consists of getting up just one more time than you fall.
What happened on the system between failure and success? Perhaps the admin running the job brought the hashed file into existence, which the other user did not have permission to do.
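If that is what happened, the hashed file created by the admin run would be owned by the admin account, and whether user "A" can then write to it depends on the group and other permission bits. A minimal sketch of the ownership check follows; the path is a stand-in created here for the demo, and `stat -c` is GNU coreutils syntax (BSD `stat` uses `-f`).

```shell
#!/bin/sh
# Demo stand-in for a hashed file directory; in a real project you would
# point HF at the hashed file under the project or account directory.
HF=/tmp/demo_hashed_owner
mkdir -p "$HF"
touch "$HF/DATA.30" "$HF/OVER.30"

# Who owns the hashed file and its data/overflow parts, and what mode?
ls -ld "$HF" "$HF/DATA.30" "$HF/OVER.30"

# Script-friendly form (GNU coreutils stat; BSD stat differs)
stat -c '%n owner=%U group=%G mode=%a' "$HF/DATA.30" "$HF/OVER.30"
```

If the owner shows the admin account and the group/other bits lack write permission, every later run by user "A" would hit the same ds_uvput() failure until the mode is opened up (for example with `chmod g+w`) or ownership is corrected.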
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
Ray,
The question is why the file ceased to exist in the first place. This job had been working fine, and we had been running it with user "A" for a long time.
All of a sudden we started getting this error with user "A", so we ran the job with the admin user id. If user "A" did not have permission to recreate the file, how has the job been deleting and creating the file every time we ran it? We never used the admin user id to run the job.
How can we make sure this problem never happens again?
regards,
Deepak
Deepak Bhat
Bangalore
Not much you can do unless you can diagnose why it happened when it did. For example, are there multi-instance jobs involved, one of which might have locked another out? Or is the hashed file written from a shared container used in more than one job?
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.