
File access problem - Process cannot access file

Posted: Mon Aug 23, 2004 8:36 am
by Xanadu
Hello,

This is my job:

Table ---> TFM ---> hash file.
When I run this job I get the following error message:

DSD.UVOpen
rm: (DATA.30) The process cannot access the file because it is being used by another process.
Unable to DELETE file "Filename".
"Filename" is already in your VOC file as a file definition record.
File name =
File not created.

I ran DS.TOOLS and there is no other process using this file, and no lock active on this job.
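In case it matters, my understanding is that the same check can be done at the TCL prompt with LIST.READU, the standard UniVerse lock-listing command:

    >LIST.READU EVERY
    (lists every file, group and record lock currently held on the server)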
Any solution to this problem?

Thanks
-Xan

Posted: Mon Aug 23, 2004 8:56 am
by Amos.Rosmarin
Hi

The quickest and easiest way ... close the DS server and kill all uvsh processes, or just reboot :lol:

You will not see it in DS.TOOLS, but if you insist, look at the control panel for a uvsh PID that you don't see in DS.TOOLS ... not very scientific, but it works.
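
Something like this from a command prompt, assuming Windows XP/2003-style tasklist and taskkill (on older Windows use Task Manager instead):

    tasklist | findstr uvsh
    (lists all uvsh processes with their PIDs)
    taskkill /F /PID <pid>
    (force-kills the leftover uvsh process - the PID that DS.TOOLS doesn't show)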


HTH
Amos

Posted: Mon Aug 23, 2004 9:05 am
by kcbland
Make sure you don't have an Explorer session looking in the hash file directory, as you won't be able to delete and recreate the hash file.
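
If you want to see exactly which process is holding the file open, a tool like Sysinternals' handle utility (assuming you can put it on the server) will tell you:

    handle.exe DATA.30
    (prints every process that has an open handle on a file or directory whose path contains DATA.30)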

Posted: Mon Aug 23, 2004 9:09 am
by Xanadu
Thanks Amos.
Actually, I already did that and I still have the problem.
In fact, after I bounced the machine, even the earlier error message doesn't come up. The job just enters the running state and stays like that unless I explicitly stop it. When this happened after my earlier reboot, I deleted the job and rewrote it (a simple one, so it didn't take long..)
..and again I get that error message. Now when I reboot the machine again, the same thing happens - the job just stays in the running state with the link in blue (no error message in the job log..) but no output..

any inputs ?
-Xan

Posted: Mon Aug 23, 2004 9:12 am
by Xanadu
No, Ken..
I don't have any Explorer session open...
In fact, after I reboot the machine, that error message doesn't come up, but the job just stays in the running state (something like an infinite loop..)

thanks ken
-Xan

Posted: Mon Aug 23, 2004 11:06 am
by ketfos
Hi,
You have mentioned that the job remains in the running state, something like an infinite loop.

1. Kill the job and related process on the server and client.
2. Open the job in Designer.

Can you open the job in Designer?

Ketfos

Posted: Mon Aug 23, 2004 11:33 am
by Xanadu
It says the job is still running.
If I recompile, it works fine. Now if I run it again, it shows me that error message again (it doesn't go into the "infinite loop" this time..)
I tried to delete the file explicitly using DELETE.FILE filename,
and the system throws a DATA.30 error that the file is already in use.
The funny part is that there is another job that writes to this file, and it works fine.
Wait a minute.. let me try something
*Comes back*
You know what, I wrote to a new file, "delete_file", and it still throws the same error. So it's something with the job, not that particular hash file.
In fact, I tried deleting this new file using DELETE.FILE, and again got the same error, that "delete_file" is in use.
Hmmmm..... any other inputs?
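
For the record, here is the kind of thing I have been trying, plus what I understand (possibly wrongly - please correct me) to be the way to clear a stale lock from TCL:

    >DELETE.FILE delete_file
    (fails with the DATA.30 "file in use" error)
    >LIST.READU EVERY
    (to find the user number that owns the lock)
    >UNLOCK USER <userno> ALL
    (releases all locks held by that user - needs admin rights, so I haven't dared run it yet)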

Thanks for your response
-Xan

Posted: Mon Aug 23, 2004 12:01 pm
by ketfos
Hi,
What are you doing in the TFM and the hash file stage?

Ketfos

Posted: Mon Aug 23, 2004 12:04 pm
by ketfos
Hi,
Could you kill the job and the related server processes successfully?

After this, did you check the status of the job in Director?

Was it still showing the job status as running?

If it was still showing the job status as running after you killed the processes,
then you need to look for other processes related to this job.

Let us know.


Ketfos

Posted: Mon Aug 23, 2004 12:58 pm
by Xanadu
I actually don't have a TFM.... (I don't know what I was thinking when I wrote that.. I am working on tons of jobs with that setup... so I guess I got confused.)
This is the structure:
Table --> Hashfile.

(I just read from the table and write to the hash file. As simple as that..)
Yeah.. it's showing the job status as running even after killing the processes..
When I look at the list of processes in DS.TOOLS, I see a process like this:

SSELECT RT_LOG871 WITH @ID LIKE '1N0N' COUNT.SUP

That exists even after I "log out of the process"... Is this a normal process? What's with "SSELECT"?
(My job ID is 871 - notice this ID in the RT_LOG871. I think this is the reason why it's showing that the job is running...)
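
From what I can tell, that SSELECT is just a sorted select over the job's log file - the '1N0N' pattern matches purely numeric record IDs, i.e. the actual log entries - so I am guessing it is the Director log view's query left hanging. As a harmless check (COUNT is standard TCL, if I have this right):

    >COUNT RT_LOG871
    (counts the records in the log file; if even this hangs,
    something still has RT_LOG871 locked)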

thanks ket..
-Xan

Posted: Mon Aug 23, 2004 2:37 pm
by ketfos
Hi,
What I understand is that even after you kill the job and the related processes,
it still shows the status as 'RUNNING'.
That was the reason I specifically asked you to look at the status of the job.

Before doing anything further, you need to kill all processes related to this job on your client machine using the Clean Up Resources option in Director.


Ketfos

Posted: Mon Aug 23, 2004 3:35 pm
by Xanadu
Thanks again, ket.
I always use DS.TOOLS for cleanup.. in fact, I just checked, and Clean Up Resources isn't even enabled for my job (don't know why?)
Man, this is crazy..
I just tried deleting the job and it says:
"Error while deleting job ..
RT_CONFIG871 has apparently not been deleted from directory
RT_STATUS871 has apparently not been deleted from directory
RT_LOG871 has apparently not been deleted from directory"

What could have gone wrong? :-S
I deleted the job, but I am sure this problem would persist even when I rewrite the job...
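
If I ever get the locks released, I assume I could clean up the leftover files from TCL myself with something like this (just my guess - shout if deleting the RT_ files by hand is a bad idea):

    >DELETE.FILE RT_CONFIG871
    >DELETE.FILE RT_STATUS871
    >DELETE.FILE RT_LOG871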

-Xan

Posted: Mon Aug 23, 2004 7:36 pm
by dhiraj
Clean Up Resources is not enabled in your Director because you have not enabled runtime job administration in your project. To do this, log on to Administrator, select the project, go to Properties, and select the check box stating "Enable run time job administration in Director". Click OK, close the Administrator, and connect to your Director. Voila, you have Clean Up Job Resources enabled.

Coming to the real problem: I have encountered this a couple of times. Director shows the job status as running, but the job isn't actually running. To overcome this, open the job in Designer and simply press Compile.
Doing this has solved my problem more than a few times. I have no explanation as to why this happens.
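
One more thing you can do to double-check what the engine really thinks, assuming the dsjob command-line client is available on your server, is:

    dsjob -jobinfo <project> <jobname>
    (reports the job's current status as the server sees it,
    independent of what Director's status view shows)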

Dhiraj

Posted: Tue Aug 24, 2004 12:39 am
by ray.wurlod
The job status you see in the Director status view is the most recent entry the job process (DSD.RUN) was able to make in its RT_STATUSnnn file.

Presumably it was aborted in such an abrupt fashion that it never got the chance to update the RT_STATUSnnn file with a "stopped" or "aborted" or "crashed" status.

This is one of the reasons you should never use kill -9 to stop a DataStage DSD.RUN process.
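
If you must stop a runaway job from the command line, the safer route (again assuming the dsjob client is available) is a controlled stop rather than a kill:

    dsjob -stop <project> <jobname>

That requests a clean stop through the engine, which gives DSD.RUN the chance to write a final status to its RT_STATUSnnn file.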