Posted: Tue May 12, 2009 12:58 pm
by chulett
You deleted ones in use by the currently running jobs, and each job logged the fact that it couldn't find its file when it went to remove it. Just... don't do that.

Posted: Tue May 12, 2009 1:54 pm
by tbtcust
Thanks chulett.

1) The jobs continue to throw this warning. How can I stop the jobs from throwing this warning? Can I just restore the files?
2) What are these files used for?
3) Can I redirect which drive and folder the files are written to? There were about 10 gigs' worth when I deleted them.
4) Is there a way to tie the files to the jobs? We have lots of jobs that are no longer valid.

Thanks in advance for your help

Posted: Tue May 12, 2009 2:31 pm
by chulett
1) Because you still have jobs running, I assume. If you restore the ones for the jobs that have not completed yet, I would assume the error would not be generated.

2) Temporary stuff, whatever temp data the process needs to accumulate while the job runs, I would imagine. Honestly can't say other than that.

3) I'm not sure; I believe it defaults to whatever you have set as UVTEMP in the uvconfig file, but I believe you can override that in your config file. Someone else will need to clarify.
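For illustration, a UVTEMP entry in uvconfig looks like the line below. The path is an example only; uvconfig lives in the engine's home directory, and changes to it typically only take effect after the configuration is regenerated and the engine restarted, so this is something to do with your administrator.

```shell
# Excerpt from uvconfig (example path, not a default).
# UVTEMP points the engine's temporary work files at a filesystem
# with enough free space.
UVTEMP /bigdisk/uvtemp
```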

4) I doubt it, but again I don't know. Best to clean things like "/tmp" by the age of the file, if you need to do it manually. Typically this is an automatic thing the SAs set up.
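Cleaning by file age usually means a `find` with `-mtime`. A minimal sketch, using an example directory and a 7-day retention window (both are assumptions, adjust to your environment); the demo setup at the top just creates files to act on:

```shell
#!/bin/sh
# Example temp area (not a DataStage default) with one "old" and one
# "new" file, so the cleanup below has something to work on.
# touch -d is the GNU touch syntax for backdating a timestamp.
TMPDIR_TO_CLEAN=/tmp/etl_tmp_demo
mkdir -p "$TMPDIR_TO_CLEAN"
touch -d '10 days ago' "$TMPDIR_TO_CLEAN/old.tmp"
touch "$TMPDIR_TO_CLEAN/new.tmp"

# Delete regular files not modified for more than 7 days.
# -type f skips directories; dry-run with -print before using -delete.
find "$TMPDIR_TO_CLEAN" -type f -mtime +7 -delete
```

Only the file older than the cutoff is removed; recent files survive.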

Posted: Tue May 12, 2009 5:04 pm
by tbtcust
Thanks chulett. This is very helpful.

Posted: Wed May 13, 2009 7:14 pm
by Oritech
I am a new premium member on this forum...

I am also getting this error in each job that runs.

We are also finding heaps of files accumulated in the tmp space; we are deleting the older ones manually to release space...

Through UVTEMP in the uvconfig file we can specify the path for the tmp files, but not the automatic deletion of files...

How can the deletion be automated?

Posted: Wed May 13, 2009 8:30 pm
by chulett
As noted, that's typically the purview of whoever administers your system. Either the O/S will have something built in or a cron script will run (typically) once a day to clean out files in "temp" locations over X days old. It's really not something anyone not "in authority" should be doing. Talk to your SysAdmins if it is an issue.
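The once-a-day cron job described above might look like the entry below, added with `crontab -e` by a user with rights over the temp area. The path and the 7-day retention are example values, not anything DataStage mandates:

```shell
# m h dom mon dow  command
# Every day at 02:00, remove regular files in the ETL temp area
# that have not been modified for more than 7 days.
0 2 * * * find /opt/etl/tmp -type f -mtime +7 -delete
```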

One possible answer is to create "temp" space specific to your ETL processes, somewhere with enough space where you have the credentials to maintain them. Then you can whack whatever needs whacking when it's whacking time without fear of reprisal. Unless you over-whack, of course. :wink:
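Setting up such a dedicated temp area might look like this sketch. The path is an example, and how your jobs actually pick up a temp location depends on your setup; `TMPDIR` is shown only because it is a common Unix convention that many tools honour:

```shell
#!/bin/sh
# Create an ETL-owned temp area (example path) that only the ETL
# account can read and write, so you can clean it without reprisal.
mkdir -p /opt/etl/tmp
chmod 700 /opt/etl/tmp

# Point TMPDIR-aware processes at it for this session.
export TMPDIR=/opt/etl/tmp
```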

Posted: Wed May 13, 2009 10:31 pm
by ray.wurlod
Whacko!

Posted: Wed May 13, 2009 11:25 pm
by chulett
<whack!>

Posted: Thu May 14, 2009 8:27 pm
by Oritech
Really Whacko! thanks 8)

Posted: Thu May 14, 2009 9:04 pm
by asorrell
Most UNIX systems automatically clean up /tmp on re-boot, which is good enough for a lot of systems (if you boot moderately often). If you re-boot your Windows system often enough, you can add a cleanup script to the end of your boot process - before any DataStage jobs start!
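On Unix systems whose cron supports the `@reboot` alias (Vixie cron and its derivatives do), one simple way to hook a cleanup into startup is a crontab entry like this; the path is an example, and your SysAdmin may prefer a proper init/startup script instead:

```shell
# Runs once each time the system boots, before scheduled jobs begin.
# Clears out the ETL temp area (example path) entirely.
@reboot rm -rf /opt/etl/tmp/*
```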

Posted: Thu May 14, 2009 10:11 pm
by Oritech
How do I add cleanup scripts?

Posted: Fri May 15, 2009 12:08 am
by ray.wurlod
That's definitely not a DataStage question. Consult your system administrator for assistance with automatic start and shutdown scripts.

If you want to amend the DataStage startup/shutdown script, you will find this in the location reported by the uv -admin -info command. Make very sure you take a backup copy of this script before modifying it!