Scratch disk space full
Moderators: chulett, rschirm, roy
-
- Participant
- Posts: 103
- Joined: Wed Jul 06, 2005 12:29 am
Scratch disk space full
I have a scenario where I am running a job and it aborts with a "scratch disk space full" error. The job runs in production, and I have no privileges to change the config file. Are there any environment variables, or any other method, by which I could resolve the problem without touching the config file? If I break the job into two jobs, can I delete the scratch disk data after the first job runs so that I have space for the second job? Would deleting scratch disk data affect any other jobs?
-
- Premium Member
- Posts: 210
- Joined: Wed Feb 16, 2005 7:17 am
Any temporary datasets are automatically cleaned up once the job is over. It's the persistent datasets that remain. So if your job needs to create big datasets, you might want to spend time with your administrators to get more space, or perhaps add more scratch disks of smaller sizes.
If it's a job you are just testing, you can also create your own config file with a larger scratch disk (assuming you have write permission on the directory).
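As a minimal sketch, a personal parallel configuration file with a bigger scratch area might look like the fragment below (the hostname and directory paths here are placeholders; substitute ones you actually have write access to, and point the job at the file via the APT_CONFIG_FILE environment variable):

```
{
    node "node1"
    {
        fastname "myhost"
        pools ""
        resource disk "/data/mydisk" {pools ""}
        resource scratchdisk "/data/my_big_scratch" {pools ""}
    }
}
```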
Cheers
aakash
Stop trying to be perfect… let's evolve.