Scratch disk space full

Post questions here relative to DataStage Enterprise/PX Edition for such areas as Parallel job design, Parallel datasets, BuildOps, Wrappers, etc.

Moderators: chulett, rschirm, roy

RAJEEV KATTA
Participant
Posts: 103
Joined: Wed Jul 06, 2005 12:29 am

Scratch disk space full

Post by RAJEEV KATTA »

I have a scenario where I am running a job and it gets aborted saying the scratch disk space is full. The job runs in production and I have no privileges to change the config file. Are there any environment variables, or any other method, by which I could resolve the problem without touching the config file? If I break the job into two jobs, can I delete the scratch disk contents after the first job runs so that I would have space for the second job? Would deleting scratch disk data affect any other jobs?
aakashahuja
Premium Member
Posts: 210
Joined: Wed Feb 16, 2005 7:17 am

Post by aakashahuja »

Any temporary datasets get automatically cleaned up once the job is over. It is the persistent datasets that remain. So if your job needs to create big datasets, you might want to spend time with your administrators to get more space, or add more scratch disks of smaller sizes.
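
If you do end up breaking the job in two and need to reclaim space between runs, remove any intermediate persistent datasets with the orchadmin utility rather than deleting files from the scratch or resource directories by hand, so the descriptor file and its data files are removed together. For example (the dataset path here is just an illustration):

Code: Select all

    orchadmin rm /data/ds/intermediate_step1.ds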

If it's a job you are just testing, then you can also create your own config file with a larger scratch disk (assuming you have write permission on the directory).
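
As a rough sketch, a one-node config file looks something like this (the host name and paths are placeholders; point the scratchdisk resource at a file system with enough free space):

Code: Select all

    {
        node "node1"
        {
            fastname "your_host"
            pools ""
            resource disk "/data/ds/datasets" {pools ""}
            resource scratchdisk "/big_fs/scratch" {pools ""}
        }
    }

Setting the APT_CONFIG_FILE environment variable (as a job parameter, for instance) to the full path of this file makes the run use it instead of the default config, which also answers the environment variable part of your question.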

Cheers
aakash
Stop trying to be perfect… let's evolve.