scratch space issue

Post questions here relative to DataStage Enterprise/PX Edition for such areas as Parallel job design, Parallel datasets, BuildOps, Wrappers, etc.

Moderators: chulett, rschirm, roy

Post Reply
Krazykoolrohit
Charter Member
Posts: 560
Joined: Wed Jul 13, 2005 5:36 am
Location: Ohio

scratch space issue

Post by Krazykoolrohit »

Hi,

We are constantly facing this issue of scratch space filling up, which causes our jobs to abort. We have around 15 jobs using the same scratch space (4 nodes), but out of these around 4 fail every day.

Please let me know of any possible solutions. Do I need scratch space on another file system and two configuration files? Please note that adding another file system is not possible at this stage.

Waiting for some replies,
Rohit
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia
Contact:

Post by ray.wurlod »

In a word, MORE.

You cannot stint for parallel jobs. If you can't get another file system, then you have no possible solution for "file system full" and you should go back to server jobs until you can get some more disk space. Parallel jobs are far more resource hungry than server jobs; the extra resources are the cost of the increased throughput benefit, which is probably why your employers bought Enterprise Edition. Given the money they spend on that, another disk or two is peanuts in comparison.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
vmcburney
Participant
Posts: 3593
Joined: Thu Jan 23, 2003 5:25 pm
Location: Australia, Melbourne
Contact:

Post by vmcburney »

Are those 15 jobs running concurrently, or are they sequenced? You could try running just one job at a time; since this is a parallel architecture, they should still run quickly.

As Ray says you shouldn't be getting anywhere close to 100% usage on those disks. Expand them or find another disk somewhere and turn it into a resource pool for a specific function such as sorting.
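The resource-pool idea can be sketched in an APT configuration file. Everything below is illustrative: the node name, fastname, and paths are placeholders, not your actual layout. The scratchdisk placed in the "sort" pool is preferred for sort spill, keeping it away from the general scratch area.

```
{
  node "node1"
  {
    fastname "etlserver"
    pools ""
    resource disk "/data/datasets" {pools ""}
    resource scratchdisk "/scratch" {pools ""}
    resource scratchdisk "/scratch_sort" {pools "sort"}
  }
}
```

Point APT_CONFIG_FILE at this file (or a variant of it) for the jobs that should use the dedicated sort scratch area.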
kumar_s
Charter Member
Posts: 5245
Joined: Thu Jun 16, 2005 11:00 pm

Post by kumar_s »

Sometimes not all the files produced at run time are cleaned up. If the problem is that serious, it may be worth cleaning the temp directories manually (periodically).
Impossible doesn't mean 'it is not possible' actually means... 'NOBODY HAS DONE IT SO FAR'
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia
Contact:

Post by ray.wurlod »

It's always worth cleaning entries that are no longer needed from file systems. It's the fact that people don't that keeps disk vendors in business (in part at least). Pretty much anything in the directories identified as scratchdisk resource is a candidate (provided no jobs are running), as well as anything in the directory whose pathname is stored as UVTEMP in the uvconfig file. Ideally /tmp is not used by DataStage.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
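The manual cleanup suggested above can be scripted. This is a hedged sketch, not a supported utility: the age threshold and the directories you pass in are assumptions for your site, and it must only run when no DataStage jobs are active, since deleting live scratch files will abort jobs.

```shell
#!/bin/sh
# clean_scratch: remove leftover files older than 2 days from the
# scratch directories given as arguments. Run only when no DataStage
# jobs are active; adjust -mtime to your site's retention policy.
clean_scratch() {
  for dir in "$@"; do
    [ -d "$dir" ] || continue
    # -mtime +2: files not modified within the last 2 days
    find "$dir" -type f -mtime +2 -print -delete
  done
}

# Example invocation (paths are placeholders):
# clean_scratch /scratch1 /scratch2
```

A cron entry calling this outside the batch window is a common way to keep scratch usage from creeping up between runs.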
Post Reply