Hi,
we are constantly facing an issue where the scratch space fills up, which causes our jobs to abort. We have around 15 jobs using the same scratch space (4 nodes), and around 4 of these fail every day.
Please let me know of any possible solutions. Do I need a scratch space in another file system, and two configuration files? Please note that adding another file system is not possible at this stage.
waiting for some replies,
Rohit
scratch space issue
Moderators: chulett, rschirm, roy
- Charter Member
- Posts: 560
- Joined: Wed Jul 13, 2005 5:36 am
- Location: Ohio
- Participant
- Posts: 54607
- Joined: Wed Oct 23, 2002 10:52 pm
- Location: Sydney, Australia
- Contact:
In a word, MORE.
You cannot stint for parallel jobs. If you can't get another file system, then you have no possible solution for "file system full" and you should go back to server jobs until you can get some more disk space. Parallel jobs are far more resource hungry than server jobs; the extra resources are the cost of the increased throughput benefit, which is probably why your employers bought Enterprise Edition. Given the money they spend on that, another disk or two is peanuts in comparison.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
- Participant
- Posts: 3593
- Joined: Thu Jan 23, 2003 5:25 pm
- Location: Australia, Melbourne
- Contact:
Are those 15 jobs running concurrently, or are they sequenced? You could try running just one job at a time; since this is a parallel architecture, each job should still run quickly.
As Ray says you shouldn't be getting anywhere close to 100% usage on those disks. Expand them or find another disk somewhere and turn it into a resource pool for a specific function such as sorting.
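A dedicated resource pool like that is declared in the parallel configuration file (the file named by APT_CONFIG_FILE). A minimal sketch, with hypothetical host and path names — substitute your own:

```
{
    node "node1"
    {
        fastname "etlhost"    /* hypothetical host name */
        pools ""
        resource disk "/data/ds" { pools "" }
        resource scratchdisk "/scratch1" { pools "" }
        /* extra disk reserved for sort spill files only */
        resource scratchdisk "/scratch_sort" { pools "sort" }
    }
}
```

With a scratch disk in the "sort" pool, sort operators spill there first, leaving the general scratch pool for everything else.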
Certus Solutions
Blog: Tooling Around in the InfoSphere
Twitter: @vmcburney
LinkedIn: Vincent McBurney
It's always worth cleaning entries that are no longer needed from file systems; the fact that people don't is what keeps disk vendors in business (in part, at least). Pretty much anything in the directories identified as scratch disk resources is a candidate (provided no jobs are running), as is anything in the directory whose pathname is stored as UVTEMP in the uvconfig file. Ideally, /tmp is not used by DataStage.
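Housekeeping like this can be scripted. A minimal sketch in shell, using a placeholder directory and an age threshold of one day; substitute the real scratchdisk paths from your configuration file (and the UVTEMP directory from uvconfig), and run it only while no DataStage jobs are active:

```shell
# SCRATCH is a placeholder -- point it at a real scratch directory.
# Here it defaults to a throwaway directory so the sketch is safe to run.
SCRATCH=${SCRATCH:-$(mktemp -d)}

# Simulate a stale temporary file left behind by an aborted job,
# back-dated so it appears more than a day old.
touch -t 202001010000 "$SCRATCH/tsort.stale"

# Remove anything in the scratch area not modified in the last day.
find "$SCRATCH" -type f -mtime +1 -delete
```

Scheduling something like this between batch windows keeps aborted-job leftovers from slowly eating the scratch space.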
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.