
Cleaning the Scratch space

Posted: Mon Dec 18, 2006 12:30 am
by ashik_punar
Hi Everyone,

In one of my jobs I am facing a "Scratch Space Full" problem. After going through all the posts present on the forum, I came to know that my scratch space is almost full. I have 4 processors on my server, and my config file looks something like this:


{
node "node1"
   {
   fastname "tfukmhfirp1"
   pools ""
   resource disk "/opt/Ascential/DataStage/Datasets" {pools ""}
   resource scratchdisk "/opt/Ascential/DataStage/Scratch" {pools ""}
   }
node "node2"
   {
   fastname "tfukmhfirp1"
   pools ""
   resource disk "/opt/Ascential/DataStage/Datasets" {pools ""}
   resource scratchdisk "/opt/Ascential/DataStage/Scratch" {pools ""}
   }
node "node3"
   {
   fastname "tfukmhfirp1"
   pools ""
   resource disk "/opt/Ascential/DataStage/Datasets" {pools ""}
   resource scratchdisk "/opt/Ascential/DataStage/Scratch" {pools ""}
   }
node "node4"
   {
   fastname "tfukmhfirp1"
   pools ""
   resource disk "/opt/Ascential/DataStage/Datasets" {pools ""}
   resource scratchdisk "/opt/Ascential/DataStage/Scratch" {pools ""}
   }
}

If there is any problem with my config file, please guide me on the same.

When I run 'df' on the Scratch and Datasets folders on my Unix box, I get the following output:


$ df -k /opt/Ascential/DataStage/Scratch
Filesystem 1024-blocks Free %Used Iused %Iused Mounted on
/dev/hd10opt 20971520 5816064 73% 28088 3% /opt



$ df -k /opt/Ascential/DataStage/Datasets
Filesystem 1024-blocks Free %Used Iused %Iused Mounted on
/dev/hd10opt 20971520 5815996 73% 28088 3% /opt


As per all the posts on the forum, in order to avoid this situation I am supposed to do 2 things:
1) Increase the scratch space, which I have to request from higher authorities.
2) Clean up the scratch space. In order to do that, what do I have to do? I don't know what command I need to run, or whether it is a TCL command or a Unix command. Please help me on this.


Thanks a lot for all the help you have been providing.

Thanks in Advance,

Posted: Mon Dec 18, 2006 3:15 pm
by ray.wurlod
1. Request additional file systems (NOT additional directories in the same file system) from your "higher authorities". Ideally, use a separate file system for each processing node, to maximize I/O throughput. Then, on an SMP environment, institute a configuration that "gives all nodes all the disk".

2. Scratch space should be cleaned up automatically. List the content of /opt/Ascential/DataStage/Scratch to see whether this is the case. There should be nothing there if no jobs are running; anything that remains at that point can be deleted.
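A minimal shell sketch of that check, using the scratch path from the post above. The one-day age cutoff is only an illustrative assumption; the real rule is simply "nothing should be here while no jobs run":

```shell
# Inspect leftover scratch files; only do this when NO jobs are running.
# Path taken from the original post -- adjust to your install.
SCRATCH=${SCRATCH:-/opt/Ascential/DataStage/Scratch}

if [ -d "$SCRATCH" ]; then
    # Should list nothing if no jobs are running.
    ls -l "$SCRATCH"

    # Regular files older than one day are likely leftovers from
    # aborted jobs; review this list before deleting anything.
    find "$SCRATCH" -type f -mtime +1 -print

    # After reviewing, the same find can remove them:
    # find "$SCRATCH" -type f -mtime +1 -exec rm {} \;
fi
```

The delete step is left commented out deliberately: review the listing first, and never run it while jobs are active.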

If /opt/Ascential/DataStage/Scratch is empty and you are still getting scratch disk full errors, then there is no alternative but to configure additional scratch disk resource. As you begin to use more persistent Data Sets, chances are you will need more disk resource too.

The following example configuration gives all nodes all the disk, and provides four one-node node pools and two two-node node pools.


/* Four node configuration giving all nodes all the disk */
{ 
node "node1" 
   { 
   fastname "tfukmhfirp1" 
   pools "" "p1" "h1"
   resource disk "/d0/DataSets" {pools ""} 
   resource disk "/d1/DataSets" {pools ""} 
   resource disk "/d2/DataSets" {pools ""} 
   resource disk "/d3/DataSets" {pools ""} 
   resource scratchdisk "/d1/Scratch" {pools ""} 
   resource scratchdisk "/d2/Scratch" {pools ""}
   resource scratchdisk "/d3/Scratch" {pools ""}
   resource scratchdisk "/d0/Scratch" {pools ""}
   } 
node "node2" 
   { 
   fastname "tfukmhfirp1" 
   pools "" "p2" "h1"
   resource disk "/d1/DataSets" {pools ""} 
   resource disk "/d2/DataSets" {pools ""} 
   resource disk "/d3/DataSets" {pools ""} 
   resource disk "/d0/DataSets" {pools ""} 
   resource scratchdisk "/d2/Scratch" {pools ""} 
   resource scratchdisk "/d3/Scratch" {pools ""}
   resource scratchdisk "/d0/Scratch" {pools ""}
   resource scratchdisk "/d1/Scratch" {pools ""}
   } 
node "node3" 
   { 
   fastname "tfukmhfirp1" 
   pools "" "p3" "h2"
   resource disk "/d2/DataSets" {pools ""} 
   resource disk "/d3/DataSets" {pools ""} 
   resource disk "/d0/DataSets" {pools ""} 
   resource disk "/d1/DataSets" {pools ""} 
   resource scratchdisk "/d3/Scratch" {pools ""} 
   resource scratchdisk "/d0/Scratch" {pools ""}
   resource scratchdisk "/d1/Scratch" {pools ""}
   resource scratchdisk "/d2/Scratch" {pools ""}
   } 
node "node4" 
   { 
   fastname "tfukmhfirp1" 
   pools "" "p4" "h2"
   resource disk "/d3/DataSets" {pools ""} 
   resource disk "/d0/DataSets" {pools ""} 
   resource disk "/d1/DataSets" {pools ""} 
   resource disk "/d2/DataSets" {pools ""} 
   resource scratchdisk "/d0/Scratch" {pools ""} 
   resource scratchdisk "/d1/Scratch" {pools ""}
   resource scratchdisk "/d2/Scratch" {pools ""}
   resource scratchdisk "/d3/Scratch" {pools ""}
   } 
}
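A quick sanity check before pointing jobs at such a file. The configuration file path below is an assumed example, not a DataStage default; the grep pattern assumes the node definitions start at column 0, as in the file above:

```shell
# Sanity-check an APT configuration file before using it.
# The path is an example -- use wherever you saved the config.
CONFIG=${CONFIG:-/opt/Ascential/DataStage/Configurations/4node.apt}

if [ -f "$CONFIG" ]; then
    # Count the logical node definitions; should print 4 for the
    # configuration above.
    grep -c '^node ' "$CONFIG"

    # Parallel jobs pick the file up through APT_CONFIG_FILE.
    export APT_CONFIG_FILE="$CONFIG"
fi
```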

Posted: Mon Dec 18, 2006 10:19 pm
by ashik_punar
Hi Ray,

Thanks alot for the help. I will be getting this thing done.

Thanks a again for all the help.

Posted: Tue Dec 19, 2006 2:40 am
by ashik_punar
HI Ray,

Sorry for keeping you busy. But I wanted to check: can I delete all the entries in the /opt/Ascential/DataStage/Datasets folder also? When I check this folder, it has a huge number of entries. This folder is being used to hold the Datasets. Please provide your valuable views on this.

Thanks in Advance,

Posted: Tue Dec 19, 2006 3:59 am
by ray.wurlod
Why are they there? Which jobs created them? Are they still required?

The only safe way to delete Data Sets and File Sets is to use either the orchadmin utility or the graphical Data Set Management utility in the Manager client.

DO NOT attempt to delete them using rm commands.
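A sketch of the orchadmin route. The Data Set name below is hypothetical, and the exact subcommands should be confirmed against the orchadmin documentation for your release:

```shell
# Delete a persistent Data Set with orchadmin rather than Unix rm.
# The descriptor path below is a made-up example.
DS=/opt/Ascential/DataStage/Datasets/example_load.ds

if command -v orchadmin >/dev/null 2>&1; then
    # Long-list the descriptor and its segment files first.
    orchadmin ll "$DS"

    # Removes the descriptor AND the data segment files it points to;
    # a plain Unix rm on the descriptor would leave those orphaned
    # on the resource disk directories.
    orchadmin rm "$DS"
else
    echo "orchadmin not on PATH; run this from a DataStage environment" >&2
fi
```

The point of the warning above is that the .ds file is only a descriptor: the actual data segments live under the resource disk directories, and only orchadmin (or the graphical Data Set Management utility) removes both.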

Posted: Tue Dec 19, 2006 6:55 am
by DSguru2B
ray.wurlod wrote:Why are they there? Which jobs created them? Are they still required?

The only safe way to delete Data Sets and File Sets is to use either the orchadmin utility or the graphical Data Set Management utility in the Manager client.

DO NOT attempt to delete them using rm commands.
Except the orchadmin rm command :wink:

Posted: Tue Dec 19, 2006 10:13 am
by johnthomas
From what I have noted, it is the control files and log files for the Enterprise stages that are stored here. If a running job is using one of these control files, deleting it will create issues. As far as I know, there should be no problem deleting these files (control and log files) if no jobs are running, since the control files will be generated again.

Posted: Tue Dec 19, 2006 2:40 pm
by ray.wurlod
... unless, of course, the persistent Data Sets contain data staged for tomorrow's run!

Posted: Tue Dec 19, 2006 3:17 pm
by johnthomas
Hi Ray,

What I have noticed is that lookup data is stored in this directory
/var/dstage/Ascential/DataStage/Datasets
and it gets deleted after the job completes. Any idea
where the data set used by sqlldr is stored?

john

Posted: Tue Dec 19, 2006 5:36 pm
by ray.wurlod
This is a property of the Oracle bulk loader stage (or the Oracle Enterprise stage). The answer, therefore, is "wherever you specify". Right now I don't have access to Enterprise Edition, so I cannot check what the default value of this property is.