
Posted: Fri Dec 18, 2009 6:16 am
by priyadarshikunal
Is the file system full or getting filled when you run jobs?

It can get full even if you don't land your data on the server.

The usual culprits are:

&PH& - phantom logs
RT_LOGnnn - job logs
Oracle logs in the scratch disk
leftover files in the scratch disk, such as tsort* and lookup*, which can consume a lot of space.
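
For example (the directory names are only illustrative, so substitute your own dataset and scratch paths), a quick check from the Unix side might look like:

du -sk /uX1/Datasets /uX2/Scratch 2>/dev/null | sort -n                     # rank the dataset and scratch areas by size (KB)
find /uX2/Scratch \( -name 'tsort*' -o -name 'lookup*' \) -mtime +1 -ls     # temp files older than a day, likely left by aborted jobs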

Posted: Fri Dec 18, 2009 9:14 am
by RAI ROUSES
Hi Priyadarshi,
thanks for your help.

We have some big datasets from the parallel tests that we are doing. I tried to remove them using the Data Set Management option, but it returns the message:
(DSD.GetScriptDir) Cannot open file RT_SCTEMP

How can I safely remove the datasets?

The project where we created the datasets is a test project. If we remove the project, does that also remove the datasets?
And after we remove the project, can we manually remove the datasets?


Each DataStage Server project takes 70 MB to 80 MB, which seems normal to me.
I have cleaned the &PH& directory and the RT_LOG files, but they were only a small size.

Where is the scratch disk defined? What can I remove from it?

Thanks
8) Rai 8)

Posted: Fri Dec 18, 2009 10:10 am
by mf_arts
Hi Rai,

The scratch disk is defined in the configuration file, as in the example below:
{
    node "node1"
    {
        fastname "Servername"
        pools ""
        resource disk "/uX1/Datasets/" {pools ""}
        resource scratchdisk "/uX2/Scratch/" {pools ""}
    }
}

For better performance the scratch disk should be on a different file system from the resource disk, and if you run several large sorts you can also add further scratchdisk resources on other file systems (see the sketch below). To change or create a configuration file:
Designer --> Tools --> Configuration File, then either create a new file or edit the default one (system-created, with at least one node), then save and check it (the check compiles the configuration file).
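
For example (the node name, host name and mount points are only illustrative), a node whose scratch space is spread over two file systems might look like:

{
    node "node1"
    {
        fastname "Servername"
        pools ""
        resource disk "/uX1/Datasets/" {pools ""}
        resource scratchdisk "/uX2/Scratch/" {pools ""}
        resource scratchdisk "/uX3/Scratch/" {pools ""}
    }
}

Putting the extra scratchdisk on a separate physical device spreads the temporary sort I/O instead of piling it onto one disk.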

Regards

Posted: Fri Dec 18, 2009 3:05 pm
by ray.wurlod
Read the manual on the orchadmin command. It has additional options that allow you to force things (such as moving or deleting Data Sets) to occur.
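
As a rough sketch (the paths and the dataset name mytest.ds are hypothetical, and the exact sub-commands and force options vary by release, so check the parallel engine manual), deleting a Data Set from the command line looks something like:

. $DSHOME/dsenv                               # set up the engine environment (typical location, not guaranteed)
export APT_CONFIG_FILE=/path/to/default.apt   # the configuration file the dataset was created with
orchadmin delete /uX1/Datasets/mytest.ds      # removes the descriptor and the data files it points to

Using orchadmin is safer than deleting descriptors or data files by hand, because it removes both sides together.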

Re: Datastage server file system full

Posted: Fri Jul 16, 2010 6:58 am
by PhilHibbs
RAI ROUSES wrote: What do we have to do to manage the file system of the DataStage server correctly?
I recommend a disk usage analysis utility such as SpaceMonger - it's for Windows, so you'll have to point it at a SAMBA drive mapping. It will tell you where all your disk space has gone, unless it's in directories that the share user can't access (for example, Server Edition creates some temporary directories that can only be read by the user that ran the job, but those should be on your data partition anyway if you have specified a Directory for temporary sort files). There are probably equivalents for Unix, such as the du command, but SpaceMonger has a nice clear visual layout.
http://en.wikipedia.org/wiki/Category:D ... s_software
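
On Unix (the paths are examples), du and sort give a similar ranking of the heaviest directories from the command line:

du -sk /uX1/Datasets/* /uX2/Scratch/* 2>/dev/null | sort -n | tail -20    # the 20 largest entries, sizes in KB

Run it against the project, dataset and scratch directories to see where the space has actually gone.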