
/tmp getting full

Posted: Fri Jun 11, 2010 2:59 am
by myukassign
Hi

My job is getting aborted without any proper warning in Director. I did everything I could to find the problem, but nothing looks wrong in the job itself.

I observed that the /tmp directory suddenly fills from 80% to 100%, and the job aborts once it reaches 100%. I am not directly creating or reading any file in /tmp. Also, in my configuration file I pointed the resource disk and scratch disk at a different directory, not /tmp. So how is it getting full?


I logged in via PuTTY as the dsadm user and tried to create some files on the scratch/resource disk to check whether I had lost permissions, but nothing like that: I am able to create the files. So how is DataStage using /tmp, and how is it getting full?

Any help would be appreciated. Many thanks in advance.

Posted: Fri Jun 11, 2010 3:28 am
by ETLJOB
What kind of files have been written to the /tmp directory recently? Did you analyze that? As I recall, database load statistics, scheduler log files, and many other log files can also end up in /tmp, depending on the environment settings.
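To analyze that from the shell, a quick sketch using generic UNIX commands (nothing DataStage-specific) will show which files are filling /tmp. Note that `sort -h` is a GNU extension; on stock AIX, `du -ak /tmp | sort -rn` gives the same ranking in kilobytes.

```shell
# List the biggest entries under /tmp, largest first.
du -ah /tmp 2>/dev/null | sort -rh | head -20

# Show large files modified in the last two hours, with sizes,
# to catch whatever grows while the job runs.
find /tmp -type f -mmin -120 -size +50M -exec ls -lh {} \; 2>/dev/null
```

Running the `du` line while the job executes usually points straight at the culprit file.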

Posted: Fri Jun 11, 2010 3:45 am
by myukassign
ETLJOB wrote:What kind of files have been written to the /tmp directory recently? Did you analyze that? As I recall, database load statistics, scheduler log files, and many other log files can also end up in /tmp, depending on the environment settings.
Yes, there is a big file being created, and I don't know why.

the file name is "machineLog.8001.puxc8204.20100610114625"

and part of the file content looks like this:

<?xml version="1.0" encoding="UTF-8" ?>
<machine_resource_output version="0.1" start_date="2010-06-10 11:46:25" framework_revision="IBM WebSphere DataStage Enterprise Edition 8.1.0.5040 ">
<machine_description>
<host name="puxc8204" domain=""/>
<platform name="AIX" compiler="" version="5.3"/>
<cpus count="8" model="PowerPC_POWER5 2096MHz"/>
<memory totalRAM="32768000" totalSwap="82575360"/>
</machine_description>
<layout delimeter="," sc

Posted: Fri Jun 11, 2010 4:17 am
by ETLJOB
Yeah... this is the machine log file, which captures the processor information, memory information, etc., on the server. You can either ask the admins to increase the space for /tmp, or route this file to be written to some other directory. Let's wait for our big guns to fire their thoughts on this! :wink:

Posted: Mon Jun 14, 2010 1:37 am
by myukassign
I increased the /tmp space and my job is not getting aborted anymore.

Thanks.

Posted: Tue Jun 15, 2010 3:20 am
by priyadarshikunal
I am treating this as a parallel-job question, as mentioned in the description.

/tmp is used by the Lookup stage to create temporary files during execution, so either make it big enough to meet your lookup requirements or change the TMPDIR environment variable to point to a disk with more space. You may also need to create a user-defined environment variable called TEMPDIR and point it to a disk with more space (I use the scratch disk location).

If /tmp is almost full, you might also need to change the TMPDIR variable in the uvconfig file. Don't forget to stop the DataStage engine, regenerate the configuration, and start the services again.

I think that after changing these variables, DataStage will no longer use /tmp for anything.
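As a sketch of the TMPDIR part, the shell side would look something like the following. The scratch path here is hypothetical; substitute your own resource/scratch disk, and note that for all jobs to pick the variable up, the export belongs in the engine's dsenv file followed by an engine restart.

```shell
# Hypothetical scratch location; use your actual scratch disk path.
export TMPDIR="$HOME/ds_scratch/tmp"
mkdir -p "$TMPDIR"

# The same export would go into the engine's dsenv file so every job
# inherits it. The uvconfig change mentioned above is a separate step:
# edit the file, stop the engine, regenerate the config, and restart
# the services, per your installation's admin procedure.
```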

Posted: Tue Jun 15, 2010 3:34 am
by myukassign
Thanks, that really helps.

Posted: Tue Jun 15, 2010 6:15 am
by chulett
However, /tmp is really where stuff like that belongs on a UNIX server. If you are having space issues there, I would make sure of two things:

1) An appropriate amount of space has been allocated for it
2) Your SysAdmins are properly managing / pruning / culling files from there

Once you move files of that nature elsewhere, then you become responsible for cleaning up after any processes that fail to clean themselves up properly.
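That housekeeping can be sketched with a standard `find` one-liner. This is a dry run that only lists candidates matching the machine-log name seen earlier in this thread; once the list looks safe on your system, `-delete` can be appended (or the whole line dropped into a cron job).

```shell
# Dry run: list DataStage machine-log files in /tmp older than 7 days.
# Append -delete only after confirming the listed files are safe to remove.
find /tmp -name 'machineLog.*' -type f -mtime +7 -print
```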