Datastage server directory running out of space

Post questions here relative to DataStage Server Edition for such areas as Server job design, DS Basic, Routines, Job Sequences, etc.

Moderators: chulett, rschirm, roy

DeepakCorning
Premium Member
Posts: 503
Joined: Wed Jun 29, 2005 8:14 am

Post by DeepakCorning »

Nishant,
Pretty sure that the number of jobs will not fill up your 8-12 GB of space.

One reason can be log files, but as you said they are getting merged, so that should not be the cause.

Check out what Kduke had suggested...
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia
Contact:

Post by ray.wurlod »

Run the find command suggested earlier. Something is growing rapidly. Note that auto-purge does not occur for jobs that abort. Auto-purge is triggered only by successful completion (status DSJS.RUNOK or DSJS.RUNWARN).
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
kduke
Charter Member
Posts: 5227
Joined: Thu May 29, 2003 9:47 am
Location: Dallas, TX
Contact:

Post by kduke »

Code:

#!/bin/ksh 
# 500mb 
find . -size +500000000c -exec ls -l {} \;
The find command has lots of options. The . is the directory it starts looking in. You can make this / to look in every directory on the system. Usually you need to be root to do this or you get lots of permission errors. The -size option says to report files based on size. The + says greater than nnn. In this case nnn is 500000000, and the c says nnn is measured in characters (bytes) rather than blocks. So this is 500 MB.
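As a quick sketch of how that size test behaves (the directory and file names below are made up for illustration; the 600 MB file is sparse, created with dd seek, so it uses no real disk space):

```shell
#!/bin/sh
# Create a throwaway directory with one tiny file and one nominally
# 600 MB sparse file, then run the same find test as above.
demo=$(mktemp -d)
printf 'small' > "$demo/smallfile"
# bs=1 count=0 seek=600000000 sets the file size without writing data.
dd if=/dev/zero of="$demo/bigfile" bs=1 count=0 seek=600000000 2>/dev/null
# Only files larger than 500000000 characters (500 MB) are reported,
# so this lists bigfile but not smallfile.
find "$demo" -size +500000000c -exec ls -l {} \;
rm -rf "$demo"
```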

Once you find some big files you need to find out what created them, and then get rid of them without breaking your jobs. I would say that means moving them to another filesystem.
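One common way to move a file without breaking anything that opens it by its old path is to relocate it and leave a symlink behind. A minimal sketch (the paths /project and /bigdisk and the file name big.dat are hypothetical, not from this thread):

```shell
#!/bin/sh
# Hypothetical paths: /project is the full filesystem, /bigdisk has space.
# Move the large file to the roomy filesystem...
mv /project/big.dat /bigdisk/big.dat
# ...then symlink it back so anything referencing the old path still works.
ln -s /bigdisk/big.dat /project/big.dat
```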
Mamu Kim
Post Reply