Server File System Getting Full

Post questions here relative to DataStage Server Edition for such areas as Server job design, DS Basic, Routines, Job Sequences, etc.

Moderators: chulett, rschirm, roy

sasidhar_kari
Premium Member
Posts: 62
Joined: Wed Dec 08, 2004 2:26 am

Server File System Getting Full

Post by sasidhar_kari »

Hi,
In our development environment, the File System on which DataStage is installed is getting full. Currently we are doing some testing and other developement on the server. All our intermediate files are created in a seperate file system. I am unable to find out the reason for this. Searched in the forum, but could not find any posts.

Request some input on this.

Thanks in Advance,
Sasi.
jhmckeever
Premium Member
Posts: 301
Joined: Thu Jul 14, 2005 10:27 am
Location: Melbourne, Australia

Post by jhmckeever »

Hi Sasi,

This is a good place to start: viewtopic.php?t=89545

HTH,
John.
<b>John McKeever</b>
Data Migrators
<b><a href="https://www.mettleci.com">MettleCI</a> - DevOps for DataStage</b>
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia

Post by ray.wurlod »

Prevent this from happening: get more space. Full file systems can result in corrupted hashed files and other bad things that you really don't want to occur.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

Yes, filling up that partition is a Very Bad Thing. Very.

I'd also suggest you check the size of your jobs' logs; that has been my biggest culprit when looking for space eaters. Specifically, check the size of the RT_LOGnnn hashed files in your various projects.
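A quick way to spot oversized logs from the Unix side is sketched below. The projects root shown is illustrative, so adjust it for your own installation; each RT_LOGnnn hashed file is a directory on disk, which is why du can total it:

```shell
#!/bin/sh
# List the 20 largest RT_LOGnnn job-log hashed files across all projects.
# The projects root below is illustrative -- adjust for your installation.
PROJECTS_ROOT=${1:-/opt/IBM/InformationServer/Server/Projects}

# Each RT_LOGnnn hashed file is a directory; du -sk totals its size in KB.
du -sk "$PROJECTS_ROOT"/*/RT_LOG* 2>/dev/null | sort -rn | head -20
</imports>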
-craig

"You can never have too many knives" -- Logan Nine Fingers
DSguru2B
Charter Member
Posts: 6854
Joined: Wed Feb 09, 2005 3:44 pm
Location: Houston, TX

Post by DSguru2B »

Get more space, especially if you are not using the same file system for your staging files. Keep purging your log files. You must be creating humongous "in account" hashed files; that might be the main culprit. Just get more space.
Creativity is allowing yourself to make mistakes. Art is knowing which ones to keep.
sasidhar_kari
Premium Member
Posts: 62
Joined: Wed Dec 08, 2004 2:26 am

Post by sasidhar_kari »

Hi All, thanks for the suggestions.

All the jobs in the project create hashed files in a directory path on a separate file system. I have 8 GB of space allocated for that file system. Each time the jobs run, the file system fills up abnormally fast, and I'm processing GBs of data. From your answers, and from going through some posts here, I guess this could be because of log files. Can I get some input on how to check the size of the log files? Is there a way to purge the log files other than through Director?
DSguru2B
Charter Member
Posts: 6854
Joined: Wed Feb 09, 2005 3:44 pm
Location: Houston, TX

Post by DSguru2B »

You can set an auto-purge limit (in days) in Director, after which it will purge the log entries.
Other than Director, there is a job available at ADN which clears all the log files. You might want to take a look at that.
Is there anything specific you are doing that is filling up your disk space? Are you doing any Unix-level sorts of GBs of data which might be filling up the tmp space?
Creativity is allowing yourself to make mistakes. Art is knowing which ones to keep.
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

Sorry, a little bit of a rant here, but this is a pet peeve of mine. :evil:

When I mentioned log sizes, I should have been more specific as to the why of it. What I find way too often are HUGE log files, primarily in the 'development' projects, where untested jobs are run from the Designer without bothering to override the default Warning Limit of 'Unlimited'. Developers can be lulled into a false sense of security, thinking their job ran 'just fine' because all the links are green.

Argh! A simple check of the actual log would reveal any issues, but too many times that's not done. I mean, why bother when it ran fine? Then you find zillions of warnings in the log, sometimes enough to blow the log's hashed file over the 2GB barrier. When enough people do it on the same day (and yes, I've seen this happen), your entire server installation can be in jeopardy due to a sudden lack of free disk space.

We specifically monitor that partition every 10 minutes with an 80% full threshold warning to prevent exactly that from happening.
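A minimal sketch of such a monitor, suitable for a 10-minute cron entry, is below. The file system and the 80% threshold are illustrative defaults; point it at the partition holding your DataStage installation:

```shell
#!/bin/sh
# Cron-style disk usage check: warn when a file system crosses a threshold.
# The file system and threshold defaults below are illustrative.
FS=${1:-/}
THRESHOLD=${2:-80}

# df -P guarantees one data line per file system; field 5 is the "Use%" column.
PCT=$(df -P "$FS" | awk 'NR==2 { sub(/%/, "", $5); print $5 }')

if [ "$PCT" -ge "$THRESHOLD" ]; then
    echo "WARNING: $FS is ${PCT}% full (threshold ${THRESHOLD}%)" >&2
fi
```

From cron you would redirect the warning to mail or a logging mechanism of your choice.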

Thus endeth the rant.
-craig

"You can never have too many knives" -- Logan Nine Fingers
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia

Post by ray.wurlod »

You can use find with a -size option to check for anything over, say, 1.5 GB.
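A sketch of such a check is below. The projects path is illustrative; note that POSIX find measures -size in 512-byte blocks, so 1.5 GB is roughly 3145728 blocks (GNU find also accepts the friendlier -size +1500M):

```shell
#!/bin/sh
# Find files larger than ~1.5 GB under the projects tree.
# The path is illustrative -- adjust for your installation.
# POSIX find counts -size in 512-byte blocks: 1.5 GB / 512 B = 3145728.
find /opt/IBM/InformationServer/Server/Projects -type f -size +3145728 -exec ls -l {} \;
```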
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.