
Not enough space

Posted: Tue May 08, 2007 4:34 pm
by Vikas Jain
Hey all,

I'm processing about 15 million records. The job performs multiple lookups on these records and creates datasets along the way. It is failing with the following message:

Could not map table file "/landing01/dataint/Ascential/DataStage/Datasets/lookuptable.20070508.5vnzzdc (size 727590288 bytes)": Not enough space
Error finalizing / saving table /icnt_dev/dataStage_temp/dynLUT6041624e4f686b2


I have figured out that the landing directory on the Unix box is 83% used, with 60 GB of free space; when I run this job, usage goes up to 84% and then the job aborts.
I have gone through previous posts and learned that jobs involving datasets of more than a million records are likely to run out of space, but I still wonder why, since there seems to be plenty of space left.

Are there any environment variables that need to be enabled or disabled, or are there space constraints specific to datasets? Please advise.
~Vikas~

Posted: Tue May 08, 2007 5:15 pm
by ray.wurlod
Do you have other file systems with lots more space? Create a configuration file that uses these as your disk resource, possibly in addition to what you currently have specified.
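
For example, a minimal single-node configuration that adds a hypothetical roomier file system (here /bigfs; the host name and paths are placeholders, adjust them to your box) as a disk resource alongside the current one:

Code:
{
    node "node1"
    {
        fastname "yourhost"
        pools ""
        resource disk "/bigfs/datasets" {pools ""}
        resource disk "/landing01/dataint/Ascential/DataStage/Datasets" {pools ""}
        resource scratchdisk "/bigfs/scratch" {pools ""}
    }
}

Point $APT_CONFIG_FILE at the new file when you run the job.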

Posted: Thu May 10, 2007 11:06 am
by diamondabhi
When I had this problem I used a Join stage instead of the Lookup stage; do that if it's feasible for you. A Lookup stage builds its reference table in memory (spilling to the resource disk), while a Join stage processes sorted data as it streams, so it copes better with large reference volumes.

Here are some of the previously posted solutions:


1) If this is AIX, I believe the per-process memory limit for the osh executable is 512 MB. Thus, any Lookup stage which attempts to load more than 512 MB into memory will abort with an out-of-space error. However, you can configure the osh executable to use the large address-space model by running the following command:

Code:
/usr/ccs/bin/ldedit -bmaxdata:0x80000000/dsa osh

Note that adding physical memory won't help if osh is aborting because a Lookup stage exceeds the 512 MB per-process limit; the binary itself has to be edited as above.
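
If I remember the AIX toolchain correctly, you can verify the change took effect by dumping the binary's optional header; treat this as a sketch rather than gospel:

Code:
# dump -ov prints the XCOFF optional header, which includes the maxdata value
dump -ov osh | grep -i maxdata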


2) I wonder if setting these environment variables in your job would let it run:
Code:
APT_BUFFERIO_NOMAP=1
APT_IO_NOMAP=1


The memory-map idea is a great performance booster, but if your nodes are very busy, there might not be that much memory available. Or, as Ultramundane suggested, changing your kernel settings to allow a larger RAM limit is a good idea.

Disabling memory mapping is probably easier, however.
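
A sketch of how you might set them, assuming you want them project-wide via $DSHOME/dsenv (defining them per job or per project through the Administrator works just as well):

Code:
# in $DSHOME/dsenv -- picked up after DataStage is restarted
APT_BUFFERIO_NOMAP=1; export APT_BUFFERIO_NOMAP
APT_IO_NOMAP=1; export APT_IO_NOMAP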


3) We had a similar problem in an earlier project, so check the file size limit for the user that executes the job; the lookup table may have crossed that limit. The solution would be to increase the limit for that user and rerun the job.
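
A quick generic Unix check (the mechanism for raising the limit permanently depends on your OS, e.g. /etc/security/limits on AIX):

Code:
# Run as the user that executes the job
ulimit -a              # show all limits; look at 'file size' (in blocks)
ulimit -f              # show the file size limit alone
ulimit -f unlimited    # raise it for the current shell session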

Re: Not enough space

Posted: Thu May 10, 2007 2:04 pm
by RAJEEV KATTA
As said above, change the resource disk entry that points to "/landing01/dataint/Ascential/DataStage/Datasets" in your configuration file to some other place where you have enough space.