Not enough space

Vikas Jain
Participant
Posts: 15
Joined: Tue Dec 13, 2005 12:38 am

Not enough space

Post by Vikas Jain »

Hey all,

I'm processing about 15 million records; there are multiple lookups on these records, and datasets get created in turn.
The job is failing with the following message:

Could not map table file "/landing01/dataint/Ascential/DataStage/Datasets/lookuptable.20070508.5vnzzdc (size 727590288 bytes)": Not enough space
Error finalizing / saving table /icnt_dev/dataStage_temp/dynLUT6041624e4f686b2


I have found that the landing directory on the Unix box is 83% used, with 60 GB of free space, and when I run this job, the usage climbs to 84% and then the job aborts.
I have gone through previous posts and learned that jobs involving datasets of more than a million records are likely to run out of space, but I still wonder about this, since there appears to be enough space left.
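
For reference, here is how I am checking free space; the error mentions two different paths, so both file systems may be worth watching (paths taken from the error message above):
Code:
df -k /landing01/dataint/Ascential/DataStage/Datasets
df -k /icnt_dev/dataStage_temp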

Are there environment variables that need to be enabled/disabled, or are there space constraints specific to datasets? Please advise.
~Vikas~
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia

Post by ray.wurlod »

Do you have other file systems with lots more space? Create a configuration file that uses these as your disk resource, possibly in addition to what you currently have specified.
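
A minimal sketch of what such a configuration file might look like (the node name, fastname and the /bigfs01 paths are placeholders only; substitute file systems that actually have free space):
Code:
{
	node "node1"
	{
		fastname "yourhost"
		pools ""
		resource disk "/landing01/dataint/Ascential/DataStage/Datasets" {pools ""}
		resource disk "/bigfs01/Datasets" {pools ""}
		resource scratchdisk "/bigfs01/Scratch" {pools ""}
	}
}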
Last edited by ray.wurlod on Thu May 10, 2007 6:36 pm, edited 1 time in total.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
diamondabhi
Premium Member
Posts: 108
Joined: Sat Feb 05, 2005 6:52 pm
Location: US

Post by diamondabhi »

When I had this problem I used a Join stage instead of a Lookup stage; do that if it's feasible for you.

Here are some of the previously posted solutions:


1) If this is AIX, I believe the per-process memory limit for the osh executable is 512 MB. Thus, any Lookup stage that attempts to load more than 512 MB into memory will abort with an out-of-space error. However, you can configure the osh executable to use the large memory address model by running the following command:

Code:
/usr/ccs/bin/ldedit -bmaxdata:0x80000000/dsa osh

Adding more physical memory won't help if osh is aborting because a Lookup stage exceeds the 512 MB per-process limit.
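
(If you try this, it would be prudent to back up osh first and confirm the change afterwards; a sketch, assuming osh lives in the usual $APT_ORCHHOME/bin location:)
Code:
cd $APT_ORCHHOME/bin              # assumption: PX engine binaries live here
cp osh osh.orig                   # keep a backup before editing the binary header
/usr/ccs/bin/ldedit -bmaxdata:0x80000000/dsa osh
dump -ov osh | grep -i maxdata    # verify the new maxdata value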


2) I wonder if setting these environment variables in your job would let it run:
Code:
APT_BUFFERIO_NOMAP=1
APT_IO_NOMAP=1


The memory-map idea is a great performance booster, but if your nodes are very busy, there might not be that much memory available. Or, as Ultramundane suggested, changing your kernel settings to allow a larger RAM limit is a good idea.

Disabling memory-mapping is probably easier, however.
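
(One way to apply these, assuming you want them for every job in the instance, is to export them in $DSHOME/dsenv; for a single job, add them as job-level environment variable parameters instead:)
Code:
# in $DSHOME/dsenv (assumption: applies instance-wide after a restart)
APT_IO_NOMAP=1; export APT_IO_NOMAP
APT_BUFFERIO_NOMAP=1; export APT_BUFFERIO_NOMAP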


3) We had a similar problem in an earlier project, so check the file size limit for the user that executes the job; the lookup table size might have crossed that limit. The solution would be to increase the limit for that user and rerun the job.
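
(A quick way to check, logged in as the user that runs the jobs; on AIX a permanent change goes in /etc/security/limits:)
Code:
ulimit -a               # look at the file size (fsize) line
ulimit -f unlimited     # raise it for the current session only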
Every great mistake has a halfway moment, a split second when it can be recalled and perhaps remedied.
RAJEEV KATTA
Participant
Posts: 103
Joined: Wed Jul 06, 2005 12:29 am

Re: Not enough space

Post by RAJEEV KATTA »

As said above, change the "/landing01/dataint/Ascential/DataStage/Datasets" resource disk entry in the configuration file to some other location where you have enough space.
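
(Once the new configuration file exists, point the job at it via APT_CONFIG_FILE, e.g. as a job parameter; the path below is only an example:)
Code:
APT_CONFIG_FILE=/landing01/dataint/Ascential/DataStage/Configurations/bigdisk.apt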