
lookup space problem

Posted: Mon Nov 13, 2006 10:43 am
by laxmi_etl
Hi-

While running one of my jobs, the Lookup stage is throwing the following error:

Lookup_8,0: Could not map table file "/u01/Ascential/DataStage/Datasets/lookuptable.20061113.sqbu0oa (size 540413848 bytes)": Not enough space
Error finalizing / saving table /tmp/dynLUT872582fe8b11f5

This table has 3178896 records. Can anybody help me out with this problem?

Thanks

Posted: Mon Nov 13, 2006 10:47 am
by samsuf2002
The error shows that you are running out of space. Try cleaning the scratch space.

Posted: Mon Nov 13, 2006 12:32 pm
by lstsaur
I had the same problem last Friday. You need to clean up your "/u01/Ascential/DataStage/Datasets" directory.

Not sure if this is scratch space, though.

Posted: Mon Nov 13, 2006 4:50 pm
by fridge
I believe this is temp space used to create the memory-mapped file. It is pointed to by the $TMPDIR variable on the project; it defaults to /tmp, but each file can be quite sizable.

It is best to redirect $TMPDIR to a dedicated directory that has enough 'wiggle room' for these intermediates.
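A minimal sketch of that redirection (the directory location here is an example only; in production you would pick a large local filesystem, and TMPDIR would normally be set as a project-level environment variable via the Administrator client):

```shell
# Example only: create a dedicated temp area for DataStage intermediates.
# $HOME is used here just so the sketch runs anywhere; in production
# choose a large local filesystem instead.
mkdir -p "$HOME/dstage_tmp"

# Point DataStage at it (in practice, set TMPDIR at project level
# through the Administrator client rather than in a shell).
TMPDIR="$HOME/dstage_tmp"
export TMPDIR

# Confirm the filesystem backing it has free space.
df -k "$TMPDIR"
```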

Posted: Wed Nov 15, 2006 2:46 am
by jbanse
I'm having the same problem. I tried your solution, but the result is the same.
My error is not exactly the same:
<i>Lk_brouillage,0: Could not map table file
"/catalog/rabptv1/node1/dataset/lookuptable.20061114.c1": Not enough space.</i>

I don't know where there is not enough space.
Disk usage is only at 19% on my node (both dataset and scratch). I don't know what's wrong. I will try to monitor the players' memory...

Posted: Wed Nov 15, 2006 2:53 am
by kumar_s
What is the job design? Is it populating the lookup file, or looking up an existing file? If it is populating the lookup file, how big is that file? Is it huge enough to occupy the rest of the space when created?

Posted: Wed Nov 15, 2006 4:14 am
by ghila
Hello,

fridge is right. If TMPDIR is blank, then /tmp is used by default. You need a dedicated directory for the temporary space.
If that temporary space fills up, then your lookup data is way too large. You should consider using a Join stage instead (less performant, but better at handling large amounts of data).

Posted: Wed Nov 15, 2006 4:48 am
by jbanse
kumar_s wrote: What is the job design? Is it populating the lookup file, or looking up an existing file? If it is populating the lookup file, how big is that file? Is it huge enough to occupy the rest of the space when created?
The reference file for the lookup is an existing dataset file. The node has around 80 GB of disk space. Is it possible to fill memory when you use a lookup with a dataset as the reference?
Using a join can be a good choice if the reference file is too huge.

Posted: Wed Nov 15, 2006 9:18 am
by Ultramundane
The Lookup stage cannot use more than 512 MB of memory unless it is configured to use the large memory address model. If you don't configure it, the job will abort with errors that say "cannot map file" and "not enough space". This occurs just after the lookup fileset exceeds 512 MB. I have posted many times on how to configure "osh" to use the large memory address model.
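As a rough self-check, you can compare the size reported in the error against that threshold; notably, the file in the original post (540413848 bytes) is just over 512 MB. A small sketch of the arithmetic (the size is taken from the error message above; for your own job, substitute the size from your log, e.g. via `wc -c < /path/to/lookuptable`):

```shell
# 512 MB limit of the default (small) memory address model
LIMIT=$((512 * 1024 * 1024))   # 536870912 bytes

# Size reported in the original error message
SIZE=540413848

if [ "$SIZE" -gt "$LIMIT" ]; then
    echo "lookup table exceeds 512 MB: the large memory model is needed"
fi
```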

Posted: Wed Nov 15, 2006 7:11 pm
by fridge
Thanks ghila - nice to get affirmation sometimes - but as Ultramundane said, I have also seen this when osh isn't configured to use the LMM.

I would suggest you find out whether it is the $TMPDIR issue, or osh not being configured for the large memory model (or both).

First, check the value of the $TMPDIR variable on the project (seen via Administrator --> Environment); if blank, it defaults to /tmp.

Run your job and monitor the directory pointed to by $TMPDIR. You need to do this in real time; it is no good looking at the directory after the job aborts, as PX tidies up better than I do my kitchen.

One way to do this is something like:

while true
do
    df /tmp
    sleep 5
done >> ~/myfile.dat

If it fills up, then that's your problem.


If it's the LMM issue, the dataset parts normally reach a size 512 bytes short of 0.5 GB, in which case you need to force osh to use a large memory model.

This has been posted elsewhere, but as I can't work out how to do links (not charter member material, me :-) ), search for the following:


/usr/ccs/bin/ldedit


hope this helps
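For reference, the AIX recipe that search turns up looks roughly like the sketch below. This is an assumption-laden outline, not gospel: ldedit and dump are AIX-only tools, the osh path is an example (locate yours with "which osh" or under $DSHOME), and you should always patch a backup-protected copy of a vendor binary.

```shell
# AIX only. Example path; locate yours with `which osh` or under $DSHOME.
OSH=${OSH:-/u01/Ascential/DataStage/PXEngine/bin/osh}

if [ -f "$OSH" ]; then
    # Keep a backup before editing the vendor binary.
    cp "$OSH" "$OSH.bak"

    # Switch osh to the large memory address model
    # (0x80000000 bytes = 2 GB of addressable data).
    /usr/ccs/bin/ldedit -bmaxdata:0x80000000/dsa "$OSH"

    # Verify the new setting in the binary's loader header.
    dump -ov "$OSH" | grep -i maxdata
else
    echo "osh not found at $OSH; set OSH to your install's path"
fi
```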

Posted: Wed Nov 15, 2006 9:19 pm
by ray.wurlod
You don't need to be a charter member or premium member to search or to follow links. Even at the non-discounted price it still represents excellent value for money imho.

Posted: Thu Nov 16, 2006 2:15 am
by jbanse
Thanks everyone for your help.
As fridge suggested, I first modified TMPDIR, but the result was the same and the directory was not full.

Then I tried the following command to change the LMM for osh:
/usr/ccs/bin/ldedit -bmaxdata:0x80000000/dsa FullPathOf/osh
The full path is returned by the command "which osh".

fridge spoke about this solution in this topic:
viewtopic.php?t=104125

Now it seems to be running :D. I will do more tests. But in the future, I will use lookups with caution.

Posted: Wed May 28, 2008 3:02 pm
by hsahay
hi !

I am facing the same problem. I was trying to use the ldedit solution posted by fridge, but I can't figure out the full path to "osh".

"which osh" returns not found

I tried "find", and this is what I get:

/u01/app/product/dsadm/Ascential/DataStage> find . -name osh* -print

./PXEngine.7501.1/bin/osh
./PXEngine.7501.1/lib/osh.o
./PXEngine.7501.2/bin/osh
./PXEngine.7501.2/lib/osh.o
./PXEngine.7501.3/bin/osh
./PXEngine.7501.3/lib/osh.o

Which osh should I apply "ldedit -bmaxdata" to?

Posted: Wed May 28, 2008 3:16 pm
by ray.wurlod
${DSHOME}/../PXEngine/bin
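Since PXEngine is commonly a symlink to one of the versioned directories shown in the listing above, resolving it tells you which copy of osh is actually live. A sketch, using the install root from this thread as an example default (adjust DSHOME for your site):

```shell
# Example install root taken from this thread; adjust for your site.
DSHOME=${DSHOME:-/u01/app/product/dsadm/Ascential/DataStage/DSEngine}

# PXEngine is typically a symlink to a versioned directory such as
# PXEngine.7501.3; `ls -l` shows where it points.
if [ -e "${DSHOME}/../PXEngine" ]; then
    ls -l "${DSHOME}/../PXEngine"
    # The osh to patch is then:
    echo "${DSHOME}/../PXEngine/bin/osh"
else
    echo "no PXEngine at ${DSHOME}/../PXEngine; check DSHOME"
fi
```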

Posted: Wed May 28, 2008 4:38 pm
by hsahay
Thanks Ray. Our PXEngine was symbolically linked to ./PXEngine.7501.3/bin. This ldedit solution fixed our problem.