lookup space problem

laxmi_etl
Charter Member
Posts: 117
Joined: Thu Sep 28, 2006 9:10 am

lookup space problem

Post by laxmi_etl »

Hi-

While working on one of my jobs I am having a problem with the Lookup stage throwing the following error:

Lookup_8,0: Could not map table file "/u01/Ascential/DataStage/Datasets/lookuptable.20061113.sqbu0oa (size 540413848 bytes)": Not enough space
Error finalizing / saving table /tmp/dynLUT872582fe8b11f5

This table has 3,178,896 records. Can anybody help me out with this problem?

Thanks
samsuf2002
Premium Member
Posts: 397
Joined: Wed Apr 12, 2006 2:28 pm
Location: Tennessee

Post by samsuf2002 »

The error shows that you are running out of space. Try cleaning the scratch space.
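
A quick way to see how much room is left on both locations mentioned in the error (paths taken from the original post):

df -k /u01/Ascential/DataStage/Datasets /tmp    # free space where the table file and the temp map live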
hi sam here
lstsaur
Participant
Posts: 1139
Joined: Thu Oct 21, 2004 9:59 pm

Post by lstsaur »

I had the same problem as yours last Friday. You need to clean up your "/u01/Ascential/DataStage/Datasets" directory.
fridge
Premium Member
Posts: 136
Joined: Sat Jan 10, 2004 8:51 am

not sure if this is scratch

Post by fridge »

I believe this is temp space used to create the memory-map file. It is pointed to by the $TMPDIR variable on the project; it defaults to /tmp, but each file can be quite sizable.

It's best to redirect $TMPDIR to a dedicated directory that has enough 'wiggle room' for these intermediate files.
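
For example (just a sketch - /u01/pxtmp is a hypothetical path, and in practice you would set $TMPDIR at project level through the Administrator client):

mkdir -p /u01/pxtmp          # dedicated temp area with plenty of headroom
export TMPDIR=/u01/pxtmp     # PX will build its dynLUT* memory-map files here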
jbanse
Participant
Posts: 3
Joined: Tue Mar 21, 2006 10:46 am

Post by jbanse »

I'm having the same problem. I tried your solution, but the result is the same.
My error is not exactly the same:
Lk_brouillage,0: Could not map table file
"/catalog/rabptv1/node1/dataset/lookuptable.20061114.c1": Not enough space.

I don't know where there is not enough space.
Disk usage is only at 19% for my node (dataset and scratch). I don't know what's wrong. I will try to monitor the players' memory...
kumar_s
Charter Member
Posts: 5245
Joined: Thu Jun 16, 2005 11:00 pm

Post by kumar_s »

What is the job design? Is it populating the lookup file, or looking up an existing file? If it is populating the lookup file, what is the size of that file? Is it huge enough to occupy the rest of the space when created?
Impossible doesn't mean 'it is not possible' actually means... 'NOBODY HAS DONE IT SO FAR'
ghila
Premium Member
Posts: 41
Joined: Mon Mar 15, 2004 2:37 pm
Location: France

Post by ghila »

Hello,

fridge is right. If TMPDIR is blank then /tmp is used by default. You need a dedicated directory for the temporary space.
If this temporary space is filling up then your lookup data is way too large. You should consider using a Join stage instead (slower, but better for handling large amounts of data).
Regards,

Daniel
jbanse
Participant
Posts: 3
Joined: Tue Mar 21, 2006 10:46 am

Post by jbanse »

kumar_s wrote: What is the job design? Is it populating the lookup file, or looking up an existing file? If it is populating the lookup file, what is the size of that file? Is it huge enough to occupy the rest of the space when created?
The reference file for the lookup is an existing dataset. The node has around 80 GB of disk space. Is it possible to fill the memory when you use a lookup with a dataset as the reference?
Using a Join can be a good choice if the reference file is too huge.
Ultramundane
Participant
Posts: 407
Joined: Mon Jun 27, 2005 8:54 am
Location: Walker, Michigan
Contact:

Post by Ultramundane »

The Lookup stage cannot use more than 512 MB of memory unless it is configured to use the large memory address model. If it isn't, the job will abort with errors that say it could not map the file and there is not enough space. This occurs just after the lookup file set exceeds 512 MB of memory. I have posted many times on how to configure "osh" to use the large memory address model.
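
On AIX you can check how the binary is currently set up, for example (assuming the usual engine layout under $DSHOME; adjust the path to your install):

# inspect the XCOFF optional header; a maxDATA of 0x00000000
# indicates the default small memory model
dump -ov ${DSHOME}/../PXEngine/bin/osh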
fridge
Premium Member
Posts: 136
Joined: Sat Jan 10, 2004 8:51 am

Post by fridge »

Thanks ghila - nice to get affirmation sometimes - but as Ultramundane said, I have also seen this when osh isn't configured to use the LMM (large memory model).

I would suggest you work out whether it is the $TMPDIR issue or osh not being configured for the large memory model (or both).

First, check the value of the $TMPDIR variable on the project (seen via Administrator --> Environment); if blank, it defaults to /tmp.

Run your job and monitor the directory pointed to by $TMPDIR. You need to do this in real time; it's no good looking at the directory after the job aborts, as PX tidies up better than I do my kitchen.

One way to do this is something like:

while true
do
    df ${TMPDIR:-/tmp}    # report free space on the temp filesystem
    sleep 1               # avoid hammering the system in a tight loop
done >> ~/myfile.dat

If it fills up, then that's your problem.


If it's the LMM issue, the lookup table files normally reach a size just short of 0.5 GB (the 512 MB limit), in which case you need to force osh to use a large memory model.

This has been posted elsewhere, but as I can't work out how to do links (not charter member material, me :-) ), search for the following:


/usr/ccs/bin/ldedit


Hope this helps.
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia
Contact:

Post by ray.wurlod »

You don't need to be a charter member or premium member to search or to follow links. Even at the non-discounted price it still represents excellent value for money imho.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
jbanse
Participant
Posts: 3
Joined: Tue Mar 21, 2006 10:46 am

Post by jbanse »

Thanks everyone for your help.
As fridge suggested, I first modified TMPDIR, but the result was the same and the directory was not full.

Then I tried the following command to change the LMM for osh:
/usr/ccs/bin/ldedit -bmaxdata:0x80000000/dsa FullPathOf/osh
(the full path is returned by the command "which osh")
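
Putting the steps together (a sketch; ldedit patches the binary in place, so it is worth keeping a backup first):

OSH=$(which osh)                       # full path to the engine binary
cp "$OSH" "$OSH.bak"                   # backup before editing the header
/usr/ccs/bin/ldedit -bmaxdata:0x80000000/dsa "$OSH"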

fridge spoke about this solution in this topic:
viewtopic.php?t=104125

Now it seems to be running :D. I will do more tests. But in the future, I will use the Lookup stage with care.
hsahay
Premium Member
Posts: 175
Joined: Wed Mar 21, 2007 9:35 am

Post by hsahay »

hi !

I am facing the same problem. I was trying to use the ldedit solution from fridge, but I can't figure out the full path to "osh".

"which osh" returns not found

I tried "find" and this is what i get -

/u01/app/product/dsadm/Ascential/DataStage> find . -name osh* -print

./PXEngine.7501.1/bin/osh
./PXEngine.7501.1/lib/osh.o
./PXEngine.7501.2/bin/osh
./PXEngine.7501.2/lib/osh.o
./PXEngine.7501.3/bin/osh
./PXEngine.7501.3/lib/osh.o

Which osh should I apply ldedit -bmaxdata to?
vishal
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia
Contact:

Post by ray.wurlod »

${DSHOME}/../PXEngine/bin
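
For example, to see which versioned engine directory the symlink resolves to (assuming $DSHOME is set in your environment):

ls -ld ${DSHOME}/../PXEngine           # shows e.g. PXEngine -> PXEngine.7501.3
ls -l ${DSHOME}/../PXEngine/bin/osh    # the binary to apply ldedit to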
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
hsahay
Premium Member
Posts: 175
Joined: Wed Mar 21, 2007 9:35 am

Post by hsahay »

Thanks Ray. Our PXEngine was symbolically linked to ./PXEngine.7501.3/bin. This ldedit solution fixed our problem.
vishal