lookup space problem
Moderators: chulett, rschirm, roy
Hi-
When I am working with one of my jobs, I am having a problem with the Lookup stage throwing the following error:
Lookup_8,0: Could not map table file "/u01/Ascential/DataStage/Datasets/lookuptable.20061113.sqbu0oa (size 540413848 bytes)": Not enough space
Error finalizing / saving table /tmp/dynLUT872582fe8b11f5
This table has 3,178,896 records. Can anybody help me out with this problem?
Thanks
Not sure if this is scratch.
I believe this is the temp space used to create the memory-map file; it is pointed to by the $TMPDIR variable on the project. It defaults to /tmp, but each file can be quite sizable.
Best to redirect $TMPDIR to a dedicated directory that has enough 'wiggle room' for these intermediates.
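To see which directory the memory-map files will land in and how much room it has, a quick check like this can help (a sketch; the /tmp fallback matches the default described above):

```shell
# Resolve the directory PX will use for memory-map files:
# $TMPDIR if set on the project, otherwise /tmp (the default).
tmp_dir="${TMPDIR:-/tmp}"
echo "PX temp space: $tmp_dir"
# Show free space on that filesystem in 1 KB blocks.
df -k "$tmp_dir"
```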
I'm having the same problem; I tried your solution, but the result is the same.
My error is not exactly the same:
Lk_brouillage,0: Could not map table file
"/catalog/rabptv1/node1/dataset/lookuptable.20061114.c1": Not enough space.
I don't know where there is not enough space.
Space usage is only at 19% on my node (dataset and scratch). I don't know what's wrong. I will try to monitor the players' memory...
What is the job design? Is it populating the lookup file, or looking up an existing file? If it is populating the lookup file, what is the file's size? Is it huge enough to occupy the rest of the space when created?
Last edited by kumar_s on Wed Nov 15, 2006 4:40 am, edited 1 time in total.
Impossible doesn't mean 'it is not possible' actually means... 'NOBODY HAS DONE IT SO FAR'
Hello,
fridge is right. If TMPDIR is blank, then /tmp is used by default. You need a dedicated directory for the temporary space.
If this temporary space is filled, then your lookup data is far too large. You should consider using a Join stage instead (less performant, but better at handling large amounts of data).
Regards,
Daniel
The file referenced by the lookup is an existing ds file. The node has around 80 GB of disk space. Is it possible to fill the memory when you use a lookup with a dataset as the reference?
kumar_s wrote: What is the job design? Is it populating the lookup file, or looking up an existing file? If it is populating the lookup file, what is the file's size? Is it huge enough to occupy the rest of the space when created?
Using a join can be a good choice if the reference file is too huge.
The Lookup stage cannot use more than 512 MB of memory unless it is configured to use the large memory address model. If you don't configure it, the job will abort with errors that say "cannot map file" and "not enough space". This occurs just after the lookup fileset exceeds 512 MB of memory. I have posted many times on how to configure "osh" to use the large memory address model.
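For illustration, the 512 MB ceiling described above lines up with the file size in the first error message (540413848 bytes is taken from that error; the comparison itself is just a sketch):

```shell
# Default small-memory-model heap limit: 512 MB.
limit=$((512 * 1024 * 1024))   # 536870912 bytes
size=540413848                 # lookup table size from the error message
if [ "$size" -gt "$limit" ]; then
    echo "lookup table exceeds the 512 MB limit"
else
    echo "lookup table fits"
fi
```

The table is only a few megabytes over the limit, which matches the observation that the abort happens just after the fileset crosses 512 MB.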
Thanks ghila - nice to get affirmation sometimes - but as mundane said, I have also seen this when osh isn't configured to use the LMM.
I would suggest you work out whether it is the $TMPDIR issue or osh not being configured for the large memory model (or both).
First, check the value of the $TMPDIR variable on the project (seen via Administrator --> Environment); if blank, it defaults to /tmp.
Run your job and monitor the directory pointed to by $TMPDIR - you need to do this in real time; it's no good looking at the directory after the job aborts, as PX tidies up better than I do my kitchen.
One way to do this is something like:
while :
do
df /tmp
sleep 5
done >> ~/myfile.dat
If it fills up, that's your problem.
If it's the LMM issue, the dataset parts normally reach a size just under 0.5 GB - in which case you need to force osh to use a large memory model.
This has been posted elsewhere, but as I can't work out how to do links (not charter member material, me), search for the following:
/usr/ccs/bin/ldedit
Hope this helps.
You don't need to be a charter member or premium member to search or to follow links. Even at the non-discounted price it still represents excellent value for money imho.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
Thanks everyone for your help.
As fridge suggested, I first modified TMPDIR, but the result was the same and the directory was not full.
Afterwards I tried the following command to change the LMM for osh:
/usr/ccs/bin/ldedit -bmaxdata:0x80000000/dsa FullPathOf/osh
(the full path is returned by the command "which osh").
fridge spoke about this solution in this topic:
viewtopic.php?t=104125
Now it seems to be running :D. I will do more tests. But in future, I will use lookups with caution.
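As a quick sanity check on what that -bmaxdata value actually grants (plain arithmetic, nothing DataStage-specific):

```shell
# 0x80000000 bytes is the heap ceiling that the ldedit command
# above sets for osh; express it in MB.
maxdata=$((0x80000000))
echo "$((maxdata / 1024 / 1024)) MB"
```

That is 2048 MB, i.e. four times the 512 MB default that the job was hitting.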
Hi!
I am facing the same problem. I was trying to use the ldedit solution from fridge, but I can't figure out the full path to "osh".
"which osh" returns: not found.
I tried "find", and this is what I get:
/u01/app/product/dsadm/Ascential/DataStage> find . -name osh* -print
./PXEngine.7501.1/bin/osh
./PXEngine.7501.1/lib/osh.o
./PXEngine.7501.2/bin/osh
./PXEngine.7501.2/lib/osh.o
./PXEngine.7501.3/bin/osh
./PXEngine.7501.3/lib/osh.o
Which osh should I apply ldedit -bmaxdata to?
vishal