Lookup step: not enough space
Moderators: chulett, rschirm, roy
Hi,
I have a problem with a Lookup stage; the error is:
Lookup2,21: Could not map table file "/dstageeetl1/DataSet/lookuptable.20050302.b3j3ypb (size 1183357452 bytes)": Not enough space
Lookup2,21: Error finalizing / saving table /tmp/dynLUT43555482be25
How can I solve it?
Thanks!
-
- Participant
- Posts: 3337
- Joined: Mon Jan 17, 2005 4:49 am
- Location: United Kingdom
Check whether you have enough space in that partition's DataSet directory.
Alternatively, try controlling your configuration with a different config file, or by specifying which processing nodes that stage should use.
The job that creates the data must use the same configuration so the data is spread properly and can be collected later.
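A quick way to verify the space question is to compare free space on the filesystems involved against the table size from the error message. This is a generic sketch, not DataStage-specific; `DATASET_DIR` is a placeholder for the resource-disk path from your config file:

```shell
# DATASET_DIR is a placeholder; in the job above it would be the
# resource-disk path (e.g. the /dstageeetl1/DataSet mount).
DATASET_DIR="${DATASET_DIR:-.}"
df -k "$DATASET_DIR" /tmp

# The failing table was 1183357452 bytes; in KB (df's reporting unit):
NEEDED_KB=$(( 1183357452 / 1024 ))
echo "Need at least ${NEEDED_KB} KB free (plus headroom)"
```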
Are you using AIX?
If so, there is a memory limitation issue you have to deal with. Read this:
http://publib.boulder.ibm.com/infocente ... upport.htm
There are a number of solutions for the above problem.
If you are using other generic UNIX systems, is the TOTAL amount of reference data you are providing to the single lookup greater than 2 GB?
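To answer that question for your own job, you can total the reference data feeding the lookup and compare it with the 2 GB (2^31) address-space ceiling of a 32-bit process. A hedged sketch; `LOOKUP_SRC` is a placeholder you should point at your reference files:

```shell
# LOOKUP_SRC is a placeholder; point it at the reference data files.
LOOKUP_SRC="${LOOKUP_SRC:-.}"
TOTAL_KB=$(du -sk "$LOOKUP_SRC" | awk '{print $1}')
LIMIT_KB=$(( 2 * 1024 * 1024 ))   # 2 GB expressed in KB

if [ "$TOTAL_KB" -gt "$LIMIT_KB" ]; then
  echo "Reference data exceeds 2 GB; expect mapping failures in a 32-bit lookup"
else
  echo "Reference data fits under 2 GB"
fi
```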
-
- Charter Member
- Posts: 47
- Joined: Fri Mar 18, 2005 5:59 am
Hi,
The OS is indeed AIX.
The URL currently points to a non-existent page. However, I have checked the partition's DataSet directory, which has sufficient space.
The volumes here are only around 0.3 million rows on the lookup and 30 million rows on the input stream.
We have 490 MB in /tmp.
Would it be possible to list the various options you were suggesting?
Thanks in anticipation.
Our dear friend T42 hasn't posted here in a year and a half. However, it looks like the page they were linking to in the past may now be here:
http://publib.boulder.ibm.com/infocente ... upport.htm
See if that helps. Regardless, I'm sure someone else with AIX knowledge will pop in and help.
-craig
"You can never have too many knives" -- Logan Nine Fingers
"You can never have too many knives" -- Logan Nine Fingers
Your PX installation might not be using /tmp for temporary files; check your configuration file for the actual scratch space. Monitor this during runtime to make sure it doesn't actually fill up and then empty itself after the abort.
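For reference, the scratch location comes from the parallel configuration file named by APT_CONFIG_FILE. A minimal single-node example of the layout (the hostname and paths here are illustrative, not taken from this system):

```
{
    node "node1"
    {
        fastname "etlhost"
        pools ""
        resource disk "/dstageeetl1/DataSet" {pools ""}
        resource scratchdisk "/dstageeetl1/scratch" {pools ""}
    }
}
```

The `resource scratchdisk` entries are where PX spills temporary lookup and sort files; if none are defined with enough space, TMPDIR (or /tmp) gets used instead.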
-
- Charter Member
- Posts: 47
- Joined: Fri Mar 18, 2005 5:59 am
Hi,
I checked the scratch disk space and also pointed my TMPDIR to the same scratch disk.
While executing the jobs, I monitored the usage of this disk; gigabytes of free space remained available. However, while running 'topas', only 4 of the 16 CPUs were getting very heavily used, while the rest were not.
Does one need to partition the input stream and the lookup data? Just curious, since the error is from the lookup stage.
As for ulimit -a, all options are set to unlimited, with the exception of stack (4194304) and core files.
I tried running the lparstat command, but was not sure what I was looking at there.
Still searching for the elusive solution.
Thanks for the continued help,
-
- Charter Member
- Posts: 47
- Joined: Fri Mar 18, 2005 5:59 am
I was going through the link which Craig posted; how does one set the LDR_CNTRL environment variable in AIX, and check the value currently set?
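Sketching an answer to my own question from the IBM docs: LDR_CNTRL is an ordinary environment variable, and its MAXDATA option controls how many 256 MB segments a 32-bit process may use for data (0x80000000 allows 8 segments, about 2 GB). The value below is an example, not a recommendation for this system:

```shell
# Set it in the environment the job's processes inherit (e.g. dsenv):
export LDR_CNTRL=MAXDATA=0x80000000

# Check the value currently set:
echo "${LDR_CNTRL:-<not set>}"

# Unset it afterwards so unrelated AIX processes are not affected:
unset LDR_CNTRL
```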
While going through the log, I was confused by the error message below:
Lkp_Rd_Opr_Dist_Bnd_Mob,6: Could not map table file "/db2fs1/Datasets/lookuptable.20060903.iuinzmd": Not enough space
I do not have any datasets in my job design. The lookup table buffering is set to default. I have turned JobMon to False.
Why does this error message mention the lookup file set? I expected my scratch disk to appear in the error message, not my resource disk. Am I missing something obvious?
I checked the files which were created on disk for the lookup stage; none of them were more than 1 GB in size.
Having read threads addressing similar problems, there was a solution of increasing memory space (for HP-UX). Is there anything similar for AIX besides ulimit?
Another option mentioned in those threads was using the Join stage; however, I would still like to understand a plausible reason for the error.
Thanks for your continued help,
-
- Participant
- Posts: 54607
- Joined: Wed Oct 23, 2002 10:52 pm
- Location: Sydney, Australia
- Contact: