Parallel job is being Aborted -- showing Not enough space
Moderators: chulett, rschirm, roy
-
- Participant
- Posts: 7
- Joined: Tue Dec 06, 2005 3:14 am
Hi,
We are facing a problem with a parallel job in which an input dataset and a lookup dataset are joined with a Lookup stage, and the output then flows through a Copy stage --> Transformer --> output dataset.
But before the output dataset is created, the job aborts with this log:
LKP,1: Could not map table file "/path/xx/lookuptable.20061006.drmm0ac (size 552711192 bytes)": Not enough space
Error finalizing / saving table /path/xx/ds_temp/dynLUT143294a5578e98
We have enough space (30GB) free, still the problem.
Please advise.
Thanks & Regards
Sriram.
-
- Premium Member
- Posts: 397
- Joined: Wed Apr 12, 2006 2:28 pm
- Location: Tennessee
-
- Participant
- Posts: 407
- Joined: Mon Jun 27, 2005 8:54 am
- Location: Walker, Michigan
- Contact:
If this is AIX, I believe the default per-process memory limit for the OSH executable is 512 MB. Thus, any Lookup stage which attempts to load more than 512 MB into memory will abort with an out-of-space error. However, you can configure the osh executable to use the large address-space model by running the following command:
/usr/ccs/bin/ldedit -bmaxdata:0x80000000/dsa osh
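The numbers from the log bear this out. A quick sketch using the table size reported in the abort message and the limits discussed above (the 2 GB figure is the 0x80000000 maxdata value from the ldedit command):

```python
# Why the job aborts: the lookup table from the log vs. the default
# AIX per-process data limit for osh (figures taken from this thread).
table_bytes = 552_711_192            # size reported in the job log
default_limit = 512 * 1024 * 1024    # default 512 MB maxdata on AIX
large_limit = 0x80000000             # 2 GB, after ldedit -bmaxdata:0x80000000

print(f"table size: {table_bytes / 2**20:.0f} MB")
print("fits under 512 MB default:", table_bytes < default_limit)
print("fits under 2 GB maxdata:  ", table_bytes < large_limit)
```

The ~527 MB table just overflows the 512 MB default, which is why the job fails with "Not enough space" even though 30 GB of disk is free: it is address space, not disk, that runs out.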
-
- Participant
- Posts: 54607
- Joined: Wed Oct 23, 2002 10:52 pm
- Location: Sydney, Australia
- Contact:
If you have not got enough space, whether it's memory or disk space, your hardware vendor will be happy to sell you some more space.
Otherwise you must reduce total demand for space. Run fewer processes simultaneously, tune the job designs (buffer sizes, memory limits for sort, for example), add more file systems to your disk and scratchdisk resources.
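For the last suggestion, adding file systems means editing the APT configuration file. A minimal sketch of a one-node configuration with extra disk and scratchdisk resources; the hostname and paths here are illustrative, not from this thread:

```
{
  node "node1"
  {
    fastname "etlhost"
    pools ""
    resource disk "/data1/ds/resource" {pools ""}
    resource disk "/data2/ds/resource" {pools ""}
    resource scratchdisk "/scratch1" {pools ""}
    resource scratchdisk "/scratch2" {pools ""}
  }
}
```

The engine spreads dataset and scratch files across all listed resources, so adding entries on separate file systems increases the total space available to a job.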
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
If this is AIX I believe that the per-process memory limit for the OSH executable is 512MB. Thus, any lookup stage which attempts to load over 512 MB of memory will abort with an out of space error. However, you can configure the osh executable to use the large memory address by running the following command:
/usr/ccs/bin/ldedit -bmaxdata:0x80000000/dsa osh
More memory won't help if OSH is aborting because the Lookup stage exceeded the 512 MB per-process limit.
Maybe disable memory-mapping? I wonder if setting these env-vars in your job would let it run?
Code:
APT_BUFFERIO_NOMAP=1
APT_IO_NOMAP=1
The memory-map idea is a great performance booster, but if your nodes are very busy, there might not be that much memory available. Or, as Ultramundane suggested, changing your kernel settings to have a larger RAM limit is a good idea. Disabling memory-mapping is probably easier, however.
Good luck:
John G.
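These variables can be set per job in the job's environment. A minimal sketch of exporting them in the shell before invoking the job; the comments describe their effect as discussed in this thread, and the job invocation itself is omitted since it depends on your setup:

```shell
# Disable memory-mapped file I/O for the parallel engine, per John G.'s post.
export APT_IO_NOMAP=1          # plain read/write instead of mmap for file I/O
export APT_BUFFERIO_NOMAP=1    # same for buffered I/O
echo "APT_IO_NOMAP=$APT_IO_NOMAP APT_BUFFERIO_NOMAP=$APT_BUFFERIO_NOMAP"
```

Setting them in the shell affects only jobs started from that session; they can also be added as job parameters or project-level environment variables in the Administrator.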
sriramjagannadh wrote:Thanks
Why this silly discussion, diverting the topic to the hardware vendor?
I am looking for real help and want to discuss it in this forum; for example, as Sam advised, using a Join stage instead, and any problems with the Lookup stage in particular.