Hi,
We are getting the following error in the Lookup stage once the lookup file reaches 512 MB, at a row count above 12508429:
Lookup_106,0: Could not map table file "E:/projects/Development/nodecfg/node1/Datasets/lookuptable.20160825.xiby3sc (size 627421552 bytes)": Not enough space
This is DataStage 8.5 on Windows Server 2003 R2.
How can we increase this 512 MB limit?
I have read posts saying that DataStage 8.5 has a limit of 2 GB, and others saying that the maximum memory a process can use on Windows Server 2003 R2 is 2 GB. I tried setting the environment variables APT_BUFFERIO_MAP and APT_IOMAP to true in DS Administrator, but that didn't work either. We have a lot of free space on all the drives on our server.
Lookup memory issue
Warm Regards,
Riya Yawalkar
Note that this error about "not enough space" is a bit confusing, as it has nothing to do with disk space; it is really about RAM, driven by the "bit-ness" of the application as you noted in your subject. You are running a 32-bit version of DataStage, which has the 2 GB limit you mention on the amount of RAM a process can use, and you've exceeded that.
A typical solution would be to run on multiple nodes and ensure the lookup data is partitioned appropriately so that each instance stays under the limit.
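For reference, multi-node execution is controlled by the parallel configuration file pointed to by APT_CONFIG_FILE. A minimal two-node sketch is below; the hostname and the node2 paths are hypothetical placeholders (only the node1 dataset path appears in your error message), so adjust them to your environment:

```
{
	node "node1"
	{
		fastname "your-server"  /* hypothetical hostname */
		pools ""
		resource disk "E:/projects/Development/nodecfg/node1/Datasets" {pools ""}
		resource scratchdisk "E:/projects/Development/nodecfg/node1/Scratch" {pools ""}
	}
	node "node2"
	{
		fastname "your-server"  /* same physical host, second logical node */
		pools ""
		resource disk "E:/projects/Development/nodecfg/node2/Datasets" {pools ""}
		resource scratchdisk "E:/projects/Development/nodecfg/node2/Scratch" {pools ""}
	}
}
```

With two logical nodes on the same host, each Lookup process loads only its partition of the reference data, so each stays under the per-process limit.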
-craig
"You can never have too many knives" -- Logan Nine Fingers
It doesn't reach 2 GB. The job fails when it crosses 512 MB, and only with the Lookup or Lookup File Set stages. When I tried a Join it worked fine, but we need to use Lookup. The whole project runs on a single-node configuration; using two nodes would mean checking each job for possible issues due to data partitioning.
Warm Regards,
Riya Yawalkar
The lookup table all by itself doesn't have to reach 2GB, it just has to push the total memory used by that process over 2GB.
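To see why partitioning keeps each node under the limit: hash partitioning on the lookup key sends each key deterministically to one node, so each node's in-memory lookup table holds roughly 1/N of the reference rows, and matching input rows land on the same node as their reference rows. A small illustrative sketch (plain Python, not DataStage code; the key format and node count are made up):

```python
import zlib

NUM_NODES = 2  # hypothetical two-node configuration


def partition_for(key: str, num_nodes: int = NUM_NODES) -> int:
    """Map a key to a node the way hash partitioning does:
    the same key always goes to the same node."""
    return zlib.crc32(key.encode("utf-8")) % num_nodes


# Distribute 100,000 synthetic keys and count rows per node.
counts = [0] * NUM_NODES
for i in range(100_000):
    counts[partition_for(f"cust{i:07d}")] += 1

# Each node sees roughly half the rows, so each Lookup instance
# only needs memory for its own share of the reference data.
print(counts)
```

Because both the stream input and the reference input are hashed on the same key, the lookup still finds every match even though no single process ever holds the whole table.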
Wow, the entire project has everything running on a single node? Kind of defeats the purpose of the tool IMHO. And yes, using multiple nodes would mean understanding and testing partitioning. So... it looks like you have a choice: join or multiple nodes.
Unless someone else has a suggestion.
-craig
"You can never have too many knives" -- Logan Nine Fingers