Job failing in Lookup stage
Posted: Wed Dec 07, 2011 2:44 pm
Hello all,
I am getting this error in one of our jobs. The error occurs in a Lookup stage. The job runs fine with low-volume data.
Error:
Could not map table file "/opt/IBM/InformationServer/Server/Datasets/lookuptable.20111207.hezfuxb (size 3938972368 bytes)": Cannot allocate memory
Our environment is as below:
Version: 8.1 FP1
OS: Linux 64 bit
The ulimit -a command returns the following:
address space limit (kbytes) (-M) unlimited
core file size (blocks) (-c) unlimited
cpu time (seconds) (-t) unlimited
data size (kbytes) (-d) unlimited
file size (blocks) (-f) unlimited
locks (-L) unlimited
locked address space (kbytes) (-l) 32
nofile (-n) 1024
nproc (-u) unlimited
pipe buffer size (bytes) (-p) 4096
resident set size (kbytes) (-m) unlimited
socket buffer size (bytes) (-b) 4096
stack size (kbytes) (-s) 8192
threads (-T) not supported
process size (kbytes) (-v) unlimited
The job appears to abort when the in-memory lookup table reaches around 4 GB.
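For what it's worth, here is roughly how I have been checking the table file size and the limits of the running engine processes (generic Linux commands; the dataset path comes from the error message, and "osh" is my assumption about how the engine process shows up in ps):

ls -lh /opt/IBM/InformationServer/Server/Datasets/lookuptable.*   # size of the lookup table file on disk
free -m                                                           # overall physical memory and swap
pgrep osh | head -1 | xargs -I{} cat /proc/{}/limits              # effective limits of a running engine process

The size reported in the error, 3938972368 bytes, is about 3.9 GB, i.e. just under the 4 GB mark, which is why I suspect a 4 GB boundary.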
I searched around this forum and found various suggestions, but I am still not clear on a few things:
1. Is this 4 GB limit set by the application or by the operating system?
2. Does this limit apply to the primary input data or to the lookup (reference) data? Our primary source is a DB2 table with around 32 million rows, and the lookup is also a DB2 table but with very few records. The primary input grows every day and today seems to have hit the 4 GB limit, since it was working fine yesterday with lower volume. My understanding is that the lookup data (which has very few records) is what gets held in memory, so I am confused as to why the error seems tied to the size of the primary input.
3. Can we modify the limit, either at the application level or at the OS level? If yes, how, and which variable values need to be changed? (I have put a sketch of my current understanding after this list.)
4. I read that this can sometimes happen if TMPDIR is left blank or set to /tmp. I set it to a directory with plenty of free space, but it still threw the same error.
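Regarding questions 3 and 4, this is my understanding of where such changes would go; the dsenv path, the dsadm user name, and the limit values are guesses on my part, so please correct me:

# raise per-session limits before the engine starts (e.g. in $DSHOME/dsenv)
ulimit -n 10240          # nofile: max open file descriptors
ulimit -s 10240          # stack size in KB

# make the limits permanent for the engine user via /etc/security/limits.conf
# dsadm  soft  nofile  10240
# dsadm  hard  nofile  10240

# point temporary files at a filesystem with plenty of space
export TMPDIR=/data/ds_tmp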
Any help would be highly appreciated.
Thanks,