
Lookup job fails with error "Could not alloc memory for

Posted: Tue Oct 09, 2012 4:26 am
by vintipa
Hi Experts,

I have a job that looks up data from a table and loads the target file.
It is failing with the error:
"Could not alloc memory for table file (7052615504 bytes). Ulimit settings may need to be adjusted, or memory may be exhausted."

There is no issue with the design, as the same job runs fine against the same database on another server. Please advise whether some server setting needs to be changed. How do I solve this problem?

Thanks in advance.

Posted: Tue Oct 09, 2012 4:34 am
by ArndW
It is possible that the "ulimit" settings are indeed different for the DataStage user on both your machines, as the error message suggests. Have you looked at the values on the two machines?

Posted: Tue Oct 09, 2012 4:48 am
by tehavele
Hi,

Sorry, the ulimit values on both servers are the same:

Server 1 (2 core):
time(seconds) unlimited
file(blocks) unlimited
data(kbytes) 786428
stack(kbytes) 8192
memory(kbytes) unlimited
coredump(blocks) 4194303
nofiles(descriptors) 8000


Server 2 (4 Core):
time(seconds) unlimited
file(blocks) unlimited
data(kbytes) 786428
stack(kbytes) 8192
memory(kbytes) unlimited
coredump(blocks) 4194303
nofiles(descriptors) 8000

Posted: Tue Oct 09, 2012 4:54 am
by ArndW
The ulimit values for your DataStage process are the relevant ones. You could write a DataStage sequence that calls the UNIX "ulimit -a" command and check the runtime values.
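
For example, you could first capture the values as the DataStage user on each server and compare the two outputs (the output file name below is only an example):

# run as the DataStage user on each machine, then compare the two files
ulimit -a > /tmp/ulimit_$(hostname).txt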

Posted: Tue Oct 09, 2012 4:55 am
by ArndW
Please delete your other duplicate thread before someone replies to it.

Posted: Tue Oct 09, 2012 4:58 am
by tehavele
Hi,

Sorry, I don't understand exactly. You mean I need to check the ulimit values used by DataStage specifically?

One thing I have noticed is that the scratch space utilization during the job run goes up to 3-4% and then comes back to the minimal 1% until the job fails. Is this significant?

Posted: Tue Oct 09, 2012 5:06 am
by ArndW
Yes, if you just login and check "ulimit -a" then you get that process' information and that might not be the same as that used in DataStage. You can compare the "dsenv" file contents to see if the ulimit values are explicitly expanded in one and not the other, but best would be to issue the "ulimit -a" command from a job to get the actual runtime values. This is a much more likely cause of your problem than any scratch space utilization.
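
For instance, something along these lines would show whether dsenv sets any limits explicitly, and what the engine actually runs with (assuming $DSHOME points at your DSEngine directory, as it normally does):

# look for explicit ulimit calls in the engine environment file
grep -i ulimit $DSHOME/dsenv

# or run this from a job (e.g. a before-job ExecSH call) to log the runtime values
ulimit -a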

Posted: Tue Oct 09, 2012 5:26 am
by tehavele
Thanks ArndW,

There is no specific mention in the dsenv file. I will put the command "ulimit -a" in the same job and run it.
One question, as I have never done this: where do I get the output? Is it a continuously changing output, or one value used throughout the job run that will be displayed in the logs?

Please suggest.

Posted: Tue Oct 09, 2012 5:56 am
by ArndW
The output will be in the DataStage log file and the ulimit is generally not changed during a job run, so it will be just one value.

Posted: Thu Oct 18, 2012 12:32 am
by tehavele
Hi,

I did check the command "ulimit -a" from within a job. It gave me the same values. So how do we go further? Shall I just increase the values and try?

JB_STG3_POD_REGISTRY_SAP_UPLOAD_10..BeforeJob (ExecSH): Executed command: ulimit -a
*** Output from command was: ***
time(seconds) unlimited
file(blocks) unlimited
data(kbytes) 786428
stack(kbytes) 8192
memory(kbytes) unlimited
coredump(blocks) 4194303
nofiles(descriptors) 8000



Which of the above values can we increase?

Posted: Thu Oct 18, 2012 8:24 am
by rohitagarwal15
Can you tell me what your database is and how big the data you are trying to fetch is? Also, is the hardware architecture of the two servers identical?
What is your job design, i.e. the number of stages and the partitioning type?

Posted: Fri Oct 19, 2012 8:40 am
by ArndW
Ok, we've established that your settings are correct in DataStage and not limited.

I just looked at the number in question: 7,052,615,504 bytes, which is roughly 6.6 GB.

That means that your lookup set is huge and you should either change this stage to a Join stage or see whether you can reduce the number of columns and/or rows that you are trying to hold in memory.
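
Just to show the conversion (bc is standard on UNIX; the byte count is the one from your error message):

# bytes requested by the lookup, expressed in GiB
echo "scale=2; 7052615504 / 1024^3" | bc -l     # about 6.56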

Posted: Sun Oct 21, 2012 8:54 pm
by tehavele
Hi Rohit,

The database is Oracle. The reference data is over 9 million rows, or around 7 GB.
The hardware architecture of the two servers is not the same. It is very strange that the same job runs fine with the same data/DB on a server with 2 CPUs, 15 GB RAM and less space allocated to Scratch and Datasets, but fails on the current server, which is better in terms of resources: 4 CPUs, 24 GB RAM and more space allocated to Scratch and Datasets. I want the Lookup stage to work fine on this server. Somewhere some server setting is not allowing the Lookup stage to make the most of the available resources, especially memory.

I need to figure out what that setting is!

Posted: Sun Oct 21, 2012 9:03 pm
by tehavele
Hi ArndW,

Thanks, this is a very nice workaround, but I want the Lookup stage to work in this case.
This will also help me solve a bigger issue: this particular server does not perform well on jobs with a Lookup stage. It runs slower than a server with fewer resources and the very same InfoSphere instance.

Fast server CPU info:
-----------------------------------
2 Intel(R) Itanium 2 9100 series processors (1.59 GHz, 12 MB)
266 MHz bus, CPU version A1

Memory: 16378 MB (15.99 GB)

Slow server CPU info:
-----------------------------------
4 Intel(R) Itanium(R) Processor 9350s (1.73 GHz, 6 MB)
2.39 GT/s QPI, CPU version E0

Memory: 24570 MB (23.99 GB)

Posted: Mon Oct 22, 2012 2:38 am
by ArndW
The amount of data you can have in your lookup reference data is limited to 2 GB on AIX (see http://www-01.ibm.com/support/docview.w ... wg21383512).

Thus your roughly 6.6 GB of reference data cannot fit into the confines of a Lookup stage and you will need to use a Join instead.
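
To put the numbers side by side (using the figures from this thread: roughly 9 million reference rows and the 7,052,615,504 bytes from the error message):

# rough average row width, and how far over the 2 GB cap the reference data is
echo "7052615504 / 9000000" | bc                  # about 783 bytes per row
echo "scale=2; 7052615504 / (2*1024^3)" | bc -l   # about 3.3 times the limit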