
Fatal Error Help

Posted: Tue Jan 11, 2005 9:00 am
by Tpaulsen
We are trying to run ETL maps that will load over 20 million rows. We are getting the following error.

DataStage Job 1957 Phantom 16601
kgefec: fatal error 0
OCI-21503: program terminated by fatal error
OCI-04030: out of process memory when trying to allocate 6784 bytes (Alloc statemen,pref col alloc)
Errors in file :
OCI-21503: program terminated by fatal error
OCI-04030: out of process memory when trying to allocate 6784 bytes (Alloc statemen,pref col alloc)

We are on an AIX 690 box with 2 CPUs and 4 GB of memory. The ulimits for time, file, data, stack and memory are all set to unlimited; nofiles is set to 2000.

Any advice or suggestions are welcome. I'm also posting this on the Ascential forum and opening a case with PeopleSoft.

Posted: Tue Jan 11, 2005 10:15 am
by ArndW
Hello Tpaulsen,

a couple of questions to narrow down the possible cause:
- Are you doing any aggregations / sorts?
- Are you loading files into memory?
- Are you reading from or writing to the OCI stage?
- After how many rows does the program die, and can you monitor the memory usage while it is running? (See the sketch at the end of this post.)

One of the basic tenets of data processing is that no matter how much memory (RAM, disk, tape) you have, it will fill up - sooner rather than later :roll:
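
If you do want to watch the memory while a job runs, here is a minimal sketch for AIX (the <PID> placeholder and the 10-second interval are only examples, and exact ps columns vary by UNIX flavour):

# find the job's process first, e.g. ps -ef | grep -i phantom,
# then watch its size (SZ column) every 10 seconds
while true
do
    ps -lp <PID>
    sleep 10
done

# system-wide picture at the same time, refreshed every 10 seconds
vmstat 10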

Posted: Tue Jan 11, 2005 12:53 pm
by Tpaulsen
I'm not doing any aggregates or sorts. One job died on me yesterday before processing any rows, while others die after processing millions of rows. We've gotten this error on several different ETLs, not just one.

We do lookups against hash files and we run more than one ETL at a time.

I don't know what you mean by the OCI stage - Ascential is very new to us.

Posted: Tue Jan 11, 2005 2:17 pm
by ArndW
Evening, TPaulsen,

if we start at the beginning: what kind of file are you reading from, and what are you writing to? Have you loaded the hash reference file(s) into memory for the lookups? Some part of the DS job is trying to malloc more memory than the process has available to it. If it happens late in the processing, a memory leak might be the cause; if it happens at the beginning of processing, the cause lies elsewhere. Do you have a reproducible error scenario?

Posted: Tue Jan 11, 2005 3:57 pm
by ray.wurlod
Log in to the UNIX machine as the user ID you use to run DataStage jobs (or su to this user ID), and execute the command ulimit -a before discussing with your UNIX administrator the requirement to increase the data, stack and memory settings for that user.
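
For example (the user name below is only a placeholder - substitute whichever ID actually runs the jobs):

su - dsadm        # or whatever your DataStage user is called
ulimit -a

On AIX the per-user limits live in /etc/security/limits and root can raise them with chuser, e.g. (values are examples; -1 means unlimited):

chuser data=-1 stack=-1 rss=-1 dsadm

Existing processes keep the limits they started with, so the DataStage server would need a restart to pick up new values.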

Posted: Tue Jan 11, 2005 4:29 pm
by Tpaulsen
The ulimits are at unlimited for everything except nofiles, which is set to 2000. This was run as psdsadm.

demo:devapl46>ulimit -a
time(seconds) unlimited
file(blocks) unlimited
data(kbytes) unlimited
stack(kbytes) unlimited
memory(kbytes) unlimited
coredump(blocks) 0
nofiles(descriptors) 2000

Posted: Tue Jan 11, 2005 8:40 pm
by ray.wurlod
Time to log a call with your support provider. The OCI stage is requesting just over 6KB of memory, and is hitting some per-process memory limit, but not the obvious ones reported by the ulimit command.
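
One non-obvious candidate on AIX, offered only as a guess to check with your support provider: a 32-bit process is capped by its MAXDATA setting rather than by ulimit, and the default is quite small. A quick way to look at it and, for a test run, override it (the executable path and the 0x80000000 value are examples only):

dump -ov <path-to-executable> | grep -i maxdata   # shows the compiled-in maxdata value
export LDR_CNTRL=MAXDATA=0x80000000               # set before starting the job, then unset it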

Posted: Sat Jan 15, 2005 3:50 am
by ogmios
ulimit should be executed from within DataStage itself, via a DSExecute call, since the job processes may run with different limits than your login shell. The ulimit is also set in ds.rc (probably to a lower value than the system-wide setting). See e.g. also viewtopic.php?t=90527.
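
To illustrate - a minimal sketch only; DSExecute itself is the standard server BASIC interface, but how you wrap it in a routine is up to you:

* run ulimit -a in the same environment the job processes get
Call DSExecute("UNIX", "ulimit -a", Output, SystemReturnCode)
Ans = Output

Run that as a test routine (or from a before-job subroutine) and compare its output with what you see at the command line. You can also check what the startup script itself sets:

grep -i ulimit /path/to/ds.rc    # substitute wherever ds.rc lives on your install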

Ogmios