
ETL Server space problem

Posted: Mon Sep 10, 2007 10:01 am
by ankita
Hi All,
A few jobs are failing in production with the error below.

node_node1: Fatal Error: Unable to start ORCHESTRATE process on node node1 (<ETL server name>): APT_PMPlayer::APT_PMPlayer: fork() failed, Not enough space

My understanding is that this happens when the transactional volume of these jobs (running in parallel) does not fit into the node space. Please let me know if that is the actual scenario.
If so, is the node space shared between stream data and persistent Datasets?
Please advise what to do for now, and also for the future.

Thanks !
Ankita

Posted: Mon Sep 10, 2007 3:21 pm
by ray.wurlod
Please search the forum for this error message.

It has nothing at all to do with transactional volume. The fork() function is involved in getting processes started.

Posted: Tue Sep 11, 2007 2:13 am
by harshada
Check the maxuproc setting on UNIX with the help of the command

Code:

lsattr -EHl sys0 | grep maxuproc

This gives the maximum number of processes allowed per user. Check the number of processes running for your DataStage jobs; if it exceeds the maxuproc value, you usually get the fork() failed error. Try killing some of the old processes, getting the maxuproc value raised, or restarting the UNIX box; one of these should solve the problem.
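
For example, a rough way to compare the two, assuming the DataStage jobs run as a user called dsadm (the user name here is only an illustration, substitute your own):

Code:

# maximum number of processes allowed per user (AIX)
lsattr -EHl sys0 | grep maxuproc

# rough count of processes currently owned by the DataStage user
# (includes the ps header line, so treat it as approximate)
ps -u dsadm | wc -l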

Posted: Tue Sep 11, 2007 2:19 am
by harshada
Maximum number of PROCESSES allowed per user

Posted: Tue Sep 11, 2007 12:25 pm
by ankita
Thanks for your suggestions !
I tried the command below, but the shell doesn't recognize it. Ours is SunOS 5.9; maybe that's why it didn't work.

$ lsattr -EHl sys0 | grep maxuproc
ksh: lsattr: not found
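
(From what I could find, lsattr is an AIX command, which would explain the "not found". On Solaris the per-user process limit appears to be the maxuprc kernel tunable; the following is only a sketch of how to check it and has not been verified on our box:)

Code:

# report kernel tunables, including the per-user process limit (v.v_maxup)
sysdef | grep v_maxup

# see whether maxuprc has been set explicitly
grep maxuprc /etc/system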

I was also checking the ulimit options to get the resource limits, and below is the output in production:
$ ulimit -a
time(seconds) unlimited
file(blocks) unlimited
data(kbytes) unlimited
stack(kbytes) 8192
coredump(blocks) unlimited
nofiles(descriptors) 256
vmemory(kbytes) unlimited

Can you please tell me some details about 'nofiles(descriptors)'? What should the standard limit for it be? I see it's lower than on dev.

Thanks,
Ankita

Posted: Tue Sep 11, 2007 3:03 pm
by chulett
ankita wrote: I was also checking the ulimit options to get the resource limits, and below is the output in production:
$ ulimit -a
time(seconds) unlimited
file(blocks) unlimited
data(kbytes) unlimited
stack(kbytes) 8192
coredump(blocks) unlimited
nofiles(descriptors) 256
vmemory(kbytes) unlimited
If this information was gathered by executing the command by hand from the command line, it really isn't correct. You need to capture it from a running job to know what the limit is in that environment, even if you used 'the same user' at the command line. Add an 'ExecSH' before-job call in any job to run that command and let us know what it reports.
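
For example, the ExecSH input value could be something along these lines; the output path is only an illustration, so use whatever location suits you:

Code:

# capture the limits as seen from inside the job's own environment
ulimit -a > /tmp/job_ulimit.out 2>&1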