What does this error message mean?

Post questions here relative to DataStage Enterprise/PX Edition for such areas as Parallel job design, Parallel datasets, BuildOps, Wrappers, etc.

Moderators: chulett, rschirm, roy

cbres00
Participant
Posts: 34
Joined: Tue Sep 21, 2004 9:20 am

What does this error message mean?

Post by cbres00 »

node_node1: Fatal Error: Unable to start ORCHESTRATE process on node node1 (slsudv18): APT_PMPlayer::APT_PMPlayer: fork() failed, Not enough space

Regards,
cbres00
Amos.Rosmarin
Premium Member
Posts: 385
Joined: Tue Oct 07, 2003 4:55 am

Post by Amos.Rosmarin »

Hi,

Check the disk space where your datasets are stored (according to the APT_CONFIG_FILE).
Check the disk space on your tmp directory (if you have not changed it in uvconfig, maybe it's time to do so).
Check the number of processes by executing

Code: Select all

ulimit -a
from a DataStage server routine; it should be high (much more than the default of 100).
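For reference, a quick shell sketch of that check (the 1000 threshold below is only illustrative; pick a value appropriate for your degree of parallelism):

```shell
#!/bin/sh
# Check the per-user process limit; a fork() failure often means this
# limit (or swap space) is exhausted. The 1000 threshold is illustrative.
nproc_limit=`ulimit -u 2>/dev/null`
echo "max user processes: $nproc_limit"
if [ "$nproc_limit" != "unlimited" ] && [ "$nproc_limit" -lt 1000 ] 2>/dev/null; then
    echo "WARNING: process limit looks low for a parallel job"
fi
```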

You did not say what kind of Unix you're using; that's important.


HTH,
Amos
cbres00
Participant
Posts: 34
Joined: Tue Sep 21, 2004 9:20 am

Post by cbres00 »

Strangely enough, this is a near-duplicate of a job I ran just a few minutes before... but it worked that time.

Where would I find APT_CONFIG_FILE? In Administrator?

We're using Solaris.

Thanks
cbres00
trokosz
Premium Member
Posts: 188
Joined: Thu Sep 16, 2004 6:38 pm
Contact:

Config File Locations

Post by trokosz »

You find the APT_CONFIG_FILE in one of two places:

1. In Manager, go to Tools | Configurations and there they are....

2. cd $DSHOME, then go up one level and into Configurations (cd ../Configurations), where you can cat or vi them...
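Once you find the file, the resource and scratch disk paths inside it tell you which mount points to check for space. A self-contained sketch (the file contents below are a made-up example, not your real config):

```shell
#!/bin/sh
# Write a sample config so this sketch is self-contained; on a real
# system you would read the .apt file from $DSHOME/../Configurations.
cat > /tmp/sample.apt <<'EOF'
{
  node "node1" {
    fastname "slsudv18"
    pools ""
    resource disk "/dstage/datasets" {pools ""}
    resource scratchdisk "/dstage/scratch" {pools ""}
  }
}
EOF
# Pull out the dataset and scratch paths -- these are the mount
# points whose free space matters when a parallel job runs.
# Prints /dstage/datasets and /dstage/scratch for the sample above.
grep -E 'resource (disk|scratchdisk)' /tmp/sample.apt |
  awk '{gsub(/"/, "", $3); print $3}'
```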
T42
Participant
Posts: 499
Joined: Thu Nov 11, 2004 6:45 pm

Post by T42 »

Do a 'df -k' and see if anything is at or near 100% utilization. Check the configuration file to see which mount points you are mapped to, and go from there.
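A sketch of that check, flagging anything at or above 95% (the threshold is arbitrary; adjust to taste):

```shell
#!/bin/sh
# List filesystems at or above 95% use; the mount points named in the
# APT configuration file are the ones that matter for the job.
df -k | awk 'NR > 1 { pct = $5; sub(/%/, "", pct); if (pct + 0 >= 95) print $6, $5 }'
```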
dsxuserrio
Participant
Posts: 82
Joined: Thu Dec 02, 2004 10:27 pm
Location: INDIA

Post by dsxuserrio »

Cbres00
This is clearly a memory issue, as pointed out by others (on Solaris, fork() failing with "Not enough space" usually means swap is exhausted).
A few more things:
How many sorts are you using in your job?
Check the config file to see how much space is allocated for scratch and data, and check what is actually free with df -k.

Sometimes after the job fails, the scratch disk is cleaned up, so df -k will not give the correct picture; the space has already been freed.
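Because of that cleanup, it helps to sample disk usage while the job is still running rather than after it fails. A rough sketch (SCRATCH is a placeholder; point it at your actual scratchdisk path):

```shell
#!/bin/sh
# Sample free space on the scratch area while the job runs, since the
# scratch disk is cleaned after a failure. SCRATCH is a placeholder.
SCRATCH=${SCRATCH:-/tmp}
i=0
while [ $i -lt 3 ]; do       # a few samples here; run longer in practice
    df -k "$SCRATCH" | tail -1
    i=$((i + 1))
    sleep 1
done
```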
Thanks
dsxuserrio

Kannan.N
Bangalore,INDIA
raviyn
Participant
Posts: 57
Joined: Mon Dec 16, 2002 6:03 am

Post by raviyn »

Hi,

We are also getting the same error. We monitored df -k and we are not hitting 100% utilization. Also, this is the output of ulimit -a:

Code: Select all

time(seconds) unlimited
file(blocks) unlimited
data(kbytes) 1048576
stack(kbytes) 392192
memory(kbytes) unlimited
coredump(blocks) 4194303
nofiles(descriptors) 2048

Please let me know.