While running batches containing around 5 jobs, we are getting the following error:
node_node1: Fatal Error: Unable to start ORCHESTRATE process on node node1 (server_name): APT_PMPlayer::APT_PMPlayer: fork() failed, Not enough space
Do we need to modify some of the jobs on the ETL side (for example, replace a Lookup with a Join), or does only the DBA team have to increase memory in order to remove this error?
Can anybody please tell us exactly what a fork error means, why it occurs, and how we can eliminate it? The job runs fine but gets aborted partway through due to the above-mentioned error.
Fork Error
Can anyone tell me how I can permanently remove the fork error from our jobs?
Any documents?
On searching this forum I found the following:
viewtopic.php?t=112451&highlight=Fork+error
I am unable to understand this part: the call to DSExecute()... It is quite confusing.
From the error message you get (and considering the forum and job type you posted under), you first need to realize you're on the Enterprise Edition (PX) of the product.
It seems there is an allocation mismatch between the resources your configuration file specifies and what you actually have.
I have no access to pin down the specific point at hand, but running out of disk space might be the problem.
(I'll also move the post to the proper forum for no extra fee )
IHTH,
Roy R.
Time is money but when you don't have money time is all you can afford.
Search before posting:)
Join the DataStagers team effort at:
http://www.worldcommunitygrid.org
fork() is the function used to create a child process.
If there are insufficient resources (for example, memory, or slots in the O/S process table), then fork() will fail.
The only way to "permanently" remove these messages is to guarantee at all times that there are sufficient resources; that is, by not overloading your machine and by cleaning up regularly - running the deadlock daemon (dsdlockd) can assist with the latter.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
A fork() error implies the server's resource manager could not accommodate the demands of the job. Common causes, and what to do about them:
1. Not enough swap space (real memory has nothing to do with it).
2. Too high a degree of parallelism - run with 2node.apt instead of 4node.apt, for example.
3. Too many jobs executing at once - stagger the scheduling of jobs.
4. Too many active stages in a job - break the job into more manageable units of work. For example, are there too many Lookups, Joins and so on?