Efficient DS Job design to avoid errors
Moderators: chulett, rschirm, roy
I have a job with nearly 100 stages that has multiple-instance enabled and runs 8x8. Until now it was working fine, but today the job erred with the message below:
APT_PMPlayer::APT_PMPlayer: fork() failed, Resource temporarily unavailable
It seems the above is due to resource contention issues.
But I also want to understand whether the way my job is built (number of stages) has also contributed to this error.
Can you please tell me if there are best practices related to the number of stages in a DS job?
Thank you
Nice to have that vote of confidence, but we're far from being the only helpful posters on this site.
Getting this kind of error occasionally suggests that you're working your server close to its limits, and very occasionally exceeding them. It's simple supply and demand: you need to increase the supply of resources (CPU, memory, wherever the bottleneck is) or to schedule tasks more cleverly so that demand at any particular time is reduced. I'm not sure what you mean by 8x8; but it looks like you may have 16 other hours to play with.
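Since fork() failing with "Resource temporarily unavailable" (EAGAIN) usually means the per-user process limit or memory is exhausted at that moment, a quick check on the engine host might look like this (a minimal sketch, run as the user that executes the DataStage jobs; nothing here is from the original thread):

```shell
# Maximum number of processes this user may have (may print "unlimited")
ulimit -u

# How many processes this user currently has running; if this count is
# close to the ulimit value while the 8x8 job runs, fork() can fail
ps -u "$(whoami)" | wc -l
```

Comparing the two numbers while the job is at peak parallelism gives a rough idea of how much headroom the server actually has.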
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
What Ray said.
General design principle: never do in one job what you can logically break into two or more jobs. In COBOL development slang, that 100-stage job easily qualifies as "spaghetti code".
The primary caution behind that principle is not execution performance; it's coding-maintenance "performance". If I can spend (say) one hour searching through six jobs for the one where my coding changes need to be made, I can easily spend multiples of that time searching through one job that is as large as or larger than those six jobs combined. If the original coders paid any sort of attention to naming conventions, that hour might be much less.
Franklin Evans
"Shared pain is lessened, shared joy increased. Thus do we refute entropy." -- Spider Robinson
Using mainframe data FAQ: viewtopic.php?t=143596 Using CFF FAQ: viewtopic.php?t=157872
In addition to the options of breaking apart the job, examine it from the viewpoint of: Does the job really require 100 stages to do what it's doing? And, does it really need to run 8x8 (does your job run with 64 logical nodes per instance)?
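For reference, the logical node count comes from the parallel configuration file pointed to by APT_CONFIG_FILE. A minimal two-node sketch is below (the hostname and paths are placeholders, not taken from this thread):

```
{
  node "node1"
  {
    fastname "etl_host"
    pools ""
    resource disk "/data/ds/d1" {pools ""}
    resource scratchdisk "/scratch/ds/s1" {pools ""}
  }
  node "node2"
  {
    fastname "etl_host"
    pools ""
    resource disk "/data/ds/d2" {pools ""}
    resource scratchdisk "/scratch/ds/s2" {pools ""}
  }
}
```

Each running instance of the job spawns player processes per node, so trimming the node count (or the number of concurrent instances) directly reduces the number of fork() calls at startup.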
If you haven't already, I suggest reading the IBM Redbook on DataStage Parallel Framework Standard Practices (Google it for the free PDF download). While it doesn't cover some of the newer functions such as transformer looping, you may find suggestions applicable to your situation.
Regards,
- james wiles
All generalizations are false, including this one - Mark Twain.
http://www.redbooks.ibm.com/abstracts/S ... SEF&mync=E
jwiles wrote: If you haven't already, I suggest reading the IBM Redbook on DataStage Parallel Framework Standard Practices
-craig
"You can never have too many knives" -- Logan Nine Fingers