Parallel job taking most of CPU
Moderators: chulett, rschirm, roy
Hi,
I'm running a parallel job and it is taking most of the server's CPU; others are not able to work while my job is running.
The job does a lookup between 3 tables, uses a Column Generator and a Funnel stage, and writes to a data set.
I've tried running on 2 and 4 nodes; the CPU is at 100% for both, but the completion times differ.
When I set APT_EXECUTION_MODE to one-process, however, CPU usage stays below 100%.
Please advise!
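For context, the node count comes from the configuration file pointed to by APT_CONFIG_FILE. A minimal sketch of a 2-node file is below; the server name and disk paths are hypothetical and would need to match your own installation:

```
{
    node "node1" {
        fastname "etlserver"
        pools ""
        resource disk "/data/ds/disk1" { pools "" }
        resource scratchdisk "/scratch/ds/scratch1" { pools "" }
    }
    node "node2" {
        fastname "etlserver"
        pools ""
        resource disk "/data/ds/disk2" { pools "" }
        resource scratchdisk "/scratch/ds/scratch2" { pools "" }
    }
}
```

Doubling the nodes doubles the player processes each stage starts, which is why 4 nodes finishes faster but still saturates the CPU.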
That's the nature of parallel jobs. They don't get the checkmark for "works and plays well with others".
I'm thrilled when I can get 100% CPU utilization out of a parallel application, since it generally means I am utilizing all of the capability of the hardware.
When you say "others are not able to work", do you mean they can't run jobs, or do you mean they can't use other applications? The engine layer should be on a server all to itself.
Mike
Then I assume that your metadata and domain layers were installed on the same server as your engine layer. They need to be separate so that running jobs does not impact designing and compiling jobs.
Running jobs requires the engine layer, so they would just have to wait their turn for resources (or add more resources to support more concurrency).
Mike
OK, then your server is simply overloaded. It's a supply versus demand situation.
Some, including some at IBM, are coming to realize that having a tiny development server and a huge production server just doesn't "cut it" any more - you also need substantial grunt in the development server, particularly if you have multiple developers all unit-testing components.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
Parallel jobs are an efficient consumer of resources. You seem to have a severely undersized server. You have to beef up your development environment considerably to support more developers... and I would also recommend moving the metadata and domain layers to a separate server so that running jobs won't significantly degrade design and compile activities.
Mike
...additional to Ray's point, I've seen a number of scenarios now where a much larger Dev machine was needed than production. That throws people until they realize that 10 developers doing unit tests concurrently will eat up a lot more resources than their production jobs running (in those cases) serially.
Ernie
Ernie Ostic
blogit!
<a href="https://dsrealtime.wordpress.com/2015/0 ... ere/">Open IGC is Here!</a>
sravanthi wrote: This also happened when I was the only one running jobs and no one else.
You need to monitor your operating system to determine which processes are the "resource hogs".
Parallel jobs have some reporting options that will help, such as reporting the process ID of each player process and the CPU consumed by each of these. Enable such options by setting the appropriate environment variables.
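As a sketch, the environment variables usually used for this are the two below; set them in dsenv or add them as job parameters before the run (values shown are assumptions, not the only valid ones):

```shell
# Set before the job runs (e.g. in dsenv, or as job parameters
# added via Administrator/Designer)
export APT_PM_SHOW_PIDS=1      # log the PID of every player process
export APT_PM_PLAYER_TIMING=1  # log user/system CPU consumed per player
```

The job log then shows one entry per player, which you can correlate with what the OS reports for those PIDs.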
Connections from clients also consume resources. These will mainly be seen as processes running dsapi_slave, dsapi_server and/or dscs.
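One way to spot both kinds of consumer from the OS side is sketched below; the `ps` options shown are Linux (procps) syntax and differ on AIX or Solaris:

```shell
# Top CPU consumers, highest first (Linux procps syntax)
ps -eo pid,pcpu,comm --sort=-pcpu | head -15

# Any DataStage client-connection processes currently running;
# prints nothing if no clients are connected
ps -ef | grep -E 'dsapi_slave|dsapi_server|dscs' | grep -v grep || true
```

If the heavy PIDs match the player PIDs logged by the job, the job itself is the hog; if they are dsapi_slave processes, it is client connections.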
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.