web service application not releasing memory segments


tbtcust
Premium Member
Posts: 230
Joined: Tue Mar 04, 2008 9:07 am

web service application not releasing memory segments

Post by tbtcust »

Hi all.

We have a web service application with 30+ jobs in it. When it is deployed, the server gradually slows to a crawl, to the point of being unusable. A number of memory segments owned by dsadm never get released, and the only way to clean them up is to reboot.
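For what it's worth, here is roughly how we take stock of the leftovers (a quick sketch, assuming the segments are System V shared memory visible under ipcs, and assuming the Linux-style column layout; adjust the owner column position for your platform):

# count leftover shared memory segments owned by dsadm
import subprocess

out = subprocess.run(["ipcs", "-m"], capture_output=True, text=True, check=True).stdout
segs = []
for line in out.splitlines():
    parts = line.split()
    # Linux `ipcs -m` data rows: key shmid owner perms bytes nattch [status]
    if len(parts) >= 3 and parts[0].startswith("0x") and parts[2] == "dsadm":
        segs.append(parts[1])  # shmid
print(len(segs), "segments owned by dsadm:", segs)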

Has anyone seen this issue before? If so what did you do to resolve it?

Thanks in advance for any help.
eostic
Premium Member
Posts: 3838
Joined: Mon Oct 17, 2005 9:34 am

Post by eostic »

Some questions for further detail and more discussion....

a) ISD application?
b) Server or EE Jobs?
c) Always on? How many of them? (Jobs with wISDInput Stages)..
d) How complicated are the jobs...(average number of stages)

That should help us get started with things to review and check out.

Ernie
Ernie Ostic

blogit!
<a href="https://dsrealtime.wordpress.com/2015/0 ... ere/">Open IGC is Here!</a>
tbtcust
Premium Member
Posts: 230
Joined: Tue Mar 04, 2008 9:07 am

Post by tbtcust »

Thanks eostic.

a) ISD application? Yes, exposed as a web service.
b) Server or EE Jobs? EE.
c) Always on? How many of them? (Jobs with wISDInput Stages) Yes, always on. 30+, eventually about 70.
d) How complicated are the jobs? The average number of stages should do: there are about 5 to 7 stages per job.
eostic
Premium Member
Posts: 3838
Joined: Mon Oct 17, 2005 9:34 am

Post by eostic »

Hard to say. Assuming that you are current with patches, one potential scenario is simply that you need more memory.

EE Jobs generate a lot of processes. Five to seven stages apiece is not many, but when you start adding everything up, the process count gets large quickly.

There are things you can investigate. If you know how to read your job score (the report produced with APT_DUMP_SCORE set) and use other methods to determine the number of osh processes that exist for each job, you will be able to see some of the impact of having these jobs running concurrently. It is no different from having 30 (ultimately 70) batch jobs running at the same time.
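If it helps, here is a quick way to count live engine processes (a sketch; it assumes the engine processes appear as "osh" in the ps output on your platform):

# count running osh (parallel engine) processes
import subprocess

ps = subprocess.run(["ps", "-eo", "comm"], capture_output=True, text=True).stdout
count = sum(1 for line in ps.splitlines() if line.strip() == "osh")
print(count, "osh processes currently running")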

I've seen evidence that each of those osh processes can take up 20 to 40 MB of memory. That's not to mention whatever else you have going on with the machine, such as other jobs running at the same time that are unrelated to this application.

How many instances of each of the 30 do you have running? That will have an impact as well, since each "instance" is another full set of the stages...and hopefully all the jobs are running sequentially, and not with multiple nodes...
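Back-of-the-envelope, with every figure below an assumption (plug in your real counts from the score):

# rough memory estimate - all figures are assumptions
jobs = 70        # eventual number of always-on jobs
stages = 6       # average stages per job (5 to 7)
instances = 1    # ISD instances per job
nodes = 1        # parallel nodes per job
mb_per_osh = 30  # 20 to 40 MB observed per osh process; midpoint

total_mb = jobs * stages * instances * nodes * mb_per_osh
print("roughly", total_mb, "MB, about", round(total_mb / 1024, 1), "GB")

Even at one instance per job on a single node, that works out to roughly 12 GB just for the engine processes, and it grows linearly with instances and nodes.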

If the problem builds up over time, it could be that the processes aren't getting cleaned up. That can have several causes, especially jobs aborting.
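If orphaned segments are what's piling up, something like this can clear the zero-attach ones without a reboot (a sketch; Linux ipcs column layout assumed, and run it only with DataStage completely quiesced):

# remove dsadm-owned shared memory segments that no process is attached to
# CAUTION: run only when no DataStage engine processes are active
import subprocess

out = subprocess.run(["ipcs", "-m"], capture_output=True, text=True, check=True).stdout
for line in out.splitlines():
    parts = line.split()
    # Linux `ipcs -m` data rows: key shmid owner perms bytes nattch [status]
    if len(parts) >= 6 and parts[0].startswith("0x") and parts[2] == "dsadm" and parts[5] == "0":
        print("removing shmid", parts[1])
        subprocess.run(["ipcrm", "-m", parts[1]], check=True)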

Ernie
Ernie Ostic

blogit!
<a href="https://dsrealtime.wordpress.com/2015/0 ... ere/">Open IGC is Here!</a>