How to find memory/resource utilisation by a job
Moderators: chulett, rschirm, roy
Hi All,
We are facing some performance issues. Initially we had some jobs in the PROD environment; because of the performance issue we moved some of those jobs (say 4) to the UAT environment.
There were no issues in UAT before moving the jobs from PROD.
After we migrated the jobs, we started facing performance issues in UAT as well.
Could anyone let me know how to find the job that is using more memory/resources than the other jobs, or how to find
the resources/memory used by a particular job?
Thanks in advance.
Regards,
Hema K
The traditional method involves running/monitoring standard Unix commands while running your 4 jobs in question, one by one, in a stable, controlled environment (no other jobs running). Commands can vary depending upon your flavor of Unix/Linux.
Commands include vmstat, iostat, mpstat, top, topas, sar.
Use the man <command> command to read the manual page on any command during your telnet session.
If you are not familiar with the commands given, you can seek Unix admin help in monitoring and interpreting the results.
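To narrow things down to a single job while it runs, you can also sample just that job's process rather than the whole box. Here is a minimal sketch using only `ps`, `date` and `awk` (so it is portable across Unix flavours where the tool names above differ); the function name and file path are made up for illustration:

```shell
#!/bin/sh
# sample_pid: append RSS (KB) and %CPU of one process to a CSV file
# every <interval> seconds until the process exits.
# Usage: sample_pid <pid> <interval_seconds> <outfile>
sample_pid() {
    pid=$1; interval=$2; out=$3
    echo "timestamp,rss_kb,pcpu" > "$out"
    while kill -0 "$pid" 2>/dev/null; do
        # ps -o with "=" suppresses the header line
        stats=$(ps -o rss=,pcpu= -p "$pid" | awk '{print $1 "," $2}')
        [ -n "$stats" ] && echo "$(date +%s),$stats" >> "$out"
        sleep "$interval"
    done
}

# Example: watch a short-lived process (substitute the pid of your job's
# osh/phantom process when profiling a real run)
sleep 3 &
sample_pid $! 1 /tmp/job_stats.csv
head -3 /tmp/job_stats.csv
```

Run the four suspect jobs one at a time with this sampling in the background, and the job whose RSS or %CPU column grows out of proportion is your hog.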
Choose a job you love, and you will never have to work a day in your life. - Confucius
In addition to what Eric has suggested, you can also check out the project environment variables in the Reporting category. APT_PM_PLAYER_TIMING is particularly useful for finding CPU hogs, and APT_PM_PLAYER_MEMORY may be useful for you as well. Be sure to set APT_DUMP_SCORE too, so that you have a frame of reference for the reported items.
I like to include a ps_Reporting parameter set in every job so that these useful environment variables can be turned on as needed (exception being APT_DUMP_SCORE which I have on at the project level all the time).
Mike
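For reference, the three reporting variables mentioned above can be toggled like any other DataStage environment variable: at the project level, as job parameters (e.g. via a parameter set such as the ps_Reporting one Mike describes), or exported in the shell before a command-line run. A sketch of the shell form:

```shell
# Turn on the reporting variables discussed above for this session.
# (Values shown as True; in a job they would normally be supplied as
# $APT_... job parameters from a parameter set instead.)
export APT_DUMP_SCORE=True        # print the parallel job score to the log
export APT_PM_PLAYER_TIMING=True  # per-operator CPU timing (find CPU hogs)
export APT_PM_PLAYER_MEMORY=True  # per-operator memory allocation reporting
env | grep '^APT_'
```

With APT_DUMP_SCORE on, the score in the job log tells you which operators and partitions the timing and memory messages refer to.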
-
- Participant
- Posts: 54607
- Joined: Wed Oct 23, 2002 10:52 pm
- Location: Sydney, Australia
- Contact:
My suggestion is to use the "generate operational metadata" option. This collects operational metadata into XML files that can be loaded into your metadata repository and analysed and reported on using Metadata Workbench.
You can also use DataStage's internal Performance Analyzer. Again, you have to enable collection of performance data; you can then view the data graphically and generate reports from DataStage Designer.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
It is very difficult to tie the run stats from a job back to vmstat output. We automated this once, but even the SQL was a bunch of BETWEEN joins on timestamps; very ugly stuff. There is a freeware Unix utility that takes vmstat snapshots every so many seconds and writes them to a comma-delimited file, which is easy to load into a table. You just need to tie the EtlStats tables to this new table with RAM usage and system wait times. Preston did some nice bar graphs per job name of CPU, RAM and disk I/O. Very cool. It was several years ago, so I am not sure I can remember the details, but it turned out great.
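The vmstat-to-comma-delimited step described above is simple enough to sketch with `awk` alone (no freeware utility needed). Shown here on a captured sample so it runs anywhere; in practice you would pipe live output, e.g. `vmstat 5 720 | awk ...`, and the file names are hypothetical:

```shell
#!/bin/sh
# A captured vmstat sample (Linux-style layout; column names vary by
# Unix flavour, adjust the header skip accordingly).
cat > /tmp/vmstat_sample.txt <<'EOF'
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 1  0      0 812344 102400 512000    0    0    10    20  150  300  5  2 92  1  0
 0  0      0 810112 102400 512100    0    0     0    15  140  280  3  1 95  1  0
EOF

# Skip the two header lines, squeeze whitespace into commas, and prepend
# an epoch timestamp so rows can be joined to job run times later.
awk 'NR>2 {gsub(/^ +| +$/,""); gsub(/ +/,","); print epoch "," $0}' \
    epoch="$(date +%s)" /tmp/vmstat_sample.txt > /tmp/vmstat.csv
cat /tmp/vmstat.csv
```

The timestamp column is what makes the otherwise ugly BETWEEN joins against job start/finish times possible once the CSV is loaded into a table.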
Mamu Kim