SORT:Restrict Memory Usage
Posted: Thu Sep 25, 2008 2:08 am
hi,
Our production server is heavily loaded on CPU usage and on I/O, but we still see a lot of free memory. Since a lot of Sort stages are used, I want to take a look at optimizing memory usage in these.
Until now the option Restrict Memory Usage has not been used anywhere. I understand that this limits the process to 20 MB of memory (per node). I would like to find better settings for this per Sort stage, but I'm a bit in the dark about the effects.
Some questions:
- If we increase this for a stage, but no memory is available, will the job then abort, or will it just start using swap space? (Either way, if both memory and swap are full there are definitely problems.)
- For varchar fields, the scratch-disk usage during sorts treats them as fixed-length chars (so a varchar(100) takes 100 bytes for every record, no matter what the real length is). Is this also the way we need to think about memory usage?
- What would be the approach that gets the most result the fastest:
* increase the setting for all large sorts to 100MB
* take out some jobs and get them to work completely in memory
(The first remark here will probably be that this depends on my environment and type of jobs, but I would still like to know what you would do in your environment.)
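To make the trade-off concrete, here is a minimal back-of-envelope sketch, assuming the fixed-width model described above (a varchar(n) occupying n bytes per record) holds for in-memory sorting as well. All field widths and row counts below are hypothetical examples, not taken from any actual job:

```python
# Rough estimate of how much memory one node would need to sort
# its share of records entirely in RAM, assuming varchar fields
# are stored at their declared (fixed) width.

def sort_memory_mb(record_bytes: int, records_per_node: int) -> float:
    """Memory in MB needed to hold records_per_node records in RAM."""
    return record_bytes * records_per_node / (1024 * 1024)

# Hypothetical record layout: varchar(100) + varchar(50) + two 4-byte ints.
record_bytes = 100 + 50 + 4 + 4          # 158 bytes per record
records_per_node = 1_000_000             # e.g. 4M rows spread over 4 nodes

needed = sort_memory_mb(record_bytes, records_per_node)
print(f"approx. {needed:.0f} MB per node")

# If this estimate exceeds the Restrict Memory Usage setting, the sort
# spills the excess to scratch disk; if it fits, the sort can stay
# entirely in memory on that node.
```

Comparing such an estimate against the 20 MB default (or a proposed 100 MB setting) per stage is one way to decide which sorts are worth tuning first.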