Hashed File - Read and Write Cache Setting
Hi All,
We are on V9.1, using Server jobs only, and we use a lot of hashed files.
The default project setting for read and write cache is 128 MB.
Does raising that value, say to 256 MB, help improve the performance of jobs that read from or write to a hashed file?
Is there any impact from increasing these default values?
Also, does turning ON the Active-to-Active Link Performance option to enable row buffering (in process) aid performance?
Thanks,
NV
Re: Hashed File - Read and Write Cache Setting
nvalia wrote: The default project setting for read and write cache is 128 MB. Does raising that value, say to 256 MB, help improve the performance of jobs that read from or write to a hashed file?

No. In all my born Server days, I don't recall ever needing to change that. Except maybe once, and I no longer recall the circumstances.
nvalia also wrote: Also, does turning ON the Active-to-Active Link Performance option to enable row buffering (in process) aid performance?

Perhaps, probably not... I can't really make a blanket statement on that one. It's no silver bullet, however, and I'd suggest you search the forum here for previous discussions on the perils of simply flipping that switch.
-craig
"You can never have too many knives" -- Logan Nine Fingers
I think all those settings are intended to improve performance. Some will require additional memory to use, so make sure that physical memory is available. I would be curious if anyone has ever measured a performance improvement.
Choose a job you love, and you will never have to work a day in your life. - Confucius
I vaguely recall the hashed file read cache and write cache making some jobs run faster. If your jobs read from hashed files, also enable the pre-load file to memory option. It seems like that cache size defaulted to 128 MB but also had a maximum of 1024 MB.
Sure... was addressing the posted question of the hashed file size, there wasn't any mention of the actual read/writing caching options. There are a small number of situations where you wouldn't have read caching turned on and that's on by default from what I recall. Write caching on the other hand can be a bit of a double-edged sword.
-craig
"You can never have too many knives" -- Logan Nine Fingers
Changing the maximum size of the caches will have no effect unless you are actually using the caches. There will only be a performance improvement if you are currently demanding more cache than the presently configured amount (in which case there will be warnings in job logs).
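The point above can be illustrated with a toy LRU cache: once the cache is already big enough to hold the working set, raising the limit changes nothing, while an undersized cache does hurt. This is only a sketch in Python with made-up capacities and a random access pattern; the real hashed file cache is not literally an LRU dictionary.

```python
from collections import OrderedDict
import random

def hit_rate(capacity, working_set, n_reads=10_000, seed=1):
    """Simulate an LRU read cache; return the fraction of reads served from it."""
    rng = random.Random(seed)
    cache = OrderedDict()
    hits = 0
    for _ in range(n_reads):
        key = rng.randrange(working_set)  # which "row" is being read
        if key in cache:
            hits += 1
            cache.move_to_end(key)        # mark as most recently used
        else:
            cache[key] = True
            if len(cache) > capacity:
                cache.popitem(last=False)  # evict least recently used
    return hits / n_reads

# Once the cache already holds the whole working set, doubling it buys nothing:
print(hit_rate(capacity=128, working_set=100))
print(hit_rate(capacity=256, working_set=100))  # identical hit rate, more memory
# Only an undersized cache leaves room for improvement:
print(hit_rate(capacity=50, working_set=100))   # lower hit rate
```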
Enabling row buffering will primarily help jobs that process larger volumes of data, by causing more processes to be started to run the stages and by setting up buffers between them. As others have noted, memory is required for these buffers, plus there is a small overhead from the extra processes.
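The mechanism Ray describes can be sketched with two concurrent "stages" joined by a fixed-size row buffer: the upstream stage keeps producing while the downstream stage consumes, overlapping the work, at the cost of the buffer's memory. This is a minimal analogy in Python threads, not DataStage internals; the buffer size and row count are invented for illustration.

```python
import queue
import threading

def producer(buf, n):
    # Upstream "stage": writes rows into the buffer, blocking when it is full.
    for i in range(n):
        buf.put(i)
    buf.put(None)  # end-of-data marker

def consumer(buf, results):
    # Downstream "stage": runs concurrently, draining rows as they arrive.
    total = 0
    while (row := buf.get()) is not None:
        total += row
    results.append(total)

buf = queue.Queue(maxsize=1024)  # the "row buffer": its capacity costs memory
results = []
stages = [threading.Thread(target=producer, args=(buf, 1000)),
          threading.Thread(target=consumer, args=(buf, results))]
for t in stages:
    t.start()
for t in stages:
    t.join()
print(results[0])  # sum of rows 0..999
```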
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
qt_ky wrote: I'm not sure we're looking at the same post.

We are, sorry... having a bit of a brain fart trying to squeeze answers in between real work and not having access to my docs. More later when the codeine wears off and I get my crazy train back on the rails.
-craig
"You can never have too many knives" -- Logan Nine Fingers
I'm still going to say that increasing the size doesn't really 'improve performance' per se unless perhaps your data was not fitting in the cache and thus not cached at all. It would allow your data to be cached (and remove the warnings Ray mentions) but if it is already cached and you just make the space it lives in bigger that brings you nothing.
Ray - I don't remember it as being a maximum, I remember it as being the size. We had a fact build with a ton of cached hashed file lookups for the dimensions and I seem to recall bumping the size up and freaking out when the job just fell over dead. Each hashed file tried to allocate the new increased size (whether it actually needed it or not) and ran out of memory. At least that's what I recall.
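If Craig's recollection is right and each cached hashed file reserves the full configured size, the back-of-envelope arithmetic shows why a "small" bump can kill a job. The lookup count and sizes below are hypothetical, purely to make the multiplication concrete.

```python
def worst_case_cache_mb(n_cached_lookups, cache_size_mb):
    """Total memory demand if every cached hashed file reserves the
    full configured cache size, whether it needs it or not."""
    return n_cached_lookups * cache_size_mb

# A hypothetical fact build with 20 cached dimension lookups:
print(worst_case_cache_mb(20, 128))  # MB demanded at the 128 MB default
print(worst_case_cache_mb(20, 256))  # MB demanded after "just" doubling it
```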
-craig
"You can never have too many knives" -- Logan Nine Fingers