Hash Building job took longer than normal
Moderators: chulett, rschirm, roy
-
- Participant
- Posts: 182
- Joined: Thu Jun 16, 2005 2:05 am
Hash Building job took longer than normal
Hi All,
We have three hash-building jobs which usually load around 31564649 records into the hashed file. The hashed files grow by roughly 30000 to 50000 records daily. We use a static hashed file and regularly use the option to delete and create a new file. Normally this job runs in about 2 hours 30 minutes, but for the past week it has been taking around 3 hours 30 minutes. No major changes have happened in the system environment.
Is there any way to increase the processing speed and reduce the runtime? All the successor jobs wait for this hash job to complete, so we will obviously miss the SLA. Let me know your valuable thoughts on this.
Hash FileSize:
SalesHash
1310081024 Aug 7 01:17 OVER.30
3554734080 Aug 7 02:27 DATA.30
4897.250 CPU seconds used, -75664.600 seconds elapsed.
Thanks,
Satheesh
-
- Participant
- Posts: 3337
- Joined: Mon Jan 17, 2005 4:49 am
- Location: United Kingdom
Dynamic hashed files have the effect that, even with the "shrink" set, the OVER.30 overflow file never shrinks. A "RESIZE HashedFileName * * *" cures this and can increase performance significantly.
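As a sketch of how that RESIZE might be run (the project path below is a placeholder, and this assumes the engine shell accepts a command as an argument; otherwise type the RESIZE at the shell prompt):

```shell
# Move to the project directory on the server (placeholder path)
cd /u1/dsadm/Ascential/DataStage/Projects/MyProject

# Start the engine shell and resize the hashed file in place.
# "* * *" keeps the current type, modulo and separation, but
# rebuilds the file so bloated OVER.30 space can be reclaimed.
$DSHOME/bin/dssh "RESIZE SalesHash * * *"
```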
-
- Participant
- Posts: 54607
- Joined: Wed Oct 23, 2002 10:52 pm
- Location: Sydney, Australia
- Contact:
Given the size of OVER.30 and the fact that the complaint is not one of failure, we can assume 64-bit addressing. An answer to the "append/overwrite" question is keenly awaited before speculating further.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
Again, the big DATA.30 is usually due to the file growing, then being deleted again. In my current project I have a job that runs daily to shrink down those bloated files and scavenges over 10Gb daily from these files alone.
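A daily shrink job like that might be sketched as follows. This is a hypothetical example, not the actual job: the paths are placeholders, and it assumes pathed hashed files that need a temporary VOC pointer (via SETFILE) before they can be resized:

```shell
# Placeholder project directory
cd /u1/dsadm/Ascential/DataStage/Projects/MyProject

# Sweep a data directory of pathed hashed files; for each one,
# establish a temporary VOC pointer, resize it in place, then
# remove the pointer again.
for f in /data/hashfiles/*; do
    name=$(basename "$f")
    $DSHOME/bin/dssh "SETFILE $f $name OVERWRITING"
    $DSHOME/bin/dssh "RESIZE $name * * *"
    $DSHOME/bin/dssh "DELETE VOC $name"
done
```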
Arnd, I'm a little lost on the first sentence and unsure why deleting and recreating a hashed file would cause "bigness" issues. Seems to me that *not* doing that would more likely be the reason but I must be missing some subtlety this morning. And did you mean OVER.30?
I'm also curious about your job to resize, sounds like it might be handy. Is it fairly straight-forward for you because they're all already in an account so you can find them all via a query and resize them straight away? Or do you have another mechanism where certain directories are swept and temporary VOC records are established for the resize?
-craig
"You can never have too many knives" -- Logan Nine Fingers
"You can never have too many knives" -- Logan Nine Fingers
[duplicate]
[got distracted]
[failed to delete before replied to]
[redacted]
Last edited by chulett on Fri Aug 07, 2009 8:04 am, edited 1 time in total.
-craig
"You can never have too many knives" -- Logan Nine Fingers
"You can never have too many knives" -- Logan Nine Fingers
I think we still need answers/confirmation back from the OP - lots of guessing games going on here. Another possibility, which might explain the file not shrinking, is that a delete isn't actually happening and it's a clear instead. Again, though, input from the OP is needed to determine the cause further.
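To illustrate the difference being suggested (the CREATE.FILE parameters here are placeholders, not the OP's actual settings):

```shell
# CLEAR.FILE empties the records but may leave the grown file at
# its current size, so DATA.30/OVER.30 stay bloated:
CLEAR.FILE SalesHash

# DELETE.FILE followed by CREATE.FILE releases the space and
# starts again from the specified size (placeholder type, modulo
# and separation shown):
DELETE.FILE SalesHash
CREATE.FILE SalesHash 2 400001 4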
Mark Winter
<i>Nothing appeases a troubled mind more than <b>good</b> music</i>