Hash Building job took longer than normal

Post questions here relative to DataStage Server Edition for such areas as Server job design, DS Basic, Routines, Job Sequences, etc.

Moderators: chulett, rschirm, roy

satheesh_color
Participant
Posts: 182
Joined: Thu Jun 16, 2005 2:05 am

Hash Building job took longer than normal

Post by satheesh_color »

Hi All,

We have three hash building jobs that normally load around 31,564,649 records into the hashed file. Daily I see the hashed files grow by some 30,000 to 50,000 records. We use static hash file and use the option delete and create a new file regularly. Normally this job runs in around 2 hours 30 minutes; for the past week it has been taking around 3 hours 30 minutes. No major changes have happened in the system environment.

Is there any way to increase the processing speed to minimize the runtime? All the successor jobs wait for the above hash job to complete, so we will obviously miss the SLA. Let me know your valuable thoughts on this.

Hash FileSize:
SalesHash
1310081024 Aug 7 01:17 OVER.30
3554734080 Aug 7 02:27 DATA.30

4897.250 CPU seconds used, -75664.600 seconds elapsed.

Thanks,
Satheesh
Sainath.Srinivasan
Participant
Posts: 3337
Joined: Mon Jan 17, 2005 4:49 am
Location: United Kingdom

Post by Sainath.Srinivasan »

Do you recalculate the static hashed file structure based on the daily append, or is it static all the time?

Check whether the growth halts after some time; you appear to have crossed the 2GB size.

Are your DataStage and hashed file set up for 64-bit architecture?
miwinter
Participant
Posts: 396
Joined: Thu Jun 22, 2006 7:00 am
Location: England, UK

Post by miwinter »

You appear to have a large amount of records held in overflow. The hashed file structure needs to be revisited to ensure it is fit for purpose (size/volume).
Mark Winter
<i>Nothing appeases a troubled mind more than <b>good</b> music</i>
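Mark's sizing point can be sanity-checked with quick arithmetic. This is only a sketch: the separation of 4 (2KB groups) and the 80% target fill are assumptions, not values from the thread; the record count and byte counts are the ones posted above.

```python
# Rough sizing arithmetic for a static hashed file (a sketch; the
# separation and target fill are assumed, the record count and file
# sizes come from the original post).
import math

def next_prime(n):
    """Smallest prime >= n (trial division; fine at these sizes)."""
    def is_prime(k):
        if k < 2:
            return False
        for d in range(2, math.isqrt(k) + 1):
            if k % d == 0:
                return False
        return True
    while not is_prime(n):
        n += 1
    return n

records = 31_564_649
data_bytes = 3_554_734_080 + 1_310_081_024   # DATA.30 + OVER.30 from the post
avg_record = data_bytes / records             # works out to ~154 bytes/record

separation = 4                                # assumed: groups of 4 * 512 = 2048 bytes
group_bytes = separation * 512
target_fill = 0.8                             # assumed: leave ~20% headroom per group

min_modulus = math.ceil(data_bytes / (group_bytes * target_fill))
modulus = next_prime(min_modulus)             # a prime modulus hashes more evenly
print(avg_record, modulus)
```

A modulus sized this way keeps most records in their primary groups, which is the point of Mark's "fit for purpose" comment: with too small a modulus the data lands in overflow, and every read pays for the chain walk.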
ArndW
Participant
Posts: 16318
Joined: Tue Nov 16, 2004 9:08 am
Location: Germany
Contact:

Post by ArndW »

Dynamic hashed files have the effect that, even with the "shrink" set, the OVER.30 overflow file never shrinks. A "RESIZE HashedFileName * * *" cures this and can increase performance significantly.
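For anyone following along, that RESIZE is issued from a TELNET/uvsh session or the Administrator client's Command window, against the VOC name of the file; the "* * *" arguments keep the current type, modulus and separation while rebuilding the file, which is what reclaims the dead space in OVER.30. A sketch using the file name from the post; the SETFILE step (with a hypothetical path) is only needed if the hashed file lives in a directory rather than the project account:

```
SETFILE /some/path/SalesHash SalesHash OVERWRITING
RESIZE SalesHash * * *
```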
Sainath.Srinivasan
Participant
Posts: 3337
Joined: Mon Jan 17, 2005 4:49 am
Location: United Kingdom

Post by Sainath.Srinivasan »

Oh!! Noticed just now.

But the OP was talking about static ones!!??
miwinter
Participant
Posts: 396
Joined: Thu Jun 22, 2006 7:00 am
Location: England, UK

Post by miwinter »

Indeed he/she did, that's what I thought:
"We use static hash file and use the option delete and create a new file regularly"
Mark Winter
<i>Nothing appeases a troubled mind more than <b>good</b> music</i>
chulett
Charter Member
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

Problem is there are no ".30" files in a Static hashed file. Pop quiz, boys and girls - a Type 30 hashed file is also known as?
-craig

"You can never have too many knives" -- Logan Nine Fingers
miwinter
Participant
Posts: 396
Joined: Thu Jun 22, 2006 7:00 am
Location: England, UK

Post by miwinter »

:lol: Good point...

Answer: Dynamic
Mark Winter
<i>Nothing appeases a troubled mind more than <b>good</b> music</i>
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia
Contact:

Post by ray.wurlod »

Given the size of OVER.30 and the fact that the complaint is not one of failure, we can assume 64-bit addressing. An answer to the "append/overwrite" question is keenly awaited before speculating further.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
chulett
Charter Member
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

Given the fact that they allegedly "use the option delete and create a new file regularly" I would not assume 64bit addressing. Still, clarifications keenly awaited. :wink:
-craig

"You can never have too many knives" -- Logan Nine Fingers
ArndW
Participant
Posts: 16318
Joined: Tue Nov 16, 2004 9:08 am
Location: Germany
Contact:

Post by ArndW »

Again, the big DATA.30 is usually due to the file growing, then being deleted again. In my current project I have a job that runs daily to shrink down those bloated files and scavenges over 10Gb daily from these files alone.
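Arnd's sweep job could take many forms; here is a minimal sketch of the "directories are swept" variant, assuming pathed type-30 hashed files can be identified by the presence of a DATA.30 file. The function names and the 1GB threshold are made up for illustration; the actual RESIZE would still have to be run through uvsh against each path found.

```python
# Sketch of a sweep that finds dynamic (type 30) hashed files under a
# directory tree and reports DATA.30/OVER.30 sizes, so bloated files
# can be picked out for a RESIZE. Names and threshold are assumptions.
import os

def find_hashed_files(root):
    """Yield (dir, data_bytes, over_bytes) for each type-30 hashed file under root."""
    for dirpath, dirnames, filenames in os.walk(root):
        if "DATA.30" in filenames:
            data = os.path.getsize(os.path.join(dirpath, "DATA.30"))
            over_path = os.path.join(dirpath, "OVER.30")
            over = os.path.getsize(over_path) if os.path.exists(over_path) else 0
            yield dirpath, data, over

def report_bloated(root, over_threshold=1 << 30):
    """Return hashed files whose overflow exceeds the threshold (default 1GB)."""
    return [(d, data, over) for d, data, over in find_hashed_files(root)
            if over > over_threshold]
```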
chulett
Charter Member
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

Arnd, I'm a little lost on the first sentence and unsure why deleting and recreating a hashed file would cause "bigness" issues. Seems to me that *not* doing that would more likely be the reason but I must be missing some subtlety this morning. And did you mean OVER.30? :?

I'm also curious about your job to resize, sounds like it might be handy. Is it fairly straight-forward for you because they're all already in an account so you can find them all via a query and resize them straight away? Or do you have another mechanism where certain directories are swept and temporary VOC records are established for the resize?
-craig

"You can never have too many knives" -- Logan Nine Fingers
chulett
Charter Member
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

[duplicate]
[got distracted]
[failed to delete before replied to]
[redacted]
Last edited by chulett on Fri Aug 07, 2009 8:04 am, edited 1 time in total.
-craig

"You can never have too many knives" -- Logan Nine Fingers
miwinter
Participant
Posts: 396
Joined: Thu Jun 22, 2006 7:00 am
Location: England, UK

Post by miwinter »

I think we still need answers/confirmation back from the OP - lots of guessing games going on here. Another possibility, which might explain the file not shrinking, is that a delete isn't actually happening and it's a clear instead. Again though, that requires input from the OP to determine the cause further.
Mark Winter
<i>Nothing appeases a troubled mind more than <b>good</b> music</i>
Post Reply