analyze.shm
hi,
I want to find out how my hashed file is tuned.
I'm trying to run the analyze.shm command, but it is not in the VOC.
How do I add it to the VOC?
Thanks
Can someone tell me the syntax for this command?
Code: Select all
>ANALYZE.FILE
File name = "/dsadm/hash/myhashfile"
Must specify file name.
First establish a pointer in the VOC by issuing the command
Code: Select all
SETFILE /dsadm/hash/myhashfile myhashfile
Next
Code: Select all
ANALYZE.FILE myhashfile
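For reference, SETFILE just writes an F-type pointer record into your VOC. A minimal sketch of what the resulting entry might look like (field layout assumed from the standard UniVerse F-descriptor; your dictionary field may differ):
Code: Select all
>CT VOC myhashfile

     myhashfile
0001 F
0002 /dsadm/hash/myhashfile
0003 D_VOC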
Narasimha Kade
Finding answers is simple, all you need to do is come up with the correct questions.
[quote="narasimha"]First establish a pointer in the VOC by issuing the command
I get this message
what do you want to call it in your VOC file =
Code: Select all
SETFILE /dsadm/hash/myhashfile myhashfile;
what do you want to call it in your VOC file =
thanks.
Here is the output:
File type .................. DYNAMIC
Hashing Algorithm .......... GENERAL
No. of groups (modulus) .... 12003 current ( minimum 1 )
Large record size .......... 1628 bytes
Group size ................. 2048 bytes
Load factors ............... 80% (split), 50% (merge) and 80% (actual)
Total size ................. 32661504 bytes
Is it badly tuned?
Thanks Narasimha.
The issue is that we are trying to do lookups from hashed files and the throughput is very low, around 17 rows/sec.
Is there an issue with the hashed file index, or do we need to recreate the hashed files?
Is there anything else I can do to improve the performance?
Thanks
Without the STATISTICS keyword, ANALYZE.FILE reports only the tuning settings (the parameters that can be set when the hashed file is created). There is no way to tell from that whether the hashed file is well tuned.
Add this keyword to have sizing information reported.
Code: Select all
ANALYZE.FILE myhashedfile STATISTICS
Note, however, that a dynamic hashed file is a moving target; as the data volume to be stored in it changes, it will automatically alter its shape (in particular the number of groups, or modulus).
Therefore "tuned" is an ephemeral characteristic.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
Thanks Ray. I ran it as you suggested.
Here is the output:
File type .................. DYNAMIC
Hashing Algorithm .......... GENERAL
No. of groups (modulus) .... 12003 current ( minimum 1, 0 empty,
3896 overflowed, 1 badly )
Number of records .......... 297932
Large record size .......... 1628 bytes
Number of large records .... 0
Group size ................. 2048 bytes
Load factors ............... 80% (split), 50% (merge) and 80% (actual)
Total size ................. 32661504 bytes
Total size of record data .. 18122595 bytes
Total size of record IDs ... 1787593 bytes
Unused space ............... 12747220 bytes
Total space for records .... 32657408 bytes
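If I am reading the report right, the 80% actual load checks out against the other numbers (my arithmetic):
Code: Select all
(18122595 + 1787593) / (12003 * 2048) = 19910188 / 24582144 ~ 81%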
Any advice?
Thanks for the info guys,
Our issue is still not resolved. We are doing a couple of lookups using hashed files and the throughput is very slow, around 17 rows/sec.
I tried tuning the performance by increasing row buffering to 1024 KB, but it still does not help.
What other options do I have? We are using a Dynamic 30 hashed file; can I resize it? The memory on the box is also at 100% when I run nmon, and the hashed files use the pre-load file to memory option; can I disable that?
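For reference, the resize I have in mind would be something like this (syntax as best I understand it from the UniVerse docs, and the MINIMUM.MODULUS value is just a placeholder, not a recommendation):
Code: Select all
>CONFIGURE.FILE myhashfile MINIMUM.MODULUS 12007
>RESIZE myhashfile * * *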
My job design is
Code: Select all
I/P --> Link Collector --> Transformer <--Hashed File
|
|
Transformer <-- Hashed file
| |
| HF --> Transformer <-- Hashed File
I/P |
|
HF --> Transformer
|
Seq File
Appreciate your input.
Thanks