Hi there,
I am processing a large result set (6M+ records) through the USPREP rule set.
My question is: what is the best way to run such a result set through the Standardize stage efficiently? Right now I'm processing at about 1,240 rec/sec.
It appears that the Standardize stage is really slow. I can understand why; it's doing a lot of work. I'm just wondering what some of the best practices are for making this run efficiently.
Thanks.
processing large result sets
If the system is not running out of resources, use a configuration file with more nodes.
It might be possible to write a more efficient rule set, but the benefit is probably not worth the cost.
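For illustration, here is a minimal sketch of a two-node parallel configuration file. The host name and disk paths are placeholders and would need to match your environment; add further node entries (up to roughly the number of CPU cores available) to increase the degree of parallelism.

```
{
	node "node1"
	{
		fastname "etlserver"
		pools ""
		resource disk "/data/ds/datasets" {pools ""}
		resource scratchdisk "/data/ds/scratch" {pools ""}
	}
	node "node2"
	{
		fastname "etlserver"
		pools ""
		resource disk "/data/ds/datasets" {pools ""}
		resource scratchdisk "/data/ds/scratch" {pools ""}
	}
}
```

Point $APT_CONFIG_FILE at this file (as a job parameter or a project default) and the Standardize stage will run one partition per node, so throughput should scale roughly with the node count as long as CPU, not I/O, is the bottleneck.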
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.