Posted: Fri Jun 15, 2007 9:05 am
That's enough information, Ken. As always, you rock, and we learn.
Regards,
chulett wrote: How much benefit is there to using a Distributed Hashed File in this situation? I've got a similar issue where multiple copies of a particular hashed file are being created by an MI job that mods the input stream and distributes the result across the X hashed files.
However, the final job that consumes them doesn't currently follow the same rules for some reason, so it ends up with all of them in it. Meaning they all get looked up against despite the known fact that only one will get a hit. Of course, this 'adversely impacts' the processing speed, and the all-important rows/second metric goes into the toilet.
I've been exploring re-arranging everything to do exactly what Ken stated (in my spare time, ha!), but with the reminder that DHFs exist, I'm wondering if they might be a 'quickie' way to get this back to 'one lookup' in the interim. Is it worth considering? I haven't looked at them in detail for three years, not since I sat through a presentation at the 2004 Ascential World on the subject. Pros/cons?

Somehow, some way, you're going to have to hit them all. But with a DHF you only hit the appropriate one for each record. I think, on balance, that's a "pro".
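To see why that's a "pro", the difference can be sketched generically. This is not DataStage code, just a hypothetical Python model of the two lookup strategies: probing every one of the X separate hashed files versus recomputing the distribution rule and probing only the one partition that can possibly hold the key (the DHF-style behaviour described above). All names here are invented for illustration.

```python
# Illustrative sketch only -- not DataStage. Models why routing each
# key to its one partition beats probing all X hashed files.

def build_partitions(records, num_parts):
    """Split (key, value) records across num_parts dicts, like X hashed files."""
    parts = [dict() for _ in range(num_parts)]
    for key, value in records:
        # Same distribution rule the writing job would use.
        parts[hash(key) % num_parts][key] = value
    return parts

def lookup_all(parts, key):
    """Current job: probe every partition even though only one can hit."""
    probes = 0
    for part in parts:
        probes += 1
        if key in part:
            return part[key], probes
    return None, probes

def lookup_routed(parts, key):
    """DHF-style: recompute the distribution rule and probe exactly one."""
    part = parts[hash(key) % len(parts)]
    return part.get(key), 1

parts = build_partitions([(k, k * 10) for k in range(1000)], num_parts=8)
value_a, probes_a = lookup_all(parts, 500)
value_b, probes_b = lookup_routed(parts, 500)
# Both return the same value, but the routed lookup always costs one
# probe, while the scan costs up to num_parts probes per record.
```

The point of the sketch: both approaches find the same row, but the per-record cost of the scan grows with X, which is exactly the rows/second hit described in the quote.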