Maximum Number of Lookups in single transformer

Posted: Tue Jul 24, 2007 1:11 pm
by jpr196
Hi All,

This topic isn't really regarding an error, but best practice. Is there any magical number of lookups you can do in one transformer before you should use a second transformer? Is there a performance difference between doing 20 lookups in one transformer versus 10 lookups each in two transformers, for example? Is it all relative?

Thanks

Posted: Tue Jul 24, 2007 1:36 pm
by DSguru2B
It really depends; as you said, it's relative. If having 20 lookups does not slow down the job, then go for it. Otherwise I would recommend doing it in two separate jobs, e.g. keeping the 5 heavy lookups in one job and the 15 small lookups in another.
This will also help you with restartability.
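To illustrate the idea (a minimal Python sketch, not DataStage code — the lookup tables, row shape, and file path are all hypothetical): splitting the heavy lookups into a first pass that lands its output means the second pass can be re-run on its own if it fails, without repeating the expensive work.

```python
# Hypothetical sketch of the two-job split: pass 1 applies the heavy
# lookups and lands the result; pass 2 applies the small lookups and
# is restartable on its own from the landed file.
import json

# Stand-in lookup tables (assumed; in DataStage these would be
# hashed files or reference links).
heavy_lookups = {f"heavy{i}": {"k": f"v{i}"} for i in range(5)}   # 5 heavy tables
small_lookups = {f"small{i}": {"k": f"s{i}"} for i in range(15)}  # 15 small tables

def job1(rows, out_path="stage1.json"):
    # Pass 1: apply only the heavy lookups, then land the result.
    enriched = [dict(r, **{name: tbl.get(r["key"])
                           for name, tbl in heavy_lookups.items()})
                for r in rows]
    with open(out_path, "w") as f:
        json.dump(enriched, f)
    return out_path

def job2(in_path="stage1.json"):
    # Pass 2: reads the landed file, so a failure here never forces
    # the heavy lookups in job1 to be repeated.
    with open(in_path) as f:
        rows = json.load(f)
    return [dict(r, **{name: tbl.get(r["key"])
                       for name, tbl in small_lookups.items()})
            for r in rows]
```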

Posted: Tue Jul 24, 2007 2:10 pm
by ray.wurlod
Using multiple Transformer stages and inter-process row buffering can give you some performance gains, particularly if you have more than one processor.
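The mechanism Ray describes can be sketched outside DataStage (a hypothetical Python illustration, not the product's implementation): two transform stages connected by a buffer become two processes connected by a queue, so each stage can occupy its own processor instead of one process doing all the work serially.

```python
# Minimal sketch of inter-process row buffering: two "transformer"
# stages as separate processes, joined by a bounded queue.
from multiprocessing import Process, Queue

def stage1(inp, buf):
    # First "transformer": half of the derivations/lookups.
    for row in inp:
        buf.put(row * 2)
    buf.put(None)  # end-of-data marker

def stage2(buf, outq):
    # Second "transformer": the remaining work, in its own process.
    while (row := buf.get()) is not None:
        outq.put(row + 1)
    outq.put(None)

def run_pipeline(data):
    buf, outq = Queue(), Queue()
    p1 = Process(target=stage1, args=(data, buf))
    p2 = Process(target=stage2, args=(buf, outq))
    p1.start(); p2.start()
    results = []
    # Drain the output queue before joining, to avoid blocking on
    # the queue's feeder thread.
    while (r := outq.get()) is not None:
        results.append(r)
    p1.join(); p2.join()
    return results

if __name__ == "__main__":
    print(run_pipeline(range(5)))  # [1, 3, 5, 7, 9]
```

On a single-CPU box the two processes just time-slice, which is why the gain Ray mentions shows up mainly with more than one processor.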

Re: Maximum Number of Lookups in single transformer

Posted: Wed Jul 25, 2007 7:09 am
by reddy.vinod
jpr196 wrote: [quoted above]
Hi,
Try to use no more than 7 lookups on a single transformer, else your performance will go down. If you need more than 7, take the rest in another transformer.

Posted: Wed Jul 25, 2007 7:26 am
by ArndW
There is no particular reason for '7' references. What I do with a particularly complex Transformer stage on a multi-CPU system is monitor the %CPU of that stage. If it is close to 100%, the job can be made somewhat faster by splitting it into two Transformer stages (assuming inter-process buffering is enabled). This splitting can be continued until CPU use on a single Transformer is no longer the bottleneck. Note that the procedure applies in similar fashion to both server and PX jobs.
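The monitoring step Arnd describes can be done from the command line (a hedged sketch; the PID value is a placeholder you would take from the job monitor or `ps`, and the 5-second interval is arbitrary):

```shell
#!/bin/sh
# Sample the %CPU of a running stage process until it exits.
# If pcpu sits near 100 (one full CPU), the stage is CPU-bound
# and is a candidate for splitting into two Transformers.
PID=12345            # placeholder: the Transformer process's PID
while kill -0 "$PID" 2>/dev/null; do
    ps -p "$PID" -o pid=,pcpu=,comm=
    sleep 5
done
```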