
Aggregator Output

Posted: Tue Dec 15, 2009 8:57 pm
by dsconultant
Hi,
I have a job design as follows.

Database--------->lookup------->Xfm1---->aggregator-------->TRGDatabse

In the Xfm1 stage I add a dummy column called Key_Col and default its value to 1. In the Aggregator stage I group by Key_Col, count the rows, and insert the count into the TRGDatabse. Everything works fine except when there are no records from the input. When the lookup returns no rows I need to insert count = 0 into the TRGDatabse. I have tried different options in the Aggregator stage but have not been able to handle this scenario. What is the best way to handle this?
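In SQL terms the design amounts to something like the sketch below (source_rows is just a made-up stand-in for whatever survives the lookup), which shows why the empty case is awkward: a grouped count over an empty set produces no groups at all, so nothing ever reaches the target.

SELECT key_col, COUNT(*) AS row_count
FROM (
    SELECT 1 AS key_col        -- the dummy Key_Col derivation from Xfm1
    FROM   source_rows
) t
GROUP BY key_col;
-- With source_rows empty this returns no rows at all, so count = 0 never appears.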

Posted: Tue Dec 15, 2009 9:32 pm
by chulett
Zero rows in equals zero rows out, I'm afraid. Can you union your input sql to include a 'dummy' row that you recognize as zero in your processing when that's all you see?
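A rough sketch of that idea - the table, the column names and the Oracle-style DUAL are all placeholders for whatever your source really looks like:

SELECT src_col_1, src_col_2, 'N' AS is_dummy
FROM   source_table
UNION ALL
SELECT -1, NULL, 'Y'           -- the recognizable 'dummy' row
FROM   dual;                   -- one-row table; adjust for your database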

Posted: Tue Dec 15, 2009 9:47 pm
by dsconultant
I do not think I can include a union query in my input SQL, because the dummy row would fail in the lookup anyway and never reach the aggregator.

Posted: Wed Dec 16, 2009 3:43 am
by Sainath.Srinivasan
There are many ways to handle this - apart from union.

1.) You can create another stream with only one row using a Row Generator stage.

That one row can carry a dummy column - say with value 1 - which you look up against your aggregator output after the aggregation, and then funnel the failures of that lookup together with the aggregator output.

You will have to include another column for this.

2.) Make the join within the database and send the resulting count to DataStage (a sketch follows this list).

3.) Use link counts on the input and output of the lookup stage and, based on those, run a dummy job to insert the zeros.
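For option 2.), a minimal sketch of pushing the count into the database - source_table, lookup_table and match_key are made-up names:

SELECT COUNT(l.match_key) AS row_count
FROM   source_table s
JOIN   lookup_table l
ON     l.match_key = s.match_key;
-- An ungrouped COUNT always returns exactly one row, so even an empty
-- join result still hands a 0 back to DataStage.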

Posted: Wed Dec 16, 2009 8:42 am
by chulett
Of course there are many ways, that was only the opening salvo. :wink:

The whole point was to understand that you must process something to have any output. Or as noted, use a job that just handles the 'nothing to do today' part and run one or the other.