Matching Row Count
Moderators: chulett, rschirm, roy
-
- Premium Member
- Posts: 497
- Joined: Sun Dec 17, 2006 11:52 pm
- Location: Kolkata
- Contact:
Matching Row Count
Hi All,
I need to develop a job that will give me the number of rows pulled from the source tables and the number of rows loaded into the final table, and whether they are equal or not.
-
- Participant
- Posts: 54607
- Joined: Wed Oct 23, 2002 10:52 pm
- Location: Sydney, Australia
- Contact:
Create an after-job subroutine that uses DSGetLinkInfo() to retrieve the row counts, then compares them. What you need to do with that number appears not to be in your specification.
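In DataStage BASIC the two counts would come from DSGetLinkInfo() with the DSJ.LINKROWCOUNT info type; the comparison itself is trivial. A minimal sketch of that check, written in Python purely to illustrate the logic (the counts are passed in directly here rather than fetched from the job):

```python
# Sketch of the after-job row-count check. In a real after-job
# subroutine the two counts would come from DSGetLinkInfo() with
# DSJ.LINKROWCOUNT; here they are supplied as plain integers.
def counts_match(source_rows: int, target_rows: int) -> str:
    """Return a short status message comparing the two link counts."""
    if source_rows == target_rows:
        return "MATCH: %d rows" % source_rows
    return "MISMATCH: source=%d target=%d diff=%d" % (
        source_rows, target_rows, source_rows - target_rows)

print(counts_match(1000, 1000))  # both links agree
print(counts_match(1000, 997))   # rows were dropped somewhere
```

What you then do with a mismatch (abort the job, write a log row, send a notification) is the part the specification needs to say.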
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
-
- Participant
- Posts: 24
- Joined: Wed Oct 17, 2007 1:28 am
- Location: hyderabad
Re: Matching Row Count
pravin1581 wrote: Hi All, I need to develop a job that will give me the number of rows pulled from the source tables and the number of rows loaded into the final table and whether they are equal or not.
Use a Peek stage, or the environment variable APT_DUMP_SCORE, which will report the number of records collected; with this you can solve the problem.
say cheers always
So? Using a consistent stage and link naming convention in your jobs can ease that difficulty immensely. For example contemplate suffixes such as _INP, _OUT, _REJ, _LKP (or prefixes if you prefer) on your links that connect to the "outside world".
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
ray.wurlod wrote: So? Using a consistent stage and link naming convention in your jobs can ease that difficulty immensely. For example contemplate suffixes such as _INP, _OUT, _REJ, _LKP (or prefixes if you prefer) on your links that connect to the "outside world".
I think I misinterpreted the problem. The situation is that I need to verify the summation of a column from two tables. I have approached the problem this way: we take the required column from each table and find its sum with an Aggregator; now we need to compare the values of the two columns and carry their difference in summation into a file.
Ah, that's rather a different problem. It is solved using a "fork join" technique - split your data into two streams using a Copy stage, run one stream through the Aggregator stage, and re-unite them using a Join stage. The join key and the grouping key are probably the same.
Code: Select all
------> Copy stage ---------------> Join Stage ------>
| ^
| |
+------> Aggregator Stage ------+
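The fork-join above can be mimicked outside DataStage to see what each stage contributes. A rough Python sketch, with invented column names (`acct`, `amount`) for illustration: the Copy stage becomes a second reference to the same rows, the Aggregator becomes a per-key sum, and the Join attaches the group total back onto every detail row.

```python
from collections import defaultdict

# Fork-join sketch: one stream of detail rows is "copied"; one copy is
# aggregated per group key (the Aggregator stage), then the aggregate is
# joined back onto every detail row (the Join stage, key = group key).
rows = [
    {"acct": "A", "amount": 10},
    {"acct": "A", "amount": 15},
    {"acct": "B", "amount": 7},
]

# Aggregator stage: sum of amount per acct
totals = defaultdict(int)
for r in rows:
    totals[r["acct"]] += r["amount"]

# Join stage: attach the group total to each original detail row
joined = [dict(r, acct_total=totals[r["acct"]]) for r in rows]
for r in joined:
    print(r)
```

Because the join key and the grouping key are the same, every detail row finds exactly one aggregate row to pair with.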
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
ray.wurlod wrote: Ah, that's rather a different problem. It is solved using a "fork join" technique - split your data into two streams using a Copy stage, run one stream through the Aggregator stage, and re-unite them using a Join stage. The join key and the grouping key are probably the same.
Code: Select all
------> Copy stage ---------------> Join Stage ------>
           |                            ^
           |                            |
           +------> Aggregator Stage ---+
The data is coming in two different streams from two different tables, each summed through an Aggregator, so how can I use the Join stage? I will not get the values when they differ. I have joined them using a Funnel, but the problem is that they become one column and I am unable to calculate the difference between them or split the columns apart. The design that we have implemented is as follows:
Code: Select all
Table 1------Aggregator
Table 2------Aggregator
I have not depicted the flow after the Aggregators, as I am not sure this is the correct way. As mentioned above, we have implemented a Funnel after the Aggregators and then a Transformer.
Any sort of help is solicited.
Thanks
I have managed to get the summed values and their difference into separate columns in a file, but the design is becoming very cumbersome.
Code: Select all
Table 1-------> Aggr-------->
                             |
                             Join ------->Transformer----File
                             |
Table 2-------->Aggr-------->
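The reconciliation the design above performs can be condensed to a few lines. A Python sketch of the same flow, with invented table contents for illustration: each Aggregator reduces its table to one sum, the Join pairs the two one-row aggregates, and the Transformer derives the difference as a third column.

```python
# Sketch of the two-table reconciliation: sum a column from each
# "table" (the two Aggregators), pair the two sums (the Join), and
# compute their difference (the Transformer). Values are invented.
table1 = [100, 250, 50]
table2 = [100, 250, 40]

sum1 = sum(table1)  # Aggregator on table 1
sum2 = sum(table2)  # Aggregator on table 2

# Join + Transformer output: both sums and the difference as columns
result = {"sum_table1": sum1, "sum_table2": sum2, "difference": sum1 - sum2}
print(result)
```

Keeping the two sums in separate columns, rather than funnelling them into one, is what makes the difference calculation a simple derivation in the Transformer.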