Hi
I have a job that uses an Aggregator stage. A hash file is the input to the Aggregator; I group by two columns and take the min of a third column. There are 2.5 million records in the hash file, and the job aborts at 1.51M rows every time I run it. It aborts at the same row number each time, and the log has no information except "ABNORMAL TERMINATION OF AGGREGATOR STAGE DETECTED". If anyone has insight into this problem, please let me know.
Thanks
S
Error in Aggregator Stage
I wouldn't advise using the DataStage Aggregator plug-in when you have millions of unsorted records. The efficient way is to put your records into a temporary staging table and use the power of your database engine to do the aggregation (GROUP BY with MIN, MAX, and so on). You will be amazed to see the difference between the two.
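For example, assuming the staging table is called STG_MIN with group columns GRP_COL1 and GRP_COL2 and value column VAL_COL (names are illustrative only), the whole aggregation collapses to a single query:

Code:
-- Bulk-load the 2.5M rows into the staging table first, then let
-- the database engine do the grouping (table/column names assumed).
SELECT GRP_COL1,
       GRP_COL2,
       MIN(VAL_COL) AS MIN_VAL
FROM   STG_MIN
GROUP  BY GRP_COL1, GRP_COL2;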
Thanks
Regards
Siva
Listening to the Learned
"The most precious wealth is the wealth acquired by the ear Indeed, of all wealth that wealth is the crown." - Thirukural By Thiruvalluvar
I agree with Rasi on avoiding the Aggregator stage for huge data volumes. As suggested, you can dump the data into a temp table and run a GROUP BY query on it to find the min value.
Another alternative is to dump the data into a hash file with a Transformer in between. Make the keys of your target hash file the same as your group-by columns. In the Transformer, use stage variables to find the minimum between the current value and the previous one. The minimum will keep overwriting the same record, since the key of the hash file matches your group columns. A sketch of the derivations is below.
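Roughly, the stage variable derivations would look like this (a sketch only; InLink, GRP_COL1, GRP_COL2 and VAL_COL are made-up names, and it assumes the input is sorted on the two group columns so that the last row written for each key carries the group minimum). Define them in this order and initialise svPrevKey to "":

Code:
* Stage variables, evaluated top to bottom for each row:
svKey     -> InLink.GRP_COL1 : "|" : InLink.GRP_COL2
svMin     -> If svKey <> svPrevKey Then InLink.VAL_COL
             Else If InLink.VAL_COL < svMin Then InLink.VAL_COL Else svMin
svPrevKey -> svKey

Derive the output value column from svMin and key the target hash file on the two group columns; each new minimum for a key overwrites the previous row, so once the last row of a group has passed through, the hash file holds that group's true minimum.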