
Aggregator

Posted: Mon Feb 09, 2009 3:40 pm
by kittu.raja
Hi,

I want to find out the record count of a file, so I am using this design:

Seq File ----> Aggregator ----> Seq File

I am getting a warning saying: "Aggregator_7: When checking operator: Operator of type "APT_HashedGroup2Operator": will partition despite the
preserve-partitioning flag on the data set on input port 0."

I am using hash partitioning in the Aggregator stage. I searched the forums but did not find a solution.
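(For anyone who wants to sanity-check the expected result outside DataStage, here is a minimal sketch of the same row-count aggregation in plain Python. This is not DataStage code, and the file name is just an example.)

```python
# Minimal sketch of the job's logic: count the records in a file,
# which is what the Aggregator's count output produces.
# "input.txt" is a made-up example file name.
from pathlib import Path

def count_records(path):
    # One record per line, matching a simple Sequential File read.
    with open(path) as f:
        return sum(1 for _ in f)

# Example: write three records, then count them.
Path("input.txt").write_text("a\nb\nc\n")
print(count_records("input.txt"))  # prints 3
```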

Any help would be appreciated.

Thanks,

Posted: Mon Feb 09, 2009 4:27 pm
by ray.wurlod
How is the Preserve Partitioning flag set on the stage that is upstream of the Aggregator?

Re: Aggregator

Posted: Mon Feb 09, 2009 4:33 pm
by betterthanever
Set the Preserve Partitioning flag to "Clear" on the upstream stage.

Re: Aggregator

Posted: Tue Feb 10, 2009 9:16 am
by kittu.raja
I set it to Clear, but I am still getting this warning.

Posted: Tue Feb 10, 2009 9:18 am
by kittu.raja
Ray, I set it to Clear in all the stages, but I am still getting this warning.

Re: Aggregator

Posted: Tue Feb 10, 2009 11:08 am
by betterthanever
Set it to "Clear" on the downstream stage and see.

Re: Aggregator

Posted: Tue Feb 10, 2009 11:25 am
by kittu.raja
There are no more downstream stages to set to Clear.

Re: Aggregator

Posted: Tue Feb 10, 2009 12:45 pm
by betterthanever
The Aggregator is not connected to any downstream stages?

Re: Aggregator

Posted: Tue Feb 10, 2009 2:30 pm
by kittu.raja
It is connected to a Sequential File stage.

Re: Aggregator

Posted: Tue Feb 10, 2009 2:42 pm
by betterthanever
On the output side of the Aggregator stage, set Preserve Partitioning to "Clear".

Re: Aggregator

Posted: Tue Feb 10, 2009 2:46 pm
by kittu.raja
Yes, I did, but I am getting the same warning. I set all the stages to Clear in the Preserve Partitioning tab, but no use.

Re: Aggregator

Posted: Fri Jul 07, 2017 1:25 pm
by parilango
I am also getting the same warning, even after setting Preserve Partitioning to Clear.

Any solution, please?

Posted: Fri Jul 07, 2017 1:49 pm
by chulett
parilango - It would be best if you started a new post with all of the details of your version of this issue, including the job design and the actual error message.