main_program: "same" operator may not have a parti

Posted: Sun Jul 15, 2007 11:29 am
by Titto
Hi,

A couple of my jobs are aborting with the following error:

Code: Select all

"same" operator may not have a partitioner or collector on its input
The same job worked fine in the other environment - there were no changes to the jobs between the Dev and Prod environments.

Any ideas what could be the cause?

Thanks,
T

Posted: Sun Jul 15, 2007 1:16 pm
by ray.wurlod
Clearly there is at least one difference between environments.

Dump the score in each and look for those differences. Does the error message indicate which stage/operator is generating the error?
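
For example, one way to get the score from a command-line run (MyProject and MyJob are placeholders - substitute your own names):

Code: Select all

# Enable score dumping for this session, then run the job;
# the score appears as an informational message in the job log.
export APT_DUMP_SCORE=True
dsjob -run -jobstatus MyProject MyJob

You can also set $APT_DUMP_SCORE as a job parameter, or project-wide in Administrator.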

Posted: Sun Jul 15, 2007 4:24 pm
by Titto
ray.wurlod wrote: Clearly there is at least one difference between environments.

Dump the score in each and look for those differences. Does the error message indicate which stage/operator is generating the error? ...
Hi Ray,

The job is aborting within 2 seconds - it is not mentioning any stage names. It is just saying:

Code: Select all

main_program: This step has no datasets.
It has 1 operator:
op0[1p] {(sequential APT_CombinedOperatorController:
      (APT_LicenseCountOp in APT_LicenseOperator)
      (APT_LicenseCheckOp in APT_LicenseOperator)
    ) on nodes (
      node1[op0,p0]
    )}
It runs 1 process on 1 node.
and it aborts with:

Code: Select all

main_program: "same" operator may not have a partitioner or collector on its input
Thanks,
T

Posted: Sun Jul 15, 2007 4:36 pm
by ray.wurlod
That's the score for licensing, and doesn't help. Is there a second score event logged?
If so, can you please post that score here?
If not, you may have an issue with licensing, or access to DataStage software on other processing nodes.

Posted: Sun Jul 15, 2007 6:10 pm
by boppanakrishna
Hi,

Are there any message handlers enabled for the job?
If so, please disable them and run again - you may get some additional information.



Regards,
Boppana

Posted: Tue Jul 17, 2007 5:45 pm
by Titto
Yes, there was a message handler. I disabled it and ran the same job with $APT_DUMP_SCORE=True - still the same, it is not producing any extra dump.

I found one more interesting thing -
On the development box the same job works fine. I made a copy of the job with my initials and changed the output file so that my copy wouldn't overwrite the development files, then submitted the job - and it produces the same error and aborts.

So the dev version works fine, while my copy is aborting and the Production copy is aborting...

Any ideas??

Thanks,
T

Posted: Mon Nov 12, 2007 10:37 am
by bcarlson
Does anyone know if this topic was resolved? It is not marked as such, but since it was from July I am hoping the OP got it working.

We are getting the same error with "hash" instead of "same". It is a new process, so we don't have it working in one environment and failing in another - I don't know whether the problem is environmental or not.

Has anyone else run into this?

Brad.

Posted: Mon Nov 12, 2007 11:46 am
by bcarlson
Just a follow up on my own posting here...

The job I was working on had a bunch of extra hash partitioners on it. I removed everything that was not part of a join prep and the problem went away.

I am not sure if getting rid of hashes was the required fix or if I simply eliminated a problem hash by coincidence. So I guess my question remains - what does this error mean and why am I getting it?
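
For what it's worth, the reason I left the join-prep hashes alone: a parallel join only works if rows with equal key values land in the same partition on both inputs. A rough conceptual sketch in plain Python (not DataStage code - the function and record names are just for illustration):

Code: Select all

# Conceptual sketch only: hash partitioning for join prep sends rows with
# equal key values to the same partition index on both inputs, so each
# partition can then be joined independently.
def hash_partition(rows, key, num_partitions):
    parts = [[] for _ in range(num_partitions)]
    for row in rows:
        parts[hash(row[key]) % num_partitions].append(row)
    return parts

left = [{"id": 1, "a": "x"}, {"id": 2, "a": "y"}]
right = [{"id": 1, "b": "p"}, {"id": 2, "b": "q"}]

# Rows with id=1 from both sides are guaranteed to end up in the same
# partition index, which is what the join relies on.
left_parts = hash_partition(left, "id", 2)
right_parts = hash_partition(right, "id", 2)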

Brad.

Posted: Mon Nov 12, 2007 1:19 pm
by ray.wurlod
The message seems to imply that somewhere in the job there is an input link on which no partitioner/collector is defined, or that Same has been selected as the partitioning/collecting algorithm but non-partitioned data is arriving. For example, SeqFile (sequential) ----> AnyStage (parallel) with Same forced as the partitioning algorithm on the downstream stage might cause this symptom.
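
To illustrate the contradiction conceptually (plain Python, not DataStage internals - the names are made up): Same means "keep whatever partitioning already exists", so there is nothing for an explicit partitioner or collector to do on that link.

Code: Select all

# Conceptual sketch only: "Same" is a pass-through that preserves the
# upstream partitioning, so pairing it with a partitioner on the same
# input is a contradiction.
def same_partition(upstream_partitions):
    # reuse the partitions exactly as they arrive
    return upstream_partitions

# A sequential stage effectively emits a single partition:
sequential_output = [["row1", "row2", "row3"]]  # one partition holding all rows

# Forcing Same downstream just keeps that single partition; inserting a
# partitioner on the same input at the same time is the kind of conflict
# the engine rejects.
parallel_input = same_partition(sequential_output)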

Posted: Mon Nov 12, 2007 2:26 pm
by bcarlson
Well, I took a copy of the original job (the one that was failing) and started removing the hash partitioners one by one. I identified 2 stages that caused the problem, but I still don't understand what went wrong. There were 2 Modify stages that had hash partitioning on their inputs. The field they reference for partitioning is valid - it exists in the input schema (same name, datatype, nullability, etc.). If the hash is added, the job fails. If I remove it, the job works fine.

Still puzzled...

Brad.