I have a DataStage job that cannot be optimized: in other words, the optimized job is identical to the original.
In the optimization logs I find two warnings, which I'm translating from Italian:
WARNING: Unable to verify the sort keys in stage Read_table_1. The pattern is abandoned. (ORIGINAL: Impossibile verificare le chiavi di ordinamento nello stage Read_table_1. Il pattern viene abbandonato.)
WARNING: Unable to generate combined query instructions for the links link_to_rem_dup, link_to_transformer. The pattern is abandoned. (ORIGINAL: Impossibile generare istruzioni query combinate per i link link_to_rem_dup, link_to_transformer. Il pattern viene abbandonato.)
Job optimization properties: `isGenOrdBy`, `PushProcToSrcs`, `PushProcToTrgts`, `PushAllToDB`
This is how the job design looks:
![Image](https://i.imgur.com/hB2PHRp.png)
- the Teradata connector simply reads from a table joined with another one (no ORDER BY clause defined)
- the Remove Duplicates stage:
a) Stage tab: uses field_1 to define duplicates, with the options "case insensitive" and "nulls last" set; field_1 is configured with "duplicate to retain" = First
b) Input tab: sorts the data with hash partitioning; 4 fields are used as sort keys
- the Teradata connector and the Remove Duplicates stage have the same primary key definition
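For context on what I expected: if pushdown had succeeded, I'd expect Balanced Optimization to fold the Remove Duplicates stage into the source Teradata query, roughly like the sketch below (all table and column names other than field_1 are illustrative placeholders, not my real schema):

```sql
-- Hypothetical pushed-down query (illustrative names, not the actual
-- SQL the optimizer would generate).
-- "Duplicate to retain = First" with case-insensitive matching and
-- NULLS LAST could map to a ROW_NUMBER over the dedup key:
SELECT t.*
FROM my_table t
JOIN other_table o
  ON t.join_key = o.join_key
QUALIFY ROW_NUMBER() OVER (
    PARTITION BY UPPER(t.field_1)        -- case-insensitive dedup key
    ORDER BY t.sort_field_1 NULLS LAST,  -- the 4 sort keys from the Input tab
             t.sort_field_2,
             t.sort_field_3,
             t.sort_field_4
) = 1;
```

My suspicion is that the optimizer cannot prove the 4 Input-tab sort keys and the single dedup key (field_1) are compatible, which would explain the "unable to verify the sort keys" warning.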
Does anyone know why this job cannot be optimized? Do the warnings suggest anything specific to you?
Thank you very much,