The modify operator has a binding for non-existent output


deepa_shenoy
Participant
Posts: 95
Joined: Thu Sep 24, 2009 12:15 am
Location: India

The modify operator has a binding for non-existent output

Post by deepa_shenoy »

Hi All,

I am getting the following warning:

ORA_CUSTOMER: When checking operator: The modify operator has a binding for the non-existent output field "BILLING".

My source stage is Oracle Enterprise and so is my target, through a copy stage. All the columns from source to target, through the copy stage, have valid derivations.
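
From what I have read, the Oracle Enterprise stage inserts a modify operator into the generated score to bind the database columns to the job's metadata. A rough sketch of the kind of osh involved (my illustration only; the operator options are made up, not pasted from the actual score):

  oraread -query 'SELECT ..., BILLING, ... FROM customer'
  | modify ('BILLING = BILLING;')     (the binding named in the warning)
  | copy
  | orawrite -table TARGET_TABLE

So the warning appears to mean that the generated modify carries a binding for BILLING, but BILLING is missing from the schema on one side of it.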

Any idea why this warning is being thrown?

Thanks in advance.

-Deepa
hamzaqk
Participant
Posts: 249
Joined: Tue Apr 17, 2007 5:50 am
Location: islamabad

Post by hamzaqk »

What is the source of the 'missing' column? A SQL query?
Teradata Certified Master V2R5
deepa_shenoy
Participant
Posts: 95
Joined: Thu Sep 24, 2009 12:15 am
Location: India

Post by deepa_shenoy »

Yes, it is a SQL query, and all of the columns are present in it.
kamalshil
Participant
Posts: 179
Joined: Mon Jun 23, 2008 1:19 am

Post by kamalshil »

Have you checked that column metadata is available for the column in the stage where you are putting the query?
teddycarebears
Participant
Posts: 18
Joined: Wed May 12, 2010 11:57 pm

Post by teddycarebears »

kamalshil wrote: Have you checked that column metadata is available for the column in the stage where you are putting the query?
I know this is an old post, but I don't think it is necessary to open another one for the same problem.

I am taking the data from a dataset. My question is: if I have Runtime Column Propagation enabled for the whole job, does the metadata still need to be present in that Modify stage?

I am getting the same error and I can't figure out why.
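
For concreteness (illustrative schemas, not my real columns): with RCP on, my design-time metadata declares only something like

  record ( ID: int32; )

while the dataset on disk carries the full

  record ( ID: int32; BILLING: nullable int32; ... )

so what I am really asking is whether the Modify stage resolves its bindings against the declared schema or the propagated one.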
Able was I ere I saw Elba
ArndW
Participant
Posts: 16318
Joined: Tue Nov 16, 2004 9:08 am
Location: Germany

Post by ArndW »

Turn on $APT_PRINT_SCHEMAS for this job; it will show the actual schemas between each of the stages in the job and goes a long way toward finding the sources of errors.
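
Each link's schema shows up in the log as a record definition, something like this (illustrative names only):

  record
  (
    CUST_ID: int32;
    BILLING: nullable decimal[10,2];
  )

Compare the field lists on either side of the stage that raises the warning; the field named in the message will be missing from one of them.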
teddycarebears
Participant
Posts: 18
Joined: Wed May 12, 2010 11:57 pm

Post by teddycarebears »

ArndW wrote: Turn on $APT_PRINT_SCHEMAS for this job; it will show the actual schemas between each of the stages in the job and goes a long way toward finding the sources of errors. ...
Many thanks, ArndW; that did the trick and helped me find where the problem was. The variable name in my version is $OSH_PRINT_SCHEMAS, but I wouldn't have found it without your help.
Able was I ere I saw Elba
ArndW
Participant
Posts: 16318
Joined: Tue Nov 16, 2004 9:08 am
Location: Germany

Post by ArndW »

The variable name in my version is $OSH_PRINT_SCHEMAS as well; I just can't type very well... :)
iq_etl
Premium Member
Posts: 105
Joined: Tue Feb 08, 2011 9:26 am

Post by iq_etl »

I'm going to go ahead and continue this thread as well.

I'm getting the same error on a table coming from a SQL Server database. I'm using an ODBC Connector stage to connect to the table, have loaded the table definition, and can in fact see the columns and table data when I select 'View Data'. The columns are in the Output tab as well.

When I look in the Transformer stage I see all of the columns there, so all looks good.

Still, when I run the job I get:

'When checking operator: The modify operator has a binding for the non-existent output field "AAA".' - on the ODBC Connector Stage and

'error when checking operator: Could not find input field "AAA"' - on the Transformer Stage

How am I able to view columns and data in the ODBC Connector stage, yet I receive these errors when I run the job?

Thanks!
ArndW
Participant
Posts: 16318
Joined: Tue Nov 16, 2004 9:08 am
Location: Germany

Post by ArndW »

iq_etl - did you add the environment variable as suggested in this thread and see exactly what columns are being used?
iq_etl
Premium Member
Posts: 105
Joined: Tue Feb 08, 2011 9:26 am

Post by iq_etl »

If you mean setting $OSH_PRINT_SCHEMAS to 'True', then yes. The error message tells me the name of the column (there are 5); I just used 'AAA' as an example name.
ArndW
Participant
Posts: 16318
Joined: Tue Nov 16, 2004 9:08 am
Location: Germany

Post by ArndW »

Just to clarify: the output contains the schemas for each stage in the job. In the input link to the Transformer stage, does the schema contain the column name, i.e. "AAA", that is given in the error message? If so, please cut-and-paste the schema and the error message into the thread.
iq_etl
Premium Member
Posts: 105
Joined: Tue Feb 08, 2011 9:26 am

Post by iq_etl »

Sure. In my parallel job, I have an input ODBC stage 'odbc_DBO_COGNOSBUDGETS' which reads rows from a table on a SQL Server database and feeds a Transformer stage called 'xfm_BUDGETS'. In the messages below, 'ID' is obviously a column.

Here are the error messages:
Warning (type): odbc_DBO_COGNOSBUDGETS: When checking operator: The modify operator has a binding for the non-existent output field "ID".

Fatal (type): xfm_BUDGETS: Error when checking operator: Could not find input field "ID".

Again column "ID" is in the table definition, can be viewed with data in the ODBC input stage, and is in the Transformer stage.

EDIT: When I set 'Fail on size mismatch' to 'No' for that SQL Server input table, I no longer receive those errors. Looks like that's the way to go.
ArndW
Participant
Posts: 16318
Joined: Tue Nov 16, 2004 9:08 am
Location: Germany

Post by ArndW »

That was one half of the request; now please cut-and-paste the schema going into your Modify stage, as shown in the log file by the output of $OSH_PRINT_SCHEMAS.

I am fairly certain that the column -isn't- in the input schema. The source column metadata for ID and what you've declared don't match, so the column wasn't taken into the schema with your previous setting. The correct choice is to make the data types and sizes match.
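
To make the failure mode concrete (illustrative types, not taken from your job): if the stage declares

  record ( ID: int32; )

but the database metadata reconciles to something like

  record ( ID: decimal[10,0]; )

then with your previous setting the connector drops ID from its output schema, the generated modify operator is left holding a binding for a field that no longer exists, and you get exactly the pair of messages you posted.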
iq_etl
Premium Member
Posts: 105
Joined: Tue Feb 08, 2011 9:26 am

Post by iq_etl »

I agree, the correct choice is to have the data types and sizes match. Unfortunately, this table is owned by an external client and we have to work with what they have.

On the upside, this is a one-time load, as future data will be put on a table that matches ours.

At any rate, here's the schema. It looks to have all of the columns (let me know if it isn't what you wanted to see):

DSDisplayWidth={ID=11, code1=11, code2=11, code3=11, fiscalYear12_13=22, fiscalYear13_14=22},
DSSQLType={ID=4, code1=4, code2=4, code3=4, fiscalYear12_13=6, fiscalYear13_14=6},
DSSQLPrecision={ID=10, code1=10, code2=10, code3=10, fiscalYear12_13=53, fiscalYear13_14=53},
DSSchema='record
(
  ID: int32;
  code1: nullable int32;
  code2: nullable int32;
  code3: nullable int32;
  fiscalYear12_13: nullable sfloat;
  fiscalYear13_14: nullable sfloat;
)'
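
One thing I notice in hindsight, hedging since I haven't checked the client's DDL: DSSQLType=6 with DSSQLPrecision=53 is ODBC's SQL_FLOAT with a 53-bit mantissa, i.e. a double, while the declared Orchestrate type is single-precision sfloat. If the size mismatch is on the fiscalYear columns, the closer declaration would be

  fiscalYear12_13: nullable dfloat;
  fiscalYear13_14: nullable dfloat;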