
Unexpected exit status 1

Posted: Fri Sep 01, 2006 10:24 am
by DS_MJ
Hello All:

My Environment:
Source : DB2 AIX 5.2
Target : Seq File

On DB2 some column names end with a '#', for example:

COL_ID#
NEXT_COL#

When I do a straight map to a Seq File, in the Transformer I see red on the column names that end with '#'. So I remove the '#' from the column name on both sides, i.e. the DB2 stage and the Seq File (e.g. Col_Name).

In the SQL I use a custom query and keep the '#' in the column names, or else I won't be able to view the data. I am able to straight map the columns. However, the job fails.
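The custom query itself looks roughly like this (the same statement that shows up in the error below, without the text that gets appended):

SELECT
COL_ID#,
NEXT_COL#,
COL_NAME3
FROM ServerName.Library.Table
WHERE Col_Name <> 'C'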

When I go into the Director I see the following error:
Table Name,0: Failure during execution of operator logic.
Table Name,0: Output 0 produced 0 records.
Table Name,0: Fatal Error: Fatal: [IBM][CLI Driver][AS] SQL0206N "COL_ID# DSCAPIOP_" is not valid in the context where it is used. SQLSTATE=42703
SQLExecDirect: Error executing statement 'SELECT
COL_ID#DSCAPIOP_,
NEXT_COL#,
COL_NAME3
FROM ServerName.Library.Table WHERE Col_Name <> 'C'. See following DB2 message for details.
node_node1: Player 1 terminated unexpectedly.
main_program: Unexpected exit status 1
Xfm_Table Name,0: Failure during execution of operator logic.
Xfm_Table Name,0: Fatal Error: waitForWriteSignal(): Premature EOF on node HostNameofServer No such file or directory
In the generated SQL statement, next to COL_ID# I see "DSCAPIOP_", which is not in my SQL statement but gets added.

What does "DSCAPIOP_" mean? How did it get added to that column name?

I deleted the column name and rewrote the SQL, and the job runs. Then the same error comes up on another table that I am trying to extract data from, which has the same column naming convention, i.e. column names ending with a '#'.

So I rewrite that SQL query and run the job. The job fails. Now the previous table, where I had already rewritten the SQL, again has "DSCAPIOP_" added, so the query cannot be recognized and the job fails.

But when I go to the DB2 API stage and view the data, I am able to view it, and that query does not have "DSCAPIOP_" in it.

Any explanation would be appreciated.

Posted: Fri Sep 01, 2006 10:39 am
by thumsup9
Keep your column names as
COL_ID
NEXT_COL
and in the derivations mention
COL_ID#
NEXT_COL#

in your table definitions and see how it works. And one more thing: are the columns named COL_ID# and NEXT_COL# on the DB2 database side?
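For example, just a sketch (assuming the stage builds its SELECT list from the derivation text, and using the names from this thread):

Column name    Derivation
COL_ID         COL_ID#
NEXT_COL       NEXT_COL#

The generated query should then come out as something like:

SELECT COL_ID#, NEXT_COL# FROM Library.Table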

Posted: Fri Sep 01, 2006 10:54 am
by DS_MJ
thumsup9 wrote:Keep your column names as
COL_ID
NEXT_COL
and in the derivations mention
COL_ID#
NEXT_COL#

in your table definitions and see how it works. And one more thing: are the columns named COL_ID# and NEXT_COL# on the DB2 database side?
Hello thumsup9

I have already done that. Yes, the columns named COL_ID# and NEXT_COL# are on the DB2 database side.

Posted: Fri Sep 01, 2006 11:24 am
by chulett
You need to read the DB2/UDB API Stage Guide pdf that is in your Docs directory. There is a section there that explains how to handle column names with $ or # in them.

ps. This same issue (and solution) exists for the Oracle stages.
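I don't have the guide open here, but the usual idea (a sketch, not a quote from the guide) is to keep the '#' only inside the SQL itself and give each such column a '#'-free alias that matches your DataStage column name, e.g.:

SELECT
COL_ID# AS COL_ID,
NEXT_COL# AS NEXT_COL,
COL_NAME3
FROM ServerName.Library.Table
WHERE Col_Name <> 'C'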

Posted: Fri Sep 01, 2006 11:32 am
by DS_MJ
I have noticed that this error happens when I remove the column derivations. When I leave the column derivations in place, the job runs successfully.

However, I need to remove the column derivations (from the DB2 API plug-in columns) since I want to pass job parameters for tables that exist in different libraries (schemas) but still run the same job.
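What I am aiming for is user-defined SQL along these lines (just a sketch; SchemaName and TableName are hypothetical job parameters, referenced with the usual #...# notation, which is presumably also why column names ending in '#' clash):

SELECT
COL_ID#,
NEXT_COL#,
COL_NAME3
FROM #SchemaName#.#TableName#
WHERE Col_Name <> 'C'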

Thanks

Posted: Fri Sep 01, 2006 12:59 pm
by DS_MJ
chulett wrote:You need to read the DB2/UDB API Stage Guide pdf that is in your Docs directory. There is a section there that explains how to handle column names with $ or # in them.

ps. This same issue (and solution) exists for the Oracle stages.
Thank you all:

I was able to resolve the issue. :)

Posted: Fri Sep 01, 2006 9:17 pm
by ray.wurlod
The column derivations become the SELECT clause of the query that is generated. That's why you can have DISTINCT, SUM(), expressions and so on in the derivations.
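For example (a sketch, with made-up column and table names): if the Columns grid has

Column name    Derivation
CUST_ID        CUST_ID
LAST_NM        UPPER(LAST_NM)

the stage generates something like

SELECT CUST_ID, UPPER(LAST_NM) FROM MYLIB.CUSTOMER

which is why an expression, a SUM() or a DISTINCT can be carried in a derivation.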