Multiple Ambiguous Errors on ODBC load to Oracle DB

sjordery
Premium Member
Posts: 202
Joined: Thu Jun 08, 2006 5:58 am


Post by sjordery »

Hi All,

I have a Parallel job that is loading data from a Dataset to an Oracle table using the Enterprise ODBC stage. The job compiles fine, but is aborting at runtime with multiple fatal messages.

The job goes:

dataset ----> Transformer ----> Enterprise ODBC

The table definition used in the ODBC stage was imported directly from the database, but one column was removed because it is an auto-increment field that I want the DBMS to populate rather than DataStage. Other than that, all but one of the fields are mapped straight from the dataset and cover a combination of datatypes, including Timestamp, VarChar and Decimal. The remaining column is also a Timestamp, but is populated from a stage variable that is set to CurrentTimestamp().

Upon execution, I get these messages as the job crumbles into a heap:
Item #: 4
Event ID: 50
Timestamp: 2007-12-18 12:18:58
Type: Info
User Name: dsadm
Message: main_program: Ascential DataStage(tm) Enterprise Edition 7.5.2
Copyright (c) 2004, 1997-2004 Ascential Software Corporation.
All Rights Reserved

Item #: 5
Event ID: 51
Timestamp: 2007-12-18 12:18:58
Type: Info
User Name: dsadm
Message: main_program: orchgeneral: loaded
orchsort: loaded
orchstats: loaded

Item #: 6
Event ID: 52
Timestamp: 2007-12-18 12:18:58
Type: Info
User Name: dsadm
Message: main_program: APT configuration file: /opt/DataStage/Ascential/DataStage/Configurations/ID_Master_Config.apt
{
node "node1"
{
fastname "ukdstaged1"
pools ""
resource disk "/data/Datastage/Projects/ID_Master/Datasets" {pools ""}
resource scratchdisk "/data/Datastage/Projects/ID_Master/Scratch" {pools ""}
}
}


Item #: 7
Event ID: 53
Timestamp: 2007-12-18 12:19:01
Type: Info
User Name: dsadm
Message: APT_CombinedOperatorController,0: Numeric string expected of the appropriate decimal precision . Use default value.

Item #: 8
Event ID: 54
Timestamp: 2007-12-18 12:19:01
Type: Fatal
User Name: dsadm
Message: CTDEV_GEMIDREF,0: Failure during execution of operator logic.

Item #: 9
Event ID: 55
Timestamp: 2007-12-18 12:19:01
Type: Info
User Name: dsadm
Message: CTDEV_GEMIDREF,0: Input 0 consumed 1 records.

Item #: 10
Event ID: 56
Timestamp: 2007-12-18 12:19:01
Type: Fatal
User Name: dsadm
Message: APT_CombinedOperatorController,0: Fatal Error: Invalid time

Item #: 11
Event ID: 57
Timestamp: 2007-12-18 12:19:01
Type: Fatal
User Name: dsadm
Message: node_node1: Player 1 terminated unexpectedly.

Item #: 12
Event ID: 58
Timestamp: 2007-12-18 12:19:06
Type: Fatal
User Name: dsadm
Message: main_program: Unexpected exit status 1

Item #: 13
Event ID: 59
Timestamp: 2007-12-18 12:19:06
Type: Fatal
User Name: dsadm
Message: main_program: Step execution finished with status = FAILED.

Item #: 14
Event ID: 60
Timestamp: 2007-12-18 12:19:07
Type: Info
User Name: dsadm
Message: main_program: Startup time, 0:07; production run time, 0:01.

Item #: 15
Event ID: 61
Timestamp: 2007-12-18 12:19:07
Type: Control
User Name: dsadm
Message: Job GemXr30_LoadIdData2 aborted.

Any assistance, advice, constructive criticism :wink: much appreciated and most welcome. If you need any further info, please let me know.

Thanks in advance.

S
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia
Contact:

Post by ray.wurlod »

Operator combination is thwarting the diagnostic process - we only see that the errors are coming from APT_CombinedOperatorController. Try running the job with operator combination disabled so you can see which stage (operator) is actually generating the errors. At first glance it appears to be a problem with decimal precision.
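If it helps anyone following along: one common way to disable operator combination is the APT_DISABLE_COMBINATION environment variable, added as a job parameter or exported in the environment before the run (exact setup varies by installation, so treat this as a sketch):

```shell
# Disable operator combination so each stage (operator) logs errors
# under its own name instead of APT_CombinedOperatorController.
# Can also be added as a job parameter in Administrator/Designer.
export APT_DISABLE_COMBINATION=True
```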

Have you imported the ODBC table definition for the target Oracle table?
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
sjordery
Premium Member
Posts: 202
Joined: Thu Jun 08, 2006 5:58 am

Post by sjordery »

Thanks for your reply, Ray.

The ODBC table definition was imported from the target Oracle table. This is one of the peculiar things - viewing the table definition in the repository, two of the fields appear as:

FieldOne - Double(15)
FieldTwo - Decimal(38,0)

However, if I load the table definition from the repository into the job, the values change to:

FieldOne - Decimal(38,10)
FieldTwo - Decimal(38,10)

Very odd.

I disabled the Operator Combination and re-ran the job, with the following results:
Item #: 4
Event ID: 192
Timestamp: 2007-12-19 10:57:48
Type: Info
User Name: dsadm
Message: main_program: Ascential DataStage(tm) Enterprise Edition 7.5.2
Copyright (c) 2004, 1997-2004 Ascential Software Corporation.
All Rights Reserved

Item #: 5
Event ID: 193
Timestamp: 2007-12-19 10:57:48
Type: Info
User Name: dsadm
Message: main_program: orchgeneral: loaded
orchsort: loaded
orchstats: loaded

Item #: 6
Event ID: 194
Timestamp: 2007-12-19 10:57:48
Type: Info
User Name: dsadm
Message: main_program: APT configuration file: /opt/DataStage/Ascential/DataStage/Configurations/ID_Master_Config.apt
{
node "node1"
{
fastname "ukdstaged1"
pools ""
resource disk "/data/Datastage/Projects/ID_Master/Datasets" {pools ""}
resource scratchdisk "/data/Datastage/Projects/ID_Master/Scratch" {pools ""}
}
}


Item #: 7
Event ID: 195
Timestamp: 2007-12-19 10:57:51
Type: Info
User Name: dsadm
Message: tfMapFields,0: Numeric string expected of the appropriate decimal precision . Use default value.

Item #: 8
Event ID: 196
Timestamp: 2007-12-19 10:57:51
Type: Fatal
User Name: dsadm
Message: CTDEV_GEMIDREF,0: Failure during execution of operator logic.

Item #: 9
Event ID: 197
Timestamp: 2007-12-19 10:57:51
Type: Info
User Name: dsadm
Message: CTDEV_GEMIDREF,0: Input 0 consumed 1 records.

Item #: 10
Event ID: 198
Timestamp: 2007-12-19 10:57:51
Type: Fatal
User Name: dsadm
Message: CTDEV_GEMIDREF,0: Fatal Error: Invalid time

Item #: 11
Event ID: 199
Timestamp: 2007-12-19 10:57:51
Type: Fatal
User Name: dsadm
Message: node_node1: Player 3 terminated unexpectedly.

Item #: 12
Event ID: 200
Timestamp: 2007-12-19 10:57:56
Type: Fatal
User Name: dsadm
Message: main_program: Unexpected exit status 1

Item #: 13
Event ID: 201
Timestamp: 2007-12-19 10:57:56
Type: Fatal
User Name: dsadm
Message: main_program: Step execution finished with status = FAILED.

Item #: 14
Event ID: 202
Timestamp: 2007-12-19 10:57:56
Type: Info
User Name: dsadm
Message: main_program: Startup time, 0:07; production run time, 0:01.

Item #: 15
Event ID: 203
Timestamp: 2007-12-19 10:57:56
Type: Control
User Name: dsadm
Message: Job GemXr30_LoadIdData2 aborted.

Item #: 16
Event ID: 204
Timestamp: 2007-12-19 10:58:13
Type: Reset
User Name: dsadm
Message: Log prior to last run cleared by user

The job, bizarrely, now shows green lines on the Designer screen despite all the underlying failures!

CTDEV_GEMIDREF is the Oracle table - and the name of the ODBC stage.

The two messages that I guess are clues are:

* Numeric string expected of the appropriate decimal precision, and
* CTDEV_GEMIDREF,0: Fatal Error: Invalid time
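For what it's worth, the first message is the kind of thing you see when a non-numeric string is pushed into a fixed-precision decimal column. A minimal illustration in plain Python (not DataStage code, just the same conversion failure):

```python
# A string that is not a valid numeric literal cannot be converted to
# a fixed-precision decimal -- the same situation behind "Numeric
# string expected of the appropriate decimal precision".
from decimal import Decimal, InvalidOperation

for raw in ["123.45", "***********", ""]:
    try:
        print(raw, "->", Decimal(raw))
    except InvalidOperation:
        print(repr(raw), "-> not a valid numeric string")
```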

Having fiddled around with decimal precision etc, I still can't get it to run.

Any ideas?

Thanks very much in advance.
sjordery
Premium Member
Posts: 202
Joined: Thu Jun 08, 2006 5:58 am

Post by sjordery »

Hi All,

OK, I went back to basics: I removed the fields being updated, then added them back one by one, running the job each time. I have now managed to find the source of the problem...

I had a timestamp field that, on the input, contained '***********', which caused the invalid time message. This was down to a previous job that parsed an XML file. I changed that job to make the field nullable, so it now puts NULL rather than '************'. I then modified the load job to handle NULLs in the way I required.
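In case it helps anyone hitting the same thing, the fix amounts to mapping the sentinel string to NULL before the load. A minimal sketch in plain Python (not DataStage code; the format string is an assumption about the timestamp layout):

```python
# Why a run of asterisks in a timestamp column triggers "Invalid time",
# and how a pre-load cleanup can map the sentinel to NULL instead.
from datetime import datetime

def clean_timestamp(value):
    """Return a datetime, or None (i.e. NULL) for the '***...' sentinel."""
    if value is None or set(value) == {"*"}:
        return None  # load as NULL instead of an unparseable string
    return datetime.strptime(value, "%Y-%m-%d %H:%M:%S")

print(clean_timestamp("2007-12-19 10:57:51"))  # parses fine
print(clean_timestamp("***********"))          # -> None, not a crash
```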

The other problem was that I was attempting to set the current timestamp using a value generated by DataStage, but the column on the DB was already set up to generate its own timestamp when the record was stored or updated. :oops: With these two issues sorted, it is now updating as expected.

Thanks again for looking.