
Warning: "dfloat" to result type decimal

Posted: Fri May 19, 2006 6:03 pm
by kris007
I am trying to join data from two tables using a Join stage and two ODBC stages for Oracle.
The job hangs, giving the following warning:
ODBC_Enterprise_10: When checking operator: When binding output interface field "ADDRESS_ID" to field "ADDRESS_ID": Implicit conversion from source type "dfloat" to result type "decimal[38,10]": Possible range/precision limitation.

However, when I look up the ADDRESS_ID data type in Toad it says NUMBER, but when I import it into DataStage the data type gets converted to Decimal automatically.
I'm not sure if the job is hanging for this reason, but I cannot see any more messages in the Director log; I had to stop the job manually. Any idea what's happening? Please share.
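As an aside, the precision concern behind that warning can be sketched in Python (hypothetical values, not DataStage code; a dfloat is a C double, which keeps only about 15-16 significant digits, while decimal[38,10] can hold up to 38):

```python
from decimal import Decimal

# Hypothetical value that fits decimal[38,10] but exceeds double precision.
wide = Decimal("12345678901234567890.1234567890")

as_double = float(wide)    # the intermediate "dfloat" representation
back = Decimal(as_double)  # what would land in the decimal[38,10] field

# The low-order digits are gone after the float round-trip.
print(back == wide)  # False
```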

Posted: Fri May 19, 2006 7:45 pm
by vinaymanchinila
Hi,

How are you importing the metadata? Are you using Orchbuilt?

Posted: Sat May 20, 2006 6:04 am
by kumar_s
Are you sure that not even a single record goes beyond (38,10)?

Posted: Sat May 20, 2006 9:08 am
by kris007
kumar_s wrote: Are you sure that not even a single record goes beyond (38,10)?
I am importing the metadata using ODBC DSNs. I am not sure about the data, but those are all IDs and they are defined as NUMBER, as seen from Toad. I am not sure whether that would correspond to double or dfloat.

Posted: Sat May 20, 2006 3:45 pm
by ray.wurlod
The parallel Aggregator stage seems always to generate dfloat output for numeric input.

Posted: Sat May 20, 2006 10:11 pm
by kris007
ray.wurlod wrote:The parallel Aggregator stage seems always to generate dfloat output for numeric input.
But I am not using an Aggregator; I am just using ODBC stages, and I am not sure the job hanging has anything to do with it.

Posted: Sun May 21, 2006 6:50 am
by ray.wurlod
It was just an observation. The first time I encountered this error message, that was the cause. I had hoped to cover the extra possibility for the next person who searches.

Posted: Sun May 21, 2006 6:57 am
by chulett
Just curious: where did the [38,10] come from? A NUMBER in Oracle with no precision specified is the same as saying NUMBER(38); there's no scale associated with it.

I'd suggest trying [38] or maybe [38,0]. I'm not familiar enough with PX to get any more specific than that.

Some stages over on the Server side have a quirk where you have to define the metadata as '40' rather than '38' to stop odd errors about data truncation, from what I recall. Something to keep in mind if it becomes an issue over here.

Posted: Sun May 21, 2006 8:53 am
by kris007
Yeah, it's just a NUMBER without any precision; when I import the metadata it automatically becomes (38,10). I tried changing the data type to Integer and the length to 38, but the job still hangs: rows come from one ODBC stage, but no rows come from the other, and it hangs :(.
Is there any preprocessing to be done before using a Join/Lookup stage?
One of the ODBC sources might have duplicate records on the join key, but the rows from that source are coming through all right.

Posted: Sun May 21, 2006 11:19 pm
by kumar_s
Yes, there is some preprocessing required, such as sorting and de-duplication, but the error you got is not related to that.
Just try making that particular field a dfloat.
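For illustration, the reason the sorting matters can be sketched with a toy sort-merge join in Python (my own example, not DataStage code; it assumes unique keys on each input, which is also why duplicates on the join key deserve attention):

```python
def merge_join(left, right):
    """Inner join of two lists of (key, value) rows, both sorted by key.

    The merge walks each input exactly once and never looks back,
    so an unsorted input would silently lose matches.
    """
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        lk, rk = left[i][0], right[j][0]
        if lk < rk:
            i += 1
        elif lk > rk:
            j += 1
        else:
            out.append((lk, left[i][1], right[j][1]))
            i += 1
            j += 1
    return out

left = [(1, "a"), (2, "b"), (3, "c")]
right = [(2, "x"), (3, "y"), (4, "z")]
print(merge_join(left, right))  # [(2, 'b', 'x'), (3, 'c', 'y')]
```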

Posted: Mon May 22, 2006 6:00 am
by thompsonp
A few observations, using the Oracle Enterprise stage rather than ODBC stage, that may help:

Defining a NUMBER with no precision in Oracle is not the same as saying NUMBER(38) or NUMBER(38,0). Only the plain NUMBER will accept decimal places; the other two are constrained to hold only integer values.

The metadata for an Oracle number or number(38) defaults to Decimal(38,10). DataStage PX treats this as a floating point decimal and provides a warning that it 'is not fully supported; adjusting scale' when read from Oracle.

In PX an Integer of length 38 is not possible, at least not on any flavour of Unix I have used. Remember that there are different integer types that should be chosen to best match the size: TinyInt, SmallInt, Integer and BigInt. From recollection, a (32-bit) Integer only supports about 9 digits and a (64-bit) BigInt about 19 digits. Bigger than that and you need decimals, floats or doubles.
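Those digit counts can be checked quickly in Python, assuming the usual signed 32-bit and 64-bit C mappings for Integer and BigInt:

```python
# Largest values for signed 32-bit and 64-bit integers.
int32_max = 2**31 - 1   # 2147483647
int64_max = 2**63 - 1   # 9223372036854775807

# 10 digits for Integer (only ~9 fully usable across the whole range),
# 19 digits for BigInt (~18 fully usable).
print(len(str(int32_max)), len(str(int64_max)))  # 10 19
```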

As you are not sure what is causing the job to hang, try separating the Oracle stages from the rest of the job. Write each one to a dataset first and see if that works.

Are you getting any data throughput showing at all?

Posted: Mon May 22, 2006 2:42 pm
by kris007
thompsonp wrote: Are you getting any data throughput showing at all?
I am getting data from one of the join inputs but not from the other.
I have sorted and hash-partitioned both sources. :cry:
I got this fatal error message:
Innerjoin_Addrrid.InAddress_Sort,0: Failure during execution of operator logic.
I finally resorted to writing user-defined SQL, which works fine, but the data looks garbled, full of junk characters. I think it has something to do with NLS, which, by the way, is not installed on our server.

Posted: Mon May 22, 2006 11:33 pm
by kumar_s
So I guess you have quite a few issues to look into.
As observed by thompsonp, the fact that the metadata for an Oracle NUMBER or NUMBER(38) defaults to Decimal(38,10) might be valid information.

I am getting data from one of the JOIN sets but not from the other.
You mean the input dataset is not getting read?

data looks garbled with all junk characters
In the input or in the output table?

Posted: Tue May 23, 2006 10:04 am
by kris007

I am getting data from one of the join inputs but not from the other.
You mean the input dataset is not getting read?
Yes, from one of the inputs (an ODBC source, though).

data looks garbled with all junk characters
In the input or in the output table?
In both the input and the output. The input looks like all ????? while the output has all weird characters; the output is a flat file. I think that is an NLS problem, as NLS is not installed on our server.
But no data is passed from the Join stage. I tried putting Peek and Copy stages before and after the Join stage; data actually flows from that ODBC source even though it displays 0 rows/sec, but the Peek stage after the Join stage does not pass any rows.

Posted: Tue May 23, 2006 2:47 pm
by ray.wurlod
0 rows/sec is not the same as 0 rows. These rates are rounded to the nearest whole number, which means the link may simply have finished all of its rows in less than half a second of the measured interval. Another reason I believe rows/sec is not a useful metric.
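A hypothetical illustration of the rounding (the numbers are made up, not taken from the Director monitor):

```python
# A link that moved a handful of rows, averaged over a long measurement
# window and rounded to a whole number, displays as 0 rows/sec.
rows_moved = 10
elapsed_sec = 60  # assumed measurement window
rate = round(rows_moved / elapsed_sec)
print(rate)  # 0, even though 10 rows flowed
```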