Warning: "dfloat" to result type decimal

Post questions here relating to DataStage Enterprise/PX Edition, in areas such as Parallel job design, Parallel datasets, BuildOps, Wrappers, etc.

Moderators: chulett, rschirm, roy

kris007
Charter Member
Posts: 1102
Joined: Tue Jan 24, 2006 5:38 pm
Location: Riverside, RI


Post by kris007 »

I am trying to join data from two tables using a Join stage and two ODBC stages for Oracle.
The job hangs, giving the following warning:
ODBC_Enterprise_10: When checking operator: When binding output interface field "ADDRESS_ID" to field "ADDRESS_ID": Implicit conversion from source type "dfloat" to result type "decimal[38,10]": Possible range/precision limitation.

However, when I look up the ADDRESS_ID data type in Toad it says NUMBER, but when I import it into DataStage the data type is automatically converted to Decimal.
I am not sure whether the job is hanging for this reason, but I cannot see any more messages in the Director log, and I have to stop the job manually. If you have any idea what is happening, please share.
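As a rough illustration of why the warning matters: a dfloat is a C double, which carries only about 15-16 significant decimal digits, while decimal[38,10] can hold up to 38. Passing a large ID through a dfloat intermediate can silently lose the low-order digits. A minimal Python sketch (Python and the sample ID are illustrative only, not part of the job):

```python
from decimal import Decimal

# A 17-digit ID survives as an exact decimal but not as a C double
# (dfloat), which carries only ~15-16 significant decimal digits.
big_id = 12345678901234567            # 17 significant digits (hypothetical)
as_dfloat = float(big_id)             # what a dfloat intermediate would hold
recovered = Decimal(as_dfloat)        # cast back to a decimal, as the stage does

print(int(recovered) == big_id)       # False: low-order digits were lost
print(float(123456789) == 123456789)  # True: small IDs fit in a double exactly
```

So the warning is harmless for short IDs but a real risk once values approach 16 digits.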
vinaymanchinila
Premium Member
Posts: 353
Joined: Wed Apr 06, 2005 8:45 am

Post by vinaymanchinila »

Hi,

How are you importing the metadata? Are you using Orchbuilt?
kumar_s
Charter Member
Posts: 5245
Joined: Thu Jun 16, 2005 11:00 pm

Post by kumar_s »

Are you sure that not even a single record goes beyond decimal[38,10]?
Impossible doesn't mean 'it is not possible' actually means... 'NOBODY HAS DONE IT SO FAR'
kris007
Charter Member
Posts: 1102
Joined: Tue Jan 24, 2006 5:38 pm
Location: Riverside, RI

Post by kris007 »

kumar_s wrote:Are you sure about the fact that not even a single record goes beyond 38,10?
I am importing the metadata using ODBC DSNs. I am not sure about the data, but those are all IDs and they are defined as NUMBER, as seen from Toad. I am not sure whether that would correspond to double or dfloat.
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia

Post by ray.wurlod »

The parallel Aggregator stage seems always to generate dfloat output for numeric input.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
kris007
Charter Member
Posts: 1102
Joined: Tue Jan 24, 2006 5:38 pm
Location: Riverside, RI

Post by kris007 »

ray.wurlod wrote:The parallel Aggregator stage seems always to generate dfloat output for numeric input.
But I am not using an Aggregator; I am just using the ODBC stage, and I am not sure whether the job hanging has anything to do with it.
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia

Post by ray.wurlod »

It was just an observation. The first time I encountered this error message, that was the cause. I hoped to cover the extra possibility for the next person searching on it.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

Just curious - where did the [38,10] come from? A NUMBER in Oracle, one with no precision specified, is the same as saying NUMBER(38). There's no scale associated with it.

I'd suggest trying [38] or maybe [38,0]. Not familiar enough with PX to get any more specific than that.

Some stages over on the Server side have a quirk where you have to define the metadata to be '40' rather than '38' to stop odd errors about data truncation, from what I recall. Something to keep in mind if it becomes an issue over here.
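A quick sketch of what the scale part of [38,10] versus [38,0] actually does to a value — Python's decimal module stands in for the stage's decimal type here, and the sample value is purely illustrative:

```python
from decimal import Decimal, ROUND_HALF_UP

# Quantizing to a target exponent mimics what a field's declared
# scale does to an incoming fractional value.
val = Decimal("123.456")                          # hypothetical source value

scale0  = val.quantize(Decimal("1"), rounding=ROUND_HALF_UP)
scale10 = val.quantize(Decimal("0.0000000001"), rounding=ROUND_HALF_UP)

print(scale0)    # 123            -- scale 0 drops the fraction
print(scale10)   # 123.4560000000 -- scale 10 keeps it
```

Which is why the choice between [38,0] and [38,10] only matters if the NUMBER column ever holds non-integer values.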
-craig

"You can never have too many knives" -- Logan Nine Fingers
kris007
Charter Member
Posts: 1102
Joined: Tue Jan 24, 2006 5:38 pm
Location: Riverside, RI

Post by kris007 »

Yeah, it's just a NUMBER without any precision; when I import the metadata it automatically becomes (38,10). I tried changing the data type to Integer and the length to 38, but the job still hangs: rows are coming from one ODBC stage, but no rows are coming from the other :(
Is there any preprocessing to be done before using a Join/Lookup stage?
One of the ODBC sources might have duplicate records on the join key, but the rows from that source are coming alright.
kumar_s
Charter Member
Posts: 5245
Joined: Thu Jun 16, 2005 11:00 pm

Post by kumar_s »

Yes, some preprocessing is required, such as sorting and de-duplication, but the error you got is not relevant to that.
Just try making that particular field a dfloat.
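A minimal Python analogue of that preprocessing — both inputs sorted on the join key and duplicates collapsed before an inner join. The data and function names are hypothetical stand-ins, not DataStage APIs:

```python
# Hypothetical stand-in for the sort + de-duplicate step a parallel
# Join stage expects on both of its inputs.
left  = [(3, "a"), (1, "b"), (3, "c"), (2, "d")]
right = [(2, "x"), (3, "y"), (4, "z")]

def prep(rows):
    """Sort on the join key and keep only the first row per key."""
    seen, out = set(), []
    for key, val in sorted(rows):
        if key not in seen:
            seen.add(key)
            out.append((key, val))
    return out

def inner_join(lhs, rhs):
    """Match prepared left rows against right rows on the key."""
    rhs_map = dict(rhs)
    return [(k, v, rhs_map[k]) for k, v in lhs if k in rhs_map]

print(inner_join(prep(left), prep(right)))   # [(2, 'd', 'x'), (3, 'a', 'y')]
```

In the real job the equivalent would be Sort and Remove Duplicates stages (or link sort) on each input, partitioned on the join key.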
Impossible doesn't mean 'it is not possible' actually means... 'NOBODY HAS DONE IT SO FAR'
thompsonp
Premium Member
Posts: 205
Joined: Tue Mar 01, 2005 8:41 am

Post by thompsonp »

A few observations, using the Oracle Enterprise stage rather than ODBC stage, that may help:

Defining a NUMBER with no precision in Oracle is not the same as saying NUMBER(38) or NUMBER(38,0). Only the NUMBER will accept a decimal place. The other two are constrained to hold only integer values.

The metadata for an Oracle number or number(38) defaults to Decimal(38,10). DataStage PX treats this as a floating point decimal and provides a warning that it 'is not fully supported; adjusting scale' when read from Oracle.

In PX an Integer of length 38 is not possible, at least not on any flavour of Unix I have used. Remember that there are different integer types that should be chosen to best match the size: TinyInt, Integer and BigInt. A signed 32-bit Integer only supports about 9 full digits (2^31 - 1 is roughly 2.1 billion) and a BigInt 18-19 digits (2^63 - 1 is roughly 9.2 x 10^18). Bigger than that and you need decimals, floats or doubles.
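The digit limits above can be checked directly; a short sketch of the digit capacity of each signed integer width:

```python
# Digit capacity of common signed integer widths -- why a 38-digit
# "Integer" cannot exist and NUMBER(38) must map to a decimal instead.
digits = {}
for bits in (8, 32, 64):                 # TinyInt, Integer, BigInt widths
    max_val = 2 ** (bits - 1) - 1        # largest signed value at this width
    digits[bits] = len(str(max_val))
    print(f"{bits}-bit signed max: {max_val} ({digits[bits]} digits)")
```

So even a 64-bit BigInt tops out at 19 digits, exactly half of what NUMBER(38) can hold.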

As you are not sure what is causing the job to hang, try separating the Oracle stages from the rest of the job. Write each one to a dataset first and see if that works.

Are you getting any data throughput showing at all?
kris007
Charter Member
Posts: 1102
Joined: Tue Jan 24, 2006 5:38 pm
Location: Riverside, RI

Post by kris007 »

thompsonp wrote: Are you getting any data throughput showing at all?
I am getting data from one of the join inputs but not from the other.
I have sorted and hash-partitioned both sources. :cry:
I got this fatal error message:
Innerjoin_Addrrid.InAddress_Sort,0: Failure during execution of operator logic.
I finally resorted to writing user-defined SQL, which works fine, but the data looks garbled, full of junk characters. I think it has something to do with NLS, which, by the way, is not installed on our server.
kumar_s
Charter Member
Posts: 5245
Joined: Thu Jun 16, 2005 11:00 pm

Post by kumar_s »

So I guess you have quite a few issues to look into.
As thompsonp observed, the fact that the metadata for an Oracle NUMBER or NUMBER(38) defaults to Decimal(38,10) might be the relevant information.

I am getting data from one of the JOIN sets but not from the other.
You mean the input dataset is not getting read?

data looks garbled with all junk characters
In the input or in the output table?
Impossible doesn't mean 'it is not possible' actually means... 'NOBODY HAS DONE IT SO FAR'
kris007
Charter Member
Posts: 1102
Joined: Tue Jan 24, 2006 5:38 pm
Location: Riverside, RI

Post by kris007 »


I am getting data from one of the JOIN sets but not from the other.
You mean input dataset is not getting read?
Yes, from one of the inputs (an ODBC source, though).

data looks garbled with all junk characters
In input or in output table?.
In both the input and the output. The input looks like all ????? while the output has all weird characters; the output is a flat file. I think that is an NLS problem, as NLS is not installed on our server.
But no data is passed from the Join stage. I put some Peek and Copy stages before and after the Join stage; data actually flows through that ODBC source even though it displays 0 rows per sec, but the Peek stage after the Join stage does not pass any rows.
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia

Post by ray.wurlod »

0 rows/sec is not the same as 0 rows. These rates are rounded to the nearest whole number, which means the link may simply have finished in less than half a second. It's another reason I believe rows/sec is not a useful metric.
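A tiny sketch of the rounding effect, with purely hypothetical figures:

```python
# rows/sec shown in Director is rounded to the nearest whole number,
# so a link that really did move rows can still display 0 rows/sec.
rows, elapsed_sec = 3, 10.0             # hypothetical: 3 rows over 10 seconds
displayed = round(rows / elapsed_sec)   # 0.3 rows/sec rounds down to 0
print(displayed)                        # 0 -- yet 3 rows actually flowed
```

Hence the Peek counts, not the rate display, are the reliable check.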
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.