Warning: "dfloat" to result type decimal

Post questions here relative to DataStage Enterprise/PX Edition for such areas as Parallel job design, Parallel datasets, BuildOps, Wrappers, etc.

Moderators: chulett, rschirm, roy

kumar_s
Charter Member
Charter Member
Posts: 5245
Joined: Thu Jun 16, 2005 11:00 pm

Post by kumar_s »

Check with your ODBC source whether it really does not have any data in it.
If the issue is a metadata mismatch, you would get an error/warning logged. Check it separately in a new job by diverting the ODBC output to a sequential file or dataset, and make sure you get the expected output.
Impossible doesn't mean 'it is not possible' actually means... 'NOBODY HAS DONE IT SO FAR'
rony_daniel
Participant
Posts: 36
Joined: Thu Sep 01, 2005 5:44 am
Location: Canada

Post by rony_daniel »

I wonder why the links, while records are being processed, always show n rows/sec when what is actually displayed is the total number of records processed. Why is the /sec needed anyway? We are more interested in knowing whether all the records have been processed than in how fast they are being processed. :shock: That, if anybody is interested, can be calculated from the total time taken to run the job and the total number of records processed.
kumar_s
Charter Member
Charter Member
Posts: 5245
Joined: Thu Jun 16, 2005 11:00 pm

Post by kumar_s »

Not always. The statistics actually show both n rows and n rows/sec:
the full count as well as the rate at which the rows are being processed.
Impossible doesn't mean 'it is not possible' actually means... 'NOBODY HAS DONE IT SO FAR'
kris007
Charter Member
Charter Member
Posts: 1102
Joined: Tue Jan 24, 2006 5:38 pm
Location: Riverside, RI

Post by kris007 »

kumar_s wrote:Check with your ODBC source whether it really does not have any data in it.
If the issue is a metadata mismatch, you would get an error/warning logged. Check it separately in a new job by diverting the ODBC output to a sequential file or dataset, and make sure you get the expected output.
I did; I am getting the warning message about dfloat to double precision loss.
I tried to implement a Join stage with datasets as input, but the error is still the same ("pipe is full") and other fatal errors follow.
kris007
Charter Member
Charter Member
Posts: 1102
Joined: Tue Jan 24, 2006 5:38 pm
Location: Riverside, RI

Post by kris007 »

Update:
I tried limiting the number of rows in the input dataset to the join, and it works fine.
I am not sure whether the problem is with the size of the input the Join stage has to deal with.
kris007
Charter Member
Charter Member
Posts: 1102
Joined: Tue Jan 24, 2006 5:38 pm
Location: Riverside, RI

Post by kris007 »

When I try to run the join with all the records from the database source, it aborts saying that scratch space is full.
But when I actually check the disk space immediately after the job aborts with

Code: Select all

df -h /home/dstage/Ascential/DataStage/Scratch
it shows 85% used, and after some time it comes down to 66%.
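
Catching the peak is easier if the space is sampled while the job is still running rather than after the abort, since the engine cleans up its scratch files once the job stops. A minimal sketch (my own addition, assuming the same Scratch path as above and a 10-second sampling interval):

Code: Select all

# Sample scratch usage every 10 seconds while the job runs,
# so the peak shows up before the post-abort cleanup hides it.
while true
do
    date
    df -h /home/dstage/Ascential/DataStage/Scratch
    sleep 10
done
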
However, I have 35,689,234 rows in one dataset, which has three columns (NUMBER and VARCHAR2(128 Byte) datatypes), and 139,056 rows in the other, which has only one column of NUMBER.
I am not sure whether this is what is filling up the disk.
Ultramundane
Participant
Posts: 407
Joined: Mon Jun 27, 2005 8:54 am
Location: Walker, Michigan
Contact:

Post by Ultramundane »

I have a problem where Ascential allocates the full X bytes of space for a varchar(x) column. This causes a tremendous amount of space to be used to store nothing. The fix in my case was to change the varchar(x) to just a varchar, that is, to modify the Ascential schema and make the column an unbounded varchar. I wonder if this would work for you as well?
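
For illustration only, a sketch with made-up column names rather than kris007's actual table: in an Orchestrate-style schema, an NVarChar(128) typically comes through as a bounded ustring,

Code: Select all

record (
    CUST_ID: int32;
    CUST_NAME: ustring[max=128];
)
while deleting the bound leaves it unbounded, which is what the fix described above amounts to:

Code: Select all

record (
    CUST_ID: int32;
    CUST_NAME: ustring;
)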
kris007
Charter Member
Charter Member
Posts: 1102
Joined: Tue Jan 24, 2006 5:38 pm
Location: Riverside, RI

Post by kris007 »

Ultramundane wrote:That is, to modify the Ascential schema and make the column an unbounded varchar.
Can you explain how to go about changing the schema?
Also, when I load the metadata into the Ascential ODBC stage, it converts the column to NVarChar.
Ultramundane
Participant
Posts: 407
Joined: Mon Jun 27, 2005 8:54 am
Location: Walker, Michigan
Contact:

Post by Ultramundane »

Can you open your source stage and go to the output column definitions? There you should see a column that is defined as NVARCHAR 128, if I remember correctly. Can you delete the 128 in the length field? You may have to put in 0 to get it to delete. Validate that the change propagated through and try to rerun your job.
kris007
Charter Member
Charter Member
Posts: 1102
Joined: Tue Jan 24, 2006 5:38 pm
Location: Riverside, RI

Post by kris007 »

Ultramundane wrote:Can you open your source stage and go to the output column definitions? There you should see a column that is defined as NVARCHAR 128, if I remember correctly. Can you delete the 128 in the length field? You may have to put in 0 to get it to delete. Validate that the change propagated through and try to rerun your job.
Thanks a lot, Ultramundane :) Yes, that was the culprit: the NVARCHAR 128 was taking up all the space. When I put 0 in the schema and ran it, the job finished in 4 minutes, but with 128 the job ran for 15 minutes before finally aborting. I was also constantly monitoring the disk space; the job aborts when it uses up all 30 GB of disk space available.
Just wondering what a typical amount of disk space is (scratch as well as dataset storage). I know it varies depending on the requirements, but is 30 GB very small? I have to inform my administrator about it.
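
A rough back-of-envelope suggests why the bounded column hurts so much. The assumption here is mine, not something stated above: each NVarChar(128) value gets padded to its full length at 2 bytes per character under NLS, though the actual on-disk layout may differ.

Code: Select all

# 35,689,234 rows x 128 characters x 2 bytes per character, expressed in MB
echo $(( 35689234 * 128 * 2 / 1024 / 1024 ))    # prints roughly 8713
That is around 8.5 GB for one padded column per copy of the data, and a parallel join sorts its inputs, which can spill further copies to scratch, so 30 GB can disappear quickly. With the unbounded definition only the bytes actually present are written.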
kris007
Charter Member
Charter Member
Posts: 1102
Joined: Tue Jan 24, 2006 5:38 pm
Location: Riverside, RI

Post by kris007 »

When I try to replace the Dataset stage with a Sequential File stage, I get the following fatal error:
APT_CombinedOperatorController,0: U_TRUNCATED_CHAR_FOUND encountered.
Any idea why this might be?