Warning: "dfloat" to result type decimal
Check with your ODBC source whether it really does not have any data in it.
If the issue is a metadata mismatch, you would get an error/warning logged. Check it separately in a new job by diverting the ODBC output to a sequential file or dataset, and make sure you get the expected output.
Impossible doesn't mean 'it is not possible' actually means... 'NOBODY HAS DONE IT SO FAR'
I wonder why the links, during processing of records, always show n rows/sec when actually it is the total records that have been processed. Why is the /sec needed anyway? We are more interested in knowing whether all the records have been processed than in how fast they are processing. Anybody interested in speed can calculate it from the total time taken to run the job and the total number of records processed....
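If the average rate matters at all, it can be recovered after the fact from exactly the two totals mentioned above. A quick sketch (both figures are made-up examples, not from any real job):

```shell
# Average throughput from job totals (hypothetical figures).
ROWS=35689234   # total rows the job processed
SECS=900        # total run time in seconds (say, 15 minutes)
echo "$((ROWS / SECS)) rows/sec"
```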
kumar_s wrote:Check with your ODBC source whether it really does not have any data in it.
If the issue is a metadata mismatch, you would get an error/warning logged. Check it separately in a new job by diverting the ODBC output to a sequential file or dataset, and make sure you get the expected output.
I did, and I am getting the warning message about dfloat to double precision loss.
I tried to implement a Join stage with datasets as input; the error is still the same, "pipe is full", and other fatal errors follow.
When I try to run a join with all the records from the database source, it aborts saying the scratch space is full.
But when I actually check the disk space immediately after the job aborts:
Code: Select all
df -h /home/dstage/Ascential/DataStage/Scratch
it shows 85% used, and after some time it comes down to 66%.
However, I have 35689234 rows in one dataset, which has 3 columns of datatypes NUMBER and VARCHAR2(128 Byte), and 139056 rows in the other, which has only one column of NUMBER.
I am not sure if this is what is causing the disk space to fill up.
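Checking only after the abort misses the peak; sampling usage while the job runs shows how full Scratch actually gets. A small sketch (the path is the one from the post; the fallback directory is only there so the snippet runs anywhere):

```shell
# Report current usage of the scratch filesystem (path from the post).
SCRATCH=${SCRATCH:-/home/dstage/Ascential/DataStage/Scratch}
[ -d "$SCRATCH" ] || SCRATCH=/tmp   # fall back if the path is absent
USED=$(df -kP "$SCRATCH" | awk 'NR==2 {sub(/%/, "", $5); print $5}')
echo "Scratch is ${USED}% full"
```

Run it in a loop (e.g. under `watch` or with `sleep`) while the job executes to catch the high-water mark.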
I have had a problem where Ascential allocates the full X bytes of space for a varchar(x) column. This causes a tremendous amount of space to be used to store nothing. The fix in my case was to change the varchar(x) to just a varchar; that is, to modify the Ascential schema and make the column an unbounded varchar. I wonder if this would work for you as well?
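The effect described above is easy to see outside DataStage: a fixed-width field pads every value to the declared length, while a variable-length field stores only the bytes actually present. A rough illustration (the real internal record layout is of course more involved):

```shell
# A 5-character value in a 128-byte fixed-width field vs. variable-length.
printf '%-128s' "hello" > /tmp/fixed.bin     # space-padded to 128 bytes
printf '%s'     "hello" > /tmp/variable.bin  # 5 bytes, no padding
wc -c < /tmp/fixed.bin
wc -c < /tmp/variable.bin
```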
Can you open your source stage and go to the output column definitions? There you should see a column that is defined as NVARCHAR 128, if I remember correctly. Can you delete the 128 in the length field? You may have to put in 0 to get it to delete. Validate that the change propagated through and try to rerun your job.
Ultramundane wrote:Can you open your source stage and go to the output column definitions? There you should see a column that is defined as NVARCHAR 128, if I remember correctly. Can you delete the 128 in the length field? You may have to put in 0 to get it to delete. Validate that the change propagated through and try to rerun your job.
Thanks a lot Ultramundane, yes that's the culprit: NVARCHAR 128 taking up all the space. When I put 0 in the schema and ran it, the job finished in 4 min, but when I put 128 the job took 15 min before finally aborting. I was also constantly monitoring the disk space; the job aborts when it uses up all 30G of disk space available.
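A back-of-envelope calculation suggests why 30G disappears. If each NVARCHAR(128) value is carried fixed-width at 2 bytes per character (an assumption about the internal layout, not something the thread confirms), the large input is already sizeable before the join buffers or sorts anything:

```shell
# 35689234 rows x 128 chars x 2 bytes/char for one fixed-width column.
# (2 bytes/char for NVARCHAR is an assumption; actual layout may differ.)
ROWS=35689234
BYTES_PER_VALUE=$((128 * 2))
TOTAL=$((ROWS * BYTES_PER_VALUE))
echo "$TOTAL bytes (~$((TOTAL / 1024 / 1024 / 1024)) GiB) for that column alone"
```

Sorting both inputs for the join can need a multiple of that on scratch, so exhausting 30G is plausible; making the column unbounded shrinks each value to its actual length.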
Just wondering what a common amount of disk space is (scratch as well as dataset). I know it varies depending on the requirement, but is 30G very small? I have to inform my administrator about it.