Posted: Wed May 24, 2006 1:03 am
by kumar_s
Check with your ODBC source whether it really does not have any data in it.
If the issue is a metadata mismatch, you would get an error/warning logged. Check it separately in a new job by diverting the ODBC output to a sequential file or dataset, and make sure you get the expected output.

Posted: Wed May 24, 2006 2:55 am
by rony_daniel
I wonder why the links always show n rows/sec during processing, when what is actually shown is the total number of records processed so far. Why is the /sec needed anyway? We are more interested in knowing whether all the records have been processed, not the speed at which they are being processed. :shock: Anyone who is interested can calculate that from the total time taken to run the job and the total number of records processed....
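
For what it's worth, that rate is just simple division; a quick sketch with made-up numbers:

Code: Select all

# rows/sec from totals: total rows divided by elapsed seconds (hypothetical numbers)
echo "scale=1; 1000000 / 50" | bc    # 20000.0 rows/sec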

Posted: Wed May 24, 2006 7:29 am
by kumar_s
Not always. Actually the statistics show both n rows and n rows/sec:
both the full quantity and the rate at which the rows are being processed.

Posted: Wed May 24, 2006 9:20 am
by kris007
kumar_s wrote: Check with your ODBC source whether it really does not have any data in it.
If the issue is a metadata mismatch, you would get an error/warning logged. Check it separately in a new job by diverting the ODBC output to a sequential file or dataset, and make sure you get the expected output.
I did. I am getting the warning message about dfloat to double precision loss.
I tried to implement a Join stage with datasets as input; the error is still the same ("pipe is full") and other fatal errors follow.

Posted: Wed May 24, 2006 9:47 am
by kris007
Update:
I tried limiting the number of input dataset rows to the join and it works fine.
Not sure if the problem is with the size of the input the Join stage has to deal with.

Posted: Wed May 24, 2006 12:29 pm
by kris007
When I try to run the join with all the records from the database source, it aborts saying scratch space is full.
But when I actually check the disk space immediately after the job aborts with

Code: Select all

df -h /home/dstage/Ascential/DataStage/Scratch
it shows 85% used, and after some time it comes down to 66%.
However, I have 35,689,234 rows in one dataset, which has 3 columns (datatypes NUMBER and VARCHAR2(128 Byte)), and 139,056 rows in the other, which has only one column of type NUMBER.
I am not sure if this is what is filling up the disk space.
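
To catch the peak usage rather than the level left after the abort, something like this can watch the scratch area while the job runs (same path as above):

Code: Select all

# print scratch usage every 10 seconds until interrupted with Ctrl-C
while true; do
    df -h /home/dstage/Ascential/DataStage/Scratch
    sleep 10
done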

Posted: Wed May 24, 2006 12:36 pm
by Ultramundane
I have a problem where Ascential allocates the full X bytes of space for a varchar(x) column. This causes a tremendous amount of space to be used to store nothing. The fix in my case was to change the varchar(x) to just a varchar. That is, to modify the Ascential schema and make the column an unbounded varchar. I wonder if this would work for you as well?
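
For illustration, the change looks roughly like this in the record schema notation (the column name here is just an example):

Code: Select all

record (
    CUST_NAME: string[max=128];   // bounded varchar(128): the full 128 bytes are reserved per row
)

record (
    CUST_NAME: string;            // unbounded varchar: only the actual string length is stored
)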

Posted: Wed May 24, 2006 12:51 pm
by kris007
Ultramundane wrote:That is, to modify the Ascential schema and make the column an unbounded varchar.
Can you explain how to go about changing the schema?
Also, when I load the metadata into the Ascential ODBC stage, it converts to NVarChar.

Posted: Wed May 24, 2006 1:51 pm
by Ultramundane
Can you open your source stage and go to the output column definitions? There you should see a column that is defined as NVARCHAR 128, if I remember correctly. Can you delete the 128 in the length field? You may have to put in 0 to get it to delete. Validate that the change propagated through and try to rerun your job.

Posted: Wed May 24, 2006 3:48 pm
by kris007
Ultramundane wrote: Can you open your source stage and go to the output column definitions? There you should see a column that is defined as NVARCHAR 128, if I remember correctly. Can you delete the 128 in the length field? You may have to put in 0 to get it to delete. Validate that the change propagated through and try to rerun your job.
Thanks a lot Ultramundane :) Yes, that's the culprit: NVARCHAR 128 was taking up all the space. When I put 0 in the schema and run it, the job finishes in 4 minutes, but when I put 128 the job takes 15 minutes before finally aborting. I was constantly monitoring the disk space; the job aborts when it uses up all 30G of the disk space available.
Just wondering, what is a common amount of disk space (scratch as well as dataset)? I know it varies depending on requirements, but is 30G very small? I have to inform my administrator about it.
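
As a rough back-of-envelope on why the 30G disappears (assuming the bounded column really is padded to its full length, and that NVARCHAR may be stored at 2 bytes per character):

Code: Select all

# one copy of the 128-byte bounded column across the big dataset, at 1 byte per character
echo "35689234 * 128" | bc    # 4568221952 bytes, roughly 4.6 GB

# the same column if NVARCHAR is stored at 2 bytes per character
echo "35689234 * 256" | bc    # 9136443904 bytes, roughly 9.1 GB

Since the join sorts its inputs, Scratch can be holding more than one copy of this at a time, which gets uncomfortably close to 30G.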

Posted: Wed May 24, 2006 4:30 pm
by kris007
When I try to replace the Dataset stage with a Sequential File stage, I am getting the following fatal error:
APT_CombinedOperatorController,0: U_TRUNCATED_CHAR_FOUND encountered.
Any idea why this might be?