
Function 'get_next_output_row' failed

Posted: Tue Apr 19, 2011 12:49 am
by bhanuvlakshmi
I am using the Merge stage with a left outer join, and the design is as follows:
Merge --> Transformer --> Aggregator --> ODBC stage. When the job runs I get the error "Function 'get_next_output_row' failed". The input files have lakhs of rows.
Also, please advise how to monitor our disk space usage while the job is running.

Posted: Tue Apr 19, 2011 5:36 am
by chulett
How many lakhs? How wide are the rows? The Merge stage uses hashed files under the covers and probably blew past the ~2GB barrier they inherently have.

Posted: Tue Apr 19, 2011 6:51 am
by bhanuvlakshmi
The stage reads sequential files; each file has 54 columns and approximately 16 lakhs (about 1.6 million) of records.

Posted: Tue Apr 19, 2011 7:02 am
by chandra.shekhar@tcs.com
Use the DB2/Oracle connector in place of ODBC (whatever your target is); it works in parallel.

Posted: Tue Apr 19, 2011 7:13 am
by zulfi123786
The Aggregator is the one which, I presume, is causing the issue, since as mentioned it creates a hashed file and then operates on it.

Check whether you can avoid the Aggregator and put the same logic in a Transformer, and see if that works; you need to sort the data as a prerequisite. Be careful to split off only the columns used in the aggregation onto one link and sort just those, because with this much data a full sort could blow up your disk space. Once the aggregation is done in the Transformer, look up this data against the main stream, a kind of fork join (see the sketch below).
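This is not DataStage syntax, just a rough Python sketch of the logic the sorted-input Transformer (with stage variables) and the fork join would implement; the column names 'key' and 'amount' are made-up placeholders:

from itertools import groupby
from operator import itemgetter

def aggregate_sorted(rows):
    # 'rows' must already be sorted on 'key' (the prerequisite sort).
    for key, group in groupby(rows, key=itemgetter('key')):
        yield {'key': key, 'total': sum(r['amount'] for r in group)}

def fork_join(main_rows, aggregated):
    # Join the aggregated totals back onto the main stream (the fork join).
    totals = {a['key']: a['total'] for a in aggregated}
    for row in main_rows:
        row['group_total'] = totals.get(row['key'])  # None when no match
        yield row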

Posted: Tue Apr 19, 2011 9:00 am
by chulett
zulfi123786 wrote: The Aggregator is the one which, I presume, is causing the issue, since as mentioned it creates a hashed file and then operates on it.
Sorry, I've edited my earlier post to be clearer - it is the Merge stage that uses hashed files and is what is failing, not the Aggregator.
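As a rough sanity check on the size, something like this (the 30 bytes per column is a pure guess, so plug in your real average column widths):

rows = 1600000              # ~16 lakhs per file
cols = 54
avg_bytes_per_col = 30      # guess; adjust to your actual column widths
size_gb = rows * cols * avg_bytes_per_col / (1024 ** 3)
print(round(size_gb, 2))    # ~2.41 with these guesses, already over the ~2GB hashed file limit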

Posted: Thu May 05, 2011 5:23 am
by bhanuvlakshmi
Hi, I am still facing the same problem. With a smaller amount of data it works fine, but when I tried with 7 lakhs of data it gives the following logs and errors:

"Invalid row termination character configuration.
Function 'input_str_to_row' failed
Function 'hashToRow' failed
Function 'get_next_output_row' failed
Error occurred while deleting temporary file."
Please help me with this.

We were having server memory issues previously, which were corrected, but the same error is occurring again.

Posted: Thu May 05, 2011 6:21 am
by chulett
If you have "too much data" for the Merge stage to handle (and it sounds like you do), you are going to have to change your job design and take an alternate approach. For example, store one of the data sources in a reference hashed file and use the other as your stream input, with the 'left outer join' being realized by not checking the success of the lookup.
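In rough Python terms (not DataStage syntax, and with made-up column names 'key' and 'value'), the reference-lookup version of the left outer join looks like this:

def build_reference(reference_rows):
    # Plays the role of the reference hashed file: key -> full row.
    return {r['key']: r for r in reference_rows}

def left_outer_stream(stream_rows, reference):
    # Stream input rows through the lookup; keep every stream row whether
    # or not the lookup succeeds - that is the 'left outer join'.
    for row in stream_rows:
        ref = reference.get(row['key'])
        row['ref_value'] = ref['value'] if ref else None  # 'value' is a placeholder too
        yield row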