Function 'get_next_output_row' failed

Post questions here relative to DataStage Server Edition for such areas as Server job design, DS Basic, Routines, Job Sequences, etc.

Moderators: chulett, rschirm, roy

bhanuvlakshmi
Participant
Posts: 23
Joined: Fri Oct 22, 2010 7:08 am

Function 'get_next_output_row' failed

Post by bhanuvlakshmi »

I am using the Merge stage with a left outer join in it, and the design is as follows:
Merge --> Transformer --> Aggregator --> ODBC stage. When the job runs I get the following error; the input files contain lakhs of rows. The error is "Function 'get_next_output_row' failed".
Also, please advise how to monitor our disk space usage while the job is running.
Thanks & Regards,
Bhanu
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

How many lakhs? How wide are the rows? The Merge stage uses hashed files under the covers and probably blew past the ~2GB barrier they inherently have.
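A minimal sketch of one way to check how close a hashed file is to that barrier, assuming it is a dynamic (type 30) hashed file, which on disk is a directory containing DATA.30 and OVER.30; the path below is hypothetical, so point it at whatever hashed file your job actually creates:

# Minimal sketch: check whether a dynamic (type 30) hashed file is nearing
# the ~2GB per-file limit. The path is hypothetical -- substitute the hashed
# file directory your job actually writes.
import os

HASHED_FILE_DIR = "/path/to/project/MyHashedFile"  # hypothetical location
LIMIT_BYTES = 2 * 1024 ** 3                        # the ~2GB barrier

for part in ("DATA.30", "OVER.30"):
    path = os.path.join(HASHED_FILE_DIR, part)
    if os.path.exists(path):
        size = os.path.getsize(path)
        print(f"{part}: {size / 1024 ** 2:.1f} MB "
              f"({size / LIMIT_BYTES:.0%} of the 2GB barrier)")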
Last edited by chulett on Tue Apr 19, 2011 8:57 am, edited 1 time in total.
-craig

"You can never have too many knives" -- Logan Nine Fingers
bhanuvlakshmi
Participant
Posts: 23
Joined: Fri Oct 22, 2010 7:08 am

Post by bhanuvlakshmi »

The stage uses sequential files; each file has 54 columns and approximately 16 lakh records.
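As a rough back-of-the-envelope check (the average bytes-per-column figure below is purely an assumption; substitute real column widths), that volume already sits close to the 2GB barrier before any hashed-file overhead is counted:

# Rough size estimate for the data landing in the hashed file behind the Merge.
# The average bytes-per-column figure is an assumption; plug in real widths.
rows = 16 * 100_000          # ~16 lakh records
columns = 54
avg_bytes_per_column = 20    # assumed average field width in bytes

approx_bytes = rows * columns * avg_bytes_per_column
limit = 2 * 1024 ** 3
print(f"approx data volume: {approx_bytes / 1024 ** 3:.2f} GB "
      f"({'over' if approx_bytes > limit else 'under'} the 2GB barrier, before overhead)")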
Thanks & Regards,
Bhanu
chandra.shekhar@tcs.com
Premium Member
Posts: 353
Joined: Mon Jan 17, 2011 5:03 am
Location: Mumbai, India

Post by chandra.shekhar@tcs.com »

Use the DB2/Oracle connector in place of ODBC (whatever your target is); it works in parallel.
Thanx and Regards,
ETL User
zulfi123786
Premium Member
Posts: 730
Joined: Tue Nov 04, 2008 10:14 am
Location: Bangalore

Post by zulfi123786 »

I presume the Aggregator is causing the issue, since, as mentioned, it creates a hashed file and then operates on it.

Check whether you can avoid the Aggregator and put the same logic in a Transformer instead; see if that works, but you need to sort the data as a prerequisite. Be careful to split off only the columns used in the aggregation onto a separate link before sorting them, because with the volume of data you have, a full sort would blow up your disk space. Once the aggregation is done in the Transformer, look this data up against the main stream, a kind of fork join (see the sketch below).
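To make that concrete, here is a minimal sketch of the control-break pattern a Transformer with stage variables would apply to pre-sorted input, followed by the fork-join lookup back to the main stream; the column names and the SUM aggregate are made up for illustration:

# Illustrative only: the control-break aggregation a Transformer would do with
# stage variables on pre-sorted input, then a fork-join style lookup back to
# the main stream. Column names and the SUM aggregate are assumptions.

def aggregate_sorted(rows, key_col, value_col):
    """rows must already be sorted by key_col (the prerequisite mentioned above)."""
    totals = {}
    current_key, running_total = None, 0
    for row in rows:
        if row[key_col] != current_key:
            if current_key is not None:
                totals[current_key] = running_total   # group change: emit previous group
            current_key, running_total = row[key_col], 0
        running_total += row[value_col]
    if current_key is not None:
        totals[current_key] = running_total
    return totals

# Fork-join: look the aggregated figures back up against the main stream.
main_stream = [{"cust": "A", "amount": 10}, {"cust": "A", "amount": 5}, {"cust": "B", "amount": 7}]
totals = aggregate_sorted(sorted(main_stream, key=lambda r: r["cust"]), "cust", "amount")
joined = [dict(row, cust_total=totals[row["cust"]]) for row in main_stream]
print(joined)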
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

zulfi123786 wrote: I presume the Aggregator is causing the issue, since, as mentioned, it creates a hashed file and then operates on it.
Sorry, I've edited my earlier post to be clearer - it is the Merge stage that uses hashed files and is what is failing, not the Aggregator.
-craig

"You can never have too many knives" -- Logan Nine Fingers
bhanuvlakshmi
Participant
Posts: 23
Joined: Fri Oct 22, 2010 7:08 am

Post by bhanuvlakshmi »

Hi, I am still facing the same problem. With a smaller amount of data it works fine, but when tried with 7 lakh rows it gives the following logs and errors:

"Invalid row termination character configuration.
Function 'input_str_to_row' failed
Function 'hashToRow' failed
Function 'get_next_output_row' failed
Error occurred while deleting temporary file."
Please help me with this.

We had server memory issues previously, which were corrected, but the same error is occurring again.
Thanks & Regards,
Bhanu
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

If you have "too much data" for the Merge stage to handle (and it sounds like you do), you are going to have to change your job design and take an alternate approach. For example, store one of the data sources in a reference hashed file and use the other as your stream input, with the 'left outer join' being realized by not checking the success of the lookup. A rough sketch of the idea follows.
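This is a sketch only, with made-up key and column names; in the real job the dictionary plays the part of the reference hashed file and the loop is the stream link through the Transformer:

# Sketch of the suggestion above: one source becomes the reference lookup
# (a hashed file in the real job), the other is streamed through, and the
# left outer join falls out of not rejecting rows whose lookup fails.
# Key and column names are made up for illustration.

reference = {"K1": {"desc": "first"}, "K2": {"desc": "second"}}   # reference hashed file
stream = [{"key": "K1", "qty": 3}, {"key": "K9", "qty": 8}]       # stream input

output = []
for row in stream:
    match = reference.get(row["key"])             # lookup against the reference
    out = dict(row)
    out["desc"] = match["desc"] if match else ""  # keep the row even when the lookup fails
    output.append(out)                            # = left outer join semantics

print(output)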
-craig

"You can never have too many knives" -- Logan Nine Fingers