Funnel did not consume all of the available data

Post questions here relative to DataStage Enterprise/PX Edition for such areas as Parallel job design, Parallel datasets, BuildOps, Wrappers, etc.

Moderators: chulett, rschirm, roy

Ananda
Participant
Posts: 29
Joined: Mon Sep 20, 2004 12:05 am

Funnel did not consume all of the available data

Post by Ananda »

I am facing an error in a job that involves Data Sets, a Funnel stage and the UDBLoad PX stage.

The job design is simple.

Data extracted from 5 dataset files is combined using a Funnel stage; the Funnel Type used is "Continuous". The combined data is then loaded into a DB2 table using the UDBLoad PX stage, which is set to REPLACE the existing data.
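My understanding is that a Continuous funnel simply takes records from whichever input has data available, cycling across the inputs, rather than reading one input to completion before moving to the next. A rough Python sketch of that ordering (purely illustrative, not DataStage code; the real stage gives no ordering guarantee across partitions):

Code: Select all

def continuous_funnel(*inputs):
    """Roughly mimic a Continuous funnel: take the next available record
    from each input in turn, skipping inputs that are exhausted, until
    every input is empty."""
    iterators = [iter(src) for src in inputs]
    while iterators:
        for it in list(iterators):
            try:
                yield next(it)
            except StopIteration:
                iterators.remove(it)

# Five small "datasets" funnelled into a single stream
ds = [["ds%d_rec%d" % (i, j) for j in range(3)] for i in range(1, 6)]
for rec in continuous_funnel(*ds):
    print(rec)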

This has been running fine in the DEVELOPMENT environment. After the move to QA, I am getting the following errors.

Occurred: 2:19:16 AM On date: 3/10/2010 Type: Fatal
Event: UDB_MSTR_PFL_ROWID_UPD,0: Fatal Error: Fatal: Internal Error: Function 'get_stage_prop' failed (...)

Occurred: 2:19:16 AM On date: 3/10/2010 Type: Fatal
Event: Funnel_155,2: APT_IOPort::receiveAcks(0,force): read on [fd 20: L200.133.7.30:11017, R11.148.7.30:55285] failed - 104 (Connection reset by peer)

Occurred: 2:19:16 AM On date: 3/10/2010 Type: Warning
Event: Funnel_155,2: An operator downstream from node3[op5,p2], parallel Funnel_155 did not consume all of the available data.

Occurred: 2:19:16 AM On date: 3/10/2010 Type: Fatal
Event: Funnel_155,1: APT_IOPort::receiveAcks(0,force): read on [fd 20: L205.148.7.30:11016, R204.148.7.30:45837] failed - 104 (Connection reset by peer)

Occurred: 2:19:16 AM On date: 3/10/2010 Type: Warning
Event: Funnel_155,1: An operator downstream from node2[op5,p1], parallel Funnel_155 did not consume all of the available data.

Occurred: 2:19:16 AM On date: 3/10/2010 Type: Fatal
Event: node_node1: Player 2 terminated unexpectedly.

Occurred: 2:19:21 AM On date: 3/10/2010 Type: Fatal
Event: main_program: APT_PMsectionLeader(1, node1), player 2 - Unexpected exit status 1.

Please be informed that I am using 3 nodes.

Is it because too many datasets are passed to the funnel? Each dataset has fewer than a million records. If this is related to file size, how do I decide on the size of the multiple files being passed to the funnel?

Do I need to split this into 2 jobs, with the first job handling 3 datasets and the second job handling the remaining 2?
If you don't fail now and again, it's a sign you're playing it safe.
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia
Contact:

Post by ray.wurlod »

"Connection reset by peer" indicates that you are (were at the time) having network issues. You need to resolve these.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.