
Abnormal termination of stage (aggregator stage)

Posted: Wed Aug 11, 2010 6:18 pm
by JPalatianos
Hi,
We have a straightforward job that flows as follows:

ODBC==>Transformer===>Aggregator====>ODBC

When the job runs I receive the following warning:
Abnormal termination of stage cfsmrt_acty_to_cost_objct_fact..Cfsmrt_acty_to_cost_objct_fact_agr detected

I reset the job and saw the following in the "From previous run" log entries:
From previous run
DataStage Job 8 Phantom 3296
Job Aborted after 1 errors logged.
Program "DSD.WriteLog": Line 253, Abort.
Attempting to Cleanup after ABORT raised in job cfsmrt_acty_to_cost_objct_fact.

DataStage Phantom Aborting with @ABORT.CODE = 1

and

From previous run
DataStage Job 8 Phantom 8788
Program "DSD.BCIPut": Line 221, Exception raised in GCI subroutine:
Access violation.

I figured I would test a bit for the developers and changed the target ODBC stage to a Sequential File stage; the job then ran fine without warnings.
Thanks - - John

Posted: Wed Aug 11, 2010 8:02 pm
by kris007
See if this helps

Posted: Wed Aug 11, 2010 8:10 pm
by ray.wurlod
I suspect that this is an "out of memory" situation. Make sure your data are sorted on the grouping keys (include ORDER BY in the ODBC stage) and mention that the data are sorted in the Aggregator stage properties.
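For illustration, the ORDER BY Ray describes would go in the ODBC stage's source SQL and must list the Aggregator's grouping keys, in the same order. The table and column names below are hypothetical, just a sketch of the pattern:

```sql
-- Hypothetical source query. Sorting on the grouping keys lets the
-- Aggregator flush each group as soon as the key value changes,
-- instead of holding every group in memory until end of data.
SELECT acty_cd, cost_objct_id, amt
FROM   acty_to_cost_objct
ORDER  BY acty_cd, cost_objct_id
```

With the data arriving pre-sorted, you then declare that sort order on the grouping columns in the Aggregator stage's properties so it uses the streaming code path.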

Posted: Thu Sep 02, 2010 8:09 pm
by JPalatianos
I did sort up front in the ODBC and indicated the Sort order in the Aggregator. I played around a bit and changed the job from ODBC==>Transformer===>Aggregator====>ODBC to
ODBC==>Transformer===>Aggregator====>IPC====>ODBC

and that works fine.

My question is: why would this work when enabling inter-process row buffering at the project level does not do the same thing?

I have posed the same question to IBM in a PMR and have not heard back yet.
Thanks - - John

Posted: Thu Sep 02, 2010 8:41 pm
by ray.wurlod
Cue "Twilight Zone" theme...

An IPC stage (theoretically at least) is no more than a visual manifestation of inter-process row buffering - it's only there to allow you to use buffer sizes and timeouts different from the job default. So why one of your jobs works and the other doesn't is a delightful mystery.

Posted: Thu Sep 02, 2010 9:10 pm
by chulett
I wonder if simply putting a Transformer between the two (rather than the IPC stage) would have done the trick as well?

Posted: Thu Sep 09, 2010 6:25 am
by JPalatianos
Just received a response from IBM and wanted to share:

Hello John,

I posted your question on our internal forum and received the following:

There is no documented difference between these approaches that I am aware of, beyond the fact that placing IPC stages explicitly lets you decide where the additional processes are created; for example, you can place them between passive stages, which is not possible with inter-process row buffering. This is explained in the Server job guide. That might be the difference, or perhaps different memory settings are in effect for the IPC stage versus the project-level inter-process buffering. You can also monitor the number of processes using ps to see how many processes the job creates in each version.