Hi,
We have a straightforward job that flows as follows:
ODBC==>Transformer===>Aggregator====>ODBC
When the job runs I receive the following warning:
Abnormal termination of stage cfsmrt_acty_to_cost_objct_fact..Cfsmrt_acty_to_cost_objct_fact_agr detected
I reset the job and see the following in the "From previous run" log entries:
From previous run
DataStage Job 8 Phantom 3296
Job Aborted after 1 errors logged.
Program "DSD.WriteLog": Line 253, Abort.
Attempting to Cleanup after ABORT raised in job cfsmrt_acty_to_cost_objct_fact.
DataStage Phantom Aborting with @ABORT.CODE = 1
and
From previous run
DataStage Job 8 Phantom 8788
Program "DSD.BCIPut": Line 221, Exception raised in GCI subroutine:
Access violation.
I figured I would run a quick test for the developers and changed the last ODBC stage to a Sequential File stage, and the job ran fine without warnings.
Thanks - - John
Abnormal termination of stage (aggregator stage)
I suspect that this is an "out of memory" situation. Make sure your data are sorted on the grouping keys (include an ORDER BY clause in the ODBC stage's SQL) and assert that sort order in the Aggregator stage properties.
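To see why the sort matters, here is a minimal Python sketch (not DataStage code; the row layout is illustrative) contrasting streaming aggregation over key-sorted input with hash aggregation over unsorted input. The sorted version emits each group as soon as its key changes, so it holds only one group at a time; the unsorted version must keep every group's running total until the last row arrives, which is where an Aggregator can run out of memory.

```python
from itertools import groupby

# Hypothetical rows: (grouping_key, value) pairs.
def aggregate_sorted(rows):
    # Input sorted on the key: each group is summed and emitted as soon
    # as the key changes, so only one group is held in memory at a time.
    for key, group in groupby(rows, key=lambda r: r[0]):
        yield key, sum(v for _, v in group)

def aggregate_unsorted(rows):
    # Unsorted input: every group's running total must be kept until the
    # last row arrives; with many distinct keys this can exhaust memory.
    totals = {}
    for key, value in rows:
        totals[key] = totals.get(key, 0) + value
    return totals
```

Both produce the same totals; only the memory profile differs, which is why telling the Aggregator the data are pre-sorted changes its behaviour.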
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
I did sort up front in the ODBC stage and indicated the sort order in the Aggregator. I experimented a bit and changed the job from ODBC==>Transformer===>Aggregator====>ODBC to
ODBC==>Transformer===>Aggregator====>IPC====>ODBC
and that works fine.
My question is why would this work and having the Inter Process row buffering enabled on the project level not do the same?
I have posed the same question to IBM in a PMR and have not heard back yet.
Thanks - - John
Cue "Twilight Zone" theme...
An IPC stage (theoretically at least) is no more than a visual manifestation of inter-process row buffering - it's only there to allow you to use buffer sizes and timeouts different from the job default. So why one of your jobs works and the other doesn't is a delightful mystery.
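DataStage's internals aren't public, but conceptually an IPC stage is a bounded buffer between two stage processes. A rough Python analogue (names and the buffer size are illustrative, not anything DataStage exposes) shows the decoupling: producer and consumer run in separate processes and overlap, each blocking only when the buffer fills or drains.

```python
from multiprocessing import Process, Queue

def upstream(buf, rows):
    # "Aggregator" side: write rows into the bounded buffer, then send
    # a sentinel to mark end-of-data.
    for row in rows:
        buf.put(row)          # blocks if the buffer is full
    buf.put(None)

def downstream(buf, result):
    # "ODBC" side: read rows as they arrive, in its own process.
    total = 0
    while (row := buf.get()) is not None:   # blocks if the buffer is empty
        total += row
    result.put(total)

if __name__ == "__main__":
    buf = Queue(maxsize=128)   # illustrative size, like an IPC stage's buffer setting
    result = Queue()
    p = Process(target=upstream, args=(buf, range(100)))
    c = Process(target=downstream, args=(buf, result))
    p.start(); c.start()
    p.join(); c.join()
    print(result.get())  # prints 4950
```

In principle project-level inter-process row buffering inserts the same kind of buffer between active stages automatically, which is what makes the observed difference puzzling.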
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
Just received a response from IBM and wanted to share:
Hello John,
I posted your question on our internal forum and received the following:
There is no documented difference between these approaches that I am aware of, beyond the fact that if you place IPC stages yourself you can decide where the additional processes are created; for example, you can put one between passive stages, which is not possible with inter-process row buffering. This is explained in the Server job guide. That might be the difference, or they may be using different memory settings on the IPC stage versus the inter-process buffering. They can also use ps to see how many processes the job creates in each version.
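The ps suggestion is easy to script. A small Python helper (illustrative only, not an IBM tool; the pattern shown is the job name from this thread) counts processes whose ps line contains a pattern, so the two job designs can be compared while each is running:

```python
import subprocess

def count_processes(pattern):
    # List all processes with ps and count those whose command line
    # contains the given pattern (e.g. the job name or "phantom").
    out = subprocess.run(["ps", "-ef"], capture_output=True, text=True).stdout
    return sum(1 for line in out.splitlines() if pattern in line)

# Example: run while each version of the job executes, e.g.
# count_processes("cfsmrt_acty_to_cost_objct_fact")
```

If the IPC version really does create an extra process where the buffering version does not, the counts should differ by one.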