XML job fails for more records

suja.somu
Participant
Posts: 79
Joined: Thu Feb 07, 2013 10:51 pm

Post by suja.somu »

The XML job has multiple source links, with an HJOIN step performed on each source link. The job ran fine producing 2000 records of XML. The XML schema is complex, with over 250 elements and multiple complex groups that are unbounded.

The job fails once the target exceeds 2000 records, with the errors below.

XML_22,0: 2014-03-07 14:58:37,790 Error [com.ibm.e2.applications.runner.E2AssemblyRunner] [] java.util.concurrent.RejectedExecutionException

XML_22,0: Failure during execution of operator logic.

node_node2: Player 15 terminated unexpectedly.

buffer(4),0: Error in writeBlock - could not write 130537
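
For context, java.util.concurrent.RejectedExecutionException is a generic JVM error raised when an executor can no longer accept work, typically because its task queue is full or it is shutting down. The sketch below is plain Java, not DataStage code; it only illustrates the mechanism the XML stage's embedded JVM is reporting, and the pool and queue sizes are arbitrary.

Code:

import java.util.concurrent.*;

public class RejectedDemo {
    public static void main(String[] args) {
        // One worker thread, a queue of capacity one, and the default
        // AbortPolicy, which throws RejectedExecutionException on overflow.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(1));
        try {
            for (int i = 0; i < 3; i++) {
                // Task 0 runs, task 1 waits in the queue, task 2 is rejected.
                pool.execute(() -> {
                    try { Thread.sleep(1000); } catch (InterruptedException e) { }
                });
            }
        } catch (RejectedExecutionException e) {
            System.out.println("Executor refused more work: " + e);
        } finally {
            pool.shutdown();
        }
    }
}

If the stage's internal pool or heap saturates as volume grows, this is the kind of exception that surfaces, which would explain why the job only fails beyond a certain record count.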


Perf-tuning steps tried so far:

1. Changed the HJOIN to perform an in-memory join.
2. Changed the heap size and stack size in the job (see the note below).
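
If you change heap settings, it is worth confirming the JVM actually received them, since options applied in the wrong scope are silently ignored. Where the XML stage's embedded JVM picks up its options varies by Information Server version, so check your documentation; as a generic sanity check, a standalone class like this, run with the same JVM options you believe the stage is getting, prints the effective maximum heap:

Code:

public class HeapCheck {
    public static void main(String[] args) {
        // Reports the maximum heap this JVM will try to use (the -Xmx value),
        // in megabytes.
        long maxMb = Runtime.getRuntime().maxMemory() / (1024 * 1024);
        System.out.println("Max heap: " + maxMb + " MB");
    }
}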


Please suggest steps to resolve this issue.
eostic
Premium Member
Posts: 3838
Joined: Mon Oct 17, 2005 9:34 am

Post by eostic »

If the Stage is otherwise working perfectly but is just having volume issues, I would contact your support provider. That sounds like something that should be reviewed...

Ernie
Ernie Ostic

blogit!
<a href="https://dsrealtime.wordpress.com/2015/0 ... ere/">Open IGC is Here!</a>
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia
Contact:

Post by ray.wurlod »

Although it seems unlikely for just over 2000 records, you may not have enough memory to perform all of your HJOINs in memory. What happens if you disable this setting?
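
A rough back-of-envelope estimate can test this. All of the per-record numbers below are assumptions for illustration; substitute your real averages, but note how quickly unbounded groups multiply the footprint:

Code:

public class JoinMemoryEstimate {
    public static void main(String[] args) {
        long rows = 2000;          // record count at which the job starts failing
        int fanout = 20;           // assumed average occurrences of each unbounded group
        int elements = 250;        // elements per record (from the schema)
        int bytesPerElement = 64;  // assumed average size, including object overhead
        int joins = 4;             // assumed number of HJOINs holding data at once

        long perJoin = rows * fanout * elements * bytesPerElement;
        System.out.printf("Per join:  ~%d MB%n", perJoin / (1024 * 1024));
        System.out.printf("All joins: ~%d MB%n", joins * perJoin / (1024 * 1024));
    }
}

With these hypothetical numbers, each join holds roughly 610 MB, so several concurrent in-memory HJOINs can exhaust the stage's heap well before overall CPU or system memory look stressed.
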
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
suja.somu
Participant
Posts: 79
Joined: Thu Feb 07, 2013 10:51 pm

Post by suja.somu »

Thanks for the reply Ray.

There are only two options for the HJOIN stage join type:
1. Disk-based
2. In-memory

Earlier I used the disk-based join, and the job aborted frequently. IBM's documentation consistently suggests using the in-memory join, so I switched to in-memory; with that, the job can process only up to 2000 records of output.

I monitored CPU and memory usage while the job ran; neither reached 100%. Usage was quite normal.


I tried enabling APT_DUMP_SCORE, but it does not add any detailed breakdown to the job log.
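
For reference, APT_DUMP_SCORE takes effect only when it is defined as an environment variable for the job (in the job parameters, or project-wide via the Administrator) and set to True; the score then appears in the log as a main_program message listing operators, datasets, and node assignments. Two related diagnostics that may also help here, assuming your version supports them, report per-player timing and memory:

Code:

APT_DUMP_SCORE=True
APT_PM_PLAYER_TIMING=True
APT_PM_PLAYER_MEMORY=True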

I could not trace where the issue is. Please help.
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

As Ernie noted, this should be going back to your support provider.
-craig

"You can never have too many knives" -- Logan Nine Fingers