Hi,
I've searched extensively, but none of the existing discussions seem to match my case. I keep getting "Parallel job reports failure (code 139)" erratically. When I view the performance statistics in the Designer, the job shows all green, even though it actually aborted with the error. Here's a snapshot of the job:
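For what it's worth, exit codes above 128 on Unix usually mean the process was killed by the signal numbered code minus 128, so 139 decodes to signal 11 (SIGSEGV, a segmentation fault). A minimal Python sketch of that decoding:

```python
# Decode a Unix exit status above 128 into the signal that killed the process.
# 139 = 128 + 11, and signal 11 is SIGSEGV (segmentation fault).
import signal

exit_code = 139
if exit_code > 128:
    sig = exit_code - 128
    print(f"killed by signal {sig}: {signal.Signals(sig).name}")
```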
Any help appreciated
Jerome
Data Integration Consultant at AWS
Life is really simple, but we insist on making it complicated.
Learn patience! This is an all-volunteer site where people post as and when they can. Yesterday, for example, I had a breakfast meeting with management, a busy day at work, and a training session at IBM after work. Not much time for DSXchange.
Often the issue is that the job monitor interferes with job initialization. You can test whether this is your problem by setting APT_NO_JOBMON=1 to disable job monitoring entirely. If it is, you then have the option of adjusting only the time-based monitoring by setting APT_MONITOR_TIME=5, or of setting APT_DISABLE_FASTALLOC=1, either of which has resolved this error in some cases.
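A quick way to confirm which of those variables are actually set in the engine's environment (a minimal sketch; it assumes you run it from a shell that has sourced dsenv, and uses only the variable names mentioned above):

```python
# Print which of the monitor-related APT_* variables are set in the current
# environment. Assumes a shell that has sourced the DataStage dsenv file.
import os

for var in ("APT_NO_JOBMON", "APT_MONITOR_TIME", "APT_DISABLE_FASTALLOC"):
    print(f"{var}={os.environ.get(var, '<not set>')}")
```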
There are other possibilities, for example jobs containing Netezza stages, but those are less likely in your case.
Search is your friend.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
Hi Ray,
I'm not being impatient. I waited for a day and found that the post was getting lost in the archives. My apologies if I came across as 'pushy'.
I've started breaking the job into parts to determine which stage is causing the issue. The problem is that the error is so erratic: it occurs once every 3-4 days on average, so the iterations may take a while. I will definitely find the solution and add it to the already existing ocean of solutions for this error!
Thank you
Jerome
It could be related to a problem with your odbc.ini file setup.
If you are using an ODBC connection, test it to ensure it is good; one quick way to do that outside DataStage is sketched below.
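A sketch only, assuming the pyodbc package is installed; the DSN name and credentials are placeholders:

```python
# Connect through the DSN defined in odbc.ini to confirm it works.
# "MY_DSN", "user" and "secret" are placeholders, not real values.
import pyodbc

try:
    conn = pyodbc.connect("DSN=MY_DSN;UID=user;PWD=secret", timeout=10)
    print("Connection OK, driver reports:", conn.getinfo(pyodbc.SQL_DBMS_NAME))
    conn.close()
except pyodbc.Error as exc:
    print("Connection failed:", exc)
```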
It could also be related to the length of the file name; reduce the file name length and try again.
This error basically occurs when DataStage is unable to write the job log to its file. So try taking a copy of the current job and running that. Hope this works!
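If you want to rule out a disk or permission problem first, here is a quick check (a sketch only; the project path below is a placeholder, adjust it to wherever your project directory lives):

```python
# Confirm the project directory holding the job's log files is writable
# and has free space. The path below is a placeholder, not a real install path.
import os
import shutil

project_dir = "/opt/IBM/InformationServer/Server/Projects/MYPROJ"  # placeholder
print("writable:", os.access(project_dir, os.W_OK))
print("free MB:", shutil.disk_usage(project_dir).free // (1024 * 1024))
```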