Fatal: Parallel job reports failure (code 139)

Post questions here relative to DataStage Enterprise/PX Edition for such areas as Parallel job design, Parallel datasets, BuildOps, Wrappers, etc.

Moderators: chulett, rschirm, roy

smuppidi
Premium Member
Posts: 11
Joined: Fri Mar 17, 2006 9:00 am

Fatal: Parallel job reports failure (code 139)

Post by smuppidi »

Hello All,

I am trying to extract data from Teradata using the Teradata Enterprise Stage and I am getting the following error:

info: Contents of phantom output file =>
RT_SC356/OshExecuter.sh [20]: 1404 Memory fault

Fatal: Parallel job reports failure (code 139)

Simple SQLs run fine, but SQLs with a subselect abort with that message. There is a post out there on this, but no clear direction.
Any insight on this would be helpful.

Thanks,
Satish.
roy
Participant
Posts: 2598
Joined: Wed Jul 30, 2003 2:05 am
Location: Israel

Post by roy »

Hi,
Have you contacted your support provider?
:( I don't have a Teradata installation to test against :(

Did you verify that your configuration file has the correct memory definitions?
Ask your sysadmins to compare the memory segments defined in the uvconfig file against the ones actually in use on the machine; there is a chance something is wrong in that direction.
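Something along these lines could be a starting point for that comparison (a rough sketch only, not an official IBM check; the DSHOME default, the tunable names and the Linux-only shmmax lookup are assumptions, so let your admins decide which values actually matter):

#!/usr/bin/env python3
"""Rough sketch: list a few uvconfig tunables next to the kernel's
shared-memory settings. Paths and tunable names are assumptions."""

import os
import re
import subprocess

DSHOME = os.environ.get("DSHOME", "/opt/IBM/InformationServer/Server/DSEngine")
UVCONFIG = os.path.join(DSHOME, "uvconfig")

# Tunables we are *guessing* are worth eyeballing; adjust as needed.
WATCH = ("MFILES", "T30FILE", "GLTABSZ", "RLTABSZ")

def read_uvconfig(path):
    """Pull the watched name/value pairs out of uvconfig, ignoring comments."""
    values = {}
    try:
        with open(path) as fh:
            for line in fh:
                line = line.split("#", 1)[0].strip()
                m = re.match(r"(\w+)\s+(\S+)$", line)
                if m and m.group(1) in WATCH:
                    values[m.group(1)] = m.group(2)
    except OSError as exc:
        print(f"could not read {path}: {exc}")
    return values

def kernel_shmmax():
    """Linux-specific: the current shared-memory segment size limit."""
    try:
        with open("/proc/sys/kernel/shmmax") as fh:
            return int(fh.read().strip())
    except (OSError, ValueError):
        return None

if __name__ == "__main__":
    print("uvconfig tunables:", read_uvconfig(UVCONFIG))
    print("kernel shmmax    :", kernel_shmmax())
    # ipcs -m lists the shared-memory segments actually allocated on the box.
    subprocess.run(["ipcs", "-m"])

The output is only meant to give your sysadmins something concrete to compare against the machine's actual settings.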

Please post the solution.

IHTH,
Roy R.
Time is money but when you don't have money time is all you can afford.

Search before posting:)

Join the DataStagers team effort at:
http://www.worldcommunitygrid.org
kumar_s
Charter Member
Posts: 5245
Joined: Thu Jun 16, 2005 11:00 pm

Post by kumar_s »

Parallel job reports failure (code 139)... Hmm... I got the same once.
What is the length of the query you execute?
I once got the same error with a DB2 database. The query ran fine from the command line, but when I ran it through an Execute Command activity it gave the same error. After some trial and error we found it was due to the length of the query; we roughly eliminated the extra whitespace and it worked. I didn't note the exact length.
But I won't say this is the only possible reason. In my case a SIGKILL was raised because of this, and hence the job was aborted. A rough illustration of the trimming is below.
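The whitespace trimming amounted to something like this sketch (illustrative only; the helper name and the sample query are made up, and note it also collapses whitespace inside quoted literals, so review the result before using it):

import re

def squeeze_whitespace(sql: str) -> str:
    """Collapse runs of spaces, tabs and newlines into single spaces.

    Rough workaround only: it shortens the text handed to the stage, but it
    will also squash whitespace inside quoted literals, so check the output.
    """
    return re.sub(r"\s+", " ", sql).strip()

# Hypothetical padded subselect -- the trimmed version is much shorter.
query = """
    SELECT   cust_id,
             cust_name
    FROM     customers
    WHERE    cust_id IN   ( SELECT cust_id
                            FROM   orders )
"""
print(len(query), "->", len(squeeze_whitespace(query)))
print(squeeze_whitespace(query))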
Impossible doesn't mean 'it is not possible' actually means... 'NOBODY HAS DONE IT SO FAR'
smuppidi
Premium Member
Posts: 11
Joined: Fri Mar 17, 2006 9:00 am

Post by smuppidi »

Thanks all for your suggestions. I got it fixed. It was due to a fault with the internal datasets that DataStage creates while extracting the source data from Teradata. The admin team took care of that. But this code may also appear in some cases due to extra spaces in the query.

Satish.
kumar_s
Charter Member
Posts: 5245
Joined: Thu Jun 16, 2005 11:00 pm

Post by kumar_s »

"It was due to fault with internal datasets that datastage creates "
Do you mean virtual dataset?
May I konw what where the steps taken to overcome this?
Impossible doesn't mean 'it is not possible' actually means... 'NOBODY HAS DONE IT SO FAR'
smuppidi
Premium Member
Posts: 11
Joined: Fri Mar 17, 2006 9:00 am

Post by smuppidi »

Kumar, it seems the Datasets were dropped and recreated.
kumar_s
Charter Member
Posts: 5245
Joined: Thu Jun 16, 2005 11:00 pm

Post by kumar_s »

smuppidi wrote:Kumar, it seems the Datasets were dropped and recreated.
Thanks for the response.
Impossible doesn't mean 'it is not possible' actually means... 'NOBODY HAS DONE IT SO FAR'
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia
Contact:

Post by ray.wurlod »

Virtual Data Sets must be re-created; they only exist while the job is running. Are we talking about persistent Data Sets here? Was the problem therefore one of design, where you were relying upon the existence of a Data Set that was being overwritten? There's nothing you can do in a DataStage job that will actually drop a Data Set.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.