Parallel job reports failure (code 139)

Post questions here relative to DataStage Enterprise/PX Edition for such areas as Parallel job design, Parallel datasets, BuildOps, Wrappers, etc.

Moderators: chulett, rschirm, roy

Post Reply
prajish_ap
Participant
Posts: 11
Joined: Tue Nov 21, 2006 3:08 am
Location: Pune

Parallel job reports failure (code 139)

Post by prajish_ap »

My job carries out a simple task of loading a sequential file from an ODBC Stage.
It is getting aborted giving the error message as

Contents of phantom output file =>
RT_SC256/OshExecuter.sh: line 20: 16908 Segmentation fault $APT_ORCHHOME/bin/osh "$@" -f $oshscript >$oshpipe 2>&1

Parallel job reports failure (code 139)

I have tried stopping and starting the server, but that isn't helping solve the issue :(


Thanks in advance!
JoshGeorge
Participant
Posts: 612
Joined: Thu May 03, 2007 4:59 am
Location: Melbourne

Post by JoshGeorge »

A segmentation fault can happen for many reasons - for example running out of memory, or data larger than the program can handle ... For more refer THIS link.
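As a side note on the exit code itself: 139 is 128 + 11, and signal 11 is SIGSEGV, which is why a segmentation fault surfaces as "code 139". A small sketch, assuming a POSIX shell:

```shell
# Exit code 139 = 128 + 11, where signal 11 is SIGSEGV (segmentation fault).
# Quick demonstration: deliver SIGSEGV to a child shell, then inspect its status.
sh -c 'kill -SEGV $$'
echo $?    # prints 139
```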
Joshy George
prajish_ap
Participant
Posts: 11
Joined: Tue Nov 21, 2006 3:08 am
Location: Pune

Post by prajish_ap »

JoshGeorge wrote:A segmentation fault can happen for many reasons - for example running out of memory, or data larger than the program can handle ... For more refer THIS link.
The data being transferred into the sequential file is hardly 33MB in size... I don't think the job is aborting because it is running out of memory. Also, the job has performed fine earlier with a similar amount of data.
JoshGeorge
Participant
Posts: 612
Joined: Thu May 03, 2007 4:59 am
Location: Melbourne

Post by JoshGeorge »

So what changed in your environment? Did you check to confirm before concluding that this is not a memory issue but something else?
prajish_ap wrote:Also, the job has been performing fine earlier with similar amount of data.
Joshy George
prajish_ap
Participant
Posts: 11
Joined: Tue Nov 21, 2006 3:08 am
Location: Pune

Post by prajish_ap »

JoshGeorge wrote:So what changed in your environment? Did you check to confirm before concluding that this is not a memory issue but something else?
prajish_ap wrote:Also, the job has been performing fine earlier with similar amount of data.
No change has been made to the environment...

Could you please explain what you mean by a change in the environment?
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia
Contact:

Post by ray.wurlod »

ANYTHING that has changed. For example, are you now running under a different user ID? Has the user's ulimit changed? What stage types are in your job? What other tasks are running at the same time? (These are just examples. You need to be the detective.)
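To make that checklist concrete, here is a small sketch of the kind of checks involved (assuming a POSIX shell, run as the same user ID that actually executes the job):

```shell
# Show all per-process resource limits for the current user.
# A low stack or data segment limit can surface as SIGSEGV in osh.
ulimit -a

# Individual limits worth comparing against a known-good environment:
ulimit -s    # stack size (KB)
ulimit -d    # data segment size (KB)
ulimit -n    # max open file descriptors

# Confirm which user the job is really running under:
id -un
```

Comparing this output between the environment where the job worked and the one where it now aborts is one way to spot what changed.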
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
Post Reply