My job performs a simple task: loading a sequential file from an ODBC stage.
It is aborting with the following error message:
Contents of phantom output file =>
RT_SC256/OshExecuter.sh: line 20: 16908 Segmentation fault $APT_ORCHHOME/bin/osh "$@" -f $oshscript >$oshpipe 2>&1
Parallel job reports failure (code 139)
I have tried stopping and starting the server, but that hasn't resolved the issue.
Thanks in advance!
- Participant
- Posts: 11
- Joined: Tue Nov 21, 2006 3:08 am
- Location: Pune
- Participant
- Posts: 612
- Joined: Thu May 03, 2007 4:59 am
- Location: Melbourne
A segmentation fault can happen for many reasons, such as running out of memory, or the data being larger than the program can handle ... For more, refer to THIS link.
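If memory is a suspect, a quick first check on the engine host is worth doing before digging further. A minimal sketch using standard Linux commands; nothing here is DataStage-specific:

```shell
# Free physical memory and swap on the engine host, in megabytes.
# Very low "free" + "swap" values at job run time point to exhaustion.
free -m

# Resource limits of the current shell -- the same limits the
# phantom process inherits when it launches osh.
ulimit -a
```

Comparing these figures at the time the job runs (not just when the host is idle) is what matters, since other concurrent jobs share the same pool.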
Joshy George
<a href="http://www.linkedin.com/in/joshygeorge1" ><img src="http://www.linkedin.com/img/webpromo/bt ... _80x15.gif" width="80" height="15" border="0"></a>
- Participant
- Posts: 11
- Joined: Tue Nov 21, 2006 3:08 am
- Location: Pune
JoshGeorge wrote:Segmentation fault can be because of many reasons like - running out of memory or there may be more than the program can handle ... For more refer THIS link.

The data being transferred into the sequential file is barely 33 MB, so I don't think the job is aborting because it ran out of memory. Also, the job has performed fine earlier with a similar amount of data.
- Participant
- Posts: 612
- Joined: Thu May 03, 2007 4:59 am
- Location: Melbourne
So what changed in your environment? Did you check and confirm that before concluding this is not a memory issue but something else?
prajish_ap wrote:Also, the job has been performing fine earlier with similar amount of data.
Joshy George
<a href="http://www.linkedin.com/in/joshygeorge1" ><img src="http://www.linkedin.com/img/webpromo/bt ... _80x15.gif" width="80" height="15" border="0"></a>
- Participant
- Posts: 11
- Joined: Tue Nov 21, 2006 3:08 am
- Location: Pune
JoshGeorge wrote:So what changed in your environment? Did you check and confirm that before concluding this is not a memory issue but something else?

No change has been made to the environment.
prajish_ap wrote:Also, the job has been performing fine earlier with similar amount of data.
Could you please explain what you mean by a change in environment?
- Participant
- Posts: 54607
- Joined: Wed Oct 23, 2002 10:52 pm
- Location: Sydney, Australia
- Contact:
ANYTHING that has changed. For example, are you now running under a different user ID? Has the user's ulimit changed? What stage types are in your job? What other tasks are running at the same time? (These are just examples; you need to be the detective.)
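The ulimit question above can be checked directly. A minimal sketch, assuming the engine runs as a user named dsadm (an assumed name here; substitute your actual DataStage user):

```shell
# Limits for your current shell
ulimit -a

# Limits as seen by the engine user.
# dsadm is an assumed user name; the check is skipped if it
# does not exist on this host.
if id dsadm >/dev/null 2>&1; then
    su - dsadm -c 'ulimit -a'
fi

# A low stack size (-s) or data segment limit (-d) is a common
# cause of a SIGSEGV, which the shell reports as exit code 139
ulimit -s
ulimit -d
```

If the limits differ between the two users, or have been lowered since the job last ran cleanly, that is a strong lead.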
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.