Segmentation fault (core dumped)

Post questions here relative to DataStage Enterprise/PX Edition for such areas as Parallel job design, Parallel datasets, BuildOps, Wrappers, etc.

Moderators: chulett, rschirm, roy

durgaps
Participant
Posts: 74
Joined: Sat Jul 08, 2006 4:09 am
Location: Melbourne, Australia

Segmentation fault (core dumped)

Post by durgaps »

Job Design:

                         ODBC
                           |
SEQ_FILE ===> TRANS ===> LOOKUP ===> TRANS ===> SEQ_FILE


This job was running fine a couple of days ago, but now it aborts on every third or fourth run with the following error messages:
CLI : [TID: B75F42A0]:[Tue Mar 20 15:14:32 2007] rdacutl.c:496: *** RDA_cliPrepareSQL: can't execute PREPARE stmt_id_1002 STMT_OPTIONS 'op=2' FROM :H
\
Contents of phantom output file =>
RT_SC191/OshExecuter.sh: line 20: 1138 Segmentation fault (core dumped) $APT_ORCHHOME/bin/osh "$@" -f $oshscript >$oshpipe 2>&1

Any idea what might be causing this segmentation fault error?

I did a search and found a couple of posts pointing to a hashed file corruption problem. In that case, how do we find out which hashed file is corrupted, and how do we fix it?

Is there any other reason why this could happen?

Thanks,
Durga Prasad
durgaps
Participant
Posts: 74
Joined: Sat Jul 08, 2006 4:09 am
Location: Melbourne, Australia

Post by durgaps »

There were formatting issues with the job design in the previous post. Please check the following.

Job Design:

  ODBC
    |
LOOKUP ===> TRANS ===> SEQ_FILE
    ^
    |
  TRANS
    ^
    |
SEQ_FILE
Durga Prasad
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia

Post by ray.wurlod »

There are no hashed files in parallel jobs.

A segmentation fault is an attempt to access an address in memory that either does not exist or is one that you do not own. It can, for example, be caused by trying to shoehorn 15 characters into a Char(12) column, or sometimes by trying to allocate null to a non-null column. In parallel jobs, processing large enough volumes, it might indicate that you have exhausted virtual memory.
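
As an illustration of the fixed-width case (this is only a C++ analogy, not anything the DataStage engine actually does): a Char(12) column behaves like a 12-byte buffer, and copying 15 characters into it writes past the end, which is exactly the kind of invalid memory access that produces a segmentation fault.

#include <cstring>
#include <cstdio>

int main()
{
    // Analogy only: a Char(12) field is modelled here as a 12-byte buffer.
    char char12[12];

    // 15 characters (plus the terminating NUL) -- too big for the buffer.
    const char *incoming = "ABCDEFGHIJKLMNO";

    // Undefined behaviour: strcpy writes past the end of char12 and
    // corrupts adjacent memory; the process may die with SIGSEGV.
    std::strcpy(char12, incoming);

    std::printf("%s\n", char12);
    return 0;
}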

In your case, the problem appears to have occurred when opening the ODBC stage, which includes asking the database to prepare the SQL statement. So perhaps it was the database's query optimizer that demanded too much memory.
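
For what it is worth, the failing call in the log is the driver's prepare step. The sketch below is a generic ODBC client, not the internals of the DataStage ODBC stage; the DSN, credentials, table and SQL are placeholders I have made up. Preparing the same statement from a standalone client like this (or from the database's own tools) is one way to separate a driver or statement problem from a memory problem in the job itself.

#include <sql.h>
#include <sqlext.h>
#include <cstdio>

int main()
{
    SQLHENV env = SQL_NULL_HENV;
    SQLHDBC dbc = SQL_NULL_HDBC;
    SQLHSTMT stmt = SQL_NULL_HSTMT;

    // Standard ODBC handle setup.
    SQLAllocHandle(SQL_HANDLE_ENV, SQL_NULL_HANDLE, &env);
    SQLSetEnvAttr(env, SQL_ATTR_ODBC_VERSION, (SQLPOINTER)SQL_OV_ODBC3, 0);
    SQLAllocHandle(SQL_HANDLE_DBC, env, &dbc);

    // "MyDSN", "user" and "pass" are placeholders, not values from the job.
    if (!SQL_SUCCEEDED(SQLConnect(dbc, (SQLCHAR *)"MyDSN", SQL_NTS,
                                  (SQLCHAR *)"user", SQL_NTS,
                                  (SQLCHAR *)"pass", SQL_NTS))) {
        std::fprintf(stderr, "connect failed\n");
        return 1;
    }

    SQLAllocHandle(SQL_HANDLE_STMT, dbc, &stmt);

    // This is the step the job's log shows failing: asking the database
    // (via the driver) to prepare the statement before execution.
    if (!SQL_SUCCEEDED(SQLPrepare(stmt,
            (SQLCHAR *)"SELECT ref_key, ref_value FROM ref_table", SQL_NTS))) {
        std::fprintf(stderr, "prepare failed\n");
    } else if (!SQL_SUCCEEDED(SQLExecute(stmt))) {
        std::fprintf(stderr, "execute failed\n");
    }

    SQLFreeHandle(SQL_HANDLE_STMT, stmt);
    SQLDisconnect(dbc);
    SQLFreeHandle(SQL_HANDLE_DBC, dbc);
    SQLFreeHandle(SQL_HANDLE_ENV, env);
    return 0;
}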

You need to perform a more detailed diagnosis using a simpler design.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
ag_ram
Premium Member
Posts: 524
Joined: Wed Feb 28, 2007 3:51 am

Post by ag_ram »

durgaps wrote: There were formatting issues with the job design in the previous post. Please check the following.

Job Design:

  ODBC
    |
LOOKUP ===> TRANS ===> SEQ_FILE
    ^
    |
  TRANS
    ^
    |
SEQ_FILE
Is it possible to purge the objects created for the Transformer, then recompile and run the job? I am assuming here that Transformers have C++ object files / executables created for them.
I suppose it should be possible to clear these objects, reset the job and run it all over again.
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia

Post by ray.wurlod »

That is easy. In Designer, instead of doing a regular compile, choose Force Compile from the File menu. This forces the Transformer stage work to be redone: the C++ source is regenerated, then compiled and linked.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.