
Segmentation fault (core dumped)

Posted: Mon Mar 19, 2007 11:38 pm
by durgaps
Job Design:

ODBC
^
|
|
SEQ_FILE ===> TRANS ===> LOOKUP ===> TRANS ===> SEQ_FILE


This job was running fine a couple of days ago, and now it aborts every third or fourth run with the following error messages:
CLI : [TID: B75F42A0]:[Tue Mar 20 15:14:32 2007] rdacutl.c:496: *** RDA_cliPrepareSQL: can't execute PREPARE stmt_id_1002 STMT_OPTIONS 'op=2' FROM :H
\
Contents of phantom output file =>
RT_SC191/OshExecuter.sh: line 20: 1138 Segmentation fault (core dumped) $APT_ORCHHOME/bin/osh "$@" -f $oshscript >$oshpipe 2>&1

Any idea about the segmentation fault error?

I did a search and found a couple of posts pointing to a hashed file corruption problem. In that case, how do we find out which hashed file is corrupted, and how do we fix it?

Is there any other reason why this could happen?

Thanks,

Posted: Tue Mar 20, 2007 12:11 am
by durgaps
There were formatting issues with the job design in the previous post. Please check the following.

Job Design:

ODBC
|
LOOKUP ===> TRANS ===> SEQ_FILE
^
|
TRANS
^
|
SEQ_FILE

Posted: Tue Mar 20, 2007 6:37 am
by ray.wurlod
There are no hashed files in parallel jobs.

A segmentation fault is an attempt to access an address in memory that either does not exist or that you do not own. It can, for example, be caused by trying to shoehorn 15 characters into a Char(12) column, or sometimes by trying to assign NULL to a non-nullable column. In parallel jobs processing large enough volumes, it might indicate that you have exhausted virtual memory.
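To make that concrete, here is a minimal (and deliberately artificial) C++ sketch, not DataStage code, of the Char(12) case:

#include <cstring>

int main() {
    // Analogous to a Char(12) column: room for exactly 12 bytes.
    char field[12];

    // 15 characters; with the terminating '\0' that is 16 bytes.
    const char* value = "ABCDEFGHIJKLMNO";

    // Copying 16 bytes into a 12-byte buffer writes past the end of
    // 'field'. That is undefined behaviour; in practice it often
    // corrupts adjacent memory and the process dies with
    // "Segmentation fault (core dumped)".
    std::strcpy(field, value);

    return 0;
}

The crash does not have to happen at the copy itself; the corruption may only bite later, which is why these faults can look intermittent, as yours does.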

In your case, the problem appears to have occurred when opening the ODBC stage, which includes asking the database to prepare the SQL statement. So perhaps it was the database's query optimizer that demanded too much memory.
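For reference, the PREPARE in your CLI log corresponds to the standard prepare step of the ODBC API. A rough sketch using the generic ODBC C API follows; this is not DataStage's internal CLI code, and the function name prepare_statement is just for illustration:

#include <sql.h>
#include <sqlext.h>

// Prepare (but do not execute) a statement on an already-connected
// ODBC connection handle, returning the driver's return code.
SQLRETURN prepare_statement(SQLHDBC hdbc, const char* sql)
{
    SQLHSTMT hstmt = SQL_NULL_HSTMT;
    SQLRETURN rc = SQLAllocHandle(SQL_HANDLE_STMT, hdbc, &hstmt);
    if (!SQL_SUCCEEDED(rc))
        return rc;

    // This is the prepare the CLI log line refers to: the statement is
    // sent to the database, which parses it and builds a query plan.
    // A failure (or crash) here happens before any rows are fetched.
    rc = SQLPrepare(hstmt, (SQLCHAR*)sql, SQL_NTS);

    SQLFreeHandle(SQL_HANDLE_STMT, hstmt);
    return rc;
}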

You need to perform a more detailed diagnosis using a simpler design.

Posted: Tue Mar 20, 2007 10:13 pm
by ag_ram
durgaps wrote: There were formatting issues with the job design in the previous post. Please check the following.

Job Design:

ODBC
|
LOOKUP ===> TRANS ===> SEQ_FILE
^
|
TRANS
^
|
SEQ_FILE
Is it possible that the objects created for the Transformer could be purged, and the job recompiled and run? I am assuming here that Transformers have C++ object files / executables created for them.
I suppose it should be possible to clear these objects, reset the job, and run it all over again.

Posted: Tue Mar 20, 2007 10:31 pm
by ray.wurlod
That is easy. In Designer, instead of doing a regular compile, choose Force Compile from the File menu. This forces the Transformer stage work to be redone: generation of the C++ source, plus compiling and linking thereof.