
SIGSEGV error occurred while loading data

Posted: Mon Aug 31, 2009 12:42 am
by AmeyJoshi14
Hi,

We are facing a problem while loading data into an Oracle table. The job is aborting with a "SIGSEGV" error. We are using the following options on the target stage:

Write Method=Load
Write Mode=Truncate

We are facing this problem only for one particular Oracle database server (for all the other servers these options work fine). The strange thing is that when we use Write Method=Upsert, the data is loaded into the table, but somehow the options mentioned above do not work.

We have opened a ticket with IBM. They asked us to run the job on a single node, but even after changing the configuration file to one node we still get the SIGSEGV error. They also asked us to send them the log file, but the job is not generating any log file (the .ctl file is empty).
I have also tried to run the job using the following parameters:

Code: Select all

APT_MONITOR_SIZE=100000
APT_DISABLE_COMBINATION=True
APT_EXECUTION_MODE=One process
But no luck... :(
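
(For anyone trying the same thing: if these variables have been added to the job as parameters, they can also be overridden per run from the command line. A rough sketch, with project and job names made up:)

Code: Select all

# Sketch only -- assumes $APT_DISABLE_COMBINATION is defined as a job parameter;
# MyProject and MyLoadJob are placeholder names.
dsjob -run -mode NORMAL -jobstatus \
      -param '$APT_DISABLE_COMBINATION=True' \
      MyProject MyLoadJob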

Please note: I have searched for the above mentioned error and have even gone through every post, but again no luck...

Posted: Mon Aug 31, 2009 1:12 am
by ArndW
Apart from running on one node, can you try using "APT_ORA_WRITE_FILES" to keep the Oracle loader files, and see whether the load can be run outside of DataStage without a problem?
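
Something along these lines once the load files have been kept, for example (every path, file name and connect string below is just a placeholder; use whatever the stage actually writes out):

Code: Select all

# All names below are placeholders -- substitute the files the stage generated.
cat /some/dir/sqlldr_cmds.txt      # the sqlldr command line(s) DataStage would have run

# then try the load by hand, outside DataStage:
sqlldr userid=etl_user/password@TARGETDB \
       control=/some/dir/load.ctl \
       log=/some/dir/load_manual.log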

Posted: Mon Aug 31, 2009 1:42 am
by AmeyJoshi14
ArndW wrote: Apart from running on one node, can you try using "APT_ORA_WRITE_FILES" to keep the Oracle loader files, and see whether the load can be run outside of DataStage without a problem? ...
Thanks for the reply!!

I have used the parameter
APT_ORA_WRITE_FILES=/home/dir/abc/sqlldr.txt

Now the problem is that the job has been running for almost 20 minutes for just a single record (we are selecting only one record from the source). It is still in the running state :(

Posted: Mon Aug 31, 2009 1:53 am
by sanjay
Amey

Are you able to run sqlldr from the command line? The Load write method calls sqlldr internally.

Thanks
Sanjay

Posted: Mon Aug 31, 2009 2:13 am
by laknar
If the source is a file, this will definitely be a file issue.

Can you please post the source file format and the job design? Also, do you have any constraints in the Transformer stage?

Posted: Mon Aug 31, 2009 3:24 am
by AmeyJoshi14
laknar wrote: If the source is a file, this will definitely be a file issue.

Can you please post the source file format and the job design? Also, do you have any constraints in the Transformer stage?
Thanks for the reply!!
But the job is pretty simple, and our source is Oracle ....
We are testing whether we are able to load from one server to another. The problem is that we are unable to use the Load option for this specific Oracle database server.

Regards,
Amey

Posted: Mon Aug 31, 2009 3:29 am
by AmeyJoshi14
sanjay wrote: Amey

Are you able to run sqlldr from the command line? The Load write method calls sqlldr internally.

Thanks
Sanjay
Thanks for your reply!!

Actually, even if I use Write Mode=Append, the job runs for more than 30 minutes for a single record. :(
From the command prompt I have tried to load the data, and the data is inserted into the table (with the same user id).
I really cannot understand why DataStage is not able to load the data. :cry:

Regards,
Amey!

Posted: Mon Aug 31, 2009 4:30 am
by ArndW
During that 30 minute wait time you should have a DBA monitor your session and find out what is happening.
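
For example, something like this run by the DBA will show what the load session is doing and what it is waiting on (the ETL user name and connect details are placeholders):

Code: Select all

# Placeholders throughout -- adjust user names and the connect string.
sqlplus -s system/manager@TARGETDB <<'EOF'
SELECT s.sid, s.serial#, s.status, s.program, w.event, w.seconds_in_wait
FROM   v$session s, v$session_wait w
WHERE  w.sid = s.sid
AND    s.username = 'ETL_USER';
EOF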

Posted: Mon Aug 31, 2009 4:31 am
by sanjay
Amey

Is it creating the control file and the SQL*Loader log in the scratch area? If yes, please verify that the control file is in the correct format.
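
For example (the scratch path here is only a guess; take it from the scratchdisk entries in your APT configuration file):

Code: Select all

# Scratch path is a placeholder -- use the scratchdisk from your configuration file.
ls -ltr /datastage/scratch/*.ctl /datastage/scratch/*.log 2>/dev/null
cat /datastage/scratch/*.ctl        # does the generated control file look complete?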


Thanks
Sanjay


Posted: Mon Aug 31, 2009 4:36 am
by sanjay
Amey

Try setting 'APT_DISABLE_COMBINATION' to True in the job to check whether you get the actual error instead of the SIGSEGV error.

Thanks
Sanjay


Posted: Mon Aug 31, 2009 4:55 am
by AmeyJoshi14
Hi,

I have actually used this parameter, but it still gives me the SIGSEGV error. :(
I also tried to run the job with Write Mode=Append, and it gives the same error message (SIGSEGV). :(

Thanks for the reply!!

Posted: Wed Sep 02, 2009 12:23 am
by AmeyJoshi14
Hi,

We are still facing the same issue...
The problem is really strange: only for this one particular server are we not able to use Write Method=Load with Write Mode=Truncate, while for the rest of the servers these options work fine... :?
In fact, when we use Write Method=Upsert, the records are inserted into the table. :roll:
As stated earlier, we have opened a ticket with IBM. They advised us to run the job on a single node, which we did, but the job still aborts with the following error message:

Code: Select all

ora_ent_tgt,0: Operator terminated abnormally: received signal SIGSEGV
where ora_ent_tgt is the target table stage.
Are there any specific parameters or privileges that we need to set before running the job? Also, for the XMETA tablespace we have set the quota to unlimited:

Code: Select all

select * from dba_ts_quotas where tablespace_name='XMETA'
Please note: the particular user who is running the job has insert privileges. :)
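
(For the record, what the load user can actually do can be cross checked with something like this, run as that user; the table name and connect details are placeholders:)

Code: Select all

# TARGET_TABLE and the connect details are placeholders.
sqlplus -s etl_user/password@TARGETDB <<'EOF'
SELECT * FROM user_tab_privs WHERE table_name = 'TARGET_TABLE';
SELECT * FROM session_privs;
SELECT * FROM user_ts_quotas;
EOF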
Again thanks all for the time and effort :)

Best Regards,
Amey


Posted: Wed Sep 02, 2009 2:25 am
by Sainath.Srinivasan
When you say this is happening on one server, is it only this table that is affected, or all tables in that particular database?

Try the following:
1.) Set a constraint before the SQL Loader stage so that no rows are written, and run the job.
2.) Set a trace on the database (if possible, for the ETL Oracle user id) and check its sessions (see the sketch below).
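
For 2.), something along these lines may do, assuming 10g or later so that DBMS_MONITOR is available (the user name, sid and serial# are placeholders; look up the real values with the first query):

Code: Select all

# Sketch only -- look up the real sid/serial# first, then plug them in.
sqlplus -s system/manager@TARGETDB <<'EOF'
-- 1. find the load session (ETL_USER is a placeholder)
SELECT sid, serial#, program FROM v$session WHERE username = 'ETL_USER';
-- 2. enable extended SQL trace for that session (123 / 456 are placeholders)
EXEC DBMS_MONITOR.SESSION_TRACE_ENABLE(session_id => 123, serial_num => 456, waits => TRUE, binds => TRUE);
EOF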

Posted: Wed Sep 02, 2009 3:58 am
by AmeyJoshi14
Sainath.Srinivasan wrote: When you say this is happening on one server, is it only this table that is affected, or all tables in that particular database?
This is happening for all the tables on that particular server :(
That is, we are not able to use the Load and Truncate options for any of the tables in that particular database :x

I also used the Delete option and it works as well... the problem is only with the Load and Truncate options.... :?

Also, while fetching records from the source table I am selecting no rows (through the WHERE clause), but even then the job aborts with the 'SIGSEGV' error... :x :x

Thanks for the reply!!

Best Regards,