
Issue with Upsert option

Posted: Sun Jun 06, 2010 5:06 am
by devidotcom
Hi All,

We have a simple job design that reads records from a dataset, does some transformation in a Transformer stage, and inserts the records into an Oracle table using the Oracle Enterprise stage.

The stage uses the Upsert write method with a LOG ERRORS clause in the insert statement to capture the records that fail the constraints defined on the table.

This table has 5 not null constraints and 2 referential integrity constraints, with a total of 13 columns.
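
For reference, the error logging setup is along these lines (the table, error-table, and column names here are simplified placeholders, not our real ones):

-- One-off setup: create the DML error logging table for the target.
BEGIN
  DBMS_ERRLOG.CREATE_ERROR_LOG(
    dml_table_name     => 'TARGET_TABLE',
    err_log_table_name => 'ERR$_TARGET_TABLE');
END;
/

-- The insert in the stage carries the LOG ERRORS clause, so rows that violate
-- the not null or referential constraints land in the error table instead of
-- failing the statement. ORCHESTRATE.<column> is how the stage's user-defined
-- SQL refers to input link columns.
INSERT INTO target_table (col1, col2, col3)
VALUES (ORCHESTRATE.col1, ORCHESTRATE.col2, ORCHESTRATE.col3)
LOG ERRORS INTO err$_target_table ('upsert job') REJECT LIMIT UNLIMITED;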

The source record count is approximately 100,000. The job inserts close to 3,000 records and then aborts with the following message:

ora_TableName_load,1: Operator terminated abnormally: received signal SIGBUS
main_program: APT_PMsectionLeader(1, node1), player 2 - Unexpected exit status 1.
APT_PMsectionLeader(2, node2), player 2 - Unexpected exit status 1.

We tried changing the array size, but that did not help. Currently the array size is 2000.

We even tried splitting the flow in two, with the inserts in one flow and the updates in the other in the same job, after doing a lookup against the target table, but we still had the same problem.

Please advise.

Thanks

Posted: Sun Jun 06, 2010 5:44 am
by chulett
Any fun columns like a CLOB in your table? Did you try with an array size of 1? What exact (all 4 segments) Oracle client version are you running?

Posted: Sun Jun 06, 2010 11:09 am
by devidotcom
Yes, I tried with an array size of 1 and still face the same issue.
We are using Oracle 11g. When we import the metadata I see Unicode datatypes; I am not sure whether these are CLOB columns.

Please advise on how we can resolve this issue; any suggestions are welcome.

Posted: Sun Jun 06, 2010 4:25 pm
by chulett
Unicode is a whole 'nother kettle of fish. CLOB is CLOB (or LOB or BLOB or LONG) in Oracle, and the metadata would come in as LongVarchar. How did you import the metadata - through which utility? If it wasn't the Orchestrate-based one, try that one and use the metadata it creates.

11g isn't officially supported, so tracking this down might be... problematical. You didn't answer my client version question - it would be something like 11.x.x.x, check with your DBA if unsure. It may be an important bit of information someone will need further down the line.

Posted: Mon Jun 07, 2010 12:38 am
by devidotcom
Sorry, the Oracle version is 10.2.0.3.0.

We have re-imported the metadata using the Orchestrate-based utility.
We still have the same issue.

Please advise.

Posted: Mon Jun 07, 2010 6:12 am
by devidotcom
Could there be a problem with the table? I am not sure why we are facing issues with all the tables.

When the Write Method = Load, all the records get inserted successfully. It is only with Upsert that I face issues...

Posted: Mon Jun 07, 2010 6:16 am
by chulett
Have we just gone from one table to 'issues with all the tables'? And are we really talking about 10g rather than 11g? :?

Posted: Mon Jun 07, 2010 6:18 am
by devidotcom
Yeah... it looks like it happens with tables that don't have unique keys defined on them and that load huge volumes of data.

Yeah.. we are talking about 10g.. :(

Posted: Mon Jun 07, 2010 6:27 am
by chulett
Updates based on non-indexed fields will do full table scans, so it is always in your best interest to avoid that - especially for anything with the word 'huge' associated with it. Me, I would never design a job using Upsert in a situation like that unless (maybe) you used 'Insert then Update' knowing that the vast majority of the records would be inserts.
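
If you want to confirm what that update is doing, have someone with dictionary access run something along these lines against your table (names here are placeholders) - it shows whether the update key is covered by an index and what plan the update actually gets:

-- Is the column the update is keyed on covered by any index?
SELECT index_name, column_name, column_position
FROM   all_ind_columns
WHERE  table_name = 'TARGET_TABLE'
ORDER  BY index_name, column_position;

-- What access path does the update really use?
EXPLAIN PLAN FOR
  UPDATE target_table SET col2 = :new_val WHERE col1 = :key_val;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);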

Seeing as how a 'Load' works, the implication is there are no updates being done anyway, so why the need for Upsert in this case? Why not stick with load? :?

Posted: Mon Jun 07, 2010 8:47 am
by devidotcom
We are doing inserts only; the update SQL has a WHERE condition of 1=2 so it never matches any rows. We want Upsert mode because we need the LOG ERRORS clause to capture, in an error table, the records that would otherwise be silently dropped.

There is another flow that handles the updates. The issue here is with the insert flow, where Upsert is used with only the 'Insert then update' option set.
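
To make the setup clearer: the insert is the LOG ERRORS insert described earlier, and the update statement in the stage is literally a no-op along these lines (column names are placeholders):

-- WHERE 1 = 2 can never be true, so this statement never touches a row; it
-- only exists because the stage's Upsert mode requires an update statement.
UPDATE target_table
SET    col2 = ORCHESTRATE.col2
WHERE  1 = 2;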

Posted: Mon Jun 07, 2010 9:13 am
by chulett
If you are certain that your bogus 'update' never fires, then it's time to involve your official support provider. Only other thing I could suggest would be to see if there is an updated Oracle client version available as I've seen things like this from Oracle bugs that (usually) get fixed in later releases.

Posted: Mon Jun 07, 2010 9:30 am
by devidotcom
Searching some posts, I tried setting the APT_ORAUPSERT_COMMIT_ROW_INTERVAL variable to 1 and the job ran successfully. I even tried setting it to 0 and it aborted.

Can you please help me understand what is happening? :P

Posted: Mon Jun 07, 2010 10:57 am
by devidotcom
By default these values were set to:

APT_ORAUPSERT_COMMIT_ROW_INTERVAL = 5000
APT_ORAUPSERT_COMMIT_TIME_INTERVAL = 2

Please help me understand why the job ran successfully after setting APT_ORAUPSERT_COMMIT_ROW_INTERVAL = 1.

Posted: Mon Jun 07, 2010 1:10 pm
by chulett
Not much to say except that this means every row is committed individually, which is about the slowest way to load data on the planet. Involve your DBA: have them monitor the database, particularly the undo/rollback usage and space, while running with the old values. See if that sheds any light on this.
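
For what it's worth, a query along these lines (your DBA will no doubt have their own version), run while the job is loading with the original commit settings, shows how much undo each active transaction is holding:

-- Undo blocks and records held by each active transaction, and the session
-- that owns it; run this while the job is loading.
SELECT s.sid,
       s.username,
       s.program,
       t.used_ublk AS undo_blocks,
       t.used_urec AS undo_records
FROM   v$transaction t
       JOIN v$session s ON s.saddr = t.ses_addr;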

Posted: Thu Jun 10, 2010 2:02 am
by devidotcom
This issue is still open.. can anyone help? Now the values I used for the environment variables do not work for jobs with a record count greater than 100,000.

Please help.