Issue with Upsert option

Post questions here relative to DataStage Enterprise/PX Edition for such areas as Parallel job design, Parallel datasets, BuildOps, Wrappers, etc.

Moderators: chulett, rschirm, roy

devidotcom
Participant
Posts: 247
Joined: Thu Apr 27, 2006 6:38 am
Location: Hyderabad

Issue with Upsert option

Post by devidotcom »

Hi All,

We have a simple job design that reads records from a dataset, does some transformation in a Transformer stage, and inserts them into an Oracle table using the Oracle Enterprise stage.

We have the Upsert write method set, with a "LOG ERRORS" clause in the Oracle Enterprise stage to capture the error records that fail the constraints defined on the table.
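
For reference, the insert side of the upsert uses Oracle's DML error logging along these lines (the table and error-table names here are illustrative, not our real ones):

    -- One-time setup: create the error table for the target
    EXEC DBMS_ERRLOG.CREATE_ERROR_LOG('TARGET_TABLE');

    -- The insert carries the LOG ERRORS clause, so rows that violate
    -- constraints land in ERR$_TARGET_TABLE instead of failing the statement
    INSERT INTO target_table (col1, col2)
    VALUES (:1, :2)
    LOG ERRORS INTO err$_target_table ('upsert_load') REJECT LIMIT UNLIMITED;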

This table has 13 columns, with five NOT NULL constraints and two referential integrity constraints.

The source record count is approximately 100,000. The job inserts close to 3,000 records and then aborts with the following message:

ora_TableName_load,1: Operator terminated abnormally: received signal SIGBUS
main_program: APT_PMsectionLeader(1, node1), player 2 - Unexpected exit status 1.
APT_PMsectionLeader(2, node2), player 2 - Unexpected exit status 1.

We tried changing the array size but that did not help. Currently the array size is 2000.

We even tried splitting the stream into two flows in the same job, inserts in one and updates in the other, after doing a lookup against the target table, but still hit the same problem.

Please advise.

Thanks
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

Any fun columns like a CLOB in your table? Did you try with an array size of 1? What exact (all 4 segments) Oracle client version are you running?
-craig

"You can never have too many knives" -- Logan Nine Fingers
devidotcom
Participant
Posts: 247
Joined: Thu Apr 27, 2006 6:38 am
Location: Hyderabad

Post by devidotcom »

Yes, I tried with an array size of 1 and still face the same issue.
We are using Oracle 11g. When we import the metadata I see Unicode datatypes; I'm not sure whether these are CLOB columns.

Please advise on how to resolve this issue; any suggestions are welcome.
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

Unicode is a whole 'nother kettle of fish. CLOB is CLOB or LOB or BLOB or LONG in Oracle, and the metadata would come in as LongVarchar. How did you import the metadata, through which utility? If it wasn't the Orchestrate-based one, try that one and use the metadata it creates.

11g isn't officially supported, so tracking this down might be... problematical. You didn't answer my client version question - it would be something like 11.x.x.x, check with your DBA if unsure. It may be an important bit of information someone will need further down the line.
-craig

"You can never have too many knives" -- Logan Nine Fingers
devidotcom
Participant
Posts: 247
Joined: Thu Apr 27, 2006 6:38 am
Location: Hyderabad

Post by devidotcom »

Sorry, the Oracle version is 10.2.0.3.0.

We imported the metadata using the Orchestrate-based utility.
We still have the same issue.

Please advise.
devidotcom
Participant
Posts: 247
Joined: Thu Apr 27, 2006 6:38 am
Location: Hyderabad

Post by devidotcom »

Could there be a problem with the table itself? I'm not sure why we are facing issues with all the tables.

When the Write Method is Load, all the records get inserted successfully. It's only with Upsert that I face issues...
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

Have we just gone from one table to 'issues with all the tables'? And are we really talking about 10g rather than 11g? :?
-craig

"You can never have too many knives" -- Logan Nine Fingers
devidotcom
Participant
Posts: 247
Joined: Thu Apr 27, 2006 6:38 am
Location: Hyderabad

Post by devidotcom »

Yeah... it looks like it's the tables that don't have unique keys defined on them and that load huge volumes of data.
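
For anyone who wants to check their own tables, a query along these lines (table name is illustrative) shows whether a primary or unique key exists:

    -- List primary key (P) and unique (U) constraints on the target table
    SELECT constraint_name, constraint_type
    FROM   user_constraints
    WHERE  table_name = 'TARGET_TABLE'
    AND    constraint_type IN ('P', 'U');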

Yeah.. we are talking about 10g.. :(
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

Updates based on non-indexed fields will do full table scans, so it is always in your best interest to avoid that - especially for anything with the word 'huge' associated with it. Me, I would never design a job using Upsert in a situation like that unless (maybe) you used 'Insert then Update' knowing that the vast majority of the records would be inserts.
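
If you want to see that for yourself, have your DBA look at the plan for the upsert's update statement, something like this sketch (table and column names are made up):

    -- Check the access path Oracle chooses for the UPDATE
    EXPLAIN PLAN FOR
      UPDATE target_table SET col1 = :1 WHERE non_indexed_col = :2;
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
    -- A "TABLE ACCESS FULL" operation means each update scans the whole table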

Seeing as how a 'Load' works, the implication is there are no updates being done anyway, so why the need for Upsert in this case? Why not stick with load? :?
-craig

"You can never have too many knives" -- Logan Nine Fingers
devidotcom
Participant
Posts: 247
Joined: Thu Apr 27, 2006 6:38 am
Location: Hyderabad

Post by devidotcom »

We are doing inserts only, with an update SQL whose WHERE condition is 1=2 so it never matches. We need Upsert mode because its LOG ERRORS clause lets us capture, in an error table, records that would otherwise be silently dropped.
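
Roughly, the statement pair looks like this (column names are illustrative; the ORCHESTRATE.* placeholders are what the stage substitutes per row):

    -- Insert does the real work, with error logging
    INSERT INTO target_table (key_col, col1)
    VALUES (ORCHESTRATE.key_col, ORCHESTRATE.col1)
    LOG ERRORS INTO err$_target_table REJECT LIMIT UNLIMITED;

    -- Update can never match any row because of the 1=2 condition
    UPDATE target_table
    SET    col1 = ORCHESTRATE.col1
    WHERE  key_col = ORCHESTRATE.key_col
    AND    1 = 2;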

There is another flow that handles the updates. The issue here is only with the insert flow, which uses Upsert with the 'Insert then Update' option set.
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

If you are certain that your bogus 'update' never fires, then it's time to involve your official support provider. Only other thing I could suggest would be to see if there is an updated Oracle client version available as I've seen things like this from Oracle bugs that (usually) get fixed in later releases.
-craig

"You can never have too many knives" -- Logan Nine Fingers
devidotcom
Participant
Posts: 247
Joined: Thu Apr 27, 2006 6:38 am
Location: Hyderabad

Post by devidotcom »

Searching some posts, I tried setting the APT_ORAUPSERT_COMMIT_ROW_INTERVAL environment variable to 1 and the job ran successfully. I even tried setting it to 0 and it aborted.

Can you please let me know what is happening? :P
devidotcom
Participant
Posts: 247
Joined: Thu Apr 27, 2006 6:38 am
Location: Hyderabad

Post by devidotcom »

By default, these values were set as follows:

APT_ORAUPSERT_COMMIT_ROW_INTERVAL = 5000
APT_ORAUPSERT_COMMIT_TIME_INTERVAL = 2

Please help me understand why the job ran successfully after setting APT_ORAUPSERT_COMMIT_ROW_INTERVAL = 1.
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

Not much to say except that it means every row is committed individually, which is about the slowest way to load data on the planet. Involve your DBA, have them monitor the database, particularly the undo/rollback usage and space, using the old values. See if that sheds any light on this.
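
Something along these lines would show them the active transactions and their undo usage while the job runs (a sketch to start from, not a tuning script):

    -- Active transactions and the undo blocks/records each one holds
    SELECT s.sid, s.username, t.used_ublk, t.used_urec
    FROM   v$transaction t
    JOIN   v$session s ON s.taddr = t.addr;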
-craig

"You can never have too many knives" -- Logan Nine Fingers
devidotcom
Participant
Posts: 247
Joined: Thu Apr 27, 2006 6:38 am
Location: Hyderabad

Post by devidotcom »

This issue is still open.. can anyone help? The values I used for the environment variables did not work for jobs with a record count greater than 100,000.

Please help.