Oracle deadlock when trying to upsert.

Post questions here relative to DataStage Enterprise/PX Edition for such areas as Parallel job design, Parallel datasets, BuildOps, Wrappers, etc.

Moderators: chulett, rschirm, roy

mekrreddy
Participant
Posts: 88
Joined: Wed Oct 08, 2008 11:12 am

Oracle deadlock when trying to upsert.

Post by mekrreddy »

Hello,

A couple of our jobs aborted in prod with an Oracle deadlock when trying to upsert. We also got this message in the log: oci_invalid_handle.
The jobs run fine in Dev. We are using a 4-node config file in production; partitioning is Auto.

Please give your views on this.

Thanks in advance...
reddy.
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

Typical situation where dev data does not reveal an issue that production data does. Does it work fine with a 1 node config file?
-craig

"You can never have too many knives" -- Logan Nine Fingers
mekrreddy
Participant
Posts: 88
Joined: Wed Oct 08, 2008 11:12 am

Post by mekrreddy »

Thanks chulett,

Yes, the code even runs successfully in systest.

We have huge tables (50 million records in one table), so we have not tested with a one-node config file yet.
mekrreddy
Participant
Posts: 88
Joined: Wed Oct 08, 2008 11:12 am

Post by mekrreddy »

When we run the same job with 1-node and 2-node config files, it runs successfully! But some tables are huge, so we can't afford to run with 1 or 2 nodes. What is causing the Oracle deadlock with 4 nodes and Auto partitioning?


Thanks in advance..
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

Your parallel processes are deadlocking. This is where you would need to override the auto partitioning and control it so it doesn't deadlock. Hash partitioning over the keys involved would probably help; I'd have a chat with your DBA to confirm.
-craig

"You can never have too many knives" -- Logan Nine Fingers
kduke
Charter Member
Posts: 5227
Joined: Thu May 29, 2003 9:47 am
Location: Dallas, TX
Contact:

Post by kduke »

If you have an uncommitted insert followed by another update on the same key, then there is your issue. When the first one commits, the second changes from an insert to an update. You may need to rethink your logic.
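To illustrate the point (a hedged sketch; the table and column names are invented, not from the job in question): with Auto partitioning, rows for the same key can land on different nodes, so two parallel sessions can end up waiting on each other's row locks.

```sql
-- Hypothetical illustration of the parallel upsert collision.

-- Node 1's session:
INSERT INTO target_tbl (pk_id, val) VALUES (42, 'a');
-- holds a row lock on key 42, not yet committed

-- Node 2's session, at the same time:
UPDATE target_tbl SET val = 'b' WHERE pk_id = 42;
-- blocks, waiting on node 1's uncommitted lock

-- If node 1 is simultaneously waiting on a row that node 2 holds,
-- Oracle detects the cycle and raises ORA-00060 (deadlock detected).
```

Hash partitioning on the key avoids this particular cycle by sending all rows for a given key to the same node.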
Mamu Kim
mekrreddy
Participant
Posts: 88
Joined: Wed Oct 08, 2008 11:12 am

Post by mekrreddy »

Thanks chulett and Duke.

We changed the partitioning from Auto to Hash: no success.
We changed the upsert row commit interval from 5000 to 1000: no success.

Interestingly, when we run the job against the same (prod) data in the testing environment, against the testing database, it works fine! (It does not work in prod.)

Is this purely an application issue, or a DB issue as well?

Please share your thoughts. :(
keshav0307
Premium Member
Posts: 783
Joined: Mon Jan 16, 2006 10:17 pm
Location: Sydney, Australia

Post by keshav0307 »

Have you hash partitioned based on the unique index on the table?
mekrreddy
Participant
Posts: 88
Joined: Wed Oct 08, 2008 11:12 am

Post by mekrreddy »

Thanks Keshav,

Yes, we hash partitioned on the unique key.
No success.
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

Talk to your DBA and make sure he lets you in on when multiple processes writing to the same table can deadlock. Adjust the job accordingly.
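One check the DBA could run (a sketch, assuming access to the dynamic performance views) is whether the segment is accumulating ITL waits, a classic cause of deadlocks under highly parallel DML when a block runs out of interested-transaction-list slots:

```sql
-- Diagnostic sketch; requires SELECT privilege on V$SEGMENT_STATISTICS.
SELECT owner, object_name, value AS itl_waits
FROM   v$segment_statistics
WHERE  statistic_name = 'ITL waits'
AND    value > 0
ORDER BY value DESC;
```

Nonzero counts against the target table would point to a block-level (storage) issue rather than a DataStage job-design issue.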
-craig

"You can never have too many knives" -- Logan Nine Fingers
mekrreddy
Participant
Posts: 88
Joined: Wed Oct 08, 2008 11:12 am

Thank you all!!!

Post by mekrreddy »

The issue is resolved. The DBA had to change the INITRANS value from 1 (the default) to 8. There was no issue from DataStage.
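For reference, the kind of change the DBA likely made (a sketch; the table and index names are hypothetical). Note that a higher INITRANS only applies to newly formatted blocks, so a rebuild is needed for existing blocks to pick it up:

```sql
-- Raise the number of initial ITL slots per block.
ALTER TABLE target_tbl INITRANS 8;

-- Existing blocks keep their old setting; MOVE rebuilds the table
-- so every block gets the new INITRANS.
ALTER TABLE target_tbl MOVE;

-- MOVE marks the table's indexes UNUSABLE; rebuild them afterwards.
ALTER INDEX target_tbl_pk REBUILD;
```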

Thanks all.