Oracle Connector Deadlock Issue

balu536
Premium Member
Posts: 103
Joined: Tue Dec 02, 2008 5:01 am

Oracle Connector Deadlock Issue

Post by balu536 »

Hi,
My DataStage job loads data to an Oracle table with the properties below:

Write Mode: Update then Insert
Table Action : Append
Record Count : 100000
Array Size: 2000

Today the job failed (the update query failed) with a deadlock:
"The OCI function OCIStmtExecute returned status 1. Error code: 60, Error message: ORA-00060: deadlock detected while waiting for resource."

"The connector is not able to resume the write operation because multiple arrays of data were written to the database on one or more input links under the transaction that was active at the time of failover."

Is this because Record Count and Array Size are different? For update operations, do we need to keep them the same?

From: http://pic.dhe.ibm.com/infocenter/iisin ... 4050W.html


Please suggest.
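For reference: Array Size is the number of rows the connector binds and sends to Oracle per OCIStmtExecute round trip, while Record Count is the commit interval; they do not need to match (Record Count is normally a multiple of Array Size). With 100000/2000, each transaction spans 50 arrays, which is the situation the quoted warning describes; it explains why the connector could not resume after the failure, not why the deadlock happened. A rough sketch of what Update then Insert mode issues per row (an illustration with placeholder table and column names, not the connector's actual code):

    -- For each row in a 2000-row array, try the update first:
    UPDATE target_tbl SET col1 = :col1 WHERE pk = :pk;
    -- If the UPDATE matched no row, insert it instead:
    INSERT INTO target_tbl (pk, col1) VALUES (:pk, :col1);
    -- One COMMIT per Record Count rows (here, every 100000 rows),
    -- so row locks are held across many arrays:
    COMMIT;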
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

Improper partitioning, I would think. More than one node? If so, how are your target streams partitioned? See if hash partitioning over the key fields helps.
-craig

"You can never have too many knives" -- Logan Nine Fingers
balu536
Premium Member
Posts: 103
Joined: Tue Dec 02, 2008 5:01 am

Post by balu536 »

The job is running on two nodes.

I applied hash partitioning on the primary key on the input link of the Oracle Connector stage and changed Record Count to 2000.

Still the job fails. :(
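Before changing more settings, it may be worth reading the deadlock graph that Oracle writes to a trace file on every ORA-00060; it names both sessions and the statements involved, so it shows whether the two deadlocking sessions are this job's own nodes. On Oracle 11g and later, the trace directory can be located with:

    SELECT value
    FROM   v$diag_info
    WHERE  name = 'Diag Trace';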
srinivas.g
Participant
Posts: 251
Joined: Mon Jun 09, 2008 5:52 am

Post by srinivas.g »

Did you check whether any other process is trying to update the same table?
Srinu Gadipudi
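One quick way to verify that while the job runs is to look for blocked sessions in the standard dynamic views (requires read access to v$session):

    SELECT sid, serial#, blocking_session, event, sql_id
    FROM   v$session
    WHERE  blocking_session IS NOT NULL;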
balu536
Premium Member
Posts: 103
Joined: Tue Dec 02, 2008 5:01 am

Post by balu536 »

No other user or process is using the table.
srinivas.g
Participant
Posts: 251
Joined: Mon Jun 09, 2008 5:52 am

Post by srinivas.g »

OK. Set Record Count to 0 and try again.
Srinu Gadipudi
balu536
Premium Member
Posts: 103
Joined: Tue Dec 02, 2008 5:01 am

Post by balu536 »

I was thinking of doing the same (that will work).
I'm dealing with 1.5 million rows, so it shouldn't be a problem for now.

Thanks Srinivas.
balu536
Premium Member
Posts: 103
Joined: Tue Dec 02, 2008 5:01 am

Post by balu536 »

Since I have very little data, I ran the job on one node, and this might work for me.

Craig, any suggestions on the above?
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

Running it on one node would certainly rule out partitioning as the cause if the deadlocks continue to occur.
-craig

"You can never have too many knives" -- Logan Nine Fingers
balu536
Premium Member
Posts: 103
Joined: Tue Dec 02, 2008 5:01 am

Post by balu536 »

Thanks.
I implemented the same and deployed the code to production.

Hope it doesn't abort again. :)