deadlock error = job status OK

Post questions here relative to DataStage Enterprise/PX Edition for such areas as Parallel job design, Parallel datasets, BuildOps, Wrappers, etc.

Moderators: chulett, rschirm, roy

Post Reply
jlock23
Participant
Posts: 18
Joined: Wed May 11, 2005 9:16 am

deadlock error = job status OK

Post by jlock23 »

Hi, I'm hoping that someone can help here, I'm completely stumped. I have a two part problem.

1) We are updating a database in our parallel job. When the update ODBC stage is set to parallel mode, I get fatal deadlock errors. When the update ODBC stage is set to sequential mode, no error. Why would this happen? Each row update is unique (no overlapping updates).

2) When we receive the fatal deadlock error, the job completes with status = 1. This is NOT ok. Since the job status looks fine, we have no way to know the database was not updated successfully unless we open the log every time and check visually. Any idea why the status is OK rather than aborted or finished with warnings?

This is the fatal error that we receive:
APT_CombinedOperatorController,0: [DataDirect][ODBC SQL Server Driver][SQL Server]Transaction (Process ID 211) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.

Thanks in advance!!!


* editing to add *
The job is reading from a sequential file and then updating the database with the primary key column in the where clause.
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia
Contact:

Post by ray.wurlod »

Are your data partitioned on the primary key? If not, one partition may still be holding a lock while another partition is trying to acquire it, and vice versa (classic deadlock situation), particularly if your rows per transaction is greater than one.
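[Editorial note: a minimal Python sketch of the idea above, purely illustrative and not DataStage code. With hash partitioning on the primary key, every row with the same key is routed to the same partition, so no two partitions ever try to update (and lock) the same row; with round-robin or other partitioning, that guarantee is lost.]

```python
# Illustrative sketch: hash partitioning routes all rows sharing a key
# to the same partition, so partitions never contend for the same row.

def partition_for(key, num_partitions):
    """Return the partition a row with this key is always routed to."""
    return hash(str(key)) % num_partitions

# Two rows with the same primary key (101) and one with a different key.
rows = [{"pk": 101}, {"pk": 202}, {"pk": 101}]
parts = [partition_for(r["pk"], 4) for r in rows]

# Rows sharing pk=101 always land in the same partition.
assert parts[0] == parts[2]
```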
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
crouse
Charter Member
Posts: 204
Joined: Sun Oct 05, 2003 12:59 pm
Contact:

Post by crouse »

Ask your DBA for the trace file that has info about the deadlock. When we hit this issue, we found many sessions competing for the same area of the database where the primary key is managed. In Oracle you can increase INITRANS for the PK index from the default (2) when the table is created. INITRANS entries are like handles on the index: roughly, each session needs one in order to avoid the deadlocks. Anyway, it's a place to start.
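[Editorial note: a rough Python model of the INITRANS idea above, for illustration only. Each index block keeps a fixed number of "interested transaction" slots; sessions beyond that number must wait on the block, and those waits are what can escalate into deadlocks under parallel updates.]

```python
# Rough model: INITRANS sets how many transactions can work on one
# index block concurrently; the rest must queue for a slot.

def sessions_waiting(concurrent_sessions, initrans):
    """Number of sessions queued because the block's slots are full."""
    return max(0, concurrent_sessions - initrans)

# Default INITRANS of 2 with 8 parallel partitions hitting one hot block:
assert sessions_waiting(8, 2) == 6
# Raising INITRANS to 8 leaves no one queued on that block:
assert sessions_waiting(8, 8) == 0
```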
Craig Rouse
Griffin Resources, Inc
www.griffinresources.com
jlock23
Participant
Posts: 18
Joined: Wed May 11, 2005 9:16 am

Post by jlock23 »

I appreciate the tips on the database checks. Our DBA doesn't see anything in error or deadlocking.

What I am MORE concerned about is the second part of my question: "When we receive the fatal deadlock error, the job completes with status = 1. This is NOT ok. Since the job status looks fine, we have no way to know the database was not updated successfully unless we open the log every time and check visually. Any idea why the status is OK rather than aborted or finished with warnings?"

How can a job have a FATAL error and still finish with JobStatus="OK"? It's NOT OK! Can anyone help with this part of the problem?


(Sorry ray.wurlod if you did address that question...I can't view premium content so I'm not sure what you said).
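[Editorial note: one workaround some shops use for the status problem above is a wrapper that scans the job's log output for fatal events and fails itself, so the scheduler sees a bad status even when the job reports OK. The sketch below is a hypothetical, generic illustration of that idea, not a DataStage API.]

```python
# Illustrative wrapper logic: fail the run if the exported job log
# contains any fatal entries, regardless of the reported job status.

def has_fatal(log_lines):
    """True if any log line looks like a fatal event."""
    return any("Fatal" in line or "deadlock" in line.lower()
               for line in log_lines)

log = [
    "Item #: 42  Event ID: 7  Type: Fatal",
    "Transaction (Process ID 211) was deadlocked ...",
]
assert has_fatal(log)
# In a real wrapper, exit nonzero here (e.g. sys.exit(1)) so the
# scheduler treats the run as failed.
```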
kld05
Charter Member
Posts: 36
Joined: Fri Apr 28, 2006 8:12 am

Post by kld05 »

Sorry for reviving an old thread but was this resolved? I encountered the same scenario last night where a deadlock was killed by the RDBMS with a successful job status.
Post Reply