deadlock while using Change capture in update

Post questions here relative to DataStage Enterprise/PX Edition for such areas as Parallel job design, Parallel datasets, BuildOps, Wrappers, etc.

Moderators: chulett, rschirm, roy

Post Reply
surionline
Premium Member
Posts: 4
Joined: Tue Oct 19, 2010 6:29 am

deadlock while using Change capture in update

Post by surionline »

We are using a Change Capture stage plus a Transformer (with constraints that split rows based on the change code) to implement insert/update logic. On the update link, if a parallel ODBC Connector stage is used, a transaction deadlock message appears in the log and the job aborts with a partial update, e.g. out of the 10 records to be updated, only 2 get updated.
If a sequential ODBC Enterprise stage or a sequential DRS stage is used instead, the job finishes with all records updated correctly. The job runs on 4 nodes.
Please advise why the transaction deadlock happens when the parallel ODBC Connector stage is used along with Change Capture.
ArndW
Participant
Posts: 16318
Joined: Tue Nov 16, 2004 9:08 am
Location: Germany
Contact:

Post by ArndW »

I would look at how you are reading your data - each database has different syntax for ensuring that you select without taking locks. In addition, you need to ensure that no 2 nodes can update the same records - the easiest solution here is to partition on the database key. Another easy, but inefficient, method is to reduce your commit size to 1.
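[Editorial sketch, not part of the original post.] The partition-on-key advice can be illustrated outside DataStage as follows; Python's built-in `hash()` stands in for DataStage's hash partitioner, and the key values are made up:

```python
# Hash-partitioning rows on the database key sends every row with a given
# key to the same node, so no two nodes can ever update the same record.
NUM_NODES = 4

def node_for(key):
    # Python's built-in hash() stands in for DataStage's hash partitioner;
    # it is stable within a single run, which is all partitioning needs.
    return hash(key) % NUM_NODES

keys = ["cust_001", "cust_002", "cust_001", "cust_003", "cust_001"]
partitions = {}
for key in keys:
    partitions.setdefault(node_for(key), []).append(key)

# All occurrences of "cust_001" land in exactly one partition.
nodes_with_cust_001 = [n for n, ks in partitions.items() if "cust_001" in ks]
print(len(nodes_with_cust_001))  # → 1
```

With round-robin or other key-agnostic partitioning, rows touching the same or adjacent records can land on different nodes, which is exactly the contention that produces lock waits and deadlocks.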
surionline
Premium Member
Posts: 4
Joined: Tue Oct 19, 2010 6:29 am

Post by surionline »

ArndW wrote:I would look at how you are reading your data - each database has different syntax for ensuring that you select without taking locks. In addition, you need to ensure that no 2 nodes can update the same records - the easiest solution here is to partition on the database key. Another easy, but inefficient, method is to reduce your commit size to 1.
- Database is SQL Server; we tried NOLOCK while reading, no luck.
- No duplicate-key records go down the update link, so no two nodes can update the same record.
- Tried hash partitioning, no luck.
- Tried transaction size = 1, no luck.

Are there any database or DataStage project settings to be changed?
sajidkp
Participant
Posts: 114
Joined: Thu Apr 30, 2009 12:17 am
Location: New Delhi

Post by sajidkp »

Try changing the execution mode of the target (ODBC) stage from Parallel to Sequential. There can be a performance reduction.
Regards,
Sajid KP
Post Reply