
update/insert deadlock

Posted: Tue Dec 23, 2014 6:38 am
by dsuser7
Hi,

I have a job that updates and inserts into a table [update if the record exists, else insert]. The table has no primary key defined on it.
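For reference, the write mode behaves per row roughly like the following T-SQL sketch (table and column names are placeholders, not the real ones):

-- Hypothetical names, shown only to illustrate the per-row logic.
UPDATE target_table
SET    val_col = ?
WHERE  key_col = ?;

IF @@ROWCOUNT = 0
    INSERT INTO target_table (key_col, val_col)
    VALUES (?, ?);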

The job worked fine in lower environments with fewer records (a few hundred), but failed in production, where the volume was 80k rows, with a deadlock error on the update statement.

Changes made to the job so far:

1. Changed the target ODBC stage to hash partitioning.
2. Changed the transaction size and array size from 2000 to 500.

These changes caused the job to abort with the same deadlock error even in the lower environments.

Any suggestions on how to correct the design?

Note
Fatal Error:
ODBC function "SQLExecute" reported: SQLSTATE = : Native Error Code = 1205: Msg = [IBM(DataDirect OEM)][ODBC SQL Server Legacy Driver][SQL Server]Transaction (Process ID 71) was deadlocked on lock | communication buffer resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
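If you can query the database directly, the deadlock graph shows the exact statements and resources involved. On SQL Server 2012 or later (assuming VIEW SERVER STATE permission) it can be read from the built-in system_health Extended Events session:

-- Pulls captured deadlock reports from the default system_health session.
SELECT CAST(event_data AS XML) AS deadlock_graph
FROM   sys.fn_xe_file_target_read_file('system_health*.xel', NULL, NULL, NULL)
WHERE  object_name = 'xml_deadlock_report';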

Thanks.

Posted: Tue Dec 23, 2014 7:10 am
by priyadarshikunal
Do you have any duplicates coming from the source? Does the job work fine when executed on a single node?

Posted: Tue Dec 23, 2014 7:12 am
by dsuser7
There are no duplicates in the source data; we verified that.
The job runs on 2 nodes.

Posted: Tue Dec 23, 2014 7:13 am
by priyadarshikunal
The second part of the question was: "Does the job work fine when executed on a single node?"

Posted: Tue Dec 23, 2014 9:22 am
by dsuser7
The job updates successfully when run on a single node.

Posted: Tue Dec 23, 2014 10:59 am
by boxtoby
We had exactly this problem.

Either run the job on one node, or partition by the columns you are using as keys for the update.

The problem is almost certainly caused by duplicate update keys.
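Schematically (hypothetical keys and values): if rows for keys 'A' and 'B' land on both nodes, and each transaction holds its row locks until the batch commits, the two sessions can lock the same rows in opposite order:

-- Session 1 (node 1), mid-transaction:
UPDATE target_table SET val_col = 1 WHERE key_col = 'A';  -- locks row A
UPDATE target_table SET val_col = 2 WHERE key_col = 'B';  -- waits for row B

-- Session 2 (node 2), mid-transaction:
UPDATE target_table SET val_col = 3 WHERE key_col = 'B';  -- locks row B
UPDATE target_table SET val_col = 4 WHERE key_col = 'A';  -- waits for row A: deadlock

Hash partitioning on the update key columns sends every row for a given key to the same node, so this interleaving cannot happen.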

Regards,
Bob.

Posted: Tue Dec 23, 2014 12:57 pm
by rkashyap
This is one of the issues we are facing on DataStage 9.1.2 (Solaris) while writing to Oracle 11.2.0.3 on a three-node RAC (APT config = 8 nodes). Our support provider could not replicate it in their environment and believes it to be an Oracle database issue.

As Bob Oxtoby mentioned, the workaround for this issue is either to run on a one-node configuration or to set the target Oracle Connector stage to execute sequentially.

A related issue is that the insert part of the Update/Insert write mode aborts with a unique constraint error. The workaround mentioned above works for that issue as well. Another workaround (for inserts into an Oracle 11 target table) is to supply the ignore_row_on_dupkey_index hint.
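For example (hypothetical table and index names; the hint takes the unique index that the insert would otherwise violate):

-- Oracle 11g and later: rows that would hit the named unique index are
-- silently skipped instead of raising ORA-00001.
INSERT /*+ IGNORE_ROW_ON_DUPKEY_INDEX(target_table, target_table_pk) */
INTO   target_table (key_col, val_col)
VALUES (:key, :val);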

Please do raise a PMR for this issue.

Posted: Wed Dec 31, 2014 9:33 am
by dsuser7
This issue seems to be resolved for now.

We made the following changes:

1. Used hash partitioning.
2. Defined keys on the target table (see the sketch below).
3. Changed the transaction and array size from 2000 to 500.
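For point 2, a minimal sketch of the DDL (hypothetical names):

-- A primary key gives the update a unique index to seek on, so each
-- session locks only the single row it is changing.
ALTER TABLE target_table
  ADD CONSTRAINT target_table_pk PRIMARY KEY (key_col);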

Thanks!