update/insert deadlock

Post questions here relative to DataStage Enterprise/PX Edition for such areas as Parallel job design, Parallel datasets, BuildOps, Wrappers, etc.


dsuser7
Participant
Posts: 41
Joined: Sat Nov 29, 2008 2:21 am

update/insert deadlock

Post by dsuser7 »

Hi,

I have a job which updates and inserts into a table [update if the record exists, else insert]. This table has no primary keys defined on it.
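For context, with this write mode the stage effectively issues something like the following per row (table and column names here are only placeholders):

UPDATE target_tab
SET    val_col = ?
WHERE  key_col = ?;

-- if no row was updated, an insert follows
INSERT INTO target_tab (key_col, val_col)
VALUES (?, ?);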

The job worked fine in lower environments with fewer records (a few hundred), but failed in production, where the volume was around 80k, with a deadlock error on the update statement.

Changes made to the job now:

1. Changed the target ODBC stage to hash partitioning.
2. Changed the transaction and array size from 2000 to 500.

These changes caused the job to abort in lower environments with a deadlock error.

Any suggestions on how to correct the design?

Note
Fatal Error:
ODBC function "SQLExecute" reported: SQLSTATE = : Native Error Code = 1205: Msg = [IBM(DataDirect OEM)][ODBC SQL Server Legacy Driver][SQL Server]Transaction (Process ID 71) was deadlocked on lock | communication buffer resources with another process and has been chosen as the deadlock victim. Rerun the transaction.

Thanks.
priyadarshikunal
Premium Member
Posts: 1735
Joined: Thu Mar 01, 2007 5:44 am
Location: Troy, MI

Post by priyadarshikunal »

Do you have any duplicates coming from the source? Does the job work fine when executed on a single node?
Priyadarshi Kunal

Genius may have its limitations, but stupidity is not thus handicapped. :wink:
dsuser7
Participant
Posts: 41
Joined: Sat Nov 29, 2008 2:21 am

Post by dsuser7 »

There are no duplicates in the source data; we verified that.
The job is run on 2 nodes.
priyadarshikunal
Premium Member
Posts: 1735
Joined: Thu Mar 01, 2007 5:44 am
Location: Troy, MI

Post by priyadarshikunal »

The second part of the question was: "Does the job work fine when executed on a single node?"
Priyadarshi Kunal

Genius may have its limitations, but stupidity is not thus handicapped. :wink:
dsuser7
Participant
Posts: 41
Joined: Sat Nov 29, 2008 2:21 am

Post by dsuser7 »

The job updates successfully when run on a single node.
boxtoby
Premium Member
Posts: 138
Joined: Mon Mar 13, 2006 5:11 pm
Location: UK

Post by boxtoby »

We had exactly this problem.

Either run the job on one node, or partition by the columns you are using as keys for the update.

The problem is almost certainly caused by duplicate update keys: when the data is not partitioned on those keys, rows that share an update key can end up on different nodes, which then try to lock the same rows at the same time.
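If you want to confirm that, a quick check for repeated update keys in the incoming data could look something like this (table and column names are placeholders):

SELECT key_col1, key_col2, COUNT(*) AS cnt
FROM   source_table
GROUP BY key_col1, key_col2
HAVING COUNT(*) > 1;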

Regards,
Bob.
Bob Oxtoby
rkashyap
Premium Member
Posts: 532
Joined: Fri Dec 02, 2011 12:02 pm
Location: Richmond VA

Post by rkashyap »

This is one of the issues that we are facing on DataStage 9.1.2 (Solaris) while writing to Oracle 11.2.0.3 on a three-node RAC (APT config = 8 nodes). Our support provider could not replicate it in their environment and believes this to be an Oracle database issue.

As Bob Oxtoby has mentioned, the workaround for this issue is either to run on a 1-node configuration or to set the target Oracle Connector stage to process sequentially.

Another related issue is that the "Insert" of the Update/Insert write mode aborts with a "Unique Constraint" error. The above-mentioned workaround works for this issue as well. Another workaround (for inserts to an Oracle 11 target table) is to provide the ignore_row_on_dupkey_index hint.
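For example, the hint is placed on the insert statement and names the unique index being violated; table and index names below are just placeholders:

INSERT /*+ ignore_row_on_dupkey_index(target_tab, target_tab_uk) */
INTO target_tab (key_col, val_col)
VALUES (?, ?);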

Please do raise a PMR for this issue.
dsuser7
Participant
Posts: 41
Joined: Sat Nov 29, 2008 2:21 am

Post by dsuser7 »

This issue seems to be resolved for now.

We made the following changes:

1. Used hash partitioning.
2. Defined keys on the target table (see the sketch after this list).
3. Changed the transaction and array size from 2000 to 500.
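For point 2, the keys were defined along these lines (table and column names are placeholders):

ALTER TABLE target_tab
ADD CONSTRAINT pk_target_tab PRIMARY KEY (key_col1, key_col2);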

Thanks!