
Table locking when updating and inserting into the same table

Posted: Wed Aug 25, 2010 3:18 am
by yimwai
How can I avoid locking the table when updating and inserting into one table at the same time?
The result data should be divided into two parts (the two parts have no shared data):
insert one part into the table, and update the other part in the same table.
I used a Transformer stage to produce the result data and split it into two parts through two output links to DB2 Enterprise stages;
then the table gets locked and the job keeps running forever...
How can I handle this situation?
Can I make the DB2 Enterprise stages (fed from one Transformer) run in order, or is there a better approach?

Posted: Wed Aug 25, 2010 3:30 am
by ETLJOB
Do you find any reason for the "UPSERT" option not to work in your case?

Note: Post your questions in appropriate forum.

Posted: Wed Aug 25, 2010 4:51 am
by HariK
What write methods are you using in the target stages?

Posted: Wed Aug 25, 2010 6:52 am
by yimwai
ETLJOB wrote:Do you find any reason for the "UPSERT" option not to work in your case?

Note: Post your questions in appropriate forum.
Oh, that may be a good solution; I have never used upsert.
You mean that I can divide the result data by checking the input data in the "where" clause of the user-defined SQL?
Such as "WHERE ORCHESTRATE.DIF_COD=1"
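For example, the user-defined upsert statements might look something like this (the table and column names here are made up for illustration; `ORCHESTRATE.column` is the DataStage placeholder for input link columns, and the exact behavior depends on the stage's upsert order setting):

```sql
-- Hypothetical update statement: rows flagged DIF_COD = 1 are treated as updates.
UPDATE MY_TABLE
SET    VAL_COL = ORCHESTRATE.VAL_COL
WHERE  KEY_COL = ORCHESTRATE.KEY_COL
  AND  ORCHESTRATE.DIF_COD = 1

-- Hypothetical insert statement for the remaining rows.
INSERT INTO MY_TABLE (KEY_COL, VAL_COL)
VALUES (ORCHESTRATE.KEY_COL, ORCHESTRATE.VAL_COL)
```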

Posted: Wed Aug 25, 2010 9:28 pm
by yimwai
I'm so sorry, I think I described the situation wrongly. In fact, the two parts of the result data do have shared data,
and upsert only performs one action per record. I also don't know how to divide them in the DB2 Enterprise stage.

Posted: Thu Aug 26, 2010 7:20 pm
by kwwilliams
You can't guarantee that the table won't lock when performing two simultaneous bulk actions. There are many reasons for this. The database is potentially going to try to write an update and an insert into the same pages at the same time. This would be especially true if you have a clustered index on the table.

The indexes are being updated at the same time you are performing inserts or updates, and that contention gets worse if you happen to be updating a key column (it shouldn't happen often, but I have seen it).

Finally, DB2 will escalate the locks all the way to the table level depending upon how many locks are being held (from row to block and then all the way to the table level). Your DBA should be able to help you understand the locking that is occurring while your job is running. And regardless of what they tell you, they can change some parameters to allow more locks in the database before a lock is escalated. If they tell you otherwise, google "db2 lock escalation" and you should find some useful information for your confrontation ... I mean conversation. :wink:
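The parameters in question are the LOCKLIST and MAXLOCKS database configuration settings. A sketch of how a DBA might inspect and raise them from the DB2 command line (the database name MYDB and the values shown are examples only; appropriate values depend on your memory configuration):

```shell
# Inspect the current lock-related configuration for the database.
db2 get db cfg for MYDB | grep -i lock

# Raise the lock list size (4 KB pages) and the per-application
# percentage threshold that triggers lock escalation.
db2 update db cfg for MYDB using LOCKLIST 8192 MAXLOCKS 60
```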

In the end, I would drop one of the two links to a dataset and then process the actions sequentially. It's probably not worth the effort to get this working in parallel, and there is no guarantee that it won't revert to its previous behavior in the future.
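If you do want a single-statement route for the shared records, DB2 also supports MERGE, which does the update-or-insert decision per record in one pass; just a sketch, with hypothetical table and column names:

```sql
-- Sketch only: TARGET_TAB, KEY_COL and VAL_COL are made-up names.
MERGE INTO TARGET_TAB AS t
USING (VALUES (?, ?)) AS s (KEY_COL, VAL_COL)
ON t.KEY_COL = s.KEY_COL
WHEN MATCHED THEN
  UPDATE SET VAL_COL = s.VAL_COL
WHEN NOT MATCHED THEN
  INSERT (KEY_COL, VAL_COL) VALUES (s.KEY_COL, s.VAL_COL)
```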