deadlock error
Moderators: chulett, rschirm, roy
I have a job where both the source and target are SQL Server.
source--->transformer--->target
When I run the job, it aborts with the error message:
Transaction (Process ID 65) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
Sometimes it runs with some rows written to the target before aborting, and sometimes it aborts with no rows at all. I am using UPSERT (update then insert) at the target.
Please give some insight.
Thanks
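The update-then-insert pattern described in the question can be sketched as below. This is a minimal stand-in using Python's built-in sqlite3 rather than SQL Server and DataStage; the table and column names are made up for illustration, and a real target stage would issue equivalent parameterized UPDATE/INSERT statements per row.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE target (id INTEGER PRIMARY KEY, val TEXT)")
cur.execute("INSERT INTO target VALUES (1, 'old')")

def upsert(cur, key, val):
    # UPSERT (update then insert): try the UPDATE first;
    # if no row matched, fall back to INSERT.
    cur.execute("UPDATE target SET val = ? WHERE id = ?", (val, key))
    if cur.rowcount == 0:
        cur.execute("INSERT INTO target (id, val) VALUES (?, ?)", (key, val))

upsert(cur, 1, "new")    # existing row -> takes the UPDATE path
upsert(cur, 2, "fresh")  # missing row  -> falls through to INSERT
conn.commit()
print(cur.execute("SELECT id, val FROM target ORDER BY id").fetchall())
# [(1, 'new'), (2, 'fresh')]
```

Note that each upsert touches the target row twice in the worst case (a failed UPDATE then an INSERT), which widens the window for lock conflicts when several writers hit the same rows concurrently.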
-
- Participant
- Posts: 4
- Joined: Mon Oct 19, 2009 2:57 am
Re: deadlock error
Re-partition before the target stage. You may have duplicates in your data...
Thanks
Re: deadlock error
Thanks for the reply; I have been waiting for someone to help out.
Currently I am using Auto partitioning on the transformer, but I tried Hash, as suggested in one of the posts, and the job still aborts with the same error. I also tried disabling RCP, but no result.
I would expect that DataStage, if you didn't set it manually, sets the array size and the commit level of the target stage to 1, because this is necessary for insert/update operations.
So I would not expect the deadlock to be caused by two rows being written by this same job. There may be uncommitted write operations by other processes, though, that do not allow you to update your target table. This could be the result of an aborted job, but it may also be completely unrelated to any activity within DataStage.
Check with your DBA to find out whether there are any locks on the target table.
Just to be certain: replace the target stage with a Peek to see the values that are being written to your target stage, and make sure your partitioning corresponds to the update keys of your target stage.
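The error text itself ("Rerun the transaction") points at the standard client-side mitigation: retry the write when the session is chosen as the deadlock victim. A hedged sketch in Python with a simulated write; the exception class and `flaky_write` are stand-ins invented for illustration, not a real SQL Server driver, where the deadlock would instead surface as error 1205.

```python
import time

class DeadlockVictim(Exception):
    """Stand-in for SQL Server error 1205 ('chosen as the deadlock victim')."""

def flaky_write(attempt):
    # Simulated write: loses the deadlock on the first two tries, then succeeds.
    if attempt < 2:
        raise DeadlockVictim("Transaction was deadlocked ... Rerun the transaction.")
    return "committed"

def write_with_retry(max_retries=5):
    for attempt in range(max_retries):
        try:
            return flaky_write(attempt)
        except DeadlockVictim:
            # Exponential backoff before rerunning, as the error message advises.
            time.sleep(0.01 * (2 ** attempt))
    raise RuntimeError("still deadlocking after retries")

print(write_with_retry())  # -> committed
```

Retrying masks the symptom rather than removing the lock contention, so it complements, but does not replace, fixing the partitioning or indexing discussed in this thread.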
"It is not the lucky ones who are grateful.
It is the grateful who are happy." Francis Bacon
As the error indicates, there is a deadlock (conflict) with another process which is updating the same block of data within the database. This usually indicates that your data is not partitioned properly.
Have you partitioned (and optionally sorted) your data on the table primary key?
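Hash partitioning on the primary key, as suggested here, routes every row with a given key to the same partition, so no two parallel writers can collide on the same target row. A minimal sketch of the idea in Python; the partition count and row data are made up, and DataStage's actual hash partitioner is of course not this code.

```python
import hashlib

NUM_PARTITIONS = 4

def partition_for(key):
    # Stable hash of the key (Python's built-in hash() is salted per process,
    # so a fixed digest is used instead).
    digest = hashlib.md5(str(key).encode()).digest()
    return int.from_bytes(digest[:4], "big") % NUM_PARTITIONS

rows = [{"id": 7, "val": "a"}, {"id": 7, "val": "b"}, {"id": 12, "val": "c"}]
partitions = {}
for row in rows:
    # Route each row by its primary key; both id=7 rows land together.
    partitions.setdefault(partition_for(row["id"]), []).append(row)

print({p: [r["id"] for r in rs] for p, rs in partitions.items()})
```

With Auto partitioning, by contrast, two rows for the same key can end up on different nodes, and their concurrent UPDATE/INSERT transactions can deadlock against each other.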
Regards,
- james wiles
All generalizations are false, including this one - Mark Twain.
Re: deadlock error
You will get this issue when you use the Write method; with upsert it is rare.
In the target stage, set Stage --> Advanced --> Execution mode to Sequential.
You will not get this issue again. Which versions of DataStage and SQL Server are you using?
DataStage 8.1 supports SQL Server 2005 or earlier, whereas DataStage 8.5 / 8.7 support SQL Server 2008 (not R2) or earlier.
DS User
Have you got a suitable index matching the update key set?
I don't know much about SQL Server, but we had something similar on DB2 that appears to have been fixed by an index on the update keys. (I put this down to table-scan locks, which I believe SQL Server also takes when no suitable index is available.)
Deadlock cause
I faced this issue and resolved it.
We get this error when multiple jobs are running and trying to write to the same table.
Re: deadlock error
Hi Sura,
Thanks for the insight. I am using UPSERT (update then insert), and as you advised I changed the execution mode to Sequential on the target stage, keeping the transformer parallel. I ran the job and it was successful, but I still don't understand the reason for this occurrence.
I am using DataStage 8.5 and SQL Server 2008.
Re: deadlock error
Earlier you could not use more than one SQL Server EE stage in the same job to load data. IBM provided a fix for that in DS 8.5 and that was resolved, yet this issue remains, and the problem is that it may not happen on every run. It is a bug. If you can, please raise a PMR with IBM.
DS User