
deadlock error

Posted: Mon Jan 30, 2012 9:10 am
by sush26283
I have a job; the source and target are both SQL Server:
source--->transformer--->target

When I run the job, it aborts with the error message:

Transaction (Process ID 65) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.

Sometimes it aborts after writing some rows to the target, and sometimes with no rows at all. I am using UPSERT (update then insert) at the target.
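For context, an "update then insert" upsert against SQL Server typically issues two statements per input row, roughly along these lines (a hedged sketch; the table and column names here are placeholders, not taken from the actual job):

```sql
-- Sketch of the update-then-insert pattern per row.
-- dbo.target_table, key_col, col_a, col_b are hypothetical names.
UPDATE dbo.target_table
SET    col_a = ?, col_b = ?
WHERE  key_col = ?;

-- Only attempted when the UPDATE matched no row (@@ROWCOUNT = 0):
INSERT INTO dbo.target_table (key_col, col_a, col_b)
VALUES (?, ?, ?);
```

Because each row takes locks twice (once for the probe/update, possibly again for the insert), two parallel writer processes touching the same pages can easily deadlock each other.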

Please give some insight.

Thanks

Re: deadlock error

Posted: Mon Jan 30, 2012 10:18 am
by rajeshananthapalpu
Re-partition before the target stage. You may have duplicates in your data...

Thanks

Re: deadlock error

Posted: Mon Jan 30, 2012 10:23 am
by sush26283
Thanks for the reply; I have been waiting for someone to help out.
Currently I am using AUTO partitioning in the transformer, but I tried HASH, as suggested in one of the posts, and the job still aborts with the mentioned error. I also tried disabling RCP, but no result. :(

Posted: Mon Jan 30, 2012 10:47 am
by BI-RMA
I would expect that DataStage, if you didn't set them manually, sets the array size and the commit level of the target stage to 1, because this is necessary for insert/update operations.

So I would not expect the deadlock to be caused by two rows being written by this same job. There may be uncommitted write operations by other processes, though, that do not allow you to update your target table. This could be the result of an aborted job, but it may also be completely unrelated to any activity within DataStage.

Check with your DBA to find out whether there are any locks on the target table.
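As a starting point for that conversation, here is a sketch of a query the DBA could run against SQL Server's `sys.dm_tran_locks` DMV to see table-level locks (the table name is a placeholder):

```sql
-- Sketch: show sessions holding or waiting for table-level locks
-- on the target table. 'dbo.target_table' is a placeholder name.
-- Row- and page-level locks hang off partition IDs instead, so a
-- complete picture would need a join through sys.partitions.
SELECT request_session_id,
       resource_type,
       request_mode,
       request_status
FROM   sys.dm_tran_locks
WHERE  resource_database_id = DB_ID()
  AND  resource_type = 'OBJECT'
  AND  resource_associated_entity_id = OBJECT_ID('dbo.target_table');
```

Any row with `request_status = 'WAIT'` indicates a session blocked by another session's lock on the table.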

Just to be certain: replace the target stage with a Peek stage to see the values that are being written out to your target. Make sure your partitioning corresponds correctly to the update keys of your target stage.

Posted: Mon Jan 30, 2012 10:56 am
by jwiles
As the error indicates, there is a deadlock (conflict) with another process which is updating the same block of data within the database. This usually indicates that your data is not partitioned properly.

Have you partitioned (and optionally sorted) your data on the table primary key?

Regards,

Posted: Mon Jan 30, 2012 12:19 pm
by sush26283
I tried partitioning with HASH, but the job failed... but moments ago I ran the job in sequential mode and it ran successfully, with no warnings or errors.

Posted: Mon Jan 30, 2012 12:51 pm
by sush26283
I tried again with HASH and sort, but the job still fails; this time it failed at the last step, after reading all the data.

Posted: Mon Jan 30, 2012 2:02 pm
by pandeesh
With hash partitioning, try to Peek your data so that you can identify what the problem is.

Posted: Mon Jan 30, 2012 2:31 pm
by sush26283
I replaced the output stage with the Peek stage; it shows all records were transferred from the transformer to the Peek stage, with no errors or warnings.

Re: deadlock error

Posted: Mon Jan 30, 2012 5:10 pm
by SURA
You will usually get this issue with the WRITE method; with upsert it is rare.

In the target stage, go to Advanced --> Execution mode and set it to Sequential.

You will not get this issue again. Which versions of DataStage and SQL Server are you using?

DataStage 8.1 supports SQL Server 2005 or earlier, whereas DataStage 8.5 / 8.7 support SQL Server 2008 (not R2) or earlier.

DS User

Posted: Mon Jan 30, 2012 8:52 pm
by Kryt0n
Have you got a suitable index matching against the update key set?

I don't know much about SQL Server, but we had something similar on DB2 which appeared to be fixed by an index on the update keys. (I'm putting this down to table-scan locks, which I believe SQL Server also takes when no suitable index is available.)
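A minimal sketch of what such an index might look like on the SQL Server side, assuming the update keys are known (the index, table, and column names here are hypothetical):

```sql
-- Sketch: a nonclustered index on the update keys, so the UPDATE in
-- the upsert can seek to the target row instead of scanning (and
-- locking) large portions of the table.
-- Index, table, and column names are placeholders.
CREATE NONCLUSTERED INDEX ix_target_update_keys
    ON dbo.target_table (key_col1, key_col2);
```

With a seek, each writer process locks only the keys it touches, which greatly reduces the chance of two parallel writers deadlocking.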

Dead lock cause

Posted: Tue Jan 31, 2012 4:48 am
by prdpreddy
I faced this issue and resolved it.

We get this error when multiple jobs are running and trying to write to the same table.

Re: deadlock error

Posted: Tue Jan 31, 2012 8:04 am
by sush26283
Hi Sura,
Thanks for the insight... I am using UPSERT (update then insert). As you advised, I changed the execution mode to sequential in the target stage, keeping the transformer parallel. I ran the job and it was successful, but I still don't understand the reason for this occurrence.

I am using DataStage 8.5 and SQL Server 2008.

Re: deadlock error

Posted: Tue Jan 31, 2012 4:27 pm
by SURA
Earlier you couldn't use more than one SQL Server EE stage in the same job to load data. IBM provided a fix for that in DS 8.5, so that is resolved, but this issue still occurs, and the problem is that it may not happen on every run. It is a bug. If you can, please raise a PMR with IBM.

DS User

Posted: Tue Jan 31, 2012 5:11 pm
by ds_avatar
We experienced the same error... the target table had been opened by one of our team members in edit mode (Toad); hence the records were not inserted into the table by the DS job. :)

Hopefully this isn't the issue in your case.