Job Aborting due to deadlock
Moderators: chulett, rschirm, roy
Hi All,
My job updates records, and my source and target are the same table (my database is SQL Server 2005).
I have set the hash keys as well.
Sometimes it works fine; on other days it gets aborted. It runs for more than an hour and then aborts (it should update 59,000 records, but it aborts after about 53,000 records are updated).
I changed the hash keys as well, but the result is the same.
Can anyone suggest something on this?
Jaypee
What exactly do you mean by "hash keys" - the partitioning of the source and target data? How many nodes does your configuration file have, and if you run with a 1-node configuration, does the job still abort?
Also, could you post the exact error message you are seeing?
Have you specified a commit frequency, and are you using the default or user-written SQL in the read and/or write stages?
Basically, your READ shouldn't be locking anything for the WRITE to get bothered about - perhaps other processes are using the same table.
Re: Job Aborting due to deadlock
Please check the update query used in the database stage once again.
Thanks & Regards,
Bhanu
Hi
My update query is:

UPDATE ds_wk_PMGProvXref
SET ds_wk_PMGProvXref.HospitalGRP = imp_ProviderPrimaryHospital.Hosp_Grp_Num,
    ds_wk_PMGProvXref.HospitalFAC = imp_ProviderPrimaryHospital.Hosp_Num
FROM ds_wk_PMGProvXref WITH (NOLOCK)
INNER JOIN imp_ProviderPrimaryHospital WITH (NOLOCK)
    ON ds_wk_PMGProvXref.SPECIALTY_PRODUCT_INDIC = imp_ProviderPrimaryHospital.PROBASE_PRODUCT_TYPE
    AND ds_wk_PMGProvXref.EnrollmentFacility = imp_ProviderPrimaryHospital.PRVDR_FAC
    AND ds_wk_PMGProvXref.EnrollmentGroup = imp_ProviderPrimaryHospital.PRVDR_GRP
    AND ds_wk_PMGProvXref.StateCode = imp_ProviderPrimaryHospital.StateCode
WHERE ds_wk_PMGProvXref.StateCode IN ('CA','OK','TX','OR')
    AND ds_wk_PMGProvXref.PCP_INDIC = 'Y'
    AND imp_ProviderPrimaryHospital.PRIMARY_HOSPITAL_INDICATOR = 'Y';
It works fine in the database (I am using an ODBC stage), and I am selecting the columns with NOLOCK in a user-defined query.
The hash keys are columns like StateCode, SPECIALTY_PRODUCT_INDIC, etc.
I am using a 2-node config file.
(How do I run on a single node if I have a 2-node config file and no admin rights?)
Jaypee
Please explain more clearly: did you assign those columns (StateCode, SPECIALTY_PRODUCT_INDIC, etc.) as hash keys in DataStage?
Describe your job design as well, and the partitioning (if any) wherever it is used.
You don't need admin rights to create a new configuration file. Create a 1-node configuration file through the Designer and run your job with it via the $APT_CONFIG_FILE environment variable.
If you don't have write permission to the default configuration file directory, save the 1-node file anywhere on the machine (e.g. "/tmp/1node.cfg") and then reference that location in your runtime parameters.
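For reference, a minimal 1-node configuration file might look like the following sketch; the fastname and the resource paths are placeholders that you would replace with your own server name and directories:

```
{
    node "node1"
    {
        fastname "your_server_name"
        pools ""
        resource disk "/tmp/ds/data" {pools ""}
        resource scratchdisk "/tmp/ds/scratch" {pools ""}
    }
}
```

Point $APT_CONFIG_FILE at this file (for example as a job parameter) to run the job on a single node.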
Hi all
My issue is that I have an update SQL as below:
UPDATE ds_wk_PMGProvXref
SET ds_wk_PMGProvXref.HospitalGRP = imp_ProviderPrimaryHospital.Hosp_Grp_Num,
    ds_wk_PMGProvXref.HospitalFAC = imp_ProviderPrimaryHospital.Hosp_Num
FROM ds_wk_PMGProvXref WITH (NOLOCK)
INNER JOIN imp_ProviderPrimaryHospital WITH (NOLOCK)
    ON ds_wk_PMGProvXref.SPECIALTY_PRODUCT_INDIC = imp_ProviderPrimaryHospital.PROBASE_PRODUCT_TYPE
    AND ds_wk_PMGProvXref.EnrollmentFacility = imp_ProviderPrimaryHospital.PRVDR_FAC
    AND ds_wk_PMGProvXref.EnrollmentGroup = imp_ProviderPrimaryHospital.PRVDR_GRP
    AND ds_wk_PMGProvXref.StateCode = imp_ProviderPrimaryHospital.StateCode
WHERE ds_wk_PMGProvXref.StateCode IN ('CA','OK','TX','OR')
    AND ds_wk_PMGProvXref.PCP_INDIC = 'Y'
    AND imp_ProviderPrimaryHospital.PRIMARY_HOSPITAL_INDICATOR = 'Y';
I am achieving this with the job. My source table is wk_PMGProvxref and the target is the same table (database: SQL Server). I am updating two columns, HospitalGRP and HospitalFAC, based on the join condition in the query above. I used hash partitioning on the columns StateCode, EnrollmentGroup, etc., but the job aborts with the following error message:

ODBC_Enterprise_76,1: Fatal Error: [IBM(DataDirect OEM)][ODBC SQL Server Driver][SQL Server]Transaction (Process ID 65) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.

Can this be done using a 1-node config file? Also, how do I create a config file in the Designer? I am developing the jobs in the Dev environment; what about when they move to Production?
Thanks in Advance
Jaypee
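The error above is SQL Server error 1205 (chosen as the deadlock victim), whose standard remedy is to rerun the transaction. As a sketch only, not a DataStage-specific fix: if the UPDATE were executed directly against SQL Server 2005 (e.g. as after-SQL or a stored procedure), it could be wrapped in a retry loop like the one below. The retry count and delay are arbitrary, the NOLOCK hints are omitted, and this assumes the UPDATE is safe to re-run:

```sql
-- Retry wrapper for the UPDATE; error 1205 = chosen as deadlock victim.
DECLARE @retries INT;
SET @retries = 3;
WHILE @retries > 0
BEGIN
    BEGIN TRY
        BEGIN TRANSACTION;
        UPDATE x
        SET x.HospitalGRP = h.Hosp_Grp_Num,
            x.HospitalFAC = h.Hosp_Num
        FROM ds_wk_PMGProvXref AS x
        INNER JOIN imp_ProviderPrimaryHospital AS h
            ON x.SPECIALTY_PRODUCT_INDIC = h.PROBASE_PRODUCT_TYPE
           AND x.EnrollmentFacility = h.PRVDR_FAC
           AND x.EnrollmentGroup = h.PRVDR_GRP
           AND x.StateCode = h.StateCode
        WHERE x.StateCode IN ('CA','OK','TX','OR')
          AND x.PCP_INDIC = 'Y'
          AND h.PRIMARY_HOSPITAL_INDICATOR = 'Y';
        COMMIT TRANSACTION;
        BREAK;  -- success, leave the loop
    END TRY
    BEGIN CATCH
        IF XACT_STATE() <> 0 ROLLBACK TRANSACTION;
        IF ERROR_NUMBER() = 1205 AND @retries > 1
        BEGIN
            -- Deadlock victim: back off briefly, then retry.
            SET @retries = @retries - 1;
            WAITFOR DELAY '00:00:05';
        END
        ELSE
        BEGIN
            -- Not a deadlock, or out of retries: re-raise (2005-compatible).
            DECLARE @msg NVARCHAR(2048);
            SET @msg = ERROR_MESSAGE();
            RAISERROR(@msg, 16, 1);
            BREAK;
        END
    END CATCH
END;
```

Inside a parallel job, the simpler equivalents are reducing to a 1-node configuration (so only one session holds locks on the table) or letting the scheduler rerun the job, as the error message suggests.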