Transaction handling in Parallel jobs

Post questions here relative to DataStage Enterprise/PX Edition for such areas as Parallel job design, Parallel datasets, BuildOps, Wrappers, etc.

Moderators: chulett, rschirm, roy

Amuktamalyada
Participant
Posts: 19
Joined: Mon May 01, 2006 11:27 am

Transaction handling in Parallel jobs

Post by Amuktamalyada »

Hi everyone,

I have a parallel job that has failed due to a deadlock on one of the Oracle tables into which the job does inserts.

The approach I adopted was to check whether any concurrent process was modifying the table, and I found none.
However, I am not sure how transaction commits work in parallel jobs, which could be another cause of the deadlocks.

Please advise whether I am heading the right way in analyzing my issue.

Thank you,
Mukthi
DSguru2B
Charter Member
Posts: 6854
Joined: Wed Feb 09, 2005 3:44 pm
Location: Houston, TX

Post by DSguru2B »

What led you to believe that the table was locked, or that your process was deadlocked? If your job hangs, then I suggest getting together with your DBA to see if the connection even reaches the table and what's happening at the database level. There might be live threads out there with your id that have put the table in a locked state.
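At the database level, one quick check is to query the dynamic performance views for blocking sessions. This is only a sketch: it assumes a reasonably recent Oracle release (where V$SESSION has a BLOCKING_SESSION column) and SELECT privilege on the v$ views, which your DBA may need to grant.

```sql
-- Sketch only: lists sessions that are currently blocked, together with
-- the SID of the session blocking them. Run as a privileged user or ask
-- your DBA to run it while the job appears hung.
SELECT s.sid,
       s.serial#,
       s.username,
       s.blocking_session
FROM   v$session s
WHERE  s.blocking_session IS NOT NULL;
```

If this returns rows while your job is running, the blocking SID tells the DBA which session (possibly one of your own job's connections) holds the lock.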
Creativity is allowing yourself to make mistakes. Art is knowing which ones to keep.
Amuktamalyada
Participant
Posts: 19
Joined: Mon May 01, 2006 11:27 am

Post by Amuktamalyada »

I came to know about the deadlock from the job log in production.
DSguru2B
Charter Member
Posts: 6854
Joined: Wed Feb 09, 2005 3:44 pm
Location: Houston, TX

Post by DSguru2B »

If this is a DataStage log, it would be nice if you posted the exact error message.
If you suspect it's the commit level, there are two environment variables you can set to control commit levels, namely:
APT_ORAUPSERT_COMMIT_ROW_INTERVAL
APT_ORAUPSERT_COMMIT_TIME_INTERVAL
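These are normally set as project or job environment variables through the DataStage Administrator; in a shell context the equivalent would look like the sketch below. The values shown are purely illustrative, not recommendations.

```shell
# Illustrative values only: commit after every 1000 rows, or every
# 2 seconds, whichever comes first. Set the real values through the
# DataStage Administrator or as job parameters rather than in a script.
export APT_ORAUPSERT_COMMIT_ROW_INTERVAL=1000
export APT_ORAUPSERT_COMMIT_TIME_INTERVAL=2
echo "commit every $APT_ORAUPSERT_COMMIT_ROW_INTERVAL rows or $APT_ORAUPSERT_COMMIT_TIME_INTERVAL seconds"
```

Setting the row interval to 1 effectively gives per-row commits, which minimizes lock hold time at the cost of throughput.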
Creativity is allowing yourself to make mistakes. Art is knowing which ones to keep.
Amuktamalyada
Participant
Posts: 19
Joined: Mon May 01, 2006 11:27 am

Post by Amuktamalyada »

Thank you Sir,

The error log goes as below.

insert is: INSERT INTO abcd (a, b, c, d, ...) VALUES (1, 2, 3, 4, ...)
sqlcode is: -60
esql complaint: ORA-00060: deadlock detected while waiting for resource

I am glad to know more about the env vars.

Thanks again,
Mukti
DSguru2B
Charter Member
Posts: 6854
Joined: Wed Feb 09, 2005 3:44 pm
Location: Houston, TX

Post by DSguru2B »

I highly doubt it's a commit level issue. It was just the timing at which your job ran; the table was locked by another process/session. Run the job again and see what happens.
Creativity is allowing yourself to make mistakes. Art is knowing which ones to keep.
Amuktamalyada
Participant
Posts: 19
Joined: Mon May 01, 2006 11:27 am

Post by Amuktamalyada »

I did not find any concurrent process running against that table, which is why I intended to check the transaction commits.

-Mukthi
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia
Contact:

Post by ray.wurlod »

Does the job itself have an active SELECT on the same table?
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
Amuktamalyada
Participant
Posts: 19
Joined: Mon May 01, 2006 11:27 am

Post by Amuktamalyada »

Yes Ray, the job does have SELECTs: it looks up against 5 Oracle tables, and the output from the lookups is then inserted into table abcd (the table that is getting locked).

Thank you,
Mukthi
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia
Contact:

Post by ray.wurlod »

If you have an active SELECT cursor on table abcd, then table abcd will be locked against INSERT operations. Reduce your transaction size to 1 row, and/or set to auto-commit. Even better, select the data into a staging area (a persistent Data Set or Lookup File Set) in a prior job to avoid a self-deadlock situation.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
Amuktamalyada
Participant
Posts: 19
Joined: Mon May 01, 2006 11:27 am

Post by Amuktamalyada »

Ray, the table abcd (the one that is getting locked) does not have a SELECT in this job, only the INSERT (which is failing). There are 5 other Oracle tables that have a SELECT performed on them.

Thank you,
Mukthi
DSguru2B
Charter Member
Posts: 6854
Joined: Wed Feb 09, 2005 3:44 pm
Location: Houston, TX

Post by DSguru2B »

How many open cursors are allowed for your id? There may be a limit on the number of open connections for your id; your DBA will be able to tell you what it is.
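For what it's worth, the per-session cursor limit is held in the instance parameter open_cursors. A sketch, assuming you (or your DBA) have privileges to query V$PARAMETER:

```sql
-- Sketch only: shows the maximum number of cursors a single session
-- may hold open. Querying v$parameter may require DBA-granted access.
SELECT value
FROM   v$parameter
WHERE  name = 'open_cursors';
```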
Creativity is allowing yourself to make mistakes. Art is knowing which ones to keep.
Amuktamalyada
Participant
Posts: 19
Joined: Mon May 01, 2006 11:27 am

Post by Amuktamalyada »

Hi all,

I am not sure if this can be called a resolution. However, the problem went away after changing the execution mode of the active stages from parallel to sequential.

Thanks,
Mukthi
DSguru2B
Charter Member
Posts: 6854
Joined: Wed Feb 09, 2005 3:44 pm
Location: Houston, TX

Post by DSguru2B »

Your target database might not be partitioned. Multiple parallel inserts to a table should be fine too. Did you try a commit level of 1?
Creativity is allowing yourself to make mistakes. Art is knowing which ones to keep.
kumar_s
Charter Member
Posts: 5245
Joined: Thu Jun 16, 2005 11:00 pm

Post by kumar_s »

Does your table have partitions in the database?
What partitioning method is used to split the data into 5 streams?
Could you explain the job design a bit more?
Impossible doesn't mean 'it is not possible' actually means... 'NOBODY HAS DONE IT SO FAR'
Post Reply