Oracle table was locked when stopping a job in director

Post questions here relative to DataStage Server Edition for such areas as Server job design, DS Basic, Routines, Job Sequences, etc.

Moderators: chulett, rschirm, roy

Yuan_Edward
Participant
Posts: 73
Joined: Tue May 10, 2005 6:21 pm
Location: Sydney

Oracle table was locked when stopping a job in director

Post by Yuan_Edward »

My job was updating an Oracle 9i table. It was running very slowly and had already taken a long time, so I stopped the running job in Director.

But as a result, the table was left locked. I expected DataStage to commit or roll back the database transaction even when the job was stopped.

Is that the case, or is there something wrong with my job? Any help would be appreciated.
Edward Yuan
ArndW
Participant
Posts: 16318
Joined: Tue Nov 16, 2004 9:08 am
Location: Germany
Contact:

Post by ArndW »

What method were you using to load the data into Oracle? Normally an exclusive lock is not taken against the table, and a normal termination will commit or roll back automagically.

Were you using the bulk loader?
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

That rollback could take time. And stopping a job doesn't always stop the Oracle session.
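If you're curious, you can actually watch the rollback happen. A sketch, assuming you can query the v$ views - USED_UBLK is the undo still to be applied and counts down toward zero:

    -- While the transaction rolls back, used_ublk shrinks toward 0
    SELECT s.sid, t.used_ublk
      FROM v$transaction t, v$session s
     WHERE t.ses_addr = s.saddr;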
-craig

"You can never have too many knives" -- Logan Nine Fingers
DSguru2B
Charter Member
Posts: 6854
Joined: Wed Feb 09, 2005 3:44 pm
Location: Houston, TX

Post by DSguru2B »

IMHO, when you absolutely have to stop a job that is accessing a database, don't use the Director; call your DBA and ask them to kill the thread accessing the database. That sends a "connection lost" signal to DataStage, which will then abort right away. As Craig mentioned, at times the connection still exists, and sometimes that leads to runaway processes. Those runaway processes can keep running for a long time if they're not cleaned up.
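Something like this from SQL*Plus is what the DBA typically runs - a sketch only; 'DS_USER' and the sid/serial# values are placeholders you'd fill in from the first query:

    -- Find the session opened by the DataStage job
    SELECT sid, serial#, username, status, program
      FROM v$session
     WHERE username = 'DS_USER';

    -- Kill it; Oracle rolls back its open transaction and releases its locks
    ALTER SYSTEM KILL SESSION '123,456';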
As far as the commit level goes, that depends upon the transaction size you have specified.
Creativity is allowing yourself to make mistakes. Art is knowing which ones to keep.
Yuan_Edward
Participant
Posts: 73
Joined: Tue May 10, 2005 6:21 pm
Location: Sydney

Post by Yuan_Edward »

Thanks for all the replies.

I was using "Insert new rows or update existing rows". I'm not quite sure what type of lock was used :oops: but it seems it was an exclusive lock.
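For what it's worth, a query like this against the v$ views should show the lock mode - a sketch, assuming SELECT access to them; locked_mode 3 is the row-exclusive lock that DML normally takes, 6 is a true exclusive lock:

    -- Which objects the session has locked, and how
    SELECT s.sid, s.serial#, o.object_name, l.locked_mode
      FROM v$locked_object l, dba_objects o, v$session s
     WHERE l.object_id  = o.object_id
       AND l.session_id = s.sid;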

The transaction size I specified was 1000, so the rollback shouldn't have run for so long after stopping... I waited several hours but finally had to ask the DBA for help to kill the session.

My understanding is that DS can't stop the Oracle update session itself: the session was requested by DS but is managed by Oracle. Still, shouldn't the Oracle session fail anyway once it can't send its response back to the aborted DS job? Or did it become a zombie?

I'm just trying to find a way to stop the job without leaving Oracle tables locked; I don't want to call the DBA every day :D . Any ideas?
Edward Yuan
kumar_s
Charter Member
Posts: 5245
Joined: Thu Jun 16, 2005 11:00 pm

Post by kumar_s »

Reducing your transaction size to 1 will ensure a commit after each record. But, as mentioned, if the connection is left unhandled and still holds the exclusive lock on the table, the process may need to be killed.
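Conceptually, the transaction size just controls how often a COMMIT is issued, roughly like this PL/SQL sketch (illustrative only - staging_rows and target_table are made-up names, not what the stage really executes):

    DECLARE
      n PLS_INTEGER := 0;
    BEGIN
      FOR rec IN (SELECT key_col, val_col FROM staging_rows) LOOP
        UPDATE target_table
           SET val_col = rec.val_col
         WHERE key_col = rec.key_col;
        n := n + 1;
        IF MOD(n, 1000) = 0 THEN  -- transaction size 1000
          COMMIT;                 -- size 1 would commit every row
        END IF;
      END LOOP;
      COMMIT;  -- commit the final partial batch
    END;
    /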
Impossible doesn't mean 'it is not possible' actually means... 'NOBODY HAS DONE IT SO FAR'
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia
Contact:

Post by ray.wurlod »

It is far better to have the lock released (a DBA task) than to kill the process yourself. If you just kill the process, the DBA still has to release the lock, and won't appreciate you for it.
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.
DSguru2B
Charter Member
Posts: 6854
Joined: Wed Feb 09, 2005 3:44 pm
Location: Houston, TX

Post by DSguru2B »

You are setting the transaction size to 1000 with the update action of "insert new or update existing"?
Aren't you getting a warning message saying that the size is set to 1?
Creativity is allowing yourself to make mistakes. Art is knowing which ones to keep.
Yuan_Edward
Participant
Posts: 73
Joined: Tue May 10, 2005 6:21 pm
Location: Sydney

Post by Yuan_Edward »

DSguru2B wrote:You are setting the transaction size to 1000 with the update action of "insert new or update existing"?
Aren't you getting a warning message saying that the size is set to 1?
No, that is not the issue. And I don't think setting the transaction size to 1 would be good for performance anyway.

I hit this problem during development of the job. I think I can reduce the input stream by selecting only the first 100 records, so I can wait for the job to complete instead of stopping it in Director.

Thanks again for all the kind responses.
Edward Yuan
loveojha2
Participant
Posts: 362
Joined: Thu May 26, 2005 12:59 am

Post by loveojha2 »

No, that is not the issue. And I don't think setting the transaction size to 1 would be good for performance anyway.
With the update action "Insert new rows or update existing rows", the commit frequency gets automatically set to 1.
Success consists of getting up just one more time than you fall.
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

No, I'm afraid it doesn't - at least not in the OCI stages. You are probably thinking of Transaction Grouping which does force it to 1.
-craig

"You can never have too many knives" -- Logan Nine Fingers
Yuan_Edward
Participant
Posts: 73
Joined: Tue May 10, 2005 6:21 pm
Location: Sydney

Post by Yuan_Edward »

chulett wrote:No, I'm afraid it doesn't - at least not in the OCI stages. You are probably thinking of Transaction Grouping which does force it to 1.
I agree. Otherwise it wouldn't be possible to commit all the updates/inserts as a single transaction in the OCI stage with the "Insert new rows or update existing rows" option.
Edward Yuan
DSguru2B
Charter Member
Posts: 6854
Joined: Wed Feb 09, 2005 3:44 pm
Location: Houston, TX

Post by DSguru2B »

Even I was under that impression, as that's how the DRS stage works. Well, I guess the OCI stage works differently.
Creativity is allowing yourself to make mistakes. Art is knowing which ones to keep.
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

Again... I don't see any evidence that the DRS stage works that way either. Unless it happens at runtime, no amount of changing the Update action forces a change in the Transaction Size to 1. :?

It might be smart in some cases to do that when RI is involved in the equation, but you're certainly not forced to do it.
-craig

"You can never have too many knives" -- Logan Nine Fingers
DSguru2B
Charter Member
Posts: 6854
Joined: Wed Feb 09, 2005 3:44 pm
Location: Houston, TX

Post by DSguru2B »

I don't think there is any evidence, because I was thinking "Array size" in my head and saying "Transaction size". It's actually the Array size that absolutely has to be set to 1 for the "insert/update" or "update/insert" SQL actions; if it is set higher, DS will set it to 1 at runtime.
:oops:
I stand corrected
Creativity is allowing yourself to make mistakes. Art is knowing which ones to keep.