Oracle table was locked when stopping a job in director

Posted: Mon Jul 03, 2006 11:56 pm
by Yuan_Edward
My job was updating an Oracle 9i table; it ran very slowly and had already taken a long time, so I stopped the running job in Director.

But as a result, the table was locked. I expected DataStage to commit or roll back the database transaction even when the job was stopped.

Is that the case, or is there something wrong with my job? Any help will be appreciated.

Posted: Tue Jul 04, 2006 3:25 am
by ArndW
What method were you using to load the data to Oracle? Normally the table will not have an exclusive lock taken against it. And a normal termination will commit or rollback automagically.

Were you using the bulk loader?

Posted: Tue Jul 04, 2006 6:23 am
by chulett
That rollback could take time. And stopping a job doesn't always stop the Oracle session.

Posted: Tue Jul 04, 2006 10:06 am
by DSguru2B
IMHO, when you absolutely have to stop a job that is accessing a database, don't use the Director. Give your DBA a call and ask him to kill the session accessing the database. This sends a "connection lost" signal to DataStage, which will then abort right away. As Craig mentioned, at times the connection still exists, and sometimes it leads to runaway processes. Those runaway processes can keep running for a long time if not cleaned up.
As far as the commit level goes, that depends on the transaction size you have specified.
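For the DBA side of this, a rough sketch of how the blocking session could be identified and killed in Oracle (the table name MY_TABLE is hypothetical, and the kill requires the ALTER SYSTEM privilege):

```sql
-- Find sessions holding locks on the table (MY_TABLE is a placeholder)
SELECT s.sid, s.serial#, s.username, o.object_name
  FROM v$locked_object l
  JOIN dba_objects     o ON o.object_id = l.object_id
  JOIN v$session       s ON s.sid       = l.session_id
 WHERE o.object_name = 'MY_TABLE';

-- Kill the offending session, using the SID and SERIAL# returned above
ALTER SYSTEM KILL SESSION '123,4567';
```

Killing the session makes Oracle roll back its open transaction and release the lock, which is what triggers the "connection lost" abort on the DataStage side.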

Posted: Tue Jul 04, 2006 5:10 pm
by Yuan_Edward
Thanks for all the replies.

I was using Insert new rows or update existing rows. Not quite sure what type of lock was used :oops: but it seems it was an exclusive lock.

The transaction size I specified was 1000, so it shouldn't have run for such a long time after stopping... I waited several hours but finally had to ask the DBA to kill the session.

My understanding is that DS can't stop the Oracle update session! The session was requested by DS and then managed by Oracle. But shouldn't the Oracle session fail anyway when it can't send a response back to the aborted DS job??? Or did it become a zombie!

I'm just trying to find a way to stop the job without locking Oracle tables, and I don't want to call the DBA every day :D . Any ideas?

Posted: Tue Jul 04, 2006 11:31 pm
by kumar_s
Reducing your transaction size to 1 will ensure a commit after each record.
But, as mentioned, if the connection is left unhandled, it will hold the exclusive lock on the table, which may require the process to be killed.

Posted: Wed Jul 05, 2006 6:07 am
by ray.wurlod
It is far better to release the lock (DBA task) than to kill the process. The DBA still has to release the lock, and won't appreciate you for it.

Posted: Wed Jul 05, 2006 6:39 am
by DSguru2B
You are setting the transaction size to 1000 with the update action of "insert new or update existing"?
Aren't you getting a warning message that says the size is set to 1?

Posted: Wed Jul 05, 2006 5:27 pm
by Yuan_Edward
DSguru2B wrote:You are setting the transaction size to 1000 with the update action of "insert new or update existing"?
Aren't you getting a warning message that says the size is set to 1?
No, this is not an issue. And I don't think setting the transaction size to 1 is good for performance.

I hit the problem during development of my job. I think I can reduce the input stream by selecting only the first 100 records, so I can wait until the job completes instead of stopping it in Director.
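For limiting the input during development, one option is a ROWNUM filter in the source stage's user-defined SQL. A minimal sketch (SRC_TABLE is a hypothetical table name):

```sql
-- Pull only the first 100 rows from the source table during dev runs
SELECT *
  FROM src_table
 WHERE ROWNUM <= 100;
```

With a bounded input like this, the job finishes quickly and commits or rolls back normally, so no lock is left behind.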

Thanks again for all the kind responses.

Posted: Wed Jul 05, 2006 9:25 pm
by loveojha2
No, this is not an issue. And I don't think setting the transaction size to 1 is good for performance.
With the update action Insert new rows or update existing rows, the commit frequency gets automatically set to 1.

Posted: Wed Jul 05, 2006 11:17 pm
by chulett
No, I'm afraid it doesn't - at least not in the OCI stages. You are probably thinking of Transaction Grouping, which does force it to 1.

Posted: Wed Jul 05, 2006 11:47 pm
by Yuan_Edward
chulett wrote:No, I'm afraid it doesn't - at least not in the OCI stages. You are probably thinking of Transaction Grouping, which does force it to 1.
I agree. Otherwise it wouldn't be possible to make all the updates/inserts a single transaction in the OCI stage with the option Insert new rows or update existing rows.

Posted: Thu Jul 06, 2006 6:47 am
by DSguru2B
Even I was under that impression, as that's how the DRS stage works. Well, I guess the OCI stage works differently.

Posted: Thu Jul 06, 2006 7:00 am
by chulett
Again... I don't see any evidence that the DRS stage works that way either. Unless it happens at runtime, no amount of changing the Update action forces the Transaction Size to 1. :?

It might be smart in some cases to do that when RI is involved, but you're certainly not forced to do it.

Posted: Thu Jul 06, 2006 7:10 am
by DSguru2B
I don't think there is any evidence, because I was thinking Array Size in my head while saying Transaction Size. It's actually the Array Size that absolutely has to be set to 1 with the "insert/update" or "update/insert" SQL action; if it is set higher, DS will set it to 1 at runtime.
:oops:
I stand corrected