Oracle table was locked when stopping a job in director
-
- Participant
- Posts: 73
- Joined: Tue May 10, 2005 6:21 pm
- Location: Sydney
My job was updating an Oracle 9i table; it ran very slowly and had already taken a long time, so I stopped the running job in Director.
But as a result, the table was locked. I expected DataStage to commit or roll back the database transaction even when the job was stopped.
Is that the case, or is there something wrong with my job? Any help will be appreciated.
Edward Yuan
IMHO, when you absolutely have to stop a job that is accessing a database, don't use the Director. Give your DBA a call and ask him to kill the thread accessing the database. That sends a "connection lost" signal to DataStage, which will then abort right away. As Craig mentioned, sometimes the connection still exists, and that can lead to runaway processes, which can keep running for a long time if not cleaned up.
As far as the commit level goes, that depends on the transaction size you have specified.
Creativity is allowing yourself to make mistakes. Art is knowing which ones to keep.
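The commit-interval behaviour being discussed can be sketched outside DataStage with a few lines of Python, using sqlite3 as a stand-in for Oracle (Oracle's locking behaviour differs, but the commit/rollback mechanics are analogous to the OCI stage's Transaction Size). The `stop_after` parameter is purely illustrative, simulating the job being stopped mid-batch:

```python
import sqlite3

def load_rows(conn, rows, transaction_size, stop_after=None):
    """Insert rows, committing every `transaction_size` rows.
    If loading is aborted mid-batch (simulated here with `stop_after`),
    the uncommitted tail of the current batch is rolled back -- which is
    what finally happens on the Oracle side once the session is killed."""
    cur = conn.cursor()
    for i, row in enumerate(rows, start=1):
        cur.execute("INSERT INTO t (id, val) VALUES (?, ?)", row)
        if i % transaction_size == 0:
            conn.commit()
        if stop_after is not None and i == stop_after:
            conn.rollback()  # abort: discard the open transaction
            return
    conn.commit()  # final partial batch

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, val TEXT)")
rows = [(i, "x") for i in range(1, 11)]
load_rows(conn, rows, transaction_size=3, stop_after=8)
# Batches committed after rows 3 and 6; rows 7-8 were rolled back.
print(conn.execute("SELECT COUNT(*) FROM t").fetchone()[0])  # -> 6
```

Only full batches committed before the abort survive; everything since the last commit is lost, and until the rollback actually happens the open transaction is what holds the lock.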
Thanks for all the replies.
I was using Insert new rows or update existing rows, and I'm not quite sure what type of lock was used,
but it seems it was an exclusive lock.
The transaction size I specified was 1000, so it shouldn't have kept running for so long after stopping. I waited for several hours but finally had to ask the DBA to kill the session.
My understanding is that DataStage can't stop the Oracle update session: the session was requested by DataStage but is then managed by Oracle. But shouldn't the Oracle session fail anyway when it can't send its response back to the aborted DataStage job? Or did it become a zombie?
I'm just trying to find a way to stop the job without locking Oracle tables, and I don't want to call the DBA every day. :D Any ideas?
Edward Yuan
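For readers unfamiliar with the "Insert new rows or update existing rows" action mentioned above: it is an upsert. A minimal sketch of the semantics, again using Python's sqlite3 for illustration (the OCI stage generates its own Oracle-specific insert-then-update SQL; this only shows the row-level outcome):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO emp VALUES (1, 'old')")

# Upsert: insert a new row, or update the existing row on a key collision.
rows = [(1, "updated"), (2, "new")]
conn.executemany(
    "INSERT INTO emp (id, name) VALUES (?, ?) "
    "ON CONFLICT(id) DO UPDATE SET name = excluded.name",
    rows,
)
conn.commit()
print(conn.execute("SELECT id, name FROM emp ORDER BY id").fetchall())
# -> [(1, 'updated'), (2, 'new')]
```

Because each target row must be checked and possibly rewritten, rows touched by an upsert are locked exclusively until the transaction commits or rolls back, which is consistent with the exclusive lock observed in the thread.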
Reducing your transaction size to 1 will ensure a commit after each record.
But, as mentioned, if the connection is left unhandled while holding an exclusive lock on the table, the process may need to be killed.
Impossible doesn't mean 'it is not possible' actually means... 'NOBODY HAS DONE IT SO FAR'
DSguru2B wrote: You are setting the transaction size to 1000 with the update action of "insert new or update existing"? Aren't you getting any warning message that says that the size is set to 1?

No, this is not an issue. And I don't think setting the transaction size to 1 is good for performance?
I ran into the problem during development of my job. I think I can reduce the input stream by selecting only the first 100 records, so I can wait until the job completes instead of stopping it in Director.
Thanks again for all the kind responses.
Edward Yuan
chulett wrote: No, I'm afraid it doesn't - at least not in the OCI stages. You are probably thinking of Transaction Grouping, which does force it to 1.

I agree. Otherwise, it wouldn't be possible to make all the updates/inserts a single transaction in the OCI stage with the Insert new rows or update existing rows option.
Edward Yuan
Again... I don't see any evidence that the DRS stage works that way either. Unless it happens at runtime, no amount of changing the Update action forces the Transaction Size to 1. :?
It might be smart to do that in some cases when RI is involved, but you're certainly not forced to.
-craig
"You can never have too many knives" -- Logan Nine Fingers
"You can never have too many knives" -- Logan Nine Fingers
I don't think there is any evidence, because I was thinking about Array size in my head while saying Transaction size. It's actually the Array size that absolutely has to be set to 1 with the "insert/update" or "update/insert" SQL action, and if it's set higher, DataStage will set it to 1 at runtime.
I stand corrected
Creativity is allowing yourself to make mistakes. Art is knowing which ones to keep.
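The distinction drawn above, array size (rows sent to the database per round trip) versus transaction size (rows per commit), can be sketched like so. This is a hedged illustration with sqlite3 standing in for Oracle; the parameter names mirror the OCI stage's settings, not any driver API:

```python
import sqlite3

def load(conn, rows, array_size, transaction_size):
    """Send rows in batches of `array_size` (one executemany per batch),
    committing every `transaction_size` rows. Per the thread, with an
    insert/update action the array size must effectively be 1, since
    each row's insert-or-update outcome is decided individually."""
    cur = conn.cursor()
    sent = 0
    for start in range(0, len(rows), array_size):
        batch = rows[start:start + array_size]
        cur.executemany("INSERT INTO t (id) VALUES (?)", batch)
        sent += len(batch)
        if sent % transaction_size == 0:
            conn.commit()
    conn.commit()  # final partial transaction, if any

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY)")
load(conn, [(i,) for i in range(1, 21)], array_size=5, transaction_size=10)
print(conn.execute("SELECT COUNT(*) FROM t").fetchone()[0])  # -> 20
```

The two knobs are independent: array size is a network/throughput optimisation, transaction size controls durability and how much work an abort can roll back.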