I have a job which updates a table using an Oracle stage. One sequential file has the list of keys to be updated and another has the new records to be inserted. I insert the data into the same table with SQLLDR from an after-job subroutine. The job basically marks some records as inactive via the update stage, and the subroutine inserts the new records.
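For concreteness, the two steps might look roughly like this (the table and column names are hypothetical, not from the original job):

```sql
-- Step 1: the job's Oracle stage marks existing rows inactive
-- (hypothetical table/columns, driven by the keys from the sequential file)
UPDATE customer_dim
   SET active_flag = 'N'
 WHERE customer_key = :key_from_seq_file;

-- Step 2: the after-job subroutine bulk-inserts the new rows, e.g.
--   sqlldr userid=user/pass control=new_rows.ctl direct=true
-- A direct path load takes an exclusive table lock, so it fails
-- if another session still holds a lock on the table.
```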
I believe that the Oracle stage will have 'closed' and any locks released by the time you get to your after-job subroutine. You should be able to verify this from the logs by looking for a 'Finishing...' message from the stage before your after-job subroutine is called and the error is logged.
Talk to your DBA; perhaps they can help track down who or what is locking the table. You could also try using a conventional path load rather than a direct path load, as a conventional load can proceed even while other sessions are working in the table at the same time.
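As a sketch, this is the kind of query a DBA might run to find the blocker (it uses Oracle's `V$SESSION` view; the `BLOCKING_SESSION` column exists from 10g onward, and exact output varies by version and privileges):

```sql
-- Who is blocked, and which session is blocking them?
SELECT blocked.sid              AS blocked_sid,
       blocked.blocking_session AS blocker_sid,
       blocker.username         AS blocker_user,
       blocker.program          AS blocker_program
  FROM v$session blocked
  JOIN v$session blocker
    ON blocker.sid = blocked.blocking_session
 WHERE blocked.blocking_session IS NOT NULL;
```

Switching SQL*Loader to a conventional path load is just `direct=false` (the default) on the command line; it uses ordinary INSERTs with row-level locks instead of an exclusive table lock.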
Is this a repeatable error? Meaning each time you run the job the end result is the same? If so then I could be wrong about how things work in the PX world.
-craig
"You can never have too many knives" -- Logan Nine Fingers
There are two common reasons this error can occur.
1. You may be trying to run two parallel queries that perform the same operation on the same table without using PARALLEL=TRUE.
2. You may have already done some DML activity that needs a COMMIT on the table. When a COMMIT is not issued after DML changes, the table remains locked.
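The second case is easy to reproduce in two SQL*Plus sessions (the table name here is illustrative):

```sql
-- Session 1: uncommitted DML holds row locks plus a table-level TM lock
UPDATE orders SET status = 'CLOSED' WHERE order_id = 42;
-- (no COMMIT yet)

-- Session 2: a direct path load or DDL on the same table now fails,
-- typically with ORA-00054: resource busy and acquire with NOWAIT specified.

-- Session 1 must issue:
COMMIT;   -- releases the locks; session 2 can then proceed
```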
That was my biggest error. I had missed checking the option "Only run job subroutine on successful completion". I checked it and the job went through fine.
I can't believe I missed that!!!
And apparently you missed telling us this was only a problem when the job itself failed before you got to the after-job subroutine, if that's all it took to 'fix' it.
-craig
"You can never have too many knives" -- Logan Nine Fingers
No Craig, the job completed successfully. The problem was that the after-job subroutine was called before the job completed. The job was still updating the table, which put a lock on it; at that moment the subroutine was attempting to insert. The problem was solved when I checked the option "Only run job subroutine on successful completion", which caused the lock to be released before the subroutine was called.
No, that's not how it works. That option does not mean "wait for the job to finish before running the subroutine"; it means "do not run this subroutine if the job does not complete successfully". It always waits for all activity on the canvas to complete before anything 'after job' runs.
Something else is going on. Perhaps not all nodes completed before it ran, but if that's the case that's a bug that needs to be reported.
-craig
"You can never have too many knives" -- Logan Nine Fingers
The job runs fine even when I uncheck that option. I'm absolutely sure that the job failed multiple times when I posted this topic... I tested with the same data and the same scenario, and it's working fine. I have no explanation for why it failed before!!!
Tried to explain - it's not a problem with your job. You ran afoul of another process: someone or something else was working in the table at the same time you were, locking you out from making your changes.
Work with your DBA when it happens again; they should be able to tell you where the lock is coming from.
-craig
"You can never have too many knives" -- Logan Nine Fingers