Performance problem using lookup
Moderators: chulett, rschirm, roy
It sounds as though you believe it is not a problem with Oracle, because the same job without the lookup is fast - fair enough. Prove it to yourself (and to us) by tracing the session and checking the elapsed time in TK*Prof.
Add the following to the Before-SQL:

Code: Select all
alter session set sql_trace = true

Now run the job to completion and check the Oracle Performance Tuning manual for the steps to run TK*Prof. Once you have run TK*Prof, scroll straight to the bottom of the output file and sum the two "Elapsed Time" figures for Recursive SQL and Non-Recursive SQL respectively. This is your time spent in Oracle.
The rest of the time is spent in context switches from DS to Oracle (check the total of non-recursive Parse/Exec/Fetch operations - it should be roughly n/m, where n is the number of rows and m is the array size).
If the number of context switches is as expected, and the elapsed time for Recursive + Non-Recursive SQL is only a few minutes, then we can be fairly certain that the problem lies in the lookup.
Have you tried manually deleting and rebuilding the hashed file? Sometimes they can get their knickers in a twist.
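The n/m estimate above is just ceiling division: one round trip from DataStage to Oracle per array of rows. A minimal sketch (the row count and array size below are illustrative, not taken from the poster's job):

```python
def expected_round_trips(n_rows: int, array_size: int) -> int:
    """Approximate DS-to-Oracle context switches: ceil(n / m)."""
    # -(-a // b) is ceiling division using only integer arithmetic
    return -(-n_rows // array_size)

# e.g. 2.7 million rows with a hypothetical array size of 1000:
print(expected_round_trips(2_700_000, 1000))  # 2700 round trips
```

If the non-recursive Parse/Exec/Fetch total in the TK*Prof output is far above this figure, array fetching is probably not taking effect.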
Ross Leishman
Writing to a hashed file can slow a job down over time; reading from one won't.
I'm sure this is just another manifestation of 'back pressure' as Ken calls it. In this case, probably from holding a large number of rollback segments over the course of the load. First suggestion for this job design would be to commit over the course of the load - try a Transaction Size setting of 100000 (as a SWAG) and see if that helps even things out. Since you drop and recreate the target table, restarting in the event of a failure won't be an issue.
However, as these are inserts only into an empty table, that is an ideal candidate for a bulk load. Why not give that a shot? Simple enough to swap stages out and go with an Automatic Mode bulk load with minimal changes to your job. The difference should be dramatic.
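What the Transaction Size setting does can be sketched as commit batching: instead of holding every inserted row in one open transaction, commit after each batch of N rows. The function and numbers below are illustrative, not DataStage internals:

```python
def load(rows, transaction_size=100_000):
    """Count the commits issued when committing every `transaction_size` rows."""
    commits = 0
    pending = 0
    for _ in rows:
        pending += 1
        if pending == transaction_size:
            commits += 1   # COMMIT here: undo/rollback for the batch is released
            pending = 0
    if pending:
        commits += 1       # final partial batch
    return commits

# 2.7 million rows with Transaction Size 100000:
print(load(range(2_700_000), 100_000))  # 27 commits instead of 1 giant one
```

Each intermediate commit releases the rollback the batch was holding, which is exactly the 'back pressure' relief being suggested.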
-craig
"You can never have too many knives" -- Logan Nine Fingers
The target table is being dropped and recreated each run, which means the table has to grow unless minimum extents are specified. It's fairly easy to remove the hashed lookup and dummy in some values for the affected columns. The overhead of a 70-row hashed file will in no way degrade performance that much, even without read caching.
This is almost guaranteed to be the database side. The OP hasn't clarified what the table looks like: lots of columns? Lots of big VARCHAR2 columns? The odds are that the database is not handling the 2+ million uncommitted rows well; I would suggest setting the commit size to something smaller. The data runs fairly quickly at the beginning, then the job stalls once all rows have been sent but the big commit is still running. Of course the job is going to just sit there - commits aren't necessarily instant. That expectation needs to change.
Kenneth Bland
Rank: Sempai
Belt: First degree black
Fight name: Captain Hook
Signature knockout: right upper cut followed by left hook
Signature submission: Crucifix combined with leg triangle
Thanks to all for helping me. As suggested by chulett, I increased the Transaction Size setting from 0 to 50000, then to 60000 and up to 99999. What I am experiencing is that the job now runs well and loads 2.4 million records out of 2.7 million in 10 minutes, but then the progress starts going down again and rows/sec decreases from 6000+ to 100+; the last 0.3 million records load in 1 hour.
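Working through those figures (assuming the slow tail is the remaining 2.7M − 2.4M ≈ 0.3 million rows of the stated total) shows how sharp the drop-off is:

```python
# Throughput over each phase of the load, from the figures reported above.
first_rate = 2_400_000 / (10 * 60)   # rows/sec over the fast first 10 minutes
tail_rate = 300_000 / (60 * 60)      # rows/sec over the slow final hour

print(round(first_rate))  # 4000 rows/sec
print(round(tail_rate))   # 83 rows/sec
```

A roughly 48x slowdown on the tail is consistent with the job stalling on database-side work rather than on the lookup.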
Besides all this, the plus point is that I am learning a lot from all of you.
Thanks again.
Regards.