
Posted: Mon Jan 15, 2007 1:43 am
by kumar_s
I can see from your latest post that the index is not created. However, a primary key index will be created automatically when the table has a primary key constraint on it.

Posted: Mon Jan 15, 2007 5:36 am
by rleishman
It sounds as though you believe it is not a problem with Oracle, because the same job without the lookup is fast - fair enough. Prove it to yourself (and to us) by tracing the session and checking the elapsed time in TK*Prof.

Add the following to the Before-SQL:

Code:

alter session set sql_trace = true
Now run the job to completion and check the Oracle Performance Tuning Manual for the steps to run TK*Prof. Once you have run TK*Prof, scroll straight to the bottom of the output file and sum the two "Elapsed Time" figures for Recursive SQL and Non-Recursive SQL respectively. That is your time spent in Oracle.
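If it helps, here is roughly what that looks like from the OS prompt once the trace file exists (the trace file name and output name below are made-up examples, not anything from this job):

Code:

# trace files are written to the directory shown by "show parameter user_dump_dest" in SQL*Plus
# substitute the .trc file produced by your DataStage session
tkprof mydb_ora_12345.trc ds_job_trace.txt sys=no sort=prsela,exeela,fchela

The sort options simply push the statements with the largest parse/execute/fetch elapsed times to the top of the report.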

The rest of the time is spent in context switches from DS to Oracle (check the total of non-recursive Parse/Exec/Fetch operations - it should be roughly n/m, where n is the number of rows and m is the array size).
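For example, with roughly 2.7 million rows and an array size of 1,000 (an illustrative figure only, not one taken from this job), you would expect on the order of 2,700 non-recursive execute calls.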

If the number of context switches is as expected, and the Elapsed Time for Recursive + Non-Recursive is just a few minutes, then we can be pretty certain that the problem lies in the lookup.

Have you tried manually deleting and rebuilding the hashed file? Sometimes they can get their knickers in a twist.

Posted: Mon Jan 15, 2007 7:46 am
by chulett
Writing to a hashed file can slow a job down over time; reading from one won't.

I'm sure this is just another manifestation of 'back pressure' as Ken calls it. In this case, probably from holding a large number of rollback segments over the course of the load. First suggestion for this job design would be to commit over the course of the load - try a Transaction Size setting of 100000 (as a SWAG) and see if that helps even things out. Since you drop and recreate the target table, restarting in the event of a failure won't be an issue.
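If you want to see that pressure while the job runs, your DBA can watch how much undo the loading session is holding. A minimal sketch against the standard Oracle views (picking out which session belongs to DataStage is left to you):

Code:

-- undo blocks/records held by each active transaction, with the owning session
SELECT s.sid, s.username, t.used_ublk AS undo_blocks, t.used_urec AS undo_records
FROM v$transaction t, v$session s
WHERE s.taddr = t.addr
ORDER BY t.used_ublk DESC;

If those numbers climb steadily over the run with a Transaction Size of 0, that lines up with the 'back pressure' theory.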

:idea: However, as these are inserts only into an empty table, that is an ideal candidate for a bulk load. Why not give that a shot? Simple enough to swap stages out and go with an Automatic Mode bulk load with minimal changes to your job. The difference should be dramatic.
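For a sense of what the bulk path does under the covers, a standalone SQL*Loader direct-path run looks something like the sketch below; every name in it (control file, data file, table, columns, delimiter, userid) is a made-up placeholder, and the Automatic Mode stage generates the real equivalents for you:

Code:

-- target_table.ctl (hypothetical)
OPTIONS (DIRECT=TRUE)
LOAD DATA
INFILE 'target_table.dat'
APPEND
INTO TABLE target_table
FIELDS TERMINATED BY '|'
(col1, col2, col3)
-- then, from the OS prompt:
-- sqlldr userid=scott/tiger control=target_table.ctl log=target_table.log

The direct path writes formatted blocks above the high-water mark and skips most of the conventional insert and undo machinery, which is where the dramatic difference comes from.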

Posted: Mon Jan 15, 2007 7:53 am
by kcbland
The target table is being dropped and recreated each run. That means the table has to grow as the load progresses, unless a minimum number of extents is specified. It's fairly easy to remove the hashed lookup and dummy in some values for the affected columns. The overhead of a 70-row hashed file will in no way degrade performance that much, even without read caching.
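On the table-growth point, one way to take extent allocation out of the picture is to pre-size the segment in the DDL that recreates the table each run. A rough sketch only; the columns, tablespace, and sizes are assumptions, not taken from this job:

Code:

-- hypothetical DDL: substitute the real column list for the target table
CREATE TABLE target_table (
  col1 NUMBER,
  col2 VARCHAR2(100),
  col3 DATE
)
TABLESPACE users
STORAGE (INITIAL 500M NEXT 100M);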

This is almost guaranteed to be on the database side. The OP hasn't clarified what the table looks like: lots of columns? Lots of big VARCHAR2 columns? The odds are that the database is not handling the 2+ million uncommitted rows well, so I would suggest setting the commit to something smaller. The data runs fairly quickly at the beginning, then the job appears to stall once all rows have been sent but the big commit is still running. Of course the job is going to just sit there; commits aren't necessarily instant. That expectation needs to change.

Posted: Mon Jan 15, 2007 7:54 am
by chulett
Look at that, mention his name and up he pops. :wink:

Posted: Mon Jan 15, 2007 11:20 pm
by userasif
Thanks to all for helping me. As suggested by chulett, I increased the Transaction Size setting from 0 to 50000, then to 60000, and up to 99999. What I am experiencing is that the job now runs well and loads 2.4 million of the 2.7 million records in 10 minutes, but then the progress starts going down again and the rows/sec drops from 6000+ to 100+, so the last 0.3 million records take an hour to load.


Besides all this, the plus point is that I am learning a lot from all of you.

Thanks again.

Regards.

Posted: Tue Jan 16, 2007 5:36 am
by rleishman
So you changed something in the OCI stage and that dramatically altered the characteristics of the problem.

Suggest you return to my post above and run TK*Prof. Looks like Oracle is involved.

Posted: Tue Jan 16, 2007 7:58 am
by DSguru2B
Try a smaller commit size, maybe 20K. If nothing changes, get your DBA involved to trace what's actually going on at the database end.
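A simple starting point for that database-side look, assuming the DBA has already identified the SID of the DataStage session (:loader_sid is just a placeholder bind):

Code:

-- where the loading session has spent its wait time so far
SELECT e.event, e.total_waits, e.time_waited
FROM v$session_event e
WHERE e.sid = :loader_sid
ORDER BY e.time_waited DESC;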

Posted: Tue Jan 16, 2007 7:58 am
by chulett
It's all Oracle. After you play the tkprof games, another suggestion (again) would be to replace the OCI stage with the bulk loader. It really should have been your first choice.