Oracle 8 taking 4 hrs to load !
Moderators: chulett, rschirm, roy
-
- Premium Member
- Posts: 353
- Joined: Wed Apr 06, 2005 8:45 am
Hi,
I have a simple server job that does no lookups or complex functions. It has 40 columns, mostly varchar(10) to varchar(20).
The extraction from the source table takes 30 minutes, but the load then runs for more than 4 hours at about 100 rows/sec.
Is there anything I can do?
Thanks,
Talk to your DBA first and see if you even have permissions to drop and rebuild them. You'd need scripts to do that for you in the long run, scripts that could be run before and after the job or even before and after the stage. One consideration here isn't the number of rows you are loading but rather the number of records already in the target table, as index rebuild times can be significant on large tables.
First though, explain what you mean by 'load'. What is your Update action - are these strictly inserts or are there updates involved as well? While dropping indexes can help with straight insert loads, it can kill updates.
Increasing the Array Size can definitely help. Transaction Size may or may not and can just complicate your recovery/restart scenarios.
If you are just doing inserts, try this experiment first. Work with your DBA. Have him drop the indexes on the table. Run your job and see if it performs better. Then have your DBA rebuild the indexes. See if the total time - job run plus index drop/rebuilds - is significantly better. If so, only then consider building that functionality into your job.
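The before/after experiment above can be scripted once your DBA signs off. A minimal sketch - all object names (stg_customer, stg_customer_ix1/ix2, cust_id) are made-up examples, and the script only emits SQL so you can pipe it into sqlplus to actually run it:

```shell
#!/bin/sh
# Before-job / after-job DDL for the index drop/rebuild experiment.
# Object names below are illustrative placeholders -- substitute the
# real table and indexes, with DBA approval.
# Usage: drop_indexes | sqlplus -s "$ORA_CONNECT"

drop_indexes() {
    cat <<'EOF'
DROP INDEX stg_customer_ix1;
DROP INDEX stg_customer_ix2;
EOF
}

rebuild_indexes() {
    cat <<'EOF'
CREATE INDEX stg_customer_ix1 ON stg_customer (cust_id);
CREATE INDEX stg_customer_ix2 ON stg_customer (region_cd, load_dt);
EOF
}

# Emit the before-job SQL; call rebuild_indexes the same way after the job.
drop_indexes
```

Timing both scripts plus the job run gives you the total to compare against the straight load.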
-craig
"You can never have too many knives" -- Logan Nine Fingers
That's what I was afraid of. The "X then Y" update actions are the slowest performing actions you can use, especially if you pick the wrong one. To run the 'else' action, the first action must fail so you are doing double work for those. To me, those two update actions are to be avoided at all costs, other than in very specific applications with very small datasets. But that's just me.
Be aware that, if you drop the indexes, your inserts may never fail so the updates may not happen. You may create duplicates that would keep your indexes from being rebuilt. Without the indexes, your updates could take even longer.
Far better, from a performance standpoint, to take the time to determine which records should be inserts and which should be updates and use two separate links.
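That insert/update split can be prototyped in plain SQL before wiring it into the job. A sketch, assuming a hypothetical stg_customer staging table and customer target keyed on cust_id - substitute your own names:

```shell
#!/bin/sh
# Emit two SELECTs that classify staged rows: keys not yet in the target
# go down the insert link, existing keys down the update link.
# Table and key names are placeholders for illustration only.

split_sql() {
    cat <<'EOF'
-- Insert link: staged rows whose key is not yet in the target
SELECT s.* FROM stg_customer s
WHERE NOT EXISTS (SELECT 1 FROM customer t WHERE t.cust_id = s.cust_id);

-- Update link: staged rows whose key already exists
SELECT s.* FROM stg_customer s
WHERE EXISTS (SELECT 1 FROM customer t WHERE t.cust_id = s.cust_id);
EOF
}

split_sql
```

In a server job the same classification is usually done with a hashed file lookup on the key rather than SQL against the target.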
-craig
"You can never have too many knives" -- Logan Nine Fingers
Yup, pretty standard practice. If done properly, the hash will improve performance. For starters, build the hash based on the records to be processed, not everything in the target table.
You may even be able to 'kick it up a notch' by only using the OCI stage for the updates and bulk loading (via sqlldr) the inserts.
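For the bulk-load half, a sqlldr sketch - the file, table, and column names here are assumptions for illustration; only the general shape of the control file is standard:

```shell
#!/bin/sh
# Generate a minimal sqlldr control file for the insert stream.
# APPEND adds rows without truncating the target; pairing it with
# direct=true on the command line uses the direct path, bypassing
# conventional-path inserts. All names below are placeholders.

gen_ctl() {
    cat <<'EOF'
LOAD DATA
INFILE 'customer_inserts.dat'
APPEND
INTO TABLE customer
FIELDS TERMINATED BY '|'
(cust_id, cust_name, region_cd)
EOF
}

gen_ctl > customer.ctl
# Actual load (needs an Oracle client and a valid connect string):
# sqlldr userid="$ORA_CONNECT" control=customer.ctl log=customer.log direct=true
```

Note that a direct path load maintains unique indexes but leaves them unusable if duplicates sneak in, so the dedupe discussion above still applies.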
-craig
"You can never have too many knives" -- Logan Nine Fingers
It's better if you identify the records that need to be inserted and the records that need to be updated. Then you can use sqlldr in Append mode to insert the new records, and an OCI stage with the 'Update existing rows only' action for the rest.
This will definitely give you better throughput.
----------------
Rgds,
Anupam
----------------
The future is not something we enter. The future is something we create.