9Million rec in Oracle tbl, Update or Insert 7rows/sec!
Moderators: chulett, rschirm, roy
-
- Premium Member
- Posts: 353
- Joined: Wed Apr 06, 2005 8:45 am
Hi,
I have a table with 9 million records (initial load). The daily incremental run adds about 250K records, but it is writing at only 7 rows/sec.
Is there any way around this? I need to run this load more than once a day, probably 6 times.
Any help is appreciated!
Thanks
-
- Participant
- Posts: 54607
- Joined: Wed Oct 23, 2002 10:52 pm
- Location: Sydney, Australia
- Contact:
-
- Participant
- Posts: 3337
- Joined: Mon Jan 17, 2005 4:49 am
- Location: United Kingdom
The solution will depend upon the level of change you are prepared to make and the authorisation level you are granted.
If it is a full table reload, you can download the full table and repopulate it later.
You can partition using something like a batch ID or load ID.
You can load into a temp table and then use SQL to merge the rows.
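The temp-table approach can be done in one set-based statement with Oracle's MERGE. A minimal sketch, assuming a hypothetical staging table STG_DAILY, target TARGET_TBL, and columns PK_ID, COL1, COL2 (none of these names come from the thread):

```sql
-- Load the daily 250K rows into STG_DAILY first (e.g. via SQL*Loader),
-- then apply them to the 9M-row target in a single MERGE instead of
-- row-by-row update-then-insert.
MERGE INTO target_tbl t
USING stg_daily s
ON (t.pk_id = s.pk_id)
WHEN MATCHED THEN
  UPDATE SET t.col1 = s.col1,
             t.col2 = s.col2
WHEN NOT MATCHED THEN
  INSERT (pk_id, col1, col2)
  VALUES (s.pk_id, s.col1, s.col2);
```

One MERGE over a staged batch lets Oracle do the matched/not-matched split itself, which is usually far faster than deciding update-vs-insert per row from the client side.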
Re: 9Million rec in Oracle tbl, Update or Insert 7rows/sec!
Hi,
You may use the partitioner stage to split the file (round-robin) and pump the data in parallel. We started from 800 rows/sec; sending via 5 connections we got 4,000, but that is still very slow for us. We did some tests to find the maximum: beyond 6 upload branches we saw no further improvement.
You probably also need a very good DBA; we don't have one. Oracle gets into trouble when we have large updates (we used update-then-insert). With inserts only we got an aggregate performance of 8,000 rows/sec, still slow, and that was with no indexes other than the PK.
Any other ideas?
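The round-robin split that the partitioner stage performs can be sketched in a few lines. This is an illustrative Python sketch only (the DataStage stage does this internally); the `executemany` call mentioned in the comment is an assumption about how each branch would push its batch over its own connection:

```python
# Round-robin split of incoming rows across N parallel loader "branches",
# mirroring what the partitioner stage does before pumping data in parallel.
def round_robin(rows, n_branches):
    """Distribute rows across n_branches in round-robin order."""
    branches = [[] for _ in range(n_branches)]
    for i, row in enumerate(rows):
        branches[i % n_branches].append(row)
    return branches

parts = round_robin(list(range(10)), 3)
# Each branch would then be loaded over its own DB connection, e.g.
# (hypothetical): cursor.executemany("INSERT INTO target VALUES (:1)", part)
```

As the post notes, throughput stops improving past a handful of branches because the bottleneck moves to the database side, not the client-side split.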
vinaymanchinila wrote: Hi,
I have a table with 9 million records (initial load). The daily incremental run adds about 250K records, but it is writing at only 7 rows/sec.
Is there any way around this? I need to run this load more than once a day, probably 6 times.
Any help is appreciated!
Thanks
vinaymanchinila wrote: No, but I built a hash on the target and am using two links to write to it, one just for updates and the other just for inserts; it is a lot better this way now.
Thanks,

As mdan pointed out, splitting the stream into 3-4 parallel insert and update streams would be fast. In essence that's what you have done by separating the insert and update streams.
Check out the job monitor and see exactly at which stage you are seeing the drop in rows/second, and if your logic permits, try partitioning the stream before that stage; it could help.
Another option would be to separate all the updates into a new job if their percentage is low (grab them in a sequential file) and use SQL*Loader, as Ray pointed out, for all the inserts.
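For the insert-only stream, a direct-path SQL*Loader run might look like the fragment below. The data file, table, and column names here are hypothetical, not taken from the thread:

```
-- inserts.ctl: SQL*Loader control file for the insert-only rows
LOAD DATA
INFILE 'inserts.dat'
APPEND
INTO TABLE target_tbl
FIELDS TERMINATED BY ','
(pk_id, col1, col2)
```

Invoked with something like `sqlldr userid=scott/tiger control=inserts.ctl direct=true`, the `direct=true` option uses the direct-path load, which writes formatted blocks above the high-water mark instead of issuing conventional INSERTs, and is typically much faster for bulk append loads like this one.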