Performance from 86 rows to 1 row per second
Moderators: chulett, rschirm, roy
-
- Premium Member
- Posts: 34
- Joined: Fri Jan 19, 2007 9:09 am
- Location: Breda, The Netherlands
Performance from 86 rows to 1 row per second
When starting, the job runs at 86 rows per second.
As more and more data is inserted and updated into the Oracle table, performance slowly decreases from 86 rows per second to eventually 3 to 1 rows per second.
The job doesn't produce any warnings, so the problem isn't the size of the logfile.
The table does contain one VARCHAR2(4000) field, so maybe that requires a specific setting, but I'm still searching for the ideal ones.
-
- Participant
- Posts: 83
- Joined: Tue Dec 19, 2006 8:38 am
Re: Performance from 86 rows to 1 row per second
Hi rverharen,
rverharen wrote:When starting, the job runs at 86 rows per second.
As more and more data is inserted and updated into the Oracle table, performance slowly decreases from 86 rows per second to eventually 3 to 1 rows per second.
The job doesn't produce any warnings, so the problem isn't the size of the logfile.
The table does contain one VARCHAR2(4000) field, so maybe that requires a specific setting, but I'm still searching for the ideal ones.
First, check whether this happens because of the transforms you used in the Transformer stage. To test that, try loading the data to a flat file instead.
If that job is fast, then play around with the Array Size and Rows Per Transaction settings on the Oracle stage.
Thanks
George
-
- Premium Member
- Posts: 34
- Joined: Fri Jan 19, 2007 9:09 am
- Location: Breda, The Netherlands
WoMaWil wrote:Do you have an index on the table?
Why do you use only one field in Oracle and not many?
What do you mean in such a scenario by "Update"?
I do have indexes on the table, and the table contains more columns (only one of them is VARCHAR2(4000)).
Inserting into the table just becomes slower and slower as more rows go in.
Updates take place on fields other than the primary key (the key is used to locate the right record).
-
- Premium Member
- Posts: 4
- Joined: Thu Feb 26, 2004 3:08 pm
Oracle jobs can slow significantly when loading a lot of rows if the indexes are being maintained in the background. You might consider disabling the index(es) before the job starts and rebuilding them after the job ends. You can also sort the rows in index order before loading for some improvement.
Here is an example that generates the statements from the Oracle data dictionary (substitute your own table name):
Disable index(es) before the job:
SELECT 'ALTER INDEX ' || owner || '.' || index_name || ' UNUSABLE;'
  FROM all_indexes
 WHERE table_name = 'YOUR_TABLE';
Rebuild index(es) after the job:
SELECT 'ALTER INDEX ' || owner || '.' || index_name || ' REBUILD;'
  FROM all_indexes
 WHERE table_name = 'YOUR_TABLE';
Analyze index(es) after the job:
SELECT 'ALTER INDEX ' || owner || '.' || index_name || ' COMPUTE STATISTICS;'
  FROM all_indexes
 WHERE table_name = 'YOUR_TABLE';
Check with your DBA for more details. Thanks.
Re: Performance from 86 rows to 1 row per second
If "inserting and updating" means you are using one of the 'combo' Update actions, then that is probably the heart of your problem. Never mind the fact that you may have to perform two actions instead of just one.
rverharen wrote:As more and more data is inserted and updated into the Oracle table, performance slowly decreases from 86 rows per second to eventually 3 to 1 rows per second.
If so, it's typically not the 'insert' portion that gets people into trouble, it's the 'update'. The question is: do you have indexes over the key columns used in your update SQL? If not, each update can perform a full table scan and thus get slower and slower as the data volume increases.
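As a minimal sketch of that point (the table and column names here are hypothetical, not from the original post), an index covering the update's key column lets Oracle locate each row directly instead of scanning the table:

```sql
-- Hypothetical names: adjust to your own schema.
-- Without this index, every UPDATE ... WHERE cust_id = :1
-- scans the whole table, so each update slows as rows accumulate.
CREATE INDEX target_table_key_idx
    ON target_table (cust_id);

-- The per-row update issued by the job can then use the index:
UPDATE target_table
   SET status_col = :new_status
 WHERE cust_id   = :key_value;
```

The same idea applies to composite keys: index exactly the columns named in the update's WHERE clause.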
-craig
"You can never have too many knives" -- Logan Nine Fingers
-
- Premium Member
- Posts: 34
- Joined: Fri Jan 19, 2007 9:09 am
- Location: Breda, The Netherlands
Thanks for your responses
Thanks everyone for your tips.
Writing to a flat file to see whether the problem was in the stage was very useful, as was the tip to drop the indexes before and recreate them after the inserts and updates.
I also added a hashed file to make sure the table I was looking up on had no conflict with the table I was writing to.
The job is working properly now.
-
- Participant
- Posts: 83
- Joined: Tue Dec 19, 2006 8:38 am
Re: Thanks for your responses
Hi rverharen,
rverharen wrote:Thanks everyone for your tips.
Writing to a flat file to see whether the problem was in the stage was very useful, as was the tip to drop the indexes before and recreate them after the inserts and updates.
I also added a hashed file to make sure the table I was looking up on had no conflict with the table I was writing to.
The job is working properly now.
Good that you have solved the problem, but we would all love to know what exactly you did to fix the performance issue.
Thanks
George Sebastian :D
-
- Premium Member
- Posts: 34
- Joined: Fri Jan 19, 2007 9:09 am
- Location: Breda, The Netherlands
Adding the hash file solved the problem
I added a hashed file between the Oracle lookup stage and the Transformer.
The performance statistics in the Job Designer made it look like the problem was in inserting and updating the table (the same table as the lookup), but the problem was actually the lookup.
In the lookup stage I had chosen to read only committed records, and all the output was committed at the end, but apparently that combination is bad for performance.
Re: Adding the hash file solved the problem
Always best to give as much information as possible from the beginning. You hadn't mentioned any Oracle lookup that I recall.
rverharen wrote:I added a hashed file between the Oracle lookup stage and the Transformer.
So, you replaced an OCI lookup with a Hashed File lookup? Hashed files are the heart and soul of the Server product; proper use of them will always be a Best Practice.
-craig
"You can never have too many knives" -- Logan Nine Fingers