
Performance from 86 rows to 1 row per second

Posted: Wed May 30, 2007 4:02 am
by rverharen
When the job starts, it runs at about 86 rows per second.
As it inserts and updates more and more data into an Oracle table, the performance slowly degrades from 86 rows per second to eventually 3 or even 1 row per second.
The job doesn't produce warnings, so the problem isn't the size of the log file.
The table does contain one VARCHAR2(4000) field, so maybe that requires a specific setting, but I'm still searching for the ideal one.

Posted: Wed May 30, 2007 5:55 am
by WoMaWil
Do you have an index on the table?

Why do you use only one field in Oracle and not many?

What do you mean by "update" in this kind of scenario?

Re: Performance from 86 rows to 1 row per second

Posted: Wed May 30, 2007 6:02 am
by georgesebastian
rverharen wrote:When the job starts, it runs at about 86 rows per second.
As it inserts and updates more and more data into an Oracle table, the performance slowly degrades from 86 rows per second to eventually 3 or even 1 row per second.
The job doesn't produce warnings, so the problem isn't the size of the log file.
The table does contain one VARCHAR2(4000) field, so maybe that requires a specific setting, but I'm still searching for the ideal one.
Hi Rverharen,

I think you should first check whether this happens because of the transforms you used in the Transformer. To do that, try loading the data to a flat file instead.

If that job is fast, then play around with the Array Size and Rows per Transaction settings.

Thanks
George

Posted: Wed May 30, 2007 6:05 am
by rverharen
WoMaWil wrote:Do you have an index on the table?

Why do you use only one field in Oracle and not many?

What do you mean by "update" in this kind of scenario?

I do have indexes on the table, and the table contains more columns (only one of them is VARCHAR2(4000)).
Inserting into the table just becomes very slow as more and more rows go in.

Updates take place on fields other than the primary key (the key is used to update the right record).
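To illustrate the pattern (the table and column names here are just placeholders, not the real ones), each row essentially results in a statement of this form, with the key only in the WHERE clause:

UPDATE target_table
   SET some_column    = ?,
       another_column = ?
 WHERE primary_key_col = ?;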

Posted: Wed May 30, 2007 6:30 am
by MikeGoldrick
Oracle jobs can slow down significantly when you are loading a lot of rows and the indexes are being maintained in the background. You might consider disabling the index(es) before the job starts and rebuilding them after the job ends. You can also sort the rows in index order before loading for some improvement.

Here is an example that generates the statements from the data dictionary ('YOUR_TABLE' is a placeholder for your target table):

Disable index(es) before the job:
SELECT 'ALTER INDEX ' || owner || '.' || index_name || ' UNUSABLE;'
  FROM all_indexes WHERE table_name = 'YOUR_TABLE';

Rebuild index(es) after the job:
SELECT 'ALTER INDEX ' || owner || '.' || index_name || ' REBUILD;'
  FROM all_indexes WHERE table_name = 'YOUR_TABLE';

Analyze index(es) after the job:
SELECT 'ALTER INDEX ' || owner || '.' || index_name || ' COMPUTE STATISTICS;'
  FROM all_indexes WHERE table_name = 'YOUR_TABLE';
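One way to run the generated statements is to spool them from SQL*Plus and then execute the spool file. This is only a sketch; the spool file name and the TARGET_TABLE/TARGET_OWNER filter are placeholders you would adapt:

SET PAGESIZE 0
SET FEEDBACK OFF
SPOOL disable_indexes.sql
SELECT 'ALTER INDEX ' || owner || '.' || index_name || ' UNUSABLE;'
  FROM all_indexes
 WHERE table_name = 'TARGET_TABLE'
   AND owner      = 'TARGET_OWNER';
SPOOL OFF
@disable_indexes.sql

Depending on your Oracle version you may also need ALTER SESSION SET SKIP_UNUSABLE_INDEXES = TRUE in the loading session, and unique indexes that enforce constraints generally cannot be skipped this way.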

Check with your DBA for more details. Thanks.

Re: Performance from 86 rows to 1 row per second

Posted: Wed May 30, 2007 6:36 am
by chulett
rverharen wrote:As it inserts and updates more and more data into an Oracle table, the performance slowly degrades from 86 rows per second to eventually 3 or even 1 row per second.
If "inserting and updating" means you are using one of the 'combo' Update actions, then that is probably the heart of your problem. Never mind the fact that you may have to perform two actions instead of just one. :?

If so, it's typically not the 'insert' portion that gets people in trouble, it's the 'update'. The question is - Do you have indexes over the key columns used in your update sql? If not, that can cause each update to perform a full table scan and thus get slower and slower as the data volume increases.
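For example, you could check in the data dictionary whether the key columns are covered by an index, and add one if they aren't (the table, owner, column and index names below are just placeholders):

SELECT index_name, column_name, column_position
  FROM all_ind_columns
 WHERE table_name = 'TARGET_TABLE'
 ORDER BY index_name, column_position;

CREATE INDEX target_table_key_idx
    ON target_owner.target_table (key_column);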

Posted: Wed May 30, 2007 6:37 am
by WoMaWil
Why don't you separate the update stream and the insert stream into two separate streams?

That might also give you a performance boost, besides the other tips.

Thanks for your responses

Posted: Wed May 30, 2007 7:58 am
by rverharen
Thanks everyone for your tips.
Writing to a file to see whether the problem was in the stage was very useful, as was the tip to drop the indexes before and re-add them after the inserts and updates.
I also added a hash file to make sure the table I was looking up against had no conflict with the table I was writing to.
The job is working properly now.

Posted: Wed May 30, 2007 8:01 am
by DSguru2B
So what solved your problem? Did you separate the update and insert streams, or just drop the indices and recreate them after the load?

Re: Thanks for your responses

Posted: Thu May 31, 2007 2:22 am
by georgesebastian
rverharen wrote:Thanks everyone for your tips.
Writing to a file to see whether the problem was in the stage was very useful, as was the tip to drop the indexes before and re-add them after the inserts and updates.
I also added a hash file to make sure the table I was looking up against had no conflict with the table I was writing to.
The job is working properly now.
Hi Rverharen,

Good that you have solved the problem, but we would all love to know what exactly you did to solve the performance issue.

Thanks
George Sebastian :D

Adding the hash file solved the problem

Posted: Thu May 31, 2007 3:43 am
by rverharen
I added a hash file between the Oracle lookup stage and the Transformer.
The performance statistics in the job designer made it look like the problem was with inserting and updating the table (the same table as the lookup), but the problem was the lookup.
In the lookup stage I had set it to read only committed records, while all the output was committed only at the end; apparently that's not a good combination as far as performance is concerned.

Re: Adding the hash file solved the problem

Posted: Thu May 31, 2007 6:24 am
by chulett
rverharen wrote:I added a hash file between the Oracle lookup stage and the Transformer.
Always best to give as much information as possible from the beginning. You hadn't mentioned any Oracle lookup that I recall.

So, you replaced an OCI lookup with a Hashed File lookup? Hashed files are the heart and soul of the Server product; proper use of them will always be a Best Practice. :wink: