Datastage/Database performance

Post questions here relative to DataStage Server Edition for such areas as Server job design, DS Basic, Routines, Job Sequences, etc.

Moderators: chulett, rschirm, roy

nkln@you
Premium Member
Posts: 271
Joined: Wed Nov 17, 2004 5:15 am
Location: US

Datastage/Database performance

Post by nkln@you »

There is one job that does a direct 1-1 mapping from source to target.

For 88,855 records coming from the source, the time taken to insert into the target is 5 min 52 sec.

We feel the insert is taking too long. The target has 110 columns, and there are 8 nonunique indexes and 1 unique index on the table.

When I removed the indexes and tried again, the time taken was 6 min 13 sec. I was surprised that the time increased after the indexes were removed.

This is just one example; we see the same problem in most cases. We work against a test database, so many users share this test server and many operations run at the same time.

Please suggest where the problem lies.

We think there is a problem at the database level. Are there any database-level parameters that should be set to improve performance?

The machine on which the DataStage server is installed has 4 processors.

Please suggest a solution for this.
Aim high
Sainath.Srinivasan
Participant
Posts: 3337
Joined: Mon Jan 17, 2005 4:49 am
Location: United Kingdom

Post by Sainath.Srinivasan »

Try redirecting the output into a sequential file placed local to the DataStage server. As this runs without any impact from the network or memory, it will give you the true DataStage performance.
ArndW
Participant
Posts: 16318
Joined: Tue Nov 16, 2004 9:08 am
Location: Germany

Post by ArndW »

What Srini suggested will give a good idea of how fast you can read your data and write to a sequential file (you can also put a constraint in the job to write 0 records, and that gives you your read speed). If the read is slow you will need to tune your DB query.

What database are you writing to, and which stage are you using? Is this a pure insert that can be done as a bulk load, a pure update, or a mix of the two? Each type of database write has different aspects that can slow it down, and once you identify the culprit you can tune it.
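
Outside DataStage, you can get the same read-only baseline by timing a plain fetch of the source query and dumping it to a local flat file. This is only a sketch; the cx_Oracle driver, the connection details and the query below are placeholders, not anything taken from the actual job.

import csv
import time
import cx_Oracle  # assumes the Oracle client libraries are installed

# Placeholder connection details and query; substitute your own.
conn = cx_Oracle.connect("scott", "tiger", "srchost/ORCL")
cur = conn.cursor()
cur.arraysize = 5000  # fetch in big batches to keep round trips down

start = time.time()
cur.execute("SELECT * FROM source_table")
with open("baseline_dump.csv", "w", newline="") as out:
    writer = csv.writer(out)
    for row in cur:
        writer.writerow(row)
print("Read plus local write took %.1f seconds" % (time.time() - start))

cur.close()
conn.close()

If this finishes quickly, the SELECT is fine and the slowdown is on the write side.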
nkln@you
Premium Member
Posts: 271
Joined: Wed Nov 17, 2004 5:15 am
Location: US

Post by nkln@you »

Hi Srinivasan, I will try it now.

Hi ArndW, the database I am writing to is Oracle 9i, and the stage I am using for the insert is the Oracle OCI 9i stage. This is a pure insert, and there are some SQL statements run as After SQL in the job I mentioned.

In another job it is a pure insert with no After SQL; the stage used is also the Oracle OCI 9i stage. For 27,150 records it takes 2 min 39 sec.

How do I trace the culprit?
Aim high
kduke
Charter Member
Charter Member
Posts: 5227
Joined: Thu May 29, 2003 9:47 am
Location: Dallas, TX

Post by kduke »

Rows per second is not a good indicator; it should be MB per second. You have 110 columns, and some of those could be large VARCHARs.

If you disabled your indexes rather than removing them, that will change the performance as well. You always need to ANALYZE TABLE in Oracle after removing or adding indexes. We do it on all tables before each production run; it can increase performance by 3 or 4 times, though not always.
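
To make that concrete, here is a minimal sketch of refreshing statistics after dropping or adding indexes. The credentials, schema and table names are placeholders, and DBMS_STATS is shown alongside ANALYZE TABLE because Oracle recommends it for gathering statistics.

import cx_Oracle  # assumes the Oracle client libraries are installed

# Placeholder credentials, schema and table name; substitute your own.
conn = cx_Oracle.connect("scott", "tiger", "dbhost/ORCL")
cur = conn.cursor()

# The classic statement mentioned above:
cur.execute("ANALYZE TABLE target_table COMPUTE STATISTICS")

# The DBMS_STATS package, which Oracle recommends instead of ANALYZE:
cur.execute("""
    BEGIN
        DBMS_STATS.GATHER_TABLE_STATS(
            ownname => 'SCOTT',
            tabname => 'TARGET_TABLE',
            cascade => TRUE);  -- also gathers index statistics
    END;""")

cur.close()
conn.close()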
Mamu Kim
Transmogorifier
Participant
Posts: 9
Joined: Sat Feb 12, 2005 5:24 pm

How about Transaction size?

Post by Transmogorifier »

Have you tried tweaking the Transaction size and Array size? This will only help if your reads are being performed at a good speed, so take Sainath's advice of writing to a local sequential file and checking the read speed first. If the reads are fine and the main bottleneck is the OCI 9i stage, then I suggest increasing the Transaction size and the Array size in the OCI write stage. I have found that they affect the write speed into Oracle by a huge factor.
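
To see why Array size matters, here is a rough sketch outside DataStage (cx_Oracle, with a made-up table and columns). Row-by-row inserts pay one network round trip per record, while an array insert sends thousands of rows per round trip, which is essentially what a bigger Array size buys you in the OCI stage.

import cx_Oracle  # assumes the Oracle client libraries are installed

# Placeholder connection details, table and columns; substitute your own.
conn = cx_Oracle.connect("scott", "tiger", "dbhost/ORCL")
cur = conn.cursor()
rows = [(i, "value %d" % i) for i in range(88855)]

# Slow: one network round trip per row.
# for row in rows:
#     cur.execute("INSERT INTO target_table (id, val) VALUES (:1, :2)", row)

# Fast: array insert, many rows per round trip.
cur.executemany("INSERT INTO target_table (id, val) VALUES (:1, :2)", rows)

conn.commit()
cur.close()
conn.close()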

Cheers. :)
nkln@you
Premium Member
Posts: 271
Joined: Wed Nov 17, 2004 5:15 am
Location: US

Post by nkln@you »

Hi Srini, I tried writing the data into a sequential file. It was very fast, but when I use the OCI stage it is slow, as mentioned. :(

Hi Transmogorifier,
The job I mentioned is an incremental load. Can we use a transaction size > 0 in incremental jobs?
Aim high
Sainath.Srinivasan
Participant
Posts: 3337
Joined: Mon Jan 17, 2005 4:49 am
Location: United Kingdom

Post by Sainath.Srinivasan »

What are you doing in the 'target load'? Are there lots of updates?

Use the sequential file you created as the input and select only a few columns in the target. You may notice a significant change when you exclude some columns from the target load.

Also check with the DBA whether your tablespace growth is set to automatic (autoextend). This may also be a reason for some of the delay.

Try this when the network traffic is low in order to obtain accurate results.
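
If you have access to the DBA views, a quick check of the autoextend setting looks roughly like this sketch (cx_Oracle, placeholder credentials; a DBA or similarly privileged account is needed to read DBA_DATA_FILES).

import cx_Oracle  # assumes the Oracle client libraries are installed

# Placeholder credentials; a suitably privileged account is required.
conn = cx_Oracle.connect("system", "manager", "dbhost/ORCL")
cur = conn.cursor()

# AUTOEXTENSIBLE = 'NO' means the datafile cannot grow on its own,
# so a nearly full tablespace can slow down or stall the inserts.
cur.execute("""
    SELECT tablespace_name, file_name, autoextensible
      FROM dba_data_files
     ORDER BY tablespace_name""")
for tablespace, datafile, autoextend in cur:
    print(tablespace, datafile, autoextend)

cur.close()
conn.close()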
Transmogorifier
Participant
Posts: 9
Joined: Sat Feb 12, 2005 5:24 pm

Transaction size...

Post by Transmogorifier »

It will depend on how you are incrementally loading the tables and on the priorities of your updates and inserts. If they are handled separately, then it is possible to use a transaction size > 0, but if the updates depend on the inserts being there first, then it may create a problem; I'm not too sure on that, and I hope someone on this forum can help us out with the details. My understanding is that the transaction size is nothing but a buffer into which DataStage collects rows, so that it can send them to the database in groups instead of issuing a separate transaction for each and every insert/update. Many single-row transactions carry a lot of network and database overhead, which slows performance.
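
As a rough illustration of that commit overhead (again outside DataStage, with cx_Oracle and made-up table and column names), committing every few thousand rows instead of after every row is roughly what raising the Transaction size does.

import cx_Oracle  # assumes the Oracle client libraries are installed

# Placeholder connection details, table and columns; substitute your own.
conn = cx_Oracle.connect("scott", "tiger", "dbhost/ORCL")
cur = conn.cursor()
rows = [(i, "value %d" % i) for i in range(88855)]

TRANSACTION_SIZE = 5000  # rows per commit; 0 in the OCI stage normally means commit only at the end

for n, row in enumerate(rows, start=1):
    cur.execute("INSERT INTO target_table (id, val) VALUES (:1, :2)", row)
    if n % TRANSACTION_SIZE == 0:
        conn.commit()  # commit in groups instead of after every single row
conn.commit()  # commit whatever is left over

cur.close()
conn.close()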

Cheers. :)
nkln@you wrote: Hi Srini, I tried writing the data into a sequential file. It was very fast, but when I use the OCI stage it is slow, as mentioned. :(

Hi Transmogorifier,
The job I mentioned is an incremental load. Can we use a transaction size > 0 in incremental jobs?