
Tuning DataStage

Posted: Wed Nov 22, 2006 12:14 am
by arun_nair24
Hi,
I am new to DataStage; I am an Oracle DBA and have been asked to improve DataStage loading performance.

What can be considered for tuning DataStage? I am running a job that selects data from one table, does a basic transformation (mapping), and loads it into another table in another schema.


What I noticed is that when I start the job, the rows/sec are initially high (~1000 rows/sec) and then drop to around ~600 rows/sec.
What could be the reason? I have enabled the row buffer cache and set it to 1024 KB. This is DataStage 7.5.1. I monitored the database side, and everything there seems normal.
Any help would be highly appreciated.

Tuning DataStage

Posted: Wed Nov 22, 2006 1:04 am
by suresh.narasimha
Hi,

1. If you have a huge volume of data and there are no constraints on your target table, you can go for a bulk load stage.
2. You can use a Link Partitioner with multiple Transformers, depending on the number of CPUs on your server.
3. Select only the required columns at the source.
4. Performance also depends on network traffic and I/O load on the server.

So, take care of 1, 2 and 3; 4 is not in your hands.
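To illustrate point 2: a Link Partitioner splits one input stream across several Transformer instances so they can run in parallel, one per CPU. A minimal Python sketch of the round-robin distribution (an illustration only; the function name is made up, and this is not DataStage code):

```python
# Sketch of round-robin partitioning, as a Link Partitioner might distribute
# rows across N output links feeding N parallel Transformers.
# (Assumption: illustrative stand-in, not actual DataStage behavior or API.)
from itertools import cycle

def partition_round_robin(rows, n_links):
    """Distribute rows across n_links output links in round-robin order."""
    links = [[] for _ in range(n_links)]
    targets = cycle(links)
    for row in rows:
        next(targets).append(row)
    return links

links = partition_round_robin(range(10), 4)
# link sizes come out as [3, 3, 2, 2]: the load is spread evenly
```

Each link then feeds its own Transformer, so the transformation work scales with the number of CPUs rather than running single-threaded.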

Regards,
Suresh N

Posted: Wed Nov 22, 2006 2:35 am
by arun_nair24
Thanks for your reply, but I would not be able to use the bulk loader, as the design here uses an ODBC source/destination (can't change that) :(

Also, any idea why the rows/sec start out high and then drop off?

Posted: Wed Nov 22, 2006 4:55 am
by aakashahuja
Most of the time, while loading to a table, DS starts off with a high row count, which later drops. One reason could be the filling of the log space; modifying the commit interval could do the trick if this is the case.

Apart from that, basic database tuning tips, if kept in mind, often yield good performance.

Also, you could modify the Array Size (and, correspondingly, the Transaction Size). Proper values for these also improve performance.
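For what it's worth, the effect of Array Size (rows sent per round trip) and Transaction Size (rows per commit) can be mimicked with any DB-API driver. A minimal Python sketch, with sqlite3 standing in for the real ODBC target (an assumption; the function and table names are made up, and the DataStage stage does all of this internally):

```python
# Illustrative only: batching inserts (Array Size) and committing every
# commit_interval rows (Transaction Size) keeps transactions small, so the
# undo/log space does not fill up as it would with one giant transaction.
import sqlite3

def load_rows(conn, rows, array_size=500, commit_interval=5000):
    """Insert rows in array_size batches, committing every commit_interval rows."""
    cur = conn.cursor()
    pending = 0
    commits = 0
    for i in range(0, len(rows), array_size):
        batch = rows[i:i + array_size]
        cur.executemany("INSERT INTO target(id, val) VALUES (?, ?)", batch)
        pending += len(batch)
        if pending >= commit_interval:
            conn.commit()          # small transaction: less log/undo pressure
            commits += 1
            pending = 0
    if pending:
        conn.commit()              # flush the final partial transaction
        commits += 1
    return commits

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE target(id INTEGER, val TEXT)")
rows = [(n, "x") for n in range(12000)]
n_commits = load_rows(conn, rows)
print(n_commits)  # 3 commits: 5000 + 5000 + 2000 rows
```

Tuning the two knobs is a trade-off: a larger array size cuts round trips, while a smaller commit interval keeps each transaction's log footprint bounded.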


Posted: Wed Nov 22, 2006 7:32 am
by ray.wurlod
A running DataStage job is simply another Oracle client application. It will either be doing inserts/updates, or will invoke sqlldr. Tune these as you would for any other client application. Beware that some DataStage developers will try to create 10 million row transactions, so make sure you have sufficient rollback segment space.