
Improving performance while fetching data from Oracle stage

Posted: Wed May 03, 2006 11:57 pm
by bhaskarjha
Hi,
How can I improve performance while fetching data from Oracle using the ODBC stage? My input has 13 lakh (1.3 million) rows, and I am looking for a speed of around 1,000 rows/sec. Also, if I attempt to view the data I get a metadata mismatch error, followed by a 'data source empty' message, despite the table containing data.

Posted: Thu May 04, 2006 12:36 am
by ray.wurlod
First off, get rid of the metadata mismatch. Import the Oracle table definition and load that (just the columns that you need) into your DataStage job. Even that will improve the speed a little.

For the rest, the problem is what you are doing with the rows once they arrive. Chances are that the problem is not extraction from Oracle. To prove this, create a job that extracts from Oracle and writes to a text file, performing no transformation. Get performance metrics for this job. This is the upper limit; any processing you add to it will reduce throughput.

Add transformation, keeping the text file for output. Performance metrics will now reveal the impact of transformation processing.

Any other slowness is necessarily the load phase. Split this out; use sqlldr (or other bulk loader if the target is not Oracle) to load from the text file into the target table. This will be faster than INSERT statements.
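
For example, a minimal SQL*Loader control file for a pipe-delimited extract might look like the sketch below. All file, table, and column names here are illustrative placeholders, not taken from the original post:

    -- load.ctl (hypothetical names throughout)
    LOAD DATA
    INFILE 'extract.txt'
    APPEND
    INTO TABLE target_table
    FIELDS TERMINATED BY '|'
    TRAILING NULLCOLS
    (cust_id, cust_name, order_date DATE "YYYY-MM-DD")

Invoked with something like: sqlldr userid=user/password@db control=load.ctl log=load.log direct=true
The direct=true option requests a direct-path load, which bypasses conventional INSERT processing and is usually the fastest route into the table.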

Consider making the job multi-instance, each instance processing a similar proportion of the source data. This will not yield much gain on a single-CPU machine, but may well do so on a multi-CPU machine.
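
One common way to split the source for a multi-instance job is to pass the instance number in as a job parameter and filter the extract with a MOD on a numeric key. The table, column, and instance count below are placeholders, assuming such a key exists:

    -- Instance n of 4 (n = 0, 1, 2, 3), supplied as a job parameter
    SELECT col1, col2, col3
    FROM   source_table
    WHERE  MOD(key_col, 4) = #INSTANCE_NUMBER#

Each instance then extracts, transforms and writes roughly a quarter of the rows.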

If it is still too slow, consult with your DBA about how to improve the load performance (temporarily removing constraints, indexes, and so on). By that point you have exhausted the possibilities within DataStage.
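
As a rough illustration of what the DBA might do around the load window (object names here are made up; this is not something to run without the DBA's agreement):

    -- before the load
    ALTER TABLE target_table DISABLE CONSTRAINT target_table_fk1;
    ALTER INDEX target_table_ix1 UNUSABLE;

    -- ... run the bulk load ...

    -- after the load
    ALTER INDEX target_table_ix1 REBUILD;
    ALTER TABLE target_table ENABLE CONSTRAINT target_table_fk1;

Rebuilding an index once after the load is generally cheaper than maintaining it row by row during the load.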

Posted: Thu May 04, 2006 6:14 am
by chulett
Another thing to consider would be to switch to the native OCI stage rather than using ODBC.

Posted: Thu May 04, 2006 9:28 am
by pkomalla
You can also use a hash file.

Load the required columns from the Oracle table into a hash file, and enable the pre-load to memory option on the hash file, which helps performance.

As the number of rows is around 1.3 million (13 lakh), using a hash file is not a big problem.

Posted: Thu May 04, 2006 2:48 pm
by ray.wurlod
<peeve>It's "hashed" file, please get it right! A hash file is used for filing hash.</peeve>

"Hashed" refers to the fact that keys are located using a hashing algorithm (rather than a table scan or primary key index).