
Oracle OCI stage reading...

Posted: Wed Mar 14, 2007 8:31 am
by swades
Hi all,
Is there any performance benefit to be gained when reading an Oracle database table from the OCI9i stage in a server job by changing the Array Size and Prefetch Memory settings?
If yes, what should those settings be?
Thanks.

Posted: Wed Mar 14, 2007 8:33 am
by swades
I have about 2,000,000 records. Right now I am using a user-defined query to read the table,
which joins 4 different tables, and it is reading only 150 records per second.
Thanks.

Posted: Wed Mar 14, 2007 8:38 am
by DSguru2B
Increase the array size to, say, 500 or 1000, and set the prefetch memory to the same. See what happens.
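To put rough numbers on it (assuming the array size is currently at or near 1, which is an assumption): fetching 2,000,000 rows one at a time means roughly 2,000,000 fetch round trips to Oracle, while an array size of 1000 cuts that to about 2,000. Reducing those round trips is usually where most of the gain comes from.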

Posted: Wed Mar 14, 2007 8:56 am
by kcbland
To demonstrate that DataStage is not your problem, put @FALSE in a Transformer stage output constraint immediately after the OCI stage. I'm fairly sure your query is not sending rows very quickly to DataStage.

Also, make sure you're looking at the database server and DataStage server CPU utilization (prstat, topas, top, glance, etc.) to ensure that your expectation of performance is in alignment with server resources.
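As a minimal sketch of what to run while the job is extracting ("dsadm" below is only a placeholder for whichever user runs the DataStage engine and Oracle processes):

    On Solaris:  prstat -u dsadm
    On AIX:      topas
    On Linux:    top -u dsadm
    On HP-UX:    glance

If neither box is anywhere near busy on a CPU while the job crawls along, the time is being spent waiting (on the query, the network, or I/O) rather than on processing.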

Posted: Wed Mar 14, 2007 9:12 am
by swades
kcbland wrote:
To demonstrate that DataStage is not your problem, put @FALSE in a Transformer stage output constraint immediately after the OCI stage. I'm fairly sure your query is not sending rows very quickly to DataStage.

Also, make sure you're looking at the database server and DataStage server CPU utilization (prstat, topas, top, glance, etc.) to ensure that your expectation of performance is in alignment with server resources.

Ken, can you explain this in more detail? I did not get your point. Can you also show me how to do it?

Posted: Wed Mar 14, 2007 9:17 am
by DSguru2B
Have a stream coming out of the OCI stage into a Transformer, and from there into any flat file. Set the constraint inside the Transformer to @FALSE so that no rows get written to the flat file. Run your job. This way you will find out where the performance bottleneck is; the extract rows/sec should tell you. If it is the same as before, that means it is a network issue and your network is heavily loaded. See the sketch below.
To confirm that, use one of the utilities provided by kcbland.
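Roughly, the test job looks like this (stage names are only placeholders):

OCI9 (your 4-table query) --> XFM (output constraint: @FALSE) --> SEQ (dummy flat file)

Because the constraint is @FALSE, the Transformer reads every row but writes nothing, so the rows/sec reported on the OCI link measures only how fast Oracle can deliver data to DataStage.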

Posted: Wed Mar 14, 2007 9:19 am
by chulett
No need to remove the target OCI stage. Just do as Ken suggested and change your constraint(s) to @FALSE - literally. That will shut down the target half of your job and you can see how quickly your current settings can pull data out of your database.

Posted: Wed Mar 14, 2007 9:24 am
by DSguru2B
He never mentioned anything about a target OCI stage, did he? That's why I advised keeping a flat file, or anything else for that matter, as the target.

Posted: Wed Mar 14, 2007 9:26 am
by chulett
Ok, true. Point was to not rip apart the job and change the target (whatever it is) but to simply add the constraint.

Posted: Wed Mar 14, 2007 9:37 am
by swades
Surprisingly, when I did as Ken said, it read 15,000 records per second;
it took only 24 seconds to read the whole table.
So what solution do I need to apply to this job now? The real job is taking much more time.
Thanks.

Posted: Wed Mar 14, 2007 9:40 am
by DSguru2B
That means your extract is fine. The bottleneck is inside your job design. Now give us more info on what you're doing, what transformations you are using, how many stages there are, etc.

Posted: Wed Mar 14, 2007 9:42 am
by kcbland
Why is it surprising? The DataStage job can get the data from the database as fast as it can. However, what it does with it afterwards has consequences.

I suspect you have a lot of logic after the Transformer stage that slows down the receiving of the data. This is why we ALWAYS recommend simpler jobs.

OCI --> XFM --> SEQ
SEQ --> XFM --> HASH
SEQ --> XFMs/HASH lookups --> SEQ or HASH
SEQ --> XFM --> OCI


Each style of job will run fast because you're not mixing database operations with transformation operations. Both slow each other down more when in the same job than when in separate jobs.

Posted: Wed Mar 14, 2007 10:12 am
by swades
Hi all,
In this job, after reading the database we have a Transformer with 3 hashed files used as lookups.
I checked those 3 hashed files, and 2 of them have more than 2048 in their overflow files. Does that cause a problem for the reads?
Thanks.

Posted: Wed Mar 14, 2007 11:35 am
by kcbland
Look at the CPU utilization of your job. This will indicate whether it's working as hard as it can. How about posting an "ls -lR" of the hashed files?
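For reference, a minimal sketch of that check (the path and file name below are placeholders, and this assumes the lookups are dynamic, type 30 hashed files created as directories):

    cd /path/to/hashed/files
    ls -lR MyLookupHash

A type 30 hashed file is a directory containing DATA.30 and OVER.30; if OVER.30 is large relative to DATA.30, many groups have overflowed and each lookup can cost extra disk reads, so resizing the hashed file with a larger minimum modulus is usually worth trying.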