Oracle OCI stage reading...

Post questions here related to DataStage Server Edition, covering such areas as Server job design, DS BASIC, Routines, Job Sequences, etc.

Moderators: chulett, rschirm, roy

swades
Premium Member
Posts: 323
Joined: Mon Dec 04, 2006 11:52 pm

Oracle OCI stage reading...

Post by swades »

Hi all,
Is there any performance benefit when reading an Oracle table through the OCI9i stage in a Server job from changing the Array Size and Prefetch Memory settings? If so, what values should I use?
Thanks.
swades
Premium Member
Posts: 323
Joined: Mon Dec 04, 2006 11:52 pm

Post by swades »

I have about 2,000,000 records. Right now I use a user-defined query to read the table; it joins 4 different tables, and it is reading only 150 rows per second.
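For illustration, the query has this general shape (all table, column, and join names here are hypothetical; only the four-table join pattern matters):

SELECT o.order_id, o.order_date, c.customer_name, p.product_name, s.status_desc
FROM   orders o
JOIN   customers c ON c.customer_id = o.customer_id
JOIN   products  p ON p.product_id  = o.product_id
JOIN   statuses  s ON s.status_code = o.status_code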
Thanks.
DSguru2B
Charter Member
Posts: 6854
Joined: Wed Feb 09, 2005 3:44 pm
Location: Houston, TX

Post by DSguru2B »

Increase the array size to, say, 500 or 1000, and set the prefetch memory to the same. See what happens.
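As a rough sketch (treat these as starting points to experiment with, not tuned values; the exact field names and units depend on your OCI stage version, so confirm them before copying):

Array Size      = 500   (rows fetched from Oracle per round trip)
Prefetch Memory = 500   (memory reserved for prefetching rows)

A larger array size reduces the number of network round trips per fetch, at the cost of more client-side memory.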
Creativity is allowing yourself to make mistakes. Art is knowing which ones to keep.
kcbland
Participant
Posts: 5208
Joined: Wed Jan 15, 2003 8:56 am
Location: Lutz, FL

Post by kcbland »

To demonstrate that DataStage is not your problem, put @FALSE in a Transformer stage output constraint immediately after the OCI stage. I'm fairly sure your query is not sending rows very quickly to DataStage.

Also, make sure you're looking at the database server and DataStage server CPU utilization (prstat, topas, top, glance, etc.) to ensure that your expectation of performance is in alignment with server resources.
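The tools named above map roughly to operating systems like this; run the appropriate one on both the database host and the DataStage host while the job is active:

top        # Linux and most Unixes
prstat     # Solaris
topas      # AIX
glance     # HP-UX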
Kenneth Bland

Rank: Sempai
Belt: First degree black
Fight name: Captain Hook
Signature knockout: right upper cut followed by left hook
Signature submission: Crucifix combined with leg triangle
swades
Premium Member
Posts: 323
Joined: Mon Dec 04, 2006 11:52 pm

Post by swades »

kcbland wrote:To demonstrate that DataStage is not your problem, put @FALSE in a Transformer stage output constraint immediately after the OCI stage. I'm fairly sure your query is not sending rows very quickly to DataStage.

Also, make sure you're looking at the database server and DataStage server cpu utilization (prstat, topas, top, glance, etc) to ensure that your expectation of performance is in alignment with server resources.
Ken, can you explain that to me in more detail? I did not get your point. Can you also show me how to do it?
DSguru2B
Charter Member
Posts: 6854
Joined: Wed Feb 09, 2005 3:44 pm
Location: Houston, TX

Post by DSguru2B »

Have a stream coming out of the OCI stage into a Transformer and from there to any flat file. Set the constraint inside the Transformer to @FALSE so that no rows get written to the flat file. Run your job. This way you will find out where the performance bottleneck is; the extract rows/sec should tell you that. If it's the same, that means it's a network issue and your network is heavily loaded.
To confirm that, use one of the utilities suggested by kcbland.
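A minimal sketch of that test job, with hypothetical stage names:

OCI_Read --> XFM_Discard --> SEQ_Dummy

In XFM_Discard, the constraint on the output link is literally:

@FALSE

Every row is read and then dropped at the constraint, so the rows/sec on the Transformer's input link reflects pure extract speed.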
Creativity is allowing yourself to make mistakes. Art is knowing which ones to keep.
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

No need to remove the target OCI stage. Just do as Ken suggested and change your constraint(s) to @FALSE - literally. That will shut down the target half of your job and you can see how quickly your current settings can pull data out of your database.
-craig

"You can never have too many knives" -- Logan Nine Fingers
DSguru2B
Charter Member
Posts: 6854
Joined: Wed Feb 09, 2005 3:44 pm
Location: Houston, TX

Post by DSguru2B »

He never mentioned anything about a target OCI stage, did he? That's why I advised keeping a flat file, or anything for that matter.
Creativity is allowing yourself to make mistakes. Art is knowing which ones to keep.
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

Ok, true. The point was not to rip the job apart and change the target (whatever it is) but to simply add the constraint.
-craig

"You can never have too many knives" -- Logan Nine Fingers
swades
Premium Member
Posts: 323
Joined: Mon Dec 04, 2006 11:52 pm

Post by swades »

Surprising, but when I did as Ken said, it read 15,000 records per second and took only 24 seconds to read the whole table.
So what solution do I need to apply to this job now? It takes much more time in the real job.
Thanks.
DSguru2B
Charter Member
Posts: 6854
Joined: Wed Feb 09, 2005 3:44 pm
Location: Houston, TX

Post by DSguru2B »

That means your extract is fine. The bottleneck is inside your job design. Now give us more info on what you're doing: what transformations you are using, how many stages there are, etc.
Creativity is allowing yourself to make mistakes. Art is knowing which ones to keep.
kcbland
Participant
Posts: 5208
Joined: Wed Jan 15, 2003 8:56 am
Location: Lutz, FL

Post by kcbland »

Why is it surprising? The DataStage job can get the data from the database as fast as it can. However, what it does with it afterwards has consequences.

I suspect you have a lot of logic after the Transformer stage that slows down the receiving of the data. This is why we ALWAYS recommend simpler jobs.

OCI --> XFM --> SEQ
SEQ --> XFM --> HASH
SEQ --> XFMs/HASH lookups --> SEQ or HASH
SEQ --> XFM --> OCI


Each style of job will run fast because you're not mixing database operations with transformation operations. Both slow each other down more when in the same job than when in separate jobs.
Kenneth Bland

Rank: Sempai
Belt: First degree black
Fight name: Captain Hook
Signature knockout: right upper cut followed by left hook
Signature submission: Crucifix combined with leg triangle
swades
Premium Member
Posts: 323
Joined: Mon Dec 04, 2006 11:52 pm

Post by swades »

Hi all,
In this job, after reading the database we have a Transformer with 3 hashed files given as lookups. I checked those 3 hashed files; 2 of them have data in their overflow files, more than 2048. Does that cause a problem for reading?
Thanks.
kcbland
Participant
Posts: 5208
Joined: Wed Jan 15, 2003 8:56 am
Location: Lutz, FL

Post by kcbland »

Look at the CPU utilization of your job. That will indicate whether it's working as hard as it can. How about posting an "ls -lR" of the hashed files?
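For a dynamic (Type 30) hashed file, which lives on disk as a directory, the listing would look something like this (path and file name are hypothetical):

ls -lR /data/project/hash/LKP_CUSTOMER
# a Type 30 hashed file is a directory containing:
#   DATA.30   - the primary data area
#   OVER.30   - the overflow area; an OVER.30 that is large relative to
#               DATA.30 means lookups pay extra I/O and the file should be resized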
Kenneth Bland

Rank: Sempai
Belt: First degree black
Fight name: Captain Hook
Signature knockout: right upper cut followed by left hook
Signature submission: Crucifix combined with leg triangle