ARRAY SIZE AND TRANSACTION SIZE

ajongba
Participant
Posts: 31
Joined: Tue Apr 03, 2007 4:00 am
Location: Mumbai

ARRAY SIZE AND TRANSACTION SIZE

Post by ajongba »

Hi all,
I am working on a project that loads millions of records. Previously I did not touch the array size or the transaction size, and although the rows/sec figure was low, the rate kept climbing as the job ran. When we showed this to the client, he changed the array and transaction sizes. The result was that the job started at almost 500 rows/sec, but the rate kept dropping as time passed.
Can someone tell me what to do in such a situation? My earlier process never went above 200 rows/sec.

Please help
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

:? That's not a lot to go on. The only advice at this stage: set them back to their original values.

Way too many variables involved. We don't know anything about your network, job design, source, target, or the nature of the transformations being done. We don't know what kind of indexes or constraints are on your target, or even what 'load' means in this context. Hard to give specific advice without knowing the details of the process.
-craig

"You can never have too many knives" -- Logan Nine Fingers
ajongba
Participant
Posts: 31
Joined: Tue Apr 03, 2007 4:00 am
Location: Mumbai

Post by ajongba »

The job design is quite simple in nature. The source is an Oracle 9i database and so is the target. As far as constraints are concerned, there are only primary and foreign key constraints. The loading in one job is "insert rows without clearing the table" and in another job it is "clear the table, then insert rows". The transformation is a simple one-to-one mapping.
I don't know if this will help, but if you could be more specific with the questions I might be able to answer more. :cry:
chulett
Charter Member
Posts: 43085
Joined: Tue Nov 12, 2002 4:34 pm
Location: Denver, CO

Post by chulett »

ajongba wrote:The loading in one job is "insert rows without clearing the table" and in another job it is "clear the table, then insert rows".
What "other job"? :?

An "OCI to OCI" design is the slowest way you could solve this particular problem, especially if all you have is a "simple one to one mapping". For pure inserts consider converting this to use a bulk loader, either externally invoking sqlldr or with an ORAOCIBL stage.
-craig

"You can never have too many knives" -- Logan Nine Fingers
ajongba
Participant
Posts: 31
Joined: Tue Apr 03, 2007 4:00 am
Location: Mumbai

Post by ajongba »

We are connecting to a server that is in another location. By "the other job" I meant the other jobs created in the same project. After the changes made on the client's side, it seems they are able to run the job at 3000 rows/sec now. The changes he made were to set the array size to 10000 and the transaction size to 10000. Is that the reason why it is performing well on their side?
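
For anyone reading this thread later, here is a minimal sketch of what the two settings conceptually control, written with the python-oracledb driver rather than DataStage itself; the connection details, table and columns are made up for illustration. Array size is roughly "rows sent per round trip to the server" (array binding), while transaction size is "rows inserted between COMMITs":

import oracledb

ARRAY_SIZE = 10000        # rows sent per round trip (array binding)
TRANSACTION_SIZE = 10000  # rows inserted between COMMITs

def source_rows():
    # Stand-in for the real source; yields (c1, c2) tuples.
    for i in range(1_000_000):
        yield (i, "value %d" % i)

conn = oracledb.connect(user="scott", password="tiger", dsn="remote-host/orcl")
cur = conn.cursor()

batch, uncommitted = [], 0
for row in source_rows():
    batch.append(row)
    if len(batch) == ARRAY_SIZE:
        # One executemany call sends the whole array in a single round trip.
        cur.executemany("INSERT INTO target_tbl (c1, c2) VALUES (:1, :2)", batch)
        uncommitted += len(batch)
        batch = []
        if uncommitted >= TRANSACTION_SIZE:
            conn.commit()   # transaction size reached
            uncommitted = 0
if batch:  # flush the final partial array
    cur.executemany("INSERT INTO target_tbl (c1, c2) VALUES (:1, :2)", batch)
conn.commit()

Larger values for both settings cut per-row overhead, which is consistent with the jump to 3000 rows/sec. One plausible cause of a rate that decays over a long run is a setting that makes the database hold ever more uncommitted work, or index maintenance getting costlier as the target table grows.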