Hi all,
I am working on a project which loads millions of records. Previously I did not touch the array size or the transaction size, and though the rows/sec figure was low, the rate kept increasing over time. When we showed this to the client he made some changes to the array and transaction sizes. The result was that the job started at almost 500 rows/sec, but the rate kept dropping as time passed.
Can someone tell me what to do in such a situation? My earlier process never crossed 200 rows/sec.
Please help
ARRAY SIZE AND TRANSACTION SIZE
Moderators: chulett, rschirm, roy
![Confused :?](./images/smilies/icon_confused.gif)
Way too many variables involved. We don't know anything about your network, job design, source, target or the nature of the transformations being done. We don't know what kind of indexes or constraints are on your target or even what 'load' means in this context. Hard to give any kind of specific advice without knowing specifics on the process.
-craig
"You can never have too many knives" -- Logan Nine Fingers
The job design is quite simple in nature. The source is Oracle 9i and so is the target. As far as constraints are concerned, there are only primary and foreign key constraints. The loading is "insert without clearing the table", and in another job it is "clear the table then insert". The transformation is a simple one-to-one mapping.
I don't know if this will help, but if you could be specific with the questions I might be able to answer more. ![Crying or Very sad :cry:](./images/smilies/icon_cry.gif)
ajongba wrote: "The loading is like 'insert without clearing the table' and in other job its like 'clear the table then insert'."

What "other job"?
![Confused :?](./images/smilies/icon_confused.gif)
An "OCI to OCI" design is the slowest way you could solve this particular problem, especially if all you have is a "simple one to one mapping". For pure inserts consider converting this to use a bulk loader, either externally invoking sqlldr or with an ORAOCIBL stage.
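For the external sqlldr route, a rough sketch might look like the following. The table name, data file, and columns here are hypothetical placeholders, not anything from this thread:

```text
-- load_target.ctl (example control file; names are illustrative)
LOAD DATA
INFILE 'target_rows.dat'
APPEND
INTO TABLE target_table
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
(col1, col2, col3)

-- invoked roughly as:
--   sqlldr userid=user/password@db control=load_target.ctl direct=true
```

With `direct=true`, sqlldr uses the direct path load, which bypasses much of the SQL engine and is typically far faster for pure inserts, though it has its own considerations around indexes and constraints that you would want to check in the Oracle documentation for your version.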
-craig
"You can never have too many knives" -- Logan Nine Fingers
We are connecting to a server at another location. By "other job" I meant other jobs created in the same project. After the changes made on the client's side, it seems they are able to run the job at 3000 rows/sec now. The changes he made were array size 10000 and transaction size 10000. Is that the reason why it is performing well on their side?
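For anyone wondering what those two knobs actually do: array size controls how many rows are sent to the database per round trip, and transaction size controls how many rows go in before a commit. This is not DataStage code, just a small Python sketch (using sqlite3 as a stand-in database, with fabricated sample data) of how the two interact during a bulk insert:

```python
# Sketch of the array-size / transaction-size trade-off.
# ARRAY_SIZE = rows bound and sent per executemany() round trip.
# TRANSACTION_SIZE = rows inserted between commits.
import sqlite3

ARRAY_SIZE = 10000
TRANSACTION_SIZE = 10000

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE target (id INTEGER, val TEXT)")

rows = [(i, f"row{i}") for i in range(25000)]  # fabricated sample data

inserted_since_commit = 0
for start in range(0, len(rows), ARRAY_SIZE):
    batch = rows[start:start + ARRAY_SIZE]
    conn.executemany("INSERT INTO target VALUES (?, ?)", batch)
    inserted_since_commit += len(batch)
    if inserted_since_commit >= TRANSACTION_SIZE:
        # Fewer commits means less per-commit overhead, but a failure
        # mid-run rolls back more uncommitted work.
        conn.commit()
        inserted_since_commit = 0
conn.commit()  # commit any remaining tail rows

count = conn.execute("SELECT COUNT(*) FROM target").fetchone()[0]
print(count)  # 25000
```

Larger values for both usually raise throughput (fewer network round trips, fewer commit waits), which would be consistent with the jump to 3000 rows/sec, though network distance and target-side indexes still matter.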