Dsnew wrote: I will implement this.
Do I have to check "Allow multiple instance"?
How do I give a name to an instance?
Will concatenating the sequential files after using the MOD function disturb the original sequence? Or does that not matter?
I have never done this before, so could you please explain in detail?
1. Yes
2. Using a Sequencer or job control - just supply the instance name as Jobname.Instancename
3. What original sequence would that be? If you divide a source dataset into 5 round-robin assigned sets, there's no ordering. I suggest you reorder the data if necessary. If you have to process ordered data, it behooves you to partition in a round-robin sense that preserves ordering within partitioned groups. In other words, separate your source data into sets that keep "GROUPS" together, but still give a near-even distribution of data. You want to open more read-write processes between Oracle and DS to maximize data spooling, that's the goal.
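To illustrate the idea of partitioning that keeps groups together, here is a minimal Python sketch (not DataStage code; the group keys and `NUM_PARTITIONS` value are made up for illustration). It hashes the group key modulo the partition count, so every row sharing a key lands in the same partition while the groups spread roughly evenly:

```python
from zlib import crc32

NUM_PARTITIONS = 5  # illustrative; matches the 5-way split in the example above

def partition_for(group_key: str) -> int:
    """Map a group key to a partition number in [0, NUM_PARTITIONS)."""
    return crc32(group_key.encode("utf-8")) % NUM_PARTITIONS

# Hypothetical source rows: (group key, value)
rows = [
    ("CUST_A", 100), ("CUST_B", 200), ("CUST_A", 300),
    ("CUST_C", 400), ("CUST_B", 500),
]

partitions = {n: [] for n in range(NUM_PARTITIONS)}
for group_key, value in rows:
    partitions[partition_for(group_key)].append((group_key, value))

# All rows for a given key end up in one partition, and their relative
# order within that partition is preserved.
```

Because the key-to-partition mapping is deterministic, concatenating the partitioned outputs never splits a group, which is the property you need if downstream processing is ordered within groups.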
Kenneth Bland
I agree with the previous suggestions to use the Bulk Loader in your scenario, and have only the following observation regarding the "ORA-01555: snapshot too old" error.
It sounds like you are experiencing what is known as a "fetch across commit within an open cursor". In this situation, "Committing less often (which will result in larger rollback segments) will REDUCE the probability of getting the ORA-01555. Thus, there is a trade-off between disk space and success in cursor processing."
Just out of curiosity, what was the setting for "Rows per Transaction" on the Oracle stage doing the 40 million inserts?
I am referring to the screen input item of the Oracle OCI stage that is performing the insert operation. Look for an input box labelled "Rows per transaction" on the Transaction Handling tab of the Input tab. This value determines the number of rows inserted before committing in Oracle.
The idea is that you can reduce the possibility of getting the ORA-01555 error by committing less often (i.e. inserting more rows before committing). In other words, place a large value in Rows per transaction.
You must balance the extra disk space required to hold the additional uncommitted inserts against the reduced chance of hitting the ORA-01555 error.
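The commit-interval behaviour can be sketched in a few lines of Python (the `commit` callback is a stand-in, not the real OCI interface; the insert itself is elided as a comment). A larger `rows_per_txn` means fewer commits, hence larger but fewer rollback-segment demands:

```python
def load(rows, rows_per_txn, commit):
    """Insert rows, committing every rows_per_txn rows.

    Returns the number of commits issued, including the final
    partial-batch commit. `commit` is a caller-supplied callback
    standing in for the database commit.
    """
    pending = 0
    commits = 0
    for row in rows:
        # insert(row) would happen here in a real loader
        pending += 1
        if pending == rows_per_txn:
            commit()
            commits += 1
            pending = 0
    if pending:          # flush the final partial batch
        commit()
        commits += 1
    return commits
```

With Rows per transaction set to 1, a 40-million-row insert issues 40 million commits; set to 50,000 it issues 800, which is exactly the trade-off described above.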
I was curious whether you had a low number (e.g. 1) in Rows per transaction when you hit the error.
This is only academic, as you are best advised to use the Bulk Loader method for this volume of input.