
Posted: Tue Apr 26, 2005 12:51 pm
by kcbland
Dsnew wrote:I will implement this.
Do I have to check 'Allow Multiple Instance'?
How do I give a name to an instance?
If I concatenate the sequential files after using the MOD function, won't that disturb the original sequence? Or does that not matter?
I have never done this before, so could you please explain in detail?
1. Yes
2. Using a Sequencer or job control - just supply the instance name as Jobname.Instancename
3. What original sequence would that be? If you divide a source dataset into 5 round-robin assigned sets, there's no ordering. I suggest you reorder the data afterwards if necessary. If you have to process ordered data, it behooves you to partition in a round-robin sense that preserves ordering within partitioned groups. In other words, separate your source data into sets that keep "GROUPS" together, but still give a near-even distribution of data. The goal is to open more read-write processes between Oracle and DS to maximize data spooling; a sketch of such a split follows.
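A minimal sketch of that kind of source-side split, assuming five job instances and a numeric group key (the table and column names here are illustrative, not from the actual job):

    -- Query for instance 0 of 5; the other four instances use remainders 1..4.
    -- Keying MOD on a group column such as CUSTOMER_ID keeps each group's
    -- rows together in one slice while still spreading rows nearly evenly.
    SELECT *
    FROM   src_table
    WHERE  MOD(customer_id, 5) = 0;

The five extracts never overlap and together cover the whole table, so the five instances can run against Oracle in parallel.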

Posted: Tue Apr 26, 2005 12:56 pm
by Dsnew
Kenneth,
Thanks for your advice and patience!
I will go ahead and implement this method.
I will get back to you if I have any issues.

Posted: Wed Apr 27, 2005 4:50 am
by Sainath.Srinivasan
Are all 40 million rows loaded each time? Maybe you could consider breaking them into smaller delta loads.

Posted: Wed Apr 27, 2005 11:00 am
by narasimha
Sainath,

Break it up using MOD in the source as Ken suggested, or do you have any other ideas?
The 40 million rows are a one-time load.

Posted: Wed Apr 27, 2005 11:34 am
by katz
I agree with the previous suggestions to use the Bulk Loader in your scenario, and have only the following observation regarding the "ORA-01555: snapshot too old" error.

It sounds like you are experiencing what is known as "fetch across commits within an open cursor", and in this situation... "Committing less often (which will result in larger rollback segments) will REDUCE the probability of getting the ORA-01555. Thus, there is a trade-off between disk space and success in cursor processing."
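To illustrate the pattern (purely a hypothetical sketch; the table and column names are made up), this is what "fetch across commits" looks like in PL/SQL terms:

    BEGIN
      -- The query's result set must stay read-consistent for the whole
      -- loop, but each COMMIT lets Oracle recycle the undo (rollback)
      -- data that this read consistency depends on.
      FOR r IN (SELECT id FROM big_table) LOOP
        UPDATE big_table SET processed = 'Y' WHERE id = r.id;
        COMMIT;  -- committing on every row inside the open cursor
      END LOOP;  -- ...eventually fails with ORA-01555 "snapshot too old"
    END;
    /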

Just out of curiosity, what was the setting for "Rows per Transaction" on the Oracle stage doing the 40 million inserts?

Posted: Wed Apr 27, 2005 12:19 pm
by narasimha
You want to know the rows/sec for loading the 40 million records?
It was 5,215 rows/sec and it took around two and a half hours.
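(As a quick sanity check, assuming those figures: 40,000,000 rows at 5,215 rows/sec is roughly 7,700 seconds, a little over two hours, which is broadly consistent with the reported run time.)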

Posted: Fri Apr 29, 2005 11:24 am
by katz
Hi narasimha,

I am referring to the screen input item of the Oracle OCI stage which is performing the insert operation. Look for an input box labelled "Rows per transaction" on the Transaction Handling tab of the Input page. This value determines the number of rows inserted before committing in Oracle.

The idea is that you can reduce the possibility of getting the ORA-01555 error by committing less often (i.e. inserting more rows before committing). In other words, place a large value in the Rows per transaction field.

You must balance the disk space required to hold the additional uncommitted inserts against the reduced chance of hitting the ORA-01555 error.
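As a rough PL/SQL sketch of that trade-off (the 50,000 interval and all object names are invented for illustration), a larger Rows per transaction value effectively does this on the Oracle side:

    DECLARE
      v_rows PLS_INTEGER := 0;
    BEGIN
      FOR r IN (SELECT * FROM staging_table) LOOP
        INSERT INTO target_table VALUES r;   -- assumes matching row structures
        v_rows := v_rows + 1;
        IF MOD(v_rows, 50000) = 0 THEN
          COMMIT;                            -- commit every 50,000 rows...
        END IF;
      END LOOP;
      COMMIT;                                -- ...and the final partial batch
    END;
    /

More uncommitted rows per batch means more rollback/undo space in use, which is the disk-space side of the trade-off.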

I was curious whether you had a low number (e.g. 1) in the Rows per transaction field when you had the error.

This is only academic, as you are best advised to use the Bulk Loader method for this volume of input.

Best of luck

Posted: Fri Apr 29, 2005 1:05 pm
by narasimha
I am using the Bulk Load method and getting good results :lol:
Yeah, in my previous OCI method my Rows per transaction was '0'; I had not changed that!! :oops: