
Parameters for large data volume loads

Posted: Thu Oct 27, 2011 8:06 pm
by evee1
In some of my loads (via the Oracle Connector) I don't want partial commits, i.e. I would like all the records to be committed, or rolled back on job failure (this is how the current solution works using SQL*Loader).
I understand that I have to set the transaction record count to 0. At the moment, the Array size is set to the default of 2000.

1) What happens if the load is large (say about 1 million records, and the records are wide: 150 columns, roughly 1 GB of data in total) and the processed data does not fit into memory? Does it use scratch space? What can I do to improve the performance? What influence will a small or large array size have?
My largest load is about 50 GB.

2) If the load is very small (a few records), will setting the Array size to a large number have a detrimental effect on performance?
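
For context, my mental model of the two settings, sketched outside DataStage with the python-oracledb driver (the table, columns and connection details here are made up, not my real job): the Array size is roughly the size of each batch sent to the server, and transaction record count = 0 is roughly "commit once, at the very end".

    import oracledb

    ARRAY_SIZE = 2000                                    # rows sent per round trip ("Array size")
    rows = [(i, "value_%d" % i) for i in range(10000)]   # stand-in data

    with oracledb.connect(user="scott", password="tiger", dsn="dbhost/orclpdb") as conn:
        with conn.cursor() as cur:
            for start in range(0, len(rows), ARRAY_SIZE):
                batch = rows[start:start + ARRAY_SIZE]
                cur.executemany(
                    "INSERT INTO target_tab (id, val) VALUES (:1, :2)", batch)
        conn.commit()   # one commit at the very end = all-or-nothing;
                        # if the script dies before this point, nothing is committed

Is that the right way to think about it, and is that where the memory/undo pressure comes from?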

Posted: Fri Oct 28, 2011 9:46 am
by ray.wurlod
Get on very friendly terms with your DBA. You will need to ask for an undo (rollback) segment large enough to handle a transaction of this size.
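
One way to put a number on that request is to run a trial load and watch how much undo the open transaction actually holds. A rough python-oracledb sketch (the connection details are placeholders, and you need SELECT on the V$ views, so this is typically run from a DBA account):

    import oracledb

    with oracledb.connect(user="dba_user", password="changeme", dsn="dbhost/orclpdb") as conn:
        with conn.cursor() as cur:
            cur.execute("SELECT value FROM v$parameter WHERE name = 'db_block_size'")
            block_size = int(cur.fetchone()[0])
            # one row per open transaction; used_ublk = undo blocks held, used_urec = undo records
            cur.execute("SELECT used_ublk, used_urec FROM v$transaction")
            for used_ublk, used_urec in cur:
                print("open transaction holding %.1f MB of undo (%d undo records)"
                      % (used_ublk * block_size / 1048576, used_urec))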

Posted: Sun Oct 30, 2011 9:05 am
by PaulVL
Are you just trying to get the fastest speed when loading Oracle (with no records dropped, of course)?

Have you tried an Oracle bulk load?
You could (as Ray said) get buddy buddy with the DBA and turn off logging for that table.
You could bulk load into a staging (temp) table and then run a procedure or SQL statement on the Oracle side to move it sideways into the main table (see the sketch below).
Understand how your Oracle table is set up, and always consult your DBA friend to get their insight/permission/tricks to load the table faster.
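
For what it's worth, the staging-table pattern often boils down to two statements. A python-oracledb sketch (the table names, hints and direct-path/NOLOGGING choices are assumptions to clear with your DBA first, not a drop-in recipe):

    import oracledb

    rows = [(1, "a"), (2, "b")]   # stand-in for the batched feed you would really load

    with oracledb.connect(user="etl_user", password="changeme", dsn="dbhost/orclpdb") as conn:
        with conn.cursor() as cur:
            cur.execute("TRUNCATE TABLE stage_tab")            # start from an empty staging table
            # 1) fast array insert into staging (direct-path if the hint is permitted)
            cur.executemany(
                "INSERT /*+ APPEND_VALUES */ INTO stage_tab (id, val) VALUES (:1, :2)", rows)
            conn.commit()   # direct-path inserts must be committed before the table is reused
            # 2) one set-based move into the real table; all-or-nothing from its point of view
            cur.execute("INSERT /*+ APPEND */ INTO target_tab SELECT id, val FROM stage_tab")
            conn.commit()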