Parameters for large data volume loads
Posted: Thu Oct 27, 2011 8:06 pm
In some of my loads (via the Oracle connector) I don't want partial commits, i.e. I would like all of the records to be committed, or rolled back on job failure (this is how the current solution works using SQL*Loader).
I understand that I have to set the transaction record count to 0. At the moment, the Array size is set to the default 2000.
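For what it's worth, my mental model of "Array size = 2000, Transaction record count = 0" is the plain-JDBC pattern sketched below. The connection string, credentials and table name are made up, and I'm only assuming the connector behaves like a batched insert with a single commit at the end; please correct me if it doesn't.

Code:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class SingleTransactionLoad {
    public static void main(String[] args) throws SQLException {
        // Hypothetical connection details and table, for illustration only.
        String url = "jdbc:oracle:thin:@//dbhost:1521/ORCL";
        try (Connection conn = DriverManager.getConnection(url, "user", "pass")) {
            conn.setAutoCommit(false); // no intermediate commits
            final int ARRAY_SIZE = 2000; // analogous to the connector's Array size
            String sql = "INSERT INTO target_table (id, payload) VALUES (?, ?)";
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                int inBatch = 0;
                for (int id = 1; id <= 10_000; id++) {
                    ps.setInt(1, id);
                    ps.setString(2, "row " + id);
                    ps.addBatch();
                    if (++inBatch == ARRAY_SIZE) {
                        ps.executeBatch(); // sends one array of rows to Oracle...
                        inBatch = 0;       // ...but does NOT commit them
                    }
                }
                if (inBatch > 0) {
                    ps.executeBatch(); // flush the final partial array
                }
                conn.commit(); // single commit at the very end
            } catch (SQLException e) {
                conn.rollback(); // any failure rolls back the whole load
                throw e;
            }
        }
    }
}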
1) What happens if the load is large (say about 1 million records) and the records are wide (150 columns, roughly 1 GB of data in total), so the processed data does not fit into memory? Does it use scratch space? What can I do to improve performance? What influence will a small or large Array size have? (See my rough arithmetic at the end of this post.)
My largest load is about 50 GB.
2) If the load is very small (a few records), will setting Array size to a large number have a detrimental effect on performance?
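For question 1, here is my back-of-the-envelope arithmetic, assuming the connector only needs to hold about one array of rows in client memory at a time (an assumption on my part; I don't know the connector internals):

Code:
public class ArrayBufferEstimate {
    public static void main(String[] args) {
        // Assumption: the connector buffers roughly one array of rows at a time.
        long totalBytes = 1L << 30;           // ~1 GB load from question 1
        long rows = 1_000_000L;               // ~1 million records
        long avgRowBytes = totalBytes / rows; // ~1 KB per 150-column row
        for (int arraySize : new int[] {100, 2000, 50_000}) {
            long bufferBytes = avgRowBytes * arraySize;
            System.out.printf("Array size %,d -> ~%,d KB buffered per array%n",
                    arraySize, bufferBytes / 1024);
        }
    }
}

If that assumption holds, even the 50 GB load would only buffer a few MB client-side per array, which is partly why I'm asking where the rest of the uncommitted data goes on the server side.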