Loading a Huge Volume of Data into an Oracle Table

Post questions here relative to DataStage Enterprise/PX Edition for such areas as Parallel job design, Parallel datasets, BuildOps, Wrappers, etc.


goriparthi
Charter Member
Posts: 57
Joined: Fri Feb 24, 2006 7:44 am

Loading a Huge Volume of Data into an Oracle Table

Post by goriparthi »

Hi all,

We have a job that does an upsert. The job design is to join on an XRF table to get the surrogate key and then load into an Oracle table.

The input data, which comes through a set of flat files, is about 180 million records (around 20 GB), and the retention capacity of the target table is around 1.5 billion records. So once the table grows that big, how should we handle splitting the inserts and updates?

I want to know whether anyone has dealt with similar volumes and how they split the inserts and updates.

Any suggestions or approaches are appreciated.
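
For illustration only (the table and column names below are placeholders, not our actual schema): the upsert this kind of job performs is logically equivalent to an Oracle MERGE, which updates a row when the surrogate key already exists in the target and inserts it when it does not.

    -- Hypothetical sketch of the upsert logic; TARGET_TBL, STG_INPUT, SKEY,
    -- COL_A and COL_B are placeholder names.
    MERGE INTO target_tbl t
    USING stg_input s              -- flat-file data with surrogate keys from the XRF join
       ON (t.skey = s.skey)
    WHEN MATCHED THEN
        UPDATE SET t.col_a = s.col_a,
                   t.col_b = s.col_b
    WHEN NOT MATCHED THEN
        INSERT (skey, col_a, col_b)
        VALUES (s.skey, s.col_a, s.col_b);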

Thanks
ray.wurlod
Participant
Posts: 54607
Joined: Wed Oct 23, 2002 10:52 pm
Location: Sydney, Australia

Post by ray.wurlod »

Do a lookup against the primary key of the target table.

If that's too big, first extract a single-column table made up of those primary keys and do a lookup against that.

But it shouldn't be necessary to do that, because the lookup against the primary key should be resolved in its index.
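
A minimal sketch of that idea in SQL, assuming a target TARGET_TBL with primary key SKEY and staged input STG_INPUT (all names are placeholders): build a slim key-only table once, then partition the incoming rows into updates (key already present) and inserts (key not present).

    -- Hypothetical sketch; object names are placeholders, not the poster's schema.
    -- 1. Extract just the primary keys into a slim, single-column table.
    CREATE TABLE target_keys AS
    SELECT skey FROM target_tbl;

    -- 2. Updates: incoming rows whose key already exists in the target.
    SELECT s.* FROM stg_input s
     WHERE s.skey IN (SELECT k.skey FROM target_keys k);

    -- 3. Inserts: incoming rows whose key is not yet in the target.
    SELECT s.* FROM stg_input s
     WHERE s.skey NOT IN (SELECT k.skey FROM target_keys k);

In a DataStage job the same split would typically be done with a Lookup stage against the key list, sending lookup hits down an update link and lookup misses down an insert link.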
IBM Software Services Group
Any contribution to this forum is my own opinion and does not necessarily reflect any position that IBM may hold.