
Handling Huge Files

Posted: Sun Apr 04, 2004 10:50 pm
by nilotpalr
Hi All,
I need to load files of 1 GB and upwards into an Oracle 9i database. As of now I have a one-to-one mapping. The DataStage server job takes around 40 minutes to load such a file. When I use sqlldr with a control file (in a conventional path load), it takes around 5 minutes.
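For reference, the sqlldr setup I am comparing against looks roughly like this (user, table, file, and column names are all placeholders):

    -- load.ctl: a minimal control file for a conventional path load
    LOAD DATA
    INFILE 'bigfile.dat'
    APPEND
    INTO TABLE target_table
    FIELDS TERMINATED BY ','
    (col1, col2, col3)

    # invocation; conventional path is the default (direct=true would switch to direct path)
    sqlldr userid=scott/tiger control=load.ctl log=load.log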

Is it possible to reach similar timings using DataStage? I last worked on version 5.2, in 2002. Please advise if there are additional mechanisms in version 7 (like parallel extender or parallel jobs - I am just guessing :-) ) that will help me match the timings of sqlldr.

Thanks in advance.

Nilotpal.

Posted: Sun Apr 04, 2004 11:54 pm
by ray.wurlod
No.

The sqlldr utility only does bulk loads. This is typically the final step in a DataStage job, which also performs extraction from source, cleansing, transformation, aggregation, and so on.

You're not comparing apples with apples here.

However, parallel extender (PX) does give you potentially huge throughput gains over server, provided that you can throw enough resources at it (multiple CPUs, whether in an SMP environment or an MPP environment, and lots and lots of memory).

You can do a lot of this with multi-instance server jobs, too, but PX handles data partitioning, pipelining and allocation of resources automatically, whereas you have to manage all these things explicitly if using server jobs.

Check out the Ascential website for a press release about the speed of PX.

Posted: Tue Apr 06, 2004 6:22 am
by MukundShastri
You can use the ORABULK stage along with sqlldr. Please keep in mind that ORABULK will only create the control file, not the data file.
Then use a subsequent job to load the data in the original .dat file, using the control file generated by the ORABULK stage in the previous job.
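For example, that second job (or an after-job routine) could invoke sqlldr along these lines; the user, database, and file names here are placeholders:

    sqlldr userid=user/password@db control=load.ctl data=original.dat log=load.log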

It works fast.

Thanks

Mukund Shastri